
Fermi’s Agents Part 2: Build Your AI Clone

If you’re a software engineer, you’ve probably heard these questions a thousand times:

“You’re a software engineer? Can you fix my printer?”

“Are you a real engineer or a virtual engineer?”

“Can you hack into my ex’s Instagram?”

These same questions get asked over and over, and it gets exhausting. But what if there was a solution? What if you could build an AI version of yourself to handle these repetitive conversations? You could finally get some peace.

This article walks you through building a personal AI chatbot that talks like you, knows everything about you, and can answer questions on your behalf. This isn’t going to take over the world. It’s just a helpful chatbot that sounds like you. The system doesn’t care who built the brain. You can use any AI provider: OpenAI, Google Gemini, or Anthropic Claude. The code is almost identical, so if one becomes too expensive, you can switch to another.

Enrico Fermi once looked at the sky and asked, “Where is everybody?” The universe is vast, the odds are favorable, and yet: silence. We call it the Fermi Paradox: the gap between what should exist and what we can actually find.

AI agents live in that same gap.

Fermi’s Agents is a series on building AI agents. We start from scratch and go all the way to systems that can reason, plan, and act on their own.

If you want to understand what AI is before you build with it, Schrödinger’s AI has every answer you need.

GitHub - RikamPalkar/fermis-agents

Part 2: Build Your AI Clone

Let’s build this

If you’d rather see this in action instead of reading through all the steps, I’ve got you covered. Check out the video:

I Built an AI Version of Myself So People Stop Bothering Me

It’s the same walkthrough with screen recordings, live demos, and all the confusing parts explained in real time. No judgment either way: this article works great as a reference while you code along with the video, or on its own if you’d rather go solo with the text. Your call.

What You Need

Before we start, let’s talk about the requirements. If I can do this, you can do this. My grandma could probably do this, though she’d call me to ask how, which is exactly why we’re building this AI.

1. VS Code

First, you’ll need VS Code. It’s free, and you can download it from code.visualstudio.com. Think of it as Microsoft Word, but it compiles.

2. Python

Next, you need Python, not the snake, but the programming language.

For Mac users: Open Terminal (yes, it sounds scary, but it’s just a text box where you type commands and pretend you’re a hacker). Type:

python3 --version

If you see a version number, congratulations! You have Python. If you see an error, don’t panic. Install it using Homebrew. First, visit brew.sh if you don’t have Homebrew installed. Then run:

brew install python@3.12

Restart your terminal, and you’re good to go.

For Windows users: You have two options, because Microsoft loves giving you choices so you can blame yourself later.

Option one: Go to python.org, download the installer, and run it. Important: Check the box that says “Add Python to PATH.” If you don’t check this, you will have a bad time. Trust me, I didn’t check it once, and I cried.

Option two: Use the Microsoft Store. Search “Python,” install it, and you’re done. Sometimes the easy way is the right way.

3. An API Key

Finally, you’ll need an API key. The name is literal: to access an API, you need a key, and this is that key.

OpenAI: We’re using OpenAI because their models are amazing and I’m basic like that. But you can use whatever you want.

Go to platform.openai.com. Sign up or log in. Yes, you need another account. Yes, I know we all have 47 accounts for everything now.

Click your profile icon, then select “API Keys.” Click “Create new secret key.” Name it something memorable like “my-ai-clone” or “the-ai-that-answers-so-i-dont-have-to.” Copy this key carefully, you’ll only see it once. It’s like a Snapchat message. Save it somewhere safe.

A quick heads-up: OpenAI charges per use, but it’s cheap. For this entire project, you’ll spend about 10 cents testing. A coffee costs more. You’ll survive. Plus, they give free credits when you sign up, so you might not pay anything. And hey, free is everyone’s favorite price.

Alternative Providers:

Don’t want OpenAI? That’s fine, I won’t take it personally.

Google Gemini: Go to aistudio.google.com. They have a free tier.

Anthropic Claude: Visit console.anthropic.com.

The code we’ll build works with all of them. Just change one line, and I’ll show you how.

Setting Up Your Project in VS Code

This part looks technical, but it’s really just creating folders, something you do on your desktop all the time.

Step 1: Create a Folder

Open VS Code. Create a new folder somewhere convenient. Name it something like “my-ai-agent” or get creative with “AI-me” or “Digital-twin.”

Step 2: Create a Virtual Environment

Big fancy term, simple concept: it’s a sandbox for your Python code. Whatever happens in there stays in there. Like Vegas.

Open the terminal in VS Code by going to View → Terminal, or press Control+backtick (that weird key next to the 1 that nobody uses).

Type this command:

python3.12 -m venv .venv

This creates your sandbox. Now activate it:

source .venv/bin/activate

On Windows, run:

.venv\Scripts\activate

You’ll see “.venv” appear in your terminal prompt — that means it worked. You’re in the sandbox now. Let’s play.

Step 3: Create Your Files

We need three things:

  • A .env file: this is where we hide your API key. It’s like a secret drawer for your secrets.

  • A kb folder: kb stands for “Knowledge Base.” Fancy term for “stuff about you.” We’ll put your LinkedIn PDF and bio here.

  • An agent.ipynb file: this is where the magic happens.

Your file structure should look like this:

my-ai-agent/
├── .env
├── kb/
│   ├── linkedin.pdf
│   └── aboutme.txt
└── agent.ipynb

Beautiful. Look at that organization. If only my life was this organized.

Configuring Your API Key

Open the .env file. This is where we put your API key where nobody can see it. Well, you can see it.

Add this line:

OPENAI_API_KEY=your-key-here

Paste your actual API key after the equals sign. No quotes, no spaces.

If you’re using a different provider:

# For Google Gemini:
GOOGLE_API_KEY=your-key-here

# For Anthropic Claude:
ANTHROPIC_API_KEY=your-key-here

This file is secret — never, ever commit it to GitHub. Unless you want random strangers using your API and sending you a surprise bill. Don’t ask me how I know this.
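One way to make that mistake harder: add a .gitignore before your first commit. A minimal one for this project might look like this (the entries are my suggestion, not from the article):

```
# Secrets stay local
.env

# Virtual environment
.venv/

# Jupyter autosave clutter
.ipynb_checkpoints/
```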

Adding Your Knowledge Base

Now we get to the fun part: teaching the AI about you. This is where your AI becomes YOUR AI.

1. Create Your Bio (aboutme.txt)

In the kb folder, create a file called aboutme.txt. Write a short bio: who you are, what you do, your skills. Don't be humble. This is the core of your AI's knowledge about you.

Here’s a short snippet:

I'm Rikam Palkar, a software engineer specializing in iOS development.
I have 10 years of experience building mobile apps.
I'm passionate about AI and teaching others to code.

2. Add Your LinkedIn PDF

Here’s a shortcut: go to your LinkedIn profile. Click “More” → “Save to PDF.”


Drop that PDF in the kb folder. The AI will read this and know your entire work history, skills, education, everything. It’s like handing someone your resume, but they actually read it.

Optional: Add More Files

You can add more, your resume, blog posts, project descriptions, or that award you won in third grade. Whatever you want the AI to know about you. The more context you give it, the smarter your AI-you becomes. Garbage in, garbage out. Gold in, gold out.

Understanding Jupyter Notebooks

Before we write code, let’s understand what we’re using: a Jupyter Notebook.

Setting Up Jupyter in VS Code

In VS Code, you need the Jupyter extension. Go to Extensions (the puzzle piece icon on the left, the one that looks like Tetris). Search for “Jupyter” and install the one from Microsoft. It has millions of downloads. If millions of people use it, it’s probably good.


If you’ve never seen one, think of it like a Google Doc but for code. Just like in Google Docs you can write paragraphs, here you write code in chunks called “cells”. You can run each cell separately and see what happens. Break something? Fix just that cell. No need to run everything again.

Once installed, creating a notebook is easy. Right-click in your file explorer, click “New File,” and name it with the .ipynb extension. (By the way, ipynb stands for “IPython Notebook,” the project that grew into Jupyter.)

Name yours agent.ipynb.

Now you have an empty notebook. See the “+ Code” button? Click it to add a new cell, and you’re ready to write code.


See those boxes? Those are cells. You write code in them, press Shift+Enter, and it runs. The output shows right below it.

Selecting Your Python Kernel

One more thing, see that button in the top right that says “Select Kernel”? Click it, choose “Python Environments,” and then select the .venv we created earlier. This tells the notebook: “Use this specific Python environment with all our packages.”

Alright, now we’re ready to code. We’ll do this step by step, one cell at a time. Each cell does one thing. Simple.

Writing the Code

Cell 1: Install Packages

First, install the libraries we need.

%pip install openai python-dotenv gradio pypdf

Here’s what each one does:

  • openai: talks to the AI. This is the messenger.

  • python-dotenv: loads our secret API key from that hidden file.

  • gradio: creates the chat interface. One line of code, beautiful UI.

  • pypdf: reads your LinkedIn PDF. Because PDFs are weird and we need help.

Run this cell. It takes a minute. Grab a snack. Actually, don’t, by the time you’re back, it’ll be done.

Quick tip: This is a one-time thing. Once these packages are installed, they stay installed. Next time you open this notebook, skip this cell.

Cell 2: Import Libraries

Next, we import everything we just installed.

from pathlib import Path
from dotenv import load_dotenv
from openai import OpenAI
from pypdf import PdfReader
import gradio as gr

print("All libraries imported successfully!")

You should see “All libraries imported successfully!” That means everything loaded. If you see an error, something went wrong in Cell 1. Go back and run it again.

Cell 3: Set Up OpenAI Client

Now we connect to OpenAI. This is where your API key becomes useful:

load_dotenv()
client = OpenAI()
MODEL = "gpt-4o-mini"

print(f"OpenAI client ready! Using model: {MODEL}")

You should see “OpenAI client ready! Using model: gpt-4o-mini”, which means we’re connected to the mothership.

load_dotenv() reads your .env file and loads the API key automatically. OpenAI() creates a client using that key. We don't even have to type the key, it just knows. gpt-4o-mini is fast, cheap, and smart enough for what we're doing. Perfect for testing. You can always upgrade to gpt-4o later when you're feeling fancy.
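If the client complains about authentication later, the culprit is almost always the .env file. Here’s a small diagnostic helper you could drop in a cell; it’s a hypothetical sketch of mine, and the "sk-" prefix check assumes the current OpenAI key format:

```python
import os

def check_api_key(name="OPENAI_API_KEY"):
    """Return a human-readable status for an API key stored in the environment."""
    key = os.getenv(name)
    if not key:
        return f"{name} not found - check your .env file and re-run load_dotenv()."
    if not key.startswith("sk-"):
        return f"{name} is set but doesn't start with 'sk-' - double-check the paste."
    # Show only the first few characters; never print the whole key
    return f"{name} loaded ({key[:7]}...) - you're good to go."

print(check_api_key())
```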

Using a different provider? No problem. Here’s how you’d change this:

# For Google Gemini:
import os
import google.generativeai as genai
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))
model = genai.GenerativeModel('gemini-pro')

# For Anthropic Claude:
import os
from anthropic import Anthropic
client = Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY"))
MODEL = "claude-3-sonnet-20240229"

See? Almost identical structure. AI companies want you to switch easily. Smart business.

Cell 4: Load Your Knowledge Base

Now let’s load all those files you created about yourself. This is where the AI starts learning who you are:

KB_FOLDER = Path("kb")

# Read LinkedIn PDF
pdf_reader = PdfReader(str(KB_FOLDER / "linkedin.pdf"))
linkedin_text = "\n".join(page.extract_text() for page in pdf_reader.pages if page.extract_text())

# Read about me text
about_me = (KB_FOLDER / "aboutme.txt").read_text()

print(f"Loaded {len(linkedin_text)} chars from LinkedIn, {len(about_me)} chars from aboutme.txt")

The print statement tells you how many characters it loaded. If you see zeros, check that your files are in the kb folder and aren’t empty. We’re basically vacuuming up everything about you and preparing to feed it to the AI.

Cell 5: Build the System Prompt

Now we build something called a “system prompt.” This is the most important part. Let me explain.

When you talk to ChatGPT or any AI, there are actually three types of messages happening behind the scenes:

  1. System message: This is like secret backstage instructions. The user never sees this, but it shapes how the AI behaves.

  2. User message: That’s what you type. The question, the request.

  3. Assistant message: That’s what the AI responds with.

The system prompt is where the magic happens. This is where we tell the AI: “You are ME. Here’s everything about me. Now act like me.”

NAME = "Rikam Palkar"  # Change this to YOUR name

SYSTEM_PROMPT = f"""You are a digital twin of {NAME}.
Respond as if you ARE {NAME} — use first person ("I", "my"), match their tone, and answer ONLY from the context below.
If the answer isn't in the context, say "I'm not sure — ask {NAME} directly."

### CONTEXT

[LinkedIn Profile]
{linkedin_text}

[About Me]
{about_me}
"""

print(f"System prompt: {len(SYSTEM_PROMPT)} characters")

NAME is just a variable with your name. Change “Rikam Palkar” to your actual name. This is important. Unless you want your AI to pretend to be me. Which would be weird.

Inside the prompt, we’re being very specific: “You ARE this person. Use first person. Only use the context I gave you. If you don’t know something, admit it.”
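One thing to keep an eye on: stuffing your entire LinkedIn history into the prompt makes every request bigger, and you pay per token. A common rule of thumb for English text is roughly four characters per token, so you can sanity-check the size like this (the 4:1 ratio is an approximation, not an exact tokenizer):

```python
def estimate_tokens(text, chars_per_token=4):
    """Rough token estimate: English text averages ~4 characters per token."""
    return len(text) // chars_per_token

# Stand-in for a real system prompt of 20,000 characters
prompt = "x" * 20_000
print(f"~{estimate_tokens(prompt)} tokens")  # ~5000 tokens
```

Run it on your actual SYSTEM_PROMPT; if the number gets huge, trim the knowledge base before trimming your personality.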

Cell 6: Create the Chat Function

This function handles the actual conversation. Every time someone asks a question, this function runs:

def chat(message, history):
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    
    # Add conversation history
    for msg in history:
        messages.append({"role": msg["role"], "content": msg["content"]})
    
    # Add current user message
    messages.append({"role": "user", "content": message})
    
    # Call OpenAI
    response = client.chat.completions.create(model=MODEL, messages=messages)
    return response.choices[0].message.content

print("Chat function defined! Ready to launch.")

You should see “Chat function defined! Ready to launch.”

Here’s what’s happening:

  • First, we create a messages list starting with the system prompt, that's all the context about you.

  • Then we loop through the history. History is all the previous messages in the conversation. This is how the AI remembers what you talked about earlier.

  • Then we add the new user message, the question someone just asked.

  • Finally, we send everything to OpenAI and return whatever it says.

This function is the brain of the operation. Simple, but powerful.
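To make the assembly concrete, here’s the same logic factored into a standalone helper with a made-up two-turn history (the build_messages name and the sample content are mine, not part of the article’s code):

```python
def build_messages(system_prompt, history, message):
    """Assemble the message list: system prompt first, then history, then the new turn."""
    messages = [{"role": "system", "content": system_prompt}]
    for msg in history:
        messages.append({"role": msg["role"], "content": msg["content"]})
    messages.append({"role": "user", "content": message})
    return messages

history = [
    {"role": "user", "content": "Who are you?"},
    {"role": "assistant", "content": "I'm a digital twin."},
]
msgs = build_messages("You are a digital twin.", history, "What do you do?")
print([m["role"] for m in msgs])  # ['system', 'user', 'assistant', 'user']
```

That ordering is the whole trick: the model sees your backstage instructions first, then the conversation so far, then the new question, and replies to the last user turn in character.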

Cell 7: Launch the Chat UI

Finally, the moment of truth. One line of code launches everything:

gr.ChatInterface(fn=chat, type="messages", title="Rikam AI").launch()

That’s it. One line. Gradio handles the web server, the UI, the message history, everything. You just tell it which function to call (our chat function) and give it a title.

Run this cell and BOOM! Your personal AI is live.

I’ll ask a few questions:


Question 1: “Who are you?”


Question 2: “What’s your professional background and experience?”

So we just built an AI that talks like you. What now? Well, this is just the beginning. There’s so much more we can do.

In upcoming articles, I’ll show you:

  1. How to add web scraping, so your AI can automatically read your blog posts, articles, tweets, anything you write online.

  2. How to add an evaluator model that checks if the AI’s response is actually good, and retries if it’s not. Quality control for your AI.

  3. How to build a React frontend for this, so it looks professional and not like a school project.

  4. How to host this on your personal website, so ACTUAL people can chat with your AI. Imagine recruiters talking to AI-you at 3am while you’re sleeping. The future is weird and wonderful.

Thanks for reading. Now go build your AI clone and finally get some peace from your friends and family. I’ll see you in the next one.

Fermi never got his answer. Maybe we will. There are more layers to uncover and the silence is getting louder.

GitHub - RikamPalkar/fermis-agents

Previous: Part 1: Build Your AI Agent in Python From Scratch