How Prompt Engineering Changed the Way I Write Code

Why I Took a Class on Prompt Engineering

AI tools like ChatGPT, Claude, and Gemini are quickly becoming part of everyday life, not just for casual use, but as serious tools in the workflow of developers, researchers, and creatives. But here's the catch: most people don’t know how to use them well. They throw in a question or a vague task and hope for the best. The results? Inconsistent, unreliable, and often frustrating.

That’s why I decided to take a class focused entirely on prompt engineering and what’s now being called vibe coding — a style of coding that relies heavily on natural language interactions with large language models (LLMs). I wanted to move past surface-level use and actually learn how to think in prompts.

In this article, I’ll walk through the core ideas and techniques I learned in the class, and how I’ve started using them in real coding environments like Cursor. The shift in mindset has been significant — not just in how I write prompts, but in how I approach coding with AI overall.

What Is Prompt Engineering, Really?

At its core, prompt engineering is the skill of writing clear, structured instructions that guide an AI model toward the output you want. It sounds simple — and sometimes it is — but like writing good code, it becomes more complex and more powerful the deeper you go.

What makes it "engineering" is the intentional design behind the prompt. You're not just tossing questions into a chat box and hoping for the best. You're thinking about how the model interprets input, how it handles ambiguity, and how to format your prompt so it can reason through what you want in a repeatable way.

The class broke down prompt engineering as a kind of bridge between human intent and machine behavior. You’re essentially translating goals into a form the model understands — a hybrid of instruction, context, and input data. When done right, the difference in results is immediate. You get clearer answers, cleaner code, better structure, and far less “hallucination” or irrelevant output.

It’s been described as “programming in English,” and that’s not far off. Instead of writing functions and loops, you’re writing layered, natural-language instructions. But the mindset is similar: think like a builder, not just a user.

How Is a Good Prompt Structured?

One of the first principles we covered in class was that great prompts aren’t written – they are engineered. Every prompt has a structure, and once you understand the architecture, you can start producing results that are reliable and reusable.

The format we followed throughout the course broke prompts into three essential components: 

  • Instruction: what the model should do.
  • Context: background, tone, constraints, or role.
  • Input: the actual content the model should process.

This format isn’t just for show: it shapes how the model interprets your request and how accurately it delivers the result.

Example: Bug Analysis in Production Code

We used examples like this one in class to explore how different prompt structures changed the model’s reasoning.

Unstructured Prompt

Can you look at this function and tell me if there are any bugs?

Ambiguous. The model might give a surface-level response or miss edge cases.

Structured Prompt

You are a senior backend engineer. Identify any logic bugs in the following Python function.  

Assume this function is part of a production billing system.

def apply_discount(price, code):
    if code == "VIP":
        return price * 0.2
    return price

The model catches the issue: the function returns the discount amount (price * 0.2), not the price after the discount has been applied.
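
A corrected version, shown here as a sketch of the fix the model suggests (the 20% VIP rate is carried over from the example), applies the discount instead of returning it:

def apply_discount(price, code):
    # Return the price after the discount, not the discount amount itself.
    if code == "VIP":
        return price * 0.8
    return price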

By explicitly defining the role (senior engineer), context (production billing), and task (identify logic bugs), the model is far more likely to give a useful, high-confidence response. These are the kinds of scaffolds we were taught to bake into every prompt, no matter the use case – whether for debugging, summarization, classification, or planning.

Prompting Styles: Zero, One, and Few-Shot

Another key concept covered was how to “prime” the model using examples. The style you choose depends on how complex or ambiguous the task is.

  • Zero-shot: just give the instruction, no examples (e.g., “Classify the sentiment of this review.”).
  • One-shot: add a single example to guide the model (e.g., “Here’s one labeled review. Now classify this.”).
  • Few-shot: include several examples to establish a pattern; useful for tasks like regex extraction or formatting.

I ran dozens of experiments testing these styles side by side. The takeaway was simple: the more ambiguous or complex the task, the more valuable a few good examples become.
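
For example, a few-shot sentiment prompt might look like this (an illustrative sketch, not taken from the course materials):

Classify the sentiment of each review as Positive or Negative.

Review: "The battery died after two days." -> Negative
Review: "Setup took five minutes and it just works." -> Positive
Review: "Shipping was fast, but the screen scratches easily." ->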

Chain-of-Thought Prompting

One of the most useful techniques explored was Chain-of-Thought (CoT) prompting, which involves guiding the model to reason through problems step by step, instead of jumping straight to an answer.

Without CoT, LLMs often guess. With CoT, they explain their thinking, which improves both accuracy and transparency.

Example: Decision-Making with Reasoning

Basic Prompt

Which job offer should I take?

The model might respond with a vague or biased answer.

Chain-of-Thought Prompt

I have three job offers:  

  • A startup (high growth, long hours)  
  • A stable enterprise (great benefits, 9–5)  
  • A mid-size remote company (good salary, flexible)

Evaluate the pros and cons of each and recommend the best option based on work-life balance and long-term career growth. Think step by step.

The model breaks down each offer logically and explains the recommendation.

This technique generalizes well, and I have used it for code review, architecture decisions, and even debugging workflows. If you want better answers, ask the model to show its work.
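
The same framing carries over to technical tasks. A debugging prompt I might use (illustrative, not from the course):

Here is a stack trace and the function that raised it. Walk through the function step by step, state what each step assumes about its input, identify where that assumption breaks for the failing case, and only then propose a fix.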

Structured Prompting and Simulated Memory

Midway through the course, we moved into more advanced territory with structured prompting systems, specifically something called GSCP (Gödel’s Scaffolded Cognitive Prompting). It’s a method for building prompts that simulate layered human reasoning and memory, without needing external APIs or real memory storage.

What GSCP Does

GSCP prompts are designed to break down a task into logical steps, including:

  • Input normalization
  • Emotion and sentiment analysis
  • Intent resolution
  • Hypothesis generation
  • Confidence scoring
  • Final classification and response

It sounds heavy, but the effect is simple: instead of the model giving a one-shot answer, it goes through a thought process that mirrors how a human might solve a problem or clarify a vague request.
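
In practice, a GSCP-style prompt reads like a checklist the model has to fill in before it answers. A minimal sketch, with wording of my own rather than the official template:

You are a customer-support triage assistant. Before answering, work through these stages and label each one:
1. Normalized input: restate the user's message in one neutral sentence.
2. Sentiment: describe the user's emotional state.
3. Intent: list the most likely intents, each with a confidence score.
4. Hypotheses: for the top intent, list plausible resolutions.
5. Final response: classify the request and reply, citing the stages that support your answer.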

Simulated Memory

We also studied how prompts can simulate memory:

  • Working memory holds temporary values (e.g., the last recommended service)
  • Declarative memory encodes known data (e.g., a list of available actions or services)
  • Episodic recall compares current input to past interactions

This is how a prompt can ask, “Do you still want to continue with X, or switch to something else?”, all without any true long-term memory. It’s a powerful technique that leads to more context-aware, consistent outputs, especially in multi-step flows.
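
As a sketch, those memory layers can be written straight into the prompt (the booking scenario and values are invented for illustration):

Declarative memory: available services are haircut, beard trim, and coloring.
Working memory: the last recommendation was a beard trim at 3 pm.
New user message: "Hmm, maybe something different."
Compare the new message to the last recommendation, decide whether the user wants to continue or switch, and ask one clarifying question before confirming.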

Vibe Coding vs. Prompt-Oriented Development

Let us look at two distinct approaches to working with LLMs, both of which I now use depending on the situation.

Vibe Coding

This is fast, intuitive, and intentionally informal. You describe what you want in natural language and let the model take the lead. If the result is close, you tweak the prompt or rerun it until it works. It's especially useful for quick experiments or generating boilerplate code.

Write a script that parses job listings and outputs a CSV — just give me something that runs.

No deep planning, no layers, just iterate until it clicks.

Prompt-Oriented Development (POD)

POD is more deliberate. You break the task into parts, write prompts for each step, and structure everything for clarity and repeatability. It’s better suited for when you care about quality, testability, or sharing code with others.

First generate the HTML scraper. Now validate it. Now convert the output to structured JSON. Add error handling.

The course made it clear that both styles are valid; it’s about picking the right one for the task. I’ve found myself leaning into vibe coding during rapid prototyping, and switching to POD when building something I’ll revisit later or need to deploy.
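
To make POD concrete, here is a minimal Python sketch of that scraper workflow as an ordered chain of prompts. The run_prompt helper is a hypothetical stand-in for whatever model call or editor integration you actually use:

# Hypothetical helper: swap in your real model call (API client, Cursor, etc.).
def run_prompt(prompt, context=""):
    # Placeholder so the sketch runs end to end without a model.
    return f"[model output for: {prompt}]"

# Each step is a separate, reviewable prompt instead of one big request.
steps = [
    "Generate a Python scraper for the job-listings page below.",
    "Review the scraper above and point out missing error handling.",
    "Convert the scraper's output into structured JSON records.",
    "Add error handling and logging to the final version.",
]

output = ""
for step in steps:
    # Feed each step the previous output so the chain stays coherent.
    output = run_prompt(step, context=output)
    print(step, "->", output)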

How I’m Using These Skills Today

Since finishing the class, I’ve integrated prompt engineering into my actual coding workflow, mostly through Cursor, which functions as an AI-powered code editor with natural language support baked in.

When writing or reviewing code, I now prompt with intent. Instead of vague questions like “what’s wrong with this?”, I’ll use structured instructions:

You’re a Python linter. Check this function for edge cases or anti-patterns. This code is part of an async data pipeline.

The difference in output is immediate, and the model gives targeted, context-aware responses that save time and catch things I’d likely miss on a first pass.

The biggest shift has been in mindset. I now approach LLMs more like collaborators. They are not just tools I query, but systems I guide. And that only really clicked after learning the actual mechanics of how prompts work under the hood.