Using a Coding Agent

Chat interfaces make you copy-paste. Coding agents edit your files directly. Understanding the difference — and the three modes they offer — changes how you prompt.

March 30, 2026 · 4 min read

There are two ways to work with AI on code: chat and integrated agents. Most people start with chat — they type a prompt, read the response, copy the code, and paste it into their editor. It works. But there is a better way.

Coding agents (Copilot, Cursor, Claude Code) live inside your editor. They can see your open files, understand your project structure, and edit code directly. No copy-paste. The AI writes into your files while you watch.

The difference matters more than it sounds. When you paste code into a chat and ask for help, the model has no idea what else is in your project. When a coding agent is open in your editor, your files are part of the conversation.

Three Modes: Agent, Ask, and Edit

Most coding agents give you three distinct modes. Knowing which one to use is itself a prompting skill.

Agent mode is the most powerful. You describe what you want to build and the agent builds it — generating files, writing code, running commands. Use this when you have a clear goal and want to move fast.

Plain text
// Agent mode prompt
Build a save and delete feature for my Prompt Library app. The data should persist in localStorage. I run this with Live Server in VS Code — no extra dependencies.

Ask mode is for questions. You point it at code and ask what it does, why it works, or how to approach a problem. The agent does not write any code. Use this when you are exploring unfamiliar code or trying to understand a bug before fixing it.

Plain text
// Ask mode prompt
What is this useEffect doing, and why does it run twice on mount?

Edit mode makes targeted changes. You highlight a specific function or block and describe the change you want. It stays focused on that selection instead of rewriting the whole file. Use this for precise corrections after a broader build.

Plain text
// Edit mode prompt
Rename this variable to something more descriptive. Add a try/catch around the localStorage call.

A common workflow: use agent mode to build the first version, ask mode to understand anything unexpected, and edit mode to clean up specific pieces.

Your First Prompt Sets the Stage

When you start a new session with a coding agent, your first message shapes everything that follows. One line that pays for itself every time:

Tell the AI how you run your code.

If you are using Live Server in VS Code, say so. If you are running a Node server, say so. If you want a single index.html with no build step, say so.

Plain text
// Without this, the agent might add a Python server, a Webpack config,
// or a package.json you did not ask for.
I'm building a plain HTML, CSS, and JavaScript app. I run it with Live Server in VS Code — no build tools, no additional dependencies.

This is not about being overly prescriptive. It is about giving the model enough context to avoid assumptions that will cost you five minutes of cleanup.

The same idea applies to coding style, naming conventions, and file structure. If you care about consistency, say it once at the start rather than correcting it six times throughout the session.
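A conventions message like that might look something like this (the specific preferences here are only illustrative):

Plain text
// Style preferences, stated once at the start of the session
Use camelCase for JavaScript functions, kebab-case for CSS classes, and keep all the JavaScript in a single app.js file.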

Context: What the Agent Can See

Coding agents have access to files in your editor, but not everything automatically. Most tools follow this pattern:

  • The currently open file is always in context
  • Other files in your project need to be explicitly added
  • You can add files using @filename or #filename syntax depending on your tool
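For example, a context-scoped prompt might look like this (the `@` syntax and file names are illustrative; check your tool's own convention):

Plain text
// Pulling two specific files into context before asking
@app.js @storage.js The save button clears the form but nothing appears in the list. Walk me through why.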

The instinct to add everything "just in case" is counterproductive. More context is not always better — it is more tokens competing for the model's attention, and key instructions can get lost in the middle.

Add the two or three files directly relevant to the current task. Leave everything else out.

Switching Models

One overlooked advantage of tools like Copilot and Cursor: they give you access to multiple model providers in one place. You can switch between GPT, Claude, Gemini, and smaller models without logging into different services.

A practical rule of thumb:

  • Simple bug fix, renaming, formatting: smaller model (Haiku, GPT-4o mini)
  • Writing a new component or feature: mid-tier (Sonnet, GPT-4o)
  • Architecture decisions, complex refactors: the most capable model available

The smaller models are faster and cheaper. The larger ones are better at holding complex context and generating coherent multi-file changes. Match the model to the task.

One more thing: switching models mid-session is fine. If you start a feature with a mid-tier model and hit something complex, switch up. If you just need a quick fix after a big build, switch down. The session history carries over.

