Pair Programming with LLMs: My Evolving Workflow with Cursor

I've been using Cursor with Claude to engineer software across both existing codebases and new projects. My workflow has evolved significantly over the past eight months, and documenting it now reveals both emerging trends and fundamental patterns in LLM-assisted development.

My goal is to inspire you to try LLM-enabled coding flows yourself. The best way to understand boundaries is to experiment! You'll discover that some limitations are temporary, while others represent foundational constraints. Building process around these foundational elements is key to effective LLM-assisted development.

My Current LLM-Assisted Coding Process

KEY TAKEAWAY: Effective LLM-assisted development mirrors high-functioning human teams, with clear design documents, actionable to-do lists, and regular checkpoints.

Initial Feature Discussion

When implementing a feature, I start with an open-ended conversation with the LLM, similar to how I'd begin pair programming with a human colleague. This sets the stage for collaboration rather than simply delegating tasks.

An illustrative opening prompt: "I want to add rate limiting to our API client. Before we write any code, let's talk through the approach. What options do you see, given how the client is currently structured?" The specifics matter less than starting a dialogue before any implementation begins.

Design Document Creation

KEY TAKEAWAY: Having the LLM create and iterate on a design document saves time while ensuring alignment on approach before any code is written.

Instead of jumping straight to implementation, I first have the LLM write a design document that goes into a docs directory in the project. This becomes an artifact we can iterate on together.

Step 1: Request initial design document
I ask the LLM to write a detailed design for the feature, including technical approach and implementation phases.

Step 2: Question and critique
Once it's written, I review and ask pointed questions: "How are cases where XYZ happens handled?" or "Can you document the types you'll use with the external API?" This reveals gaps in the LLM's knowledge and tells me whether I need to provide additional documentation.

Step 3: Explore alternatives
I often ask, "Is there a simpler implementation?" or "What are three alternative approaches?" This prevents settling on the first workable solution.

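As an illustration, a design document for a hypothetical rate-limiting feature might look like this. The structure and level of detail are the point; every name below is invented:

```markdown
# Design: Client-Side Rate Limiting

## Problem
The API client can exceed the provider's limit of 60 requests/minute,
surfacing 429 errors to users.

## Proposed Approach
A token-bucket limiter wrapping the existing `apiFetch` helper.

## Alternatives Considered
1. Fixed-window counter: simpler, but bursty at window boundaries.
2. Request queue with a worker: more control, more moving parts.

## External API Types
- `RateLimiterOptions { tokensPerInterval: number; intervalMs: number }`

## Implementation Phases
1. Core token-bucket module with unit tests
2. Integration into `apiFetch` behind a feature flag
3. Per-endpoint limit configuration
```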

To-Do List Generation

KEY TAKEAWAY: A detailed, well-structured to-do list helps the LLM maintain focus, prevents context drift, and ensures incremental progress toward working software.

Once I'm satisfied with the design document, I have the LLM break it down into actionable tasks.

Step 1: Create phased to-do list
I typically start with "Make a phase one to-do list" rather than tackling the entire document at once. This becomes another document in the docs directory.

Step 2: Review and refine tasks
The to-do list often reveals misunderstandings: perhaps the LLM plans to use a specific npm library when it should write custom code, or it misunderstands our project patterns. We iterate on these issues before writing any code.

Step 3: Optimize task ordering
I frequently need to remind the LLM to prioritize getting to working software quickly. When it organizes tasks depth-first (completing each component fully before moving to the next), I have it reorganize to build an end-to-end working version first, then extend each component incrementally. This mirrors how humans work best: start with a small working version, then gradually add complexity.

Step 4: Commit documentation
I check in both the design document and to-do list to create git checkpoints before any code changes.

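For illustration, a phase-one to-do list for the same kind of hypothetical rate-limiting feature, organized breadth-first so an end-to-end version exists early (all names invented):

```markdown
# Rate Limiting: Phase One To-Dos

## Milestone A: end-to-end skeleton
- [ ] Stub a `TokenBucket` class that always allows requests
- [ ] Wire the stub into `apiFetch` behind a feature flag
- [ ] Smoke-test one real request path with the flag enabled

## Milestone B: real behavior
- [ ] Implement token refill logic with unit tests
- [ ] Delay, rather than drop, requests when the bucket is empty

## Milestone C: polish
- [ ] Per-endpoint limit configuration
- [ ] Update the README with usage notes
```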

Implementation Phase

KEY TAKEAWAY: The implementation process should involve regular testing, documentation updates, and version control checkpoints to maintain quality and allow easy recovery from mistakes.

Now we begin building, working through the to-do list in logical groups.

Step 1: Implement a group of related tasks
We start at the top of the to-do list and work through items in coherent groups.

Step 2: Test each completed group
After each group, we verify the code works. With CursorAgent, I have it execute CLI commands to test the code directly, or write small scripts to exercise the functionality with different test cases. For appropriate projects, we maintain a passing test suite throughout development.

Step 3: Update documentation
After each group, I have the LLM update the readme or project documentation. This results in much more comprehensive documentation than I'd typically write manually.

Step 4: Commit changes
I commit after each block of to-dos. This is crucial because LLMs occasionally make unexpected changes to the codebase. While Cursor has built-in checkpoints, git commits provide a more reliable safety net.

Step 5: Update the to-do list and continue
The LLM updates the to-do list and we move to the next group.

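To make the rhythm concrete, here is the shape of one small task and its verification. The function and numbers are hypothetical; the point is the quick, disposable script the agent can run from the CLI after implementing a to-do item:

```typescript
// Hypothetical to-do item: compute how many tokens a rate-limit
// bucket regains after `elapsedMs`, capped at its capacity.
function refillTokens(
  current: number,
  capacity: number,
  tokensPerInterval: number,
  intervalMs: number,
  elapsedMs: number
): number {
  const regained = (elapsedMs / intervalMs) * tokensPerInterval;
  return Math.min(capacity, current + regained);
}

// Throwaway verification script the agent runs after implementing it.
const checks: Array<[number, number]> = [
  [refillTokens(0, 10, 5, 1000, 1000), 5],   // one full interval
  [refillTokens(8, 10, 5, 1000, 1000), 10],  // capped at capacity
  [refillTokens(3, 10, 5, 1000, 500), 5.5],  // half an interval
];
for (const [actual, expected] of checks) {
  if (actual !== expected) {
    throw new Error(`expected ${expected}, got ${actual}`);
  }
}
console.log("all checks passed");
```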

Handling Challenges

KEY TAKEAWAY: Recognize when the LLM is struggling and be ready to intervene—this partnership requires knowing when to step in and provide guidance.

There are cases where the LLM can't implement certain code without significant help:

  • Integrating with external libraries that have insufficient documentation
  • Complex TypeScript typing issues
  • Tricky logic that wasn't adequately described in design documents

For example, I recently needed to polyfill Promise.any(), and it took several iterations to get working code. Recognizing when the LLM is spinning its wheels, and stepping in, is an essential skill, just as it is when mentoring a junior developer.


Code Review and Finalization

KEY TAKEAWAY: Even when you collaborate with an LLM, reviewing the entire pull request as if you didn't write it is crucial for catching subtle issues and identifying emergent patterns.

After implementing the feature:

Step 1: Create and push a pull request
Having done the work on a branch, I create a PR.

Step 2: Review as a senior developer
I review the PR as if I didn't write the code. This is crucial because I didn't actually write every line and may have missed things during the implementation phase.

Step 3: Address feedback with LLM assistance
When I have PR comments, I show them to the LLM in Cursor and let it fix the issues directly.

For instance, pasting a review comment such as "this duplicates the retry logic we already have in the client; can we reuse it?" into Cursor is usually enough for the LLM to consolidate the code and update the call sites in one pass. (The comment is illustrative, not a real one.)

Lessons and Implications

The Continuing Importance of Requirement Clarification

Some believe LLMs will eventually automate the entire development process without human intervention. While possible, it's not happening this month, this year, or likely this decade. In the meantime, we need to consider the intermediate steps.

Requirements clarification remains crucial. In Agile, a user story is a placeholder for a conversation, not complete documentation. Similarly, "the map is not the territory"—the only complete specification is the fully-implemented feature.

Peter Naur's paper "Programming as Theory Building" suggests that software development involves forming a theory about how everything works, a theory that includes tacit knowledge that can't be fully communicated in language. Building that theory requires dialogue between the LLM and the human implementer.

The critique of the plan—how it could be simpler, better, how it fits in the larger context—requires creative thinking and broader context than what exists in documentation.

LLMs and Context Management

Context windows, while long (200K tokens), don't guarantee the LLM maintains focus throughout. LLMs can go down rabbit holes just like human engineers—fixing one thing leads to another, and soon you're wondering what you originally set out to accomplish.

A detailed to-do list helps the LLM maintain memory and context. This is a technique I've used for 20 years and taught to junior engineers. It's an effective practice that could be built into LLM workflows.

Knowing When to Intervene

Models will continue to struggle with certain areas due to incomplete knowledge, missing documentation, or complex logic. My current process involves personally intervening when necessary.

Sometimes Cursor explicitly states it's stuck after multiple attempts, but other times I need to recognize when it's spinning out. This parallels normal software development where junior engineers ask seniors for help when stuck.

Techniques like timeboxing and limiting attempts could help formalize when an LLM should call for assistance.

The Value of Code Review

The mindset for writing code differs from the one for reading and editing it. A good PR reviewer asks different questions than a developer trying to make code work.

The review process, whether human or LLM-powered, ensures the code makes sense, lacks silly bugs, and doesn't contain debugging artifacts. Sometimes seeing the whole design reveals organizational solutions that weren't apparent during writing.

Conclusion

LLM-driven software development will resemble human-driven development, but faster. Since LLMs are trained on human communication and human-written software, their capabilities will mirror those of excellent human engineers.

A reasonable assumption is that LLMs working together will function like high-performing teams of senior engineers. Many software development practices we've refined over decades will remain valuable—they'll just be exercised by LLMs collaborating with other LLMs or humans.

I encourage you to try LLM-enabled coding flows yourself. Experiment with different approaches to find what works for you. Pay attention to which constraints feel temporary versus fundamental to software development itself. The most effective workflows will embrace these foundational elements rather than fighting against them.

That's my current Cursor workflow and how I believe it points to both the coming changes and the enduring fundamentals of software development.
