7 Best Practices for Using Claude Code in Software Development
We spent 3B+ tokens stress-testing Claude Code. Here are 7 easy-to-implement tips to optimize your software development workflow.
Sai Samrit
Jan 28, 2025



Summary: I’ve spent over 3B tokens on Claude Code in the past month. I have a few lessons that could be helpful if you use Claude Code actively.
In just a few months, developers have watched coding assistants like Cursor and Claude Code shift from a novelty for quick scripts into full-fledged coding partners. Software development has changed fundamentally: AI now reviews pull requests, weighs different implementation choices, and in some cases even dictates when engineers sleep.
The more I relied on these systems, the more I caught myself accepting their output without much pause. If the code compiles, the API requests look correct, and the frontend looks beautiful, surely it must be correct, right? Wrong.
I slowly began to forget that the purpose of programming is not simply to make something work, but to build something that can endure. In those moments, I wasn’t writing code; I was surrendering judgment. This post is meant to flip that dynamic for you.
The Stats

Here is my workflow (a concrete pass through the loop follows the list):
Ask Claude to research a particular feature
Ask Claude to write a detailed plan with a checklist
Review the plan and make corrections where necessary
Claude starts executing the plan and updates the checklists automatically
Manually verify results with API calls/Frontend changes
Push changes
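Concretely, a single pass through this loop might look like the sketch below. The feature, file paths, and exact prompts are all hypothetical; only the claude invocations are the standard CLI.

```bash
# One hypothetical pass through the research -> plan -> execute -> verify loop.
claude "Research how pagination works in this repo. Write your findings to docs/research-pagination.md"
claude "Read docs/research-pagination.md and write a step-by-step plan with a checklist to docs/plan-pagination.md"
$EDITOR docs/plan-pagination.md   # human review: correct the plan, cut anything out of scope
claude "Execute docs/plan-pagination.md and check off each item as you complete it"
curl -s "localhost:3000/api/items?page=2" | jq .   # manually verify the API behavior
git push
```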
Now let's move on to the lessons.
Lesson 1: Prune in code space. Ensure no dead branches. Penalize repeated blocks severely.
On paper, the workflow looks structured and disciplined: a classic research, plan, execute, verify loop. But there's a flaw. The verification step is binary; it judges whether a result works, not whether the code behind it is clean. The more features we generated, the more half-finished code paths accumulated: unused helper functions, duplicate utilities, stale components, all left behind after small design changes. None of them broke the build on their own, so they stayed. But code size ballooned with every feature change, and this became increasingly unbearable.
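What you use to prune depends on your stack. As one sketch, a pruning pass after each feature might pair a dead-code detector with a copy-paste detector; vulture (Python) and jscpd (most languages) are third-party tools that fill these roles:

```bash
# Hypothetical pruning pass to run after each Claude-generated feature.
pip install vulture
vulture src/ --min-confidence 80    # report likely-unused functions, classes, variables
npx jscpd src/ --min-tokens 50      # report duplicated blocks across the repo
```

Whatever these tools flag can feed the next prompt directly: delete the unused helpers, consolidate the duplicates.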
Lesson 2: Context rot is real
As the codebase grew, two things started to break down: context and repo size. Repo size is easy to quantify, but context degradation only shows up over time. Eventually, Claude has to drag along all the “context baggage”: unused helpers, old modules, duplicate approaches. The effect was cumulative: by the time we'd implemented a few dozen features, the codebase was full of paths that looked valid but weren't actually being used.
This bloat spilled directly into context. When I asked Claude to research or plan the next feature, the context window filled faster than before. The model wasn't reasoning from a fresh snapshot; it was pushing through repeated information. The quality of its output suffered, and debug loops dragged on, not because the bugs were difficult, but because the context was rotten.
Lesson 3: Be very mindful of auto compacts
Plan your entire workflow around the context window, and be deliberate about what goes in and what stays out. At one point, we let Claude Code run 143 auto compacts in a single stretch, dragging history forward far beyond what I assume anyone (including at Anthropic) had tested for. DON'T DO THIS.
Similarly, using claude --continue across multiple features substantially increases context issues. It should be the exception (e.g., recovering from network issues), not the default. Use it only when you truly need continuity; otherwise, reset and start fresh so the model isn't encumbered with stale state.
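In practice, that means treating Claude Code's session controls as part of the workflow. A minimal routine:

```bash
claude              # start a fresh session for each new feature
# Inside a session:
#   /compact   compact the conversation yourself at a natural breakpoint,
#              rather than letting auto compact fire mid-task
#   /clear     wipe the conversation entirely between unrelated tasks
claude --continue   # resume the previous session; reserve this for genuine
                    # interruptions, not as the default way to start work
```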
Lesson 4: If you're running out of space, use .md files as buffers
When you start hitting context window limits, the simplest strategy is to lean on Markdown files in your repo. Think of them as lightweight memory slots: instead of stuffing every detail into a single prompt, offload them into .md files that the model can read back when needed. You can reinject a file into context whenever the model stops following it.
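For example, a buffer file might look like the following (filename and contents are hypothetical). You point Claude at the file instead of repeating the details in every prompt, and a line like "read docs/notes-billing.md before planning" reinjects it on demand:

```markdown
<!-- docs/notes-billing.md: a lightweight memory slot for the current feature -->
# Billing webhooks: working notes

## Decisions already made
- All webhook handlers must be idempotent (dedupe on event id)
- Retry backoff tuning is out of scope for this pass

## Open questions
- Do we backfill events missed during downtime?
```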
Lesson 5: The top of the page is the most important code real estate. Leverage it.
Claude Code reads heavily from the start of a file. In our analysis, nearly half the reads landed in just the first 50 lines. That means if the top of your file is cluttered with old comments, placeholder functions, or scaffolding, the model will anchor on that noise. Be deliberate about what lives at the top: keep only what matters for reasoning. The first 50 lines set the tone for the entire file.
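For instance, the top of a file can carry a terse orientation block instead of stale scaffolding. A hypothetical Python example:

```python
"""payments/ledger.py: double-entry ledger core.

Inputs:    payment events from the events table (append-only)
Computes:  balanced debit/credit entries per transaction
Persists:  rows in ledger_entries; never mutates events
Invariant: sum(debits) == sum(credits) for every transaction
"""
```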

Lesson 6: Not scoping your task and planning steps can cause way more issues than bad code
But context discipline alone wasn't the whole story. Even though our plans were carefully designed, with research, checklists, and verification baked in, we still ran into trouble. On paper, everything important was covered. In practice, poorly scoped tasks and over-broad phases bloated the codebase just as much as sloppy context did.
Some important planning tips from Humanlayer (Advanced Context Engineering for Agents), with a sketch of a scoped plan after the list:
Follow the plan’s intent while adapting to what you find
Implement each phase fully before moving to the next
Verify your work makes sense in the broader codebase context
Update checkboxes in the plan as you complete sections
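Put together, a scoped plan that follows these tips might look like this (the feature and phases are hypothetical):

```markdown
# Plan: per-user rate limiting

## Phase 1: middleware (complete fully before starting Phase 2)
- [x] Add a token-bucket RateLimiter middleware
- [x] Unit tests: burst, refill, and per-user isolation

## Phase 2: API integration
- [ ] Wire the middleware into /api routes only (admin routes are out of scope)
- [ ] Verify existing auth tests still pass before checking this phase off
```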
Lesson 7: You still need to download the codebase into your head
Your value as an engineer is not just about whether you can build things; it also lies in your ability to improve and fix things when they go wrong (and they will). Being able to perform this surgery is critical: instead of piling bandages on top of bandages, you need to perform the life-saving operation.
The task of mechanically writing code has been outsourced to tools like Claude, and that's fine. In fact, I think it's great! Engineers should not spend their time worrying about obscure syntactic rules; they need to build systems that solve people's problems.
However, lacking a mental model of your codebase is a real problem: it makes it very difficult to have confidence in the system, and eroded confidence makes you afraid to take on the risk of new features, which ultimately makes your product worse.
At a minimum, you should be aware of the processes being executed and the database models being built and modified: what the inputs to your system are, what is being computed, what the outputs are, and what is being persisted. Ideally, you should have mental maps of the classes and functions within your project. These mental models are crucial for developing better prompts. Being familiar with the “language” your code speaks (i.e., its functions and classes) lets you write more targeted, higher-quality prompts, debug faster, and feel more confident in your system. We use an internal tool to view function calls across the codebase.
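That internal tool isn't public, but a rough stand-in is easy to sketch. The script below (hypothetical, Python-only) uses the standard library's ast module to print which functions each function calls, which is often enough to rebuild a mental map:

```python
# call_map.py: print a rough function-call map for a Python repo.
import ast
import pathlib
import sys

def called_names(func: ast.AST) -> set[str]:
    """Collect the names of everything called inside a function body."""
    names = set()
    for node in ast.walk(func):
        if isinstance(node, ast.Call):
            if isinstance(node.func, ast.Name):         # plain calls: foo(...)
                names.add(node.func.id)
            elif isinstance(node.func, ast.Attribute):  # method calls: obj.foo(...)
                names.add(node.func.attr)
    return names

root = pathlib.Path(sys.argv[1] if len(sys.argv) > 1 else ".")
for path in root.rglob("*.py"):
    try:
        tree = ast.parse(path.read_text(), filename=str(path))
    except (SyntaxError, UnicodeDecodeError):
        continue  # skip files that don't parse cleanly
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            print(f"{path}::{node.name} -> {sorted(called_names(node))}")
```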
BONUS: Using .md files with metadata
The “top of file” lesson doesn't just apply to code. In your repo's Markdown files, you can take advantage of the same idea using front matter. Many systems already use a --- block at the top of .md files to store metadata; we can adopt the same pattern to give Claude clean, structured signals before the actual content.
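For example (these field names are hypothetical, not a Claude Code convention):

```markdown
---
title: plan-rate-limiting
status: in-progress
owner: api-team
depends_on: docs/research-rate-limiting.md
summary: Add per-user rate limits to the public API; admin routes out of scope.
---

# Plan: per-user rate limiting
```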

Hope this is useful to people using AI to write a ton of code!



