Tips and Lessons Learned
We’ve gathered these findings from our own experience working with AI tools at DCoding Labs. A lot of what’s mentioned here might become outdated with time, so for general guidance on how to write better prompts, see our Prompting Guide, which describes fundamental practices, not tool-specific tips.
Instruction Files
Modern agentic coding tools structure instructions through layered files rather than a single system prompt. Understanding this layering helps you decide where each type of instruction belongs.
The global layer lives in your home directory. In Claude Code this is ~/.claude/CLAUDE.md.
Instructions here apply across every project and session. Use it for personal defaults: general
coding principles, preferred response style, tools you use regardless of project.
The project layer lives inside a repository, typically as a CLAUDE.md at the root or inside a
.claude/ folder. Use it for codebase-specific context: architecture overview, tech stack and
version constraints, naming and error handling conventions, and build and test commands. Commit this
file so every team member benefits from it.
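For instance, a minimal project-level CLAUDE.md might look like this. The stack, conventions, and commands here are invented for illustration, not a prescribed template:

```markdown
# Project: acme-api

## Stack
Node 20, TypeScript 5, Fastify, PostgreSQL 16.

## Conventions
- Throw typed AppError subclasses; never return raw error strings.
- File names are kebab-case; functions are camelCase.

## Commands
- Typecheck: npm run typecheck
- Lint: npm run lint
- Tests: npm test
```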
The session layer is anything you tell the model during the current conversation. It adds to or overrides the layers below for that session only. Use it for temporary adjustments relevant to the current task.
Do not duplicate instructions across layers. If a convention is in the project file, do not restate it every session.
Model Selection
As of April 17, 2026, Anthropic’s current model overview lists Opus 4.7, Sonnet 4.6, and Haiku 4.5 as the latest models.
| Model | Use it for | Avoid it for | Cost / speed |
|---|---|---|---|
| Sonnet 4.6 | Default choice for 80-90% of coding work: multi-file edits, API integration, frontend work, tests, review, docs | Only switch away when you hit a real reasoning limit | Baseline |
| Opus 4.7 | Hardest coding tasks, architectural refactors across many files, complex agentic workflows, concurrency bugs, and high-stakes enterprise work | Routine coding where Sonnet is already good enough | 67% more expensive than Sonnet, with moderate latency |
| Haiku 4.5 | Classification, structured extraction, simple Q&A, repetitive high-volume tasks | Multi-file reasoning, complex debugging, ambiguous tasks | 67% cheaper than Sonnet and the fastest option |
Thinking Modes
Opus 4.7 and Sonnet 4.6 use adaptive thinking, where Claude dynamically decides when and how much to think. Haiku 4.5 does not. Claude calibrates its thinking based on two factors: the effort parameter and query complexity. For most tasks, this happens automatically without any configuration.
Controlling thinking depth in Claude Code:
Use the /effort command to set the thinking level for your session. This is the official,
persistent setting.
| Command | When to use |
|---|---|
| /effort low | Quick edits and simple fixes |
| /effort medium | Default choice for most work |
| /effort high | Complex logic and multi-file changes |
| /effort max | Architecture, hard bugs, and tradeoff analysis |
Trigger words also work mid-prompt for a one-off boost without changing your session setting:
| Trigger | When to use |
|---|---|
| think | Routine refactors, new features |
| think hard | Complex logic, multi-step changes |
| ultrathink | Architecture decisions, hard bugs, stuck in a loop |
These are not official settings — they are prompt cues that encourage deeper reasoning. For a
guaranteed, persistent setting, use /effort.
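For example, a one-off escalation embedded in a prompt might look like this (the task and file name are invented for illustration):

```
think hard: the retry logic in src/queue.ts double-sends jobs under load.
Trace the root cause before proposing a minimal fix.
```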
Extended thinking is not free. Use it for problems that deserve it, not as a default.
Worktrees
Git worktrees let you run multiple agents in parallel on the same repo without branch-switching conflicts. With worktrees, you can:
- Have an agent perform a long-running task in a separate worktree without blocking your current branch.
- Compare two different implementations of the same feature by running each implementation as a separate process on a different port.
- Review an agent’s changelist more easily: cd into its worktree, run tests, review the diffs, and run the project without having to stash or think about your uncommitted changes.
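The commands involved are standard git; the paths, branch name, and agent CLI below are illustrative:

```sh
# Create a second working copy of the repo on its own branch.
git worktree add ../myapp-pdf-rewrite -b pdf-rewrite

# Run an agent there while your main checkout stays untouched.
cd ../myapp-pdf-rewrite
claude    # or whichever agent CLI you use

# Clean up once the branch is merged or abandoned.
git worktree remove ../myapp-pdf-rewrite
```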
Closing the Loop
Find ways to have the agent verify its work. Given a concrete definition of done, agents can close the feedback loop on their own: run the checks, compare the output against that definition, and fix what falls short. Here are some concrete steps:
- Document the exact commands for type checking, linting, and running tests in CLAUDE.md.
- Invest time in well-written tests.
- For UI work, set up a way for the agent to see what it built: consider using a Playwright script, the Chrome DevTools MCP, or screenshots (see the sketch after this list).
- For bug fixes, find a way to reproduce the bug and instruct the agent to run the repro steps to validate its fix.
- For rewrites, have it run the rewritten version and compare it with the old one.
- Pipe browser console errors and dev-server logs somewhere the agent can read them. Silent failures are the easiest to miss.
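As a sketch of the UI and logging bullets above — the URL, scripts, and paths are assumptions about your project, and we use Playwright’s CLI rather than a full script:

```sh
# Let the agent see what it built: a full-page screenshot of the running dev server.
npx playwright screenshot --full-page http://localhost:3000 screenshots/home.png

# Keep dev-server output somewhere the agent can read it back.
mkdir -p logs
npm run dev 2>&1 | tee logs/dev-server.log
```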
Steering the Model
Most of the value in working with an agent comes from how you correct it mid-task, not from the opening prompt. A few habits make a big difference:
- Interrupt the agent’s work. As the agent works, review its thinking trace. The moment you notice it heading down a wrong path, press ESC to stop it and correct course.
- Rewind over patch. When a thread is several wrong turns deep, consider using Claude Code’s /rewind command or starting a fresh session. Bad context keeps poisoning the output even after you correct it.
- ESC vs. queue. Press ESC when the direction is wrong; queue when the direction is fine and you want to add a note for after the current step. (Note that Claude Code will flush queued messages at the next LLM pause, not when it’s done with the current prompt. Codex will wait for the full inference to complete before processing a queued prompt.)
The opposite failure mode is over-steering. If you interrupt every thirty seconds, the agent never builds momentum on multi-step work and you end up doing the thinking yourself anyway. The skill is knowing when to let it cook and when to cut it off.
Trigger Words
- “ultrathink”
- “search the web to make sure that your answers are based on real-world data”
- “push back if I’m wrong; don’t just agree”
- “keep the diff minimal and don’t refactor anything you don’t have to”
- “red team this” / “what’s most likely to break?”
Avoiding the Dumb Zone
It’s been observed that agent performance starts to degrade once context-window usage passes roughly 40-60%. That’s why it’s good practice to keep an eye on the current window’s token use and to start fresh conversations frequently, summarizing the findings from the current conversation and pasting them into the next one.
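The handoff summary doesn’t need to be elaborate; a closing prompt along these lines works (the wording is illustrative):

```
Before we wrap up: summarize the root cause we found, the files we touched,
and the agreed plan for the remaining work. I'll paste this into a fresh session.
```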
Maintain Code Quality
When adding new code, agents scan your codebase for patterns and conventions and then try to match your project’s style. Try to maintain good code quality throughout your codebase: filling the agent’s context window with poor code will “poison” the context and lead to subpar performance.
Lessons learned from Drospect
When we were rewriting the PDF report module in Drospect, the agent probably looked at the old PDF-generation code, saw that it wasn’t guarded by any authentication/authorization mechanism, and left the new endpoints unprotected as well. This was a security vulnerability, because those endpoints were supposed to generate reports from user-uploaded data.
Much of the inference code in Drospect was written with AI (without proper review), so it was full of unnecessary comments: emoji, narration of obvious steps. Because of this, agents often added similar comments when writing new code.
Because the Drospect codebase logged heavily and dumped raw JSON values into those logs, agents adding new code did the same: they added lots of logs and skipped structured logging even when the data was structured (JS objects, for example).
Routinely review your usage patterns via /insights
Getting better at these tools requires insight that only comes with actual practice.
As you’re practicing, consider running Claude Code’s /insights command once a week to review how
you’re using Claude Code. /insights outputs an HTML page describing your usage patterns, what the
agent did well, where it went wrong, and how often it hallucinated.