Michael Ouroumis

How I Use Claude Code in My Daily Development Workflow

Terminal showing Claude Code session with code changes being applied to a project

Why Claude Code Is Different

I have written before about switching to Cursor for my daily IDE work. Claude Code fits a different part of the workflow — and understanding the distinction matters.

Cursor is for in-editor assistance: completions, multi-file edits via Composer, chat about the code I am currently looking at. It is integrated into the development environment I live in.

Claude Code is a terminal-based agent. It can read your entire codebase, run commands, execute tests, and make changes autonomously across as many files as the task requires. The interaction model is different: you describe a task, it plans and executes, you review. It is built for longer, more self-directed work.

Think of Cursor as a capable pair programmer sitting next to you. Claude Code is more like delegating a well-defined task to a senior developer who can work unsupervised for a stretch of time.

The Tasks I Delegate to Claude Code

Exploration and research. When I join a codebase I do not know — or return to one I have not touched in months — I use Claude Code to orient myself. "Walk me through how authentication works in this codebase." "What happens when a user submits an order?" The ability to actually read the relevant code, trace function calls, and explain the system in plain language is dramatically faster than doing this manually.

Refactoring with safety constraints. I describe the refactor I want to do, ask Claude Code to plan the changes before making them, review the plan, and approve or adjust before any files are modified. This works especially well for renaming things across a large codebase, extracting shared utilities, or standardizing an inconsistent pattern.

Test writing. My workflow: implement a feature myself, then ask Claude Code to analyze it and write tests. I specify: unit tests for the core logic, integration tests for the API routes, and edge cases it should think about explicitly. It is consistently better at test coverage breadth than I am when writing tests in the same session as the implementation.

Debugging with full context. When I have a bug that spans multiple files and is hard to reproduce, I describe the symptom to Claude Code and ask it to investigate. It reads the relevant code, traces the execution path, and usually identifies the problem within a few minutes. This is the category where I have saved the most time.

Generating boilerplate for new features. New API route, new service class, new database model — I describe the shape of what I need and Claude Code scaffolds it following the conventions of the existing codebase.

The Prompting Habits That Matter

Be specific about scope. "Fix the auth bug" is too vague. "The JWT token validation in middleware/auth.ts is rejecting valid tokens after expiry time is extended in config — investigate why and propose a fix" gets much better results. The narrower and more concrete your description, the less time the model spends exploring irrelevant parts of the codebase.

Ask for a plan before implementation. Before any significant change, I ask Claude Code to describe what it intends to do and which files it will modify. This catches misunderstandings early and prevents unwanted changes. It takes one extra exchange but saves significantly more than it costs.

Give context about constraints. "We are using optimistic updates here so make sure the state reverts on error." "This code runs on the edge runtime so no Node.js built-ins." Context that a human colleague would already know needs to be stated explicitly. I keep a CLAUDE.md in project roots with project conventions, important constraints, and architecture notes.

Use --print for scripted workflows. For repeatable tasks — generating a changelog from commits, running a code audit, producing a summary of changes — the --print flag lets you pipe Claude Code output into your tooling. I have shell scripts that use this for routine tasks.
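As a sketch of what one of those scripts can look like (the tag name, prompt wording, and output file are my own placeholders, not a prescribed recipe):

```shell
#!/usr/bin/env sh
# Hypothetical changelog helper: pipe recent commit messages into Claude Code
# in non-interactive mode (--print responds once and exits) and save the draft.
git log --oneline v1.2.0..HEAD \
  | claude --print "Group these commits into a short changelog with Added/Fixed/Changed sections:" \
  > CHANGELOG-draft.md
```

Because --print writes plain text to stdout, the output composes with the rest of the Unix toolbox: redirect it, grep it, or feed it into the next step of a script.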

The CLAUDE.md Pattern

I put a CLAUDE.md file in the root of every project I use Claude Code on. It contains:

  • Stack overview (frameworks, key packages, why we chose them)
  • Conventions (naming, file structure, testing approach)
  • Critical constraints (runtime environment, performance requirements, compliance notes)
  • Common tasks and how we approach them

This file is read at the start of every session. It is the equivalent of the onboarding notes you would give a new team member. With it, Claude Code starts each session with meaningful project context rather than inferring everything from scratch.
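A trimmed example of the shape mine take (every specific below is invented for illustration; yours should reflect your actual stack and rules):

```markdown
# Project notes for Claude Code

## Stack
- Next.js 14 (App Router), TypeScript, Postgres via Prisma

## Conventions
- Components live in src/components, one per file, PascalCase filenames
- Tests sit next to the code as *.test.ts and run with `npm test`

## Critical constraints
- API routes run on the edge runtime: no Node.js built-ins
- All user input is validated before it touches the database

## Common tasks
- New API route: follow the shape of an existing route in src/app/api
```

Keeping it short matters: a focused page of constraints gets applied; a sprawling document gets skimmed.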

What Requires Human Judgment

Security-sensitive code. Auth, authorization checks, input validation, secret handling. I review this category manually regardless of who wrote it.

Architecture decisions. Claude Code can implement well but does not make strategic calls about how the system should be designed. Those conversations happen elsewhere.

Performance-critical paths. AI-generated code optimizes for correct and readable. Hot paths need profiling and intentional optimization.

Business logic with domain knowledge. Any logic that requires understanding your specific business rules, edge cases, or customer behavior is yours to write or review carefully.

The overall effect: I am more productive on the mechanical and exploratory parts of development, which frees more cognitive space for the parts that genuinely require judgment. That trade is worth it.
