AI Engineer Prep

Session 18: AI-Assisted Development — Cursor, Claude Code & Modern AI Workflows

If you're still writing every line of code by hand in 2026, you're bringing a knife to a gunfight. The best AI engineers don't just BUILD AI systems—they USE AI to build them faster. Cursor, Claude Code, Copilot—these tools have gone from "nice to have" to "how did I ever work without this?" in the space of a year. Miro's job description explicitly asks for "AI-First Proficiency"—engineers who've moved beyond simple chat prompts to using AI agents for autonomous debugging, analyzing logs, or rapidly prototyping. That language isn't fluff. It's a filter. They want people who think in workflows, not one-off prompts.

Here's what separates a strong answer from a weak one. When someone says "I use Copilot for autocomplete," the interviewer nods politely. When you say "I use Cursor with custom MCP servers that expose our deployment pipeline and Jira to the chat context, so when I'm debugging a production incident the AI can query our Runbook DB and correlate logs before I've opened a second tab," they lean forward. That's the gap. This session turns you into the latter.


1. AI-First Development Philosophy

Interview Insight: Miro explicitly asks for engineers who have "moved beyond simple chat prompts" and "truly integrated AI into their development flow." That's not a checkbox—it's a differentiator. They want to hear about workflows, not tools.

AI-first development means AI isn't a tool you sometimes use—it's your pair programmer for everything. You're not copy-pasting into ChatGPT once a week. You're in an agentic loop: describe the task, the AI plans, you review, it executes, you iterate. Code review, debugging, refactoring, documentation, exploration of unfamiliar codebases—all of it flows through AI.

Think of it like the shift from manual transmission to automatic. You still need to know how the engine works (you're the senior engineer), but the mechanics of shifting are handled for you. You focus on navigation, not clutch coordination. The senior AI engineer who "pilots" these tools isn't abandoning fundamentals—they're multiplying their output while keeping architectural control.

Why it matters for Senior AI Engineers in 2025–2026: You're not just building features faster. You're demonstrating that you live in the future you're hiring for. Companies want engineers who ship at "an ambitious pace" (Miro's words). AI-assisted development is how you do that without burning out.

Why This Matters in Production: At Maersk, the Enterprise AI Agent Platform and email booking automation required rapid iteration—new tools, new guardrails, new integrations. Engineers who embraced AI-first workflows could prototype and ship in days instead of weeks. The ones who didn't became bottlenecks.

Aha Moment: "AI-first" doesn't mean "AI does everything." It means you've designed your workflow so AI handles the grunt work (boilerplate, repetitive patterns, log parsing) and you handle the judgment (architecture, trade-offs, security). That division of labor is the skill.


2. Cursor — AI-Native IDE

Interview Insight: "I use Cursor" is common. "I use Cursor with Rules files and MCP so the AI knows our project conventions and can query our internal APIs" is senior. Show you understand the system, not just the UI.

Cursor is VS Code rebuilt for AI. Same extensions, same keybindings, but the entire surface is designed around model-in-the-loop editing. You're not fighting a chat panel bolted onto the side—the chat is the interface.

Key features:

  • Agent mode: The AI autonomously edits multiple files. You describe a task, it plans, applies changes, and you review. No more "here's the code, copy it in." It reads, edits, saves. For large refactors or multi-file changes, this is transformative.
  • Cmd+K inline editing: Select code, hit Cmd+K, describe the change. The AI edits in place. Great for localized fixes: "add error handling here," "extract this to a helper."
  • Chat panel with codebase context: The AI indexes your project. @-mentions let you pin context: @file, @folder, @web, @docs. You're not pasting file contents manually—you're wiring the model's attention.
  • Tab completion (predictive next-edit): Ghost text suggests the next edit as you type. Sometimes it's eerie; sometimes it's wrong. You develop a feel for when to accept vs. ignore.
  • MCP integration: Connect MCP servers so the AI can call tools—database queries, Notion docs, deployment status. Your IDE becomes a control center.
  • Rules files (.cursor/rules/): Project-specific AI behavior. "Use existing utility functions." "Don't add new dependencies." "Follow our error-handling pattern." The AI reads these automatically.

How it uses models: Cursor supports Claude, GPT-4, and others. You pick in settings. Agent mode tends to use stronger models; inline completion might use faster ones. Cost is per-request; teams often track usage.

flowchart LR
    subgraph cursorFlow["Cursor AI Workflow"]
        userPrompt[User Prompt]
        rules[Rules Files]
        context[Context]
        model[LLM]
        edits[Edits Applied]
        userPrompt --> model
        rules --> model
        context --> model
        model --> edits
    end

Tips for effective use:

  • Write clear rules. Vague rules = inconsistent behavior.
  • Use @context liberally: @Folder for scope, @file for precision.
  • Break tasks into steps. "Refactor the auth module" is too big. "Extract JWT validation to a separate function in auth.py" is actionable.
  • Review Agent mode changes. It gets things wrong. Diff before committing.

Why This Matters in Production: At Maersk, we added a .cursor/rules file describing our guardrail patterns, tool-calling conventions, and error-handling style. New engineers got consistent suggestions; the AI stopped proposing patterns we'd rejected.

Aha Moment: Cursor's power scales with how much context you give it. A bare prompt gets a generic answer. @Folder + @rules + "use the existing validate_booking function" gets production-ready code.


3. Claude Code — Terminal-Based AI Agent

Interview Insight: Claude Code is less common in interviews than Cursor, but mentioning it shows breadth. "I use Claude Code for large refactors and exploring unfamiliar codebases—it runs in the terminal, can execute commands, and has extended thinking for complex planning" signals you've tried the full toolchain.

Claude Code is Anthropic's terminal-based AI coding agent. You run it in your terminal. It reads your files, runs commands, searches code, uses git. No IDE required—just you, the shell, and an AI that can actually do things.

Agentic workflow: You describe what you want. It plans (sometimes with extended thinking for complex tasks), proposes steps, and executes. It can run grep, git diff, pip install, even pytest. You approve or reject. For debugging a failing test, you paste the error; it traces through the stack, edits the code, re-runs until it passes.

Best for: Large refactors across many files, debugging unfamiliar code, exploring a new repo. The terminal context means it sees your actual environment—paths, env vars, installed packages.

CLAUDE.md: Project context file. Like Cursor rules but for Claude Code. Describe your stack, conventions, and "do this / don't do that." Claude reads it at the start of a session.

MCP support: Claude Code can connect to MCP servers. Notion, databases, custom tools—same protocol, different host.

Why This Matters in Production: When onboarding to a new codebase at Maersk, Claude Code helped map dependencies, trace call graphs, and identify where guardrails were enforced. "Explain this module and suggest where we'd add a new tool" got a coherent walkthrough in minutes.

Aha Moment: Claude Code vs. Cursor: Cursor is IDE-centric—you're editing in place. Claude Code is terminal-centric—it runs commands. Use Cursor for day-to-day coding; Claude Code when you need to explore or run long command sequences.


4. GitHub Copilot

Interview Insight: Copilot is ubiquitous. The question is whether you can articulate how it differs from Cursor and when you'd choose each. "Copilot is completion-focused; Cursor is agent-focused" is the sound bite.

GitHub Copilot offers inline completion (ghost text), Copilot Chat, and workspace context. It's integrated into VS Code, JetBrains, and other IDEs. The core loop: you type, it suggests; you accept or ignore.

Features:

  • Inline completion: Predicts the next lines as you type. Good for boilerplate, tests, repetitive patterns.
  • Copilot Chat: Chat interface with codebase context. Similar to Cursor Chat but traditionally more completion-oriented.
  • Workspace context: @-mentions for files and folders. Recent updates add more agent-like behavior.
  • Copilot Workspace: Multi-file tasks. Describe a feature, get a plan and implementation across files.
  • Agent mode in VS Code: Newer; brings agentic editing closer to Cursor.

How it differs from Cursor: Copilot started with completion—fill in the next token. Cursor started with chat and agentic editing. The lines are blurring: Copilot has Workspace and agent mode; Cursor has Tab completion. But Cursor's DNA is "AI that edits for you"; Copilot's is "AI that suggests while you type." For rapid iteration and multi-file refactors, Cursor's Agent mode still leads. For steady coding with minimal context switches, Copilot's inline flow is smooth.

Why This Matters in Production: Teams often use both—Copilot for flow-state coding, Cursor for complex tasks. Or standardize on one based on team preference. The interview answer: "I've used both; I prefer Cursor for agentic workflows and Copilot for inline completion when I'm in the zone."

Aha Moment: Don't trash Copilot. It's Microsoft-backed, widely adopted, and improving. The strong answer acknowledges both and articulates a choice based on task type.


5. MCP in IDEs — Context and Tools

Interview Insight: MCP in the IDE context is about connecting your development environment to your ecosystem. "I built an MCP server that exposes our deployment status to Cursor" is a standout answer.

MCP (Model Context Protocol) lets IDEs and AI tools connect to external data and tools. Cursor, Claude Code, and Claude Desktop all support MCP. You run an MCP server (locally or remotely); the client discovers its tools and resources; the model can invoke them.

Example use cases:

  • Database MCP server: Expose schema, run read-only queries. "What tables does the booking service use?" The AI asks the server, gets the answer, uses it in its response.
  • Notion MCP server: Pull docs, runbooks, design specs. "What's our deployment runbook for the booking service?" The AI fetches from Notion.
  • Custom deployment MCP: Expose "get latest deployment status," "list recent builds," "trigger rollback." When debugging, the AI has production context.

How it works: MCP servers expose tools (callable functions) and resources (read-only data). The host (Cursor, Claude Desktop) connects to the server, lists available tools/resources, and passes model requests to the server. The server executes and returns. The model never runs code directly—it requests; the server runs.
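The host/server contract described above can be sketched in a few lines. This is an illustrative simulation of the dispatch loop only, not the real MCP wire protocol (which uses JSON-RPC over stdio or HTTP); the tool names and data here are hypothetical.

```python
# Toy simulation of the MCP host/server contract: the model requests a
# tool call by name; the host forwards it to the server, which executes
# and returns data. The model itself never runs code.

# "Server" side: tools registered by name (hypothetical examples).
TOOLS = {
    "get_schema": lambda table: f"{table}: id INT, ref TEXT, status TEXT",
    "get_runbook": lambda service: f"Runbook for {service}: 1. check deploys ...",
}

def list_tools():
    """Discovery: the host asks the server what it offers."""
    return sorted(TOOLS)

def call_tool(name, arg):
    """Execution: the host passes a model request through; the server runs it."""
    if name not in TOOLS:
        return f"error: unknown tool {name!r}"
    return TOOLS[name](arg)

# "Host" side: the model emitted a tool request; the host dispatches it.
print(list_tools())                        # ['get_runbook', 'get_schema']
print(call_tool("get_schema", "bookings"))
```

The separation is the point: discovery, then mediated execution, with the host able to refuse or require approval at the boundary.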

flowchart TB
    subgraph ide["IDE (Cursor, Claude Code)"]
        aiModel[AI Model]
        mcpClient[MCP Client]
    end
    subgraph mcpServers["MCP Servers"]
        dbServer[DB Schema Server]
        notionServer[Notion Docs Server]
        deployServer[Deployment Status Server]
    end
    aiModel --> mcpClient
    mcpClient --> dbServer
    mcpClient --> notionServer
    mcpClient --> deployServer

Why This Matters in Production: At Maersk, an MCP server exposing deployment status and Jira tickets would let the AI correlate "this PR just merged" with "this incident started" during debugging. The context window stays focused on code; external context comes via tools.

Aha Moment: MCP servers aren't just for production data. A dev-focused server could expose local docker status, test run results, or even "what's on my calendar for the next hour." The protocol is flexible; your imagination is the limit.


6. Prompt Engineering for Code Generation

Interview Insight: They want to know you understand that "good prompts" for code aren't magic phrases—they're structured context. File structure, types, patterns, constraints.

Context is everything: Include file structure (tree src/), relevant types, existing patterns. "Here's our Booking model and validate_booking; add a new field and validation" beats "add a field to our booking."

Constraints: "Use existing utility functions." "Don't add new dependencies." "Follow our error-handling pattern: log, return Result type, never raise." Constraints prevent the model from going rogue with pip install left-pad.
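One way the "return a Result type" convention might look in code. This shape is an assumption for illustration, not the project's actual type; having something like it in the codebase gives the model a concrete pattern to imitate instead of inventing its own error handling.

```python
# Hypothetical Result pattern for the "log, return Result, never raise"
# convention. Adapt to your codebase's actual type.
from dataclasses import dataclass
from typing import Generic, Optional, TypeVar

T = TypeVar("T")

@dataclass
class Result(Generic[T]):
    ok: bool
    value: Optional[T] = None
    error: Optional[str] = None

def validate_port_code(code: str) -> Result[str]:
    """Return a Result instead of raising, per the project convention."""
    if len(code) == 5 and code.isalpha():
        return Result(ok=True, value=code.upper())
    return Result(ok=False, error=f"invalid port code: {code!r}")

print(validate_port_code("deham").value)     # DEHAM
print(validate_port_code("dehamburg").ok)    # False (9 chars, not 5)
```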

Iterative refinement: Fix errors by providing error output back. "This test fails with X. Fix it." The model sees the failure, adjusts, tries again. Same loop you'd use with a junior dev.
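The feed-the-error-back loop is easy to mechanize. A minimal sketch with a stubbed `ask_model` standing in for a real LLM call; the prompt format and stop condition are assumptions:

```python
# Minimal "paste the error back" loop. `ask_model` is a stub standing in
# for a real LLM API call; here it "forgets" an import on the first try.
def ask_model(prompt: str) -> str:
    if "NameError" in prompt:
        return "import math\ndef area(r):\n    return math.pi * r * r"
    return "def area(r):\n    return math.pi * r * r"

def run_candidate(code: str):
    """Execute candidate code plus a check; return error text or None on success."""
    try:
        ns: dict = {}
        exec(code, ns)
        assert abs(ns["area"](1.0) - 3.14159) < 1e-3
        return None
    except Exception as e:
        return f"{type(e).__name__}: {e}"

prompt = "Write area(r) for a circle."
for attempt in range(3):
    code = ask_model(prompt)
    error = run_candidate(code)
    if error is None:
        break
    prompt += f"\nThis fails with: {error}. Fix it."  # feed the error back
```

Cursor's Agent mode and Claude Code run essentially this loop for you; knowing the shape helps you debug it when it stalls.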

@-references and rules: In Cursor, @file and @folder inject context. Rules files encode project conventions. The model doesn't guess—it reads.

Example .cursor/rules snippet:

# Project: Maersk AI Agent Platform
 
## Code Style
- Use Python 3.11+ type hints on all public functions
- Errors: use Result types, log with structlog, never bare exceptions
- Async: use asyncio, prefer httpx over requests
 
## Conventions
- Tool schemas: use Pydantic, strict mode, descriptions for every field
- Guardrails: all agent output goes through `validate_output` before returning
- No new dependencies without approval—check pyproject.toml first

Why This Matters in Production: Vague prompts produce generic code. Structured prompts with constraints and context produce code that fits your stack. The difference is the difference between "rewrite this" and "ship it."

Aha Moment: The best prompt for code isn't long—it's precise. "Add retries with exponential backoff to the booking API call" is better than "make the booking API more resilient."


7. AI for Debugging, Log Analysis, and Rapid Prototyping

Interview Insight: Miro wants "AI agents for autonomous debugging, analyzing logs, or rapidly prototyping." Have a concrete example ready—paste error, get diagnosis; describe feature, get prototype.

Debugging: Paste error logs, stack traces. "Why is this test failing?" The AI parses the trace, identifies the likely cause, suggests a fix. You don't have to grep through 50 files—the AI does it. For flaky tests: "This test fails 20% of the time. Look at the code and suggest what might be racy."

Log analysis: "Here are 500 lines of logs from a failed booking. Find the error." The AI scans for exceptions, correlation IDs, timing issues. Humans fatigue after a few hundred lines; models don't.
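For the 500-line case you can also pre-filter before pasting, so the model sees only the suspicious lines plus their correlated context. A stdlib sketch; the log format here (`timestamp LEVEL [correlation-id] message`) is a hypothetical example:

```python
import re

# Pre-filter logs before handing them to the model: keep every line that
# shares a correlation ID with an ERROR line, drop the rest.
# Assumed format: "<ts> <LEVEL> [<correlation-id>] <msg>".
LINE = re.compile(r"^\S+ (?P<level>\w+) \[(?P<cid>[\w-]+)\] (?P<msg>.*)$")

def triage(log_text: str) -> dict[str, list[str]]:
    """Group all lines that share a correlation ID with any ERROR line."""
    parsed = [m for line in log_text.splitlines() if (m := LINE.match(line))]
    bad_cids = {m["cid"] for m in parsed if m["level"] == "ERROR"}
    out: dict[str, list[str]] = {cid: [] for cid in bad_cids}
    for m in parsed:
        if m["cid"] in bad_cids:
            out[m["cid"]].append(f'{m["level"]}: {m["msg"]}')
    return out

logs = """\
2026-01-01T10:00:00 INFO [req-1] booking received
2026-01-01T10:00:01 INFO [req-2] booking received
2026-01-01T10:00:02 ERROR [req-2] extraction failed: port code missing
2026-01-01T10:00:03 INFO [req-1] booking confirmed"""

print(triage(logs))  # only req-2's lines survive the filter
```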

Rapid prototyping: "Build a FastAPI endpoint that accepts a booking reference, fetches from our API, and returns status. Use our existing BookingClient." You get a working implementation in minutes. Not production-ready—you'll add validation, tests, error handling—but the skeleton is there.

Understanding unfamiliar codebases: "Explain this module. What does it depend on? Where would we add a new tool?" The AI walks the call graph, summarizes, and suggests integration points.
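You can also build cheap static context yourself to paste alongside an "explain this module" prompt. A sketch using the stdlib `ast` module to map which functions call which; it only catches plain-name calls, and the sample source is hypothetical:

```python
import ast

# Cheap static "who calls what" map for a module. Only resolves plain-name
# calls (foo(x)), not attribute calls (obj.foo(x)). Sample source is made up.
SOURCE = '''
def validate(b): ...
def create_booking(b):
    validate(b)
    return save(b)
def save(b): ...
'''

def call_map(source: str) -> dict[str, list[str]]:
    tree = ast.parse(source)
    out: dict[str, list[str]] = {}
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            out[node.name] = [
                n.func.id for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            ]
    return out

print(call_map(SOURCE))
```

A map like this keeps the model's walkthrough grounded in the actual call graph instead of its guess at one.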

flowchart LR
    subgraph debugFlow["Debugging Workflow"]
        error[Error/Logs]
        ai[AI Analysis]
        fix[Suggested Fix]
        user[User Review]
        error --> ai
        ai --> fix
        fix --> user
    end

Why This Matters in Production: At Maersk, when the email booking agent started failing on certain port codes, we pasted the extraction logs and trace into Cursor. The AI identified that our chunking had split "Hamburg" across two chunks, causing the model to hallucinate the code. Fix: document-aware chunking for port tables. Time to diagnosis: minutes instead of hours.

Aha Moment: The AI doesn't replace your judgment—it extends your reach. You still decide if the fix is correct. You still understand the system. The AI handles the scanning and pattern-matching so you can focus on the decision.


8. Building Custom MCP Servers for Development

Interview Insight: "I built an MCP server that..." is a powerful opener. It shows you understand the protocol, the value of tool-augmented context, and when custom tooling pays off.

When custom MCP servers are worth building: When the AI repeatedly needs data that isn't in the codebase—deployment status, Jira tickets, runbooks, monitoring dashboards. If you're copy-pasting the same data into the chat every day, automate it.

Example: MCP server for Jira + deployment status

# FastMCP example - development workflow MCP server
from fastmcp import FastMCP
 
mcp = FastMCP("dev-workflow")
 
@mcp.tool()
async def get_jira_tickets(jql: str = "assignee = currentUser() AND status != Done") -> str:
    """Fetch Jira tickets matching the given JQL. Default: my open tickets."""
    # Call Jira API, return formatted list
    ...
 
@mcp.tool()
async def get_deployment_status(service: str) -> str:
    """Get latest deployment status for a service (e.g. booking-api, agent-platform)."""
    # Call your deployment API
    ...
 
@mcp.tool()
async def get_recent_errors(service: str, hours: int = 1) -> str:
    """Fetch recent errors from monitoring for a service."""
    ...

FastMCP Python SDK: The FastMCP API—bundled in the official mcp Python SDK, and also available as the standalone fastmcp package—makes it trivial. Define tools with decorators, run with mcp run. The server exposes them; any MCP client can connect.

Cursor MCP config:

// .cursor/mcp.json or Cursor settings
{
  "mcpServers": {
    "dev-workflow": {
      "command": "python",
      "args": ["-m", "dev_mcp_server"],
      "env": {}
    }
  }
}

Why This Matters in Production: At Maersk, a custom MCP server exposing "current deployment of booking-agent," "open incidents," and "recent guardrail triggers" would let the AI correlate changes with incidents during debugging. Worth a day of build time for the time saved in every incident.

Aha Moment: Start small. One tool: "get deployment status." If you use it, add more. Don't build a 20-tool monster before you've validated that one tool gets used.


9. Limitations and When AI Tools Fail

Interview Insight: A senior engineer can articulate both the power and the limits. "AI tools are amazing but..." shows you've thought critically.

Hallucinated APIs and outdated patterns: Models train on public code. Internal APIs, newer library versions, or custom patterns may not exist in training data. The model invents. Always verify: run the code, check the docs, don't trust generated imports blindly.

Security risks: Leaked secrets (API keys in prompts), insecure code (SQL injection, eval of user input). Never paste production secrets into any AI tool. Use code scanning. Review security-critical code by hand.
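The injection failure mode is worth having as muscle memory in review. A stdlib sqlite3 sketch contrasting the pattern models often emit with the parameterized form; the table and data are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookings (ref TEXT, status TEXT)")
conn.execute("INSERT INTO bookings VALUES ('BK-1', 'confirmed')")

user_input = "BK-1' OR '1'='1"  # attacker-controlled value

# BAD: the pattern models often generate — string interpolation.
# The injected OR clause makes the WHERE match every row in the table.
unsafe = f"SELECT ref FROM bookings WHERE ref = '{user_input}'"
leaked = conn.execute(unsafe).fetchall()  # returns rows it never should

# GOOD: parameterized query — the driver treats the value as a literal.
safe = conn.execute(
    "SELECT ref FROM bookings WHERE ref = ?", (user_input,)
).fetchall()  # the literal string matches nothing -> []
```

Reviewing generated DB code, the check is mechanical: any query built with f-strings or `+` on user input gets rewritten with placeholders.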

Over-reliance and loss of fundamentals: If you stop understanding the code the AI writes, you've lost control. You must be able to debug, refactor, and explain. AI accelerates; it doesn't replace comprehension.

When NOT to use AI:

  • Security-critical code (auth, crypto, payments)
  • Novel algorithms or complex architecture decisions
  • Code that has subtle correctness requirements (e.g. financial calculations)
  • When the model's suggestions are consistently wrong—sometimes your prompt or context is the issue, but sometimes the task is beyond current models

Context window limits: Large codebases don't fit. You rely on retrieval, @folder scoping, or breaking the task into smaller chunks. "Refactor the entire repo" fails. "Refactor the auth module" might work.
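Scoping context to a budget can even be automated crudely. A sketch of a naive packer that picks which files fit; the 4-characters-per-token ratio is a rough heuristic (a real tokenizer would be more accurate), and the file names are hypothetical:

```python
# Naive context-budget packer: include the smallest files first until the
# token budget is spent. Assumes ~4 characters per token (rough heuristic).
def fits_in_budget(files: dict[str, int], budget_tokens: int) -> list[str]:
    """files maps path -> size in characters; returns paths that fit."""
    chosen, used = [], 0
    for path, chars in sorted(files.items(), key=lambda kv: kv[1]):
        tokens = chars // 4
        if used + tokens <= budget_tokens:
            chosen.append(path)
            used += tokens
    return chosen

repo = {
    "auth.py": 8_000,
    "main.py": 20_000,
    "models.py": 4_000,
    "huge_generated.py": 900_000,  # generated code: never worth the budget
}
print(fits_in_budget(repo, budget_tokens=10_000))
```

In practice tools like Cursor do this with retrieval and relevance ranking rather than size, but the constraint is the same: something gets left out, so scope deliberately.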

flowchart TD
    task[New Task] --> checkSecurity{Security-critical?}
    checkSecurity -->|Yes| noAI[Do not use AI]
    checkSecurity -->|No| checkNovel{Novel algorithm or complex architecture?}
    checkNovel -->|Yes| noAI
    checkNovel -->|No| useAI[Use AI with review]
    useAI --> review[Human review required]

Why This Matters in Production: We caught a generated SQL query that would have been vulnerable to injection—the model had seen similar patterns and didn't parameterize. Automated scanning flagged it, but human review before deployment is non-negotiable.

Aha Moment: The best practitioners use AI for 80% of the work and human judgment for the 20% that matters. Knowing which 20% is the skill.


10. Interview Angle — How to Talk About AI Tools

Interview Insight: "I use Cursor" = baseline. "I use Cursor with custom MCP servers and Rules files; I've configured our project conventions so the AI produces code that passes our lint and matches our guardrail patterns" = standout.

Demonstrate productivity gains: "I used to spend an hour debugging extraction failures; with Cursor and log paste I'm down to 10 minutes." "Prototyping a new tool used to take a day; with agent mode it's an afternoon." Quantify when you can.

Show deep understanding: Not "I use autocomplete." But "I use Cursor's Agent mode for multi-file refactors, Cmd+K for localized edits, and I've added an MCP server so the AI can query our deployment status when I'm debugging. Our Rules file encodes our guardrail and tool-calling conventions."

Differentiate from surface-level usage: The interviewer has heard "I use Copilot" a hundred times. They want to know: Do you understand agentic vs. completion? Have you customized the workflow? Do you know when AI helps and when it doesn't?

Phrasing that works:

  • "At Maersk, we use Cursor with project-specific rules so the AI suggests code that matches our guardrail patterns."
  • "I built a small MCP server that exposes our deployment pipeline—when I'm debugging an incident, the AI can correlate recent deploys with error spikes without me switching context."
  • "For rapid prototyping, I describe the feature and get a working skeleton in minutes; then I add tests, validation, and error handling. AI for speed, human for correctness."

Why This Matters in Production: Your interview answer is a proxy for how you'll work on their team. If you can articulate a sophisticated AI workflow, they infer you'll bring that same rigor to their codebase.

Aha Moment: The goal isn't to name-drop every tool. It's to show you've thought about the workflow, the integration, and the limits. One well-told story beats a list of features.


Code Examples

.cursor/rules File (Project Conventions)

# Maersk AI Agent Platform — Cursor Rules
 
## Stack
- Python 3.11+, FastAPI, LangGraph
- Azure OpenAI primary, fallback to Anthropic
- Qdrant for vector store
 
## Code Style
- Type hints on all public functions
- Use Result types (or union with None) for operations that can fail
- structlog for logging, never print()
- Async with asyncio, httpx for HTTP
 
## Tool and Agent Conventions
- Tool schemas: Pydantic with strict mode, description on every field
- All tool outputs go through guardrail validation before returning to user
- Idempotency keys for any action that modifies state
- Never add new pip dependencies without checking pyproject.toml
 
## Testing
- pytest for unit and integration
- Mock LLM calls in tests; use real calls only in eval suite
- Golden scenarios in tests/evals/

CLAUDE.md for Claude Code

# Project: Maersk Email Booking Agent
 
## What this is
FastAPI service that processes email booking requests via LLM extraction + tool calls.
Part of the Enterprise AI Agent Platform.
 
## Key files
- `app/main.py` - FastAPI app, routes
- `app/agents/booking_agent.py` - LangGraph agent definition
- `app/tools/` - Tool implementations (create_booking, search_sailings, etc.)
- `app/guardrails/` - Input/output validation
- `tests/` - Unit and integration tests
- `tests/evals/` - Golden scenario evals
 
## Conventions
- Tools return dicts; agent formats for user
- Guardrails run after every agent response
- Use existing `BookingClient` for API calls—don't create new clients
- Port codes and commodity codes from RAG; never hardcode

FastMCP Server for Development (Minimal)

# dev_mcp_server.py
from fastmcp import FastMCP
 
mcp = FastMCP("maersk-dev")
 
@mcp.tool()
def get_deployment_status(service: str = "booking-agent") -> str:
    """Get latest deployment status. Services: booking-agent, agent-platform, extraction-api."""
    # In production: call your deployment API
    return f"Latest deployment for {service}: v2.3.1, deployed 2 hours ago, healthy."
 
@mcp.tool()
def get_open_incidents() -> str:
    """List currently open incidents from PagerDuty or equivalent."""
    return "No open incidents."
 
@mcp.resource("uri://maersk/runbook/booking-agent")
def booking_agent_runbook() -> str:
    """Runbook for booking-agent service."""
    return """
    ## Booking Agent Runbook
    1. Check deployment status: get_deployment_status('booking-agent')
    2. Recent errors: check CloudWatch / App Insights for booking-agent
    3. Common issues: extraction failures -> check RAG index, guardrail triggers -> check logs
    """

Conversational Interview Q&A

Q1: "How do you use AI in your development workflow?"

Weak answer: "I use Copilot for autocomplete and sometimes ChatGPT when I'm stuck."

Strong answer: "I use Cursor daily—Agent mode for multi-file refactors and Cmd+K for localized edits. We have a .cursor/rules file that encodes our guardrail patterns and tool conventions at Maersk, so the AI produces code that fits our stack. For debugging, I paste error logs and stack traces; the AI traces through and suggests fixes. I also built a small MCP server that exposes our deployment status—when I'm debugging an incident, the AI can correlate recent deploys without me switching tabs. I still review everything; AI accelerates, I validate."


Q2: "What's the difference between Cursor and Copilot?"

Weak answer: "Cursor is more AI-focused. Copilot is from GitHub."

Strong answer: "Cursor is agent-centric—the AI plans and applies edits across files. Copilot is completion-centric—it suggests the next lines as you type. Both have evolved—Copilot has Workspace and agent mode now—but Cursor's DNA is 'AI that edits for you' and Copilot's is 'AI that suggests while you type.' I use Cursor for complex tasks—refactors, debugging, exploring codebases. For steady coding where I'm in flow state, Copilot's inline completion is smooth. At Maersk we standardized on Cursor for the agent platform work because of the multi-file editing and MCP integration."


Q3: "Have you used AI for debugging? Give an example."

Weak answer: "Yeah, I paste errors into ChatGPT sometimes."

Strong answer: "At Maersk, our email booking agent started failing extraction for certain port codes. I pasted the extraction logs and the trace into Cursor. The AI identified that our chunking strategy was splitting port table rows across chunks—'Hamburg' and its code were in different chunks, so the model hallucinated. We switched to document-aware chunking for port tables, keeping rows intact. Time to diagnosis: about 10 minutes. Before AI-assisted debugging, I'd have grepped through chunks manually for an hour."


Q4: "What are the limitations of AI coding tools? When wouldn't you use them?"

Weak answer: "They sometimes get things wrong. I don't use them for important stuff."

Strong answer: "Three limitations stand out. First, hallucinated APIs—models invent methods or parameters that don't exist in our internal or newer libraries. I always verify generated imports and calls against our actual codebase. Second, security—I never paste production secrets, and I review security-critical code by hand. We caught a generated SQL query that would have been injectable; the model hadn't parameterized. Third, over-reliance—I need to understand the code. If I'm just accepting changes blindly, I've lost control. I use AI for the 80% that's repetitive; the 20% that's architecture, trade-offs, and security gets human judgment."


Q5: "Miro asks for AI-First Proficiency—how would you demonstrate that?"

Weak answer: "I use AI tools a lot. ChatGPT, Copilot, that kind of thing."

Strong answer: "I'd demonstrate it in three ways. First, workflow: I've integrated AI into my whole loop (not just occasional ChatGPT): Cursor for editing, agent mode for refactors, MCP for context. Second, customization: we have Rules files at Maersk so the AI knows our conventions; I've built an MCP server for deployment status. Third, judgment: I know when AI helps and when it doesn't—I don't use it for security-critical or novel architecture, and I always review. That combination—workflow integration, customization, and critical awareness—is what I think they mean by AI-first proficiency."


Q6: "How would you use MCP in a development context?"

Weak answer: "MCP connects tools to AI. You can add database or API tools."

Strong answer: "In development, MCP lets the AI pull context that isn't in the codebase. At Maersk, I'd run an MCP server that exposes: deployment status for our services, open Jira tickets, maybe recent errors from our monitoring. When I'm debugging an incident in Cursor, I ask 'what changed in the last hour?' and the AI can query the deployment server and Jira—instead of me tabbing through five systems. For onboarding, a runbook resource would let the AI answer 'how do we deploy the booking agent?' from our actual docs. The key is identifying the data you paste into chat repeatedly and automating that."


Q7: "What's your approach to prompt engineering when generating code?"

Weak answer: "I try to be clear and give examples."

Strong answer: "Context and constraints. I include file structure, relevant types, and existing patterns—'here's our Booking model and validate_booking; add a new field and validation.' I add constraints: use existing utilities, no new dependencies, follow our error-handling pattern. In Cursor, I use @file and @folder to scope context and a Rules file for project conventions. When something fails, I paste the error back—'this test fails with X, fix it'—and iterate. The best prompts aren't long; they're precise. 'Add retries with exponential backoff to the booking API call' beats 'make it more resilient.'"


From Your Experience (Maersk Prompts)

Prompt 1 – AI-assisted development for the email booking agent:
"At Maersk we built the AI email booking automation with Cursor. Describe how you used AI coding tools during development—refactoring extraction logic, debugging failed extractions, adding new tools. What Rules or context did you provide? What would you do differently?"

Prompt 2 – Custom MCP for the Enterprise AI Agent Platform:
"The Enterprise AI Agent Platform at Maersk has multiple agents, guardrails, and tool integrations. If you were to build a custom MCP server to support development, what would it expose? Deployment status? Guardrail trigger history? Agent eval results? Walk through one tool you'd implement and why it would save time."

Prompt 3 – Demonstrating AI-First Proficiency in an interview:
"Miro asks for engineers who've moved beyond simple chat prompts to using AI for autonomous debugging, log analysis, and rapid prototyping. Draft a 2-minute answer describing your AI-assisted development workflow at Maersk. Include: tools, customization (Rules, MCP), concrete example of debugging or prototyping, and when you chose not to use AI."


Quick Fire Round

Q: What is Cursor's Agent mode?
A: Autonomous multi-file editing. You describe a task; the AI plans, edits, and applies changes across files.

Q: Cmd+K in Cursor does what?
A: Inline editing. Select code, describe the change, AI edits in place.

Q: What are .cursor/rules files for?
A: Project-specific AI behavior—conventions, constraints, "do this / don't do that." Read automatically by Cursor.

Q: What is Claude Code?
A: Anthropic's terminal-based AI coding agent. Runs in shell, reads files, runs commands, uses git. Best for refactors and exploring codebases.

Q: CLAUDE.md is used by what?
A: Claude Code. Project context file—stack, conventions, key files.

Q: Copilot vs. Cursor in one sentence?
A: Copilot is completion-focused (suggest next lines); Cursor is agent-focused (AI plans and applies edits).

Q: What does MCP let you do in an IDE?
A: Connect external tools and data—databases, Notion, deployment status—so the AI can query them during development.

Q: When should you NOT use AI for code?
A: Security-critical code (auth, crypto), novel algorithms, or when you'd be accepting changes blindly without understanding.

Q: What's the main risk of AI-generated code?
A: Hallucinated APIs, insecure patterns (e.g. SQL injection), and loss of comprehension if you over-rely.

Q: What's FastMCP?
A: Python SDK for building MCP servers. Decorators for tools/resources, easy to run and plug into Cursor/Claude.

Q: What @-mentions does Cursor support?
A: @file, @folder, @web, @docs—pin context so the AI sees specific files, directories, or web content.

Q: "AI-first" development means what?
A: AI is your pair programmer for everything—not a tool you sometimes use. Workflows integrate AI for the bulk of the work; human judgment for the critical 20%.

Q: How do you fix AI-generated code that fails?
A: Paste the error/output back into the prompt. Iterate: "this test fails with X, fix it." Same loop as with a junior dev.

Q: What's a good first MCP tool to build?
A: One that returns data you paste into chat repeatedly—e.g. deployment status, open tickets. Validate usage before expanding.


Key Takeaways (Cheat Sheet)

AI-first philosophy: AI as pair programmer for everything. Workflows over one-off prompts. Human judgment for the critical 20%.
Cursor: Agent mode (autonomous multi-file), Cmd+K (inline), Chat + @context, Rules files, MCP. Agent-focused.
Claude Code: Terminal-based, runs commands, best for refactors and exploration. CLAUDE.md for context.
Copilot: Inline completion, Chat, Workspace. Completion-focused; evolving toward agent.
MCP in IDEs: Connect DB, Notion, deployment status. AI queries via tools; you get context without tab-switching.
Prompt engineering for code: Context (structure, types), constraints (no new deps, use existing utils), iterative refinement with errors.
AI for debugging: Paste logs/traces; AI diagnoses and suggests fixes. Rapid prototyping: describe feature, get skeleton.
Custom MCP servers: Worth it when you paste the same data repeatedly. FastMCP for quick builds. Start with one tool.
Limitations: Hallucinated APIs, security risks, over-reliance. Skip AI for security-critical and novel architecture.
Interview angle: Demonstrate workflow, customization (Rules, MCP), concrete examples, and when you don't use AI.

Further Reading