Senior AI Engineer — Interview Preparation Guide
19 sessions. Everything you need. Each one is self-contained—read it, practice the interview Q&A out loud, and you're ready. No external searching required.
These sessions cover every topic across your 6 target job descriptions (Implement Consulting, team.blue, emagine, Miro, TDC, Zendesk). Sessions are ordered from foundational to advanced, 2-3 hours of focused study each.
How to use this guide:
- Go through sessions in order—later sessions build on earlier ones
- Each session has analogies, diagrams, and code examples to make concepts stick
- Practice the Interview Q&A out loud—read the "Weak answer" and "Strong answer" for each question
- The "From Your Experience" section maps each topic to your Maersk work—prepare STAR stories
- The "Quick Fire Round" at the end of each session is for rapid revision before interviews
All session files are in the preparation/ folder.
Session 1: LLM Fundamentals — How Large Language Models Actually Work
File: session-01-llm-fundamentals.md
Transformers, attention (the cocktail party analogy), tokenization (LEGO bricks), embeddings (GPS coordinates), inference parameters (the radio dial), context windows (desk size), model families and comparison, hallucinations (the confident student), fine-tuning vs prompt engineering vs RAG.
Session 2: Prompt Engineering & Context Engineering
File: session-02-prompt-engineering.md
System prompts, few-shot prompting, chain-of-thought, ReAct pattern, structured outputs (JSON mode, Pydantic), prompt templating (Jinja2), context window management, prompt injection attacks and defenses, temperature tuning for different tasks.
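The templating idea in this session can be sketched with nothing but the standard library. The session itself uses Jinja2; `string.Template` stands in here so the example runs dependency-free, and the role/question values are invented for illustration:

```python
# Few-shot prompt assembly with the stdlib's string.Template.
# (Sessions cover Jinja2; this stand-in keeps the sketch dependency-free.)
from string import Template

SYSTEM = Template(
    "You are a $role. Answer in $language.\n\n"
    "Examples:\n$examples\n"
    "Question: $question\nAnswer:"
)

def build_prompt(role: str, language: str,
                 shots: list[tuple[str, str]], question: str) -> str:
    """Render a few-shot prompt; `shots` is a list of (question, answer) pairs."""
    examples = "\n".join(f"Q: {q}\nA: {a}" for q, a in shots)
    return SYSTEM.substitute(
        role=role, language=language, examples=examples, question=question
    )

prompt = build_prompt(
    role="support agent",
    language="English",
    shots=[("Reset password?", "Use the account settings page.")],
    question="How do I change my email?",
)
print(prompt)
```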
Session 3: RAG Fundamentals — The One Topic That Lands You the Job
File: session-03-rag-fundamentals.md
RAG architecture (indexing, retrieval, generation), chunking strategies (fixed-size, recursive, semantic, document-aware), embedding models, vector stores (Qdrant, Pinecone, Chroma, pgvector), similarity search, metadata filtering, document loaders.
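The simplest of these chunking strategies, fixed-size with overlap, fits in a few lines of plain Python (library splitters add recursion and separator awareness on top):

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Fixed-size character chunking with overlap: overlap keeps context
    that straddles a boundary retrievable from both neighbouring chunks."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "x" * 500
chunks = chunk_text(doc, size=200, overlap=50)
print(len(chunks), [len(c) for c in chunks])  # 3 chunks of 200 chars each
```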
Session 4: Advanced RAG Patterns
File: session-04-advanced-rag.md
Hybrid search (BM25 + vector), reranking (cross-encoders), query transformation (HyDE, multi-query), parent-document retrieval, contextual compression, agentic RAG, Self-RAG, Corrective RAG (CRAG), GraphRAG.
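A common way to fuse the BM25 and vector rankings in hybrid search is Reciprocal Rank Fusion, which needs no score normalization at all; a minimal sketch (the document IDs are toy values, and k=60 follows the original RRF paper's default):

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: merge keyword (BM25) and vector rankings
    without tuning score weights. Each doc scores sum(1 / (k + rank))."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["d1", "d2", "d3"]     # keyword ranking
vector_hits = ["d3", "d1", "d4"]   # semantic ranking
print(rrf_fuse([bm25_hits, vector_hits]))  # docs in both lists rise to the top
```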
Session 5: LangChain, LangGraph & Agent Frameworks
File: session-05-langchain-langgraph.md
LangChain core (LCEL, Runnables, pipe operator), LangGraph deep dive (StateGraph, nodes, edges, conditional routing, cycles), tool calling flows, checkpointing and persistence, subgraphs, LlamaIndex, CrewAI, Agno, framework comparison and decision matrix.
Session 6: Multi-Agent Systems & Orchestration
File: session-06-multi-agent-systems.md
Architecture patterns (supervisor, sequential pipeline, hierarchical, parallel fan-out, debate/consensus, swarm), Anthropic's building-block patterns, inter-agent communication, task decomposition, MCP (Model Context Protocol), A2A protocol, handoff patterns, production challenges (race conditions, infinite loops, cost explosion).
Session 7: Agent Memory, State & Planning
File: session-07-agent-memory-state.md
Memory types (short-term, buffer, summary, long-term, episodic, semantic), state management in LangGraph (TypedDict, Pydantic, reducers), checkpointing (MemorySaver, PostgresSaver), human-in-the-loop with interrupts, planning patterns (ReAct, Plan-and-Execute, Tree-of-Thought, Reflection, LATS), context window management.
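The buffer-plus-summary memory idea can be sketched in a few lines; here truncation stands in for the LLM call that would actually write the rolling summary:

```python
# Short-term buffer memory with a crude "summarize on eviction" step.
class BufferMemory:
    def __init__(self, max_turns: int = 4):
        self.max_turns = max_turns
        self.summary = ""
        self.turns: list[str] = []

    def add(self, turn: str) -> None:
        self.turns.append(turn)
        if len(self.turns) > self.max_turns:
            evicted = self.turns.pop(0)
            self.summary += evicted[:20] + "... "  # stand-in for an LLM summary

    def context(self) -> str:
        """What gets prepended to the next prompt: summary + recent turns."""
        prefix = f"Summary so far: {self.summary}\n" if self.summary else ""
        return prefix + "\n".join(self.turns)

mem = BufferMemory(max_turns=2)
for t in ["user: hi", "ai: hello", "user: what's RAG?"]:
    mem.add(t)
print(mem.context())
```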
Session 8: Tool Integration, Function Calling & MCP
File: session-08-tool-integration-mcp.md
OpenAI function calling, Anthropic tool use, tool schema design, MCP deep dive (host-client-server architecture, stdio vs HTTP, tools/resources/prompts), building MCP servers and clients, tool orchestration patterns, secure tool execution, error handling (retries, circuit breakers, fallbacks), tool selection strategies.
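The core loop of function calling is small: the model never runs code, it emits a tool name plus JSON arguments against a schema you published, and your application executes them. A minimal sketch with a hypothetical `get_weather` tool (the schema shape follows OpenAI's Chat Completions `tools` format):

```python
import json

# OpenAI-style tool schema the model sees.
GET_WEATHER_SCHEMA = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

def get_weather(city: str, unit: str = "celsius") -> str:
    return f"21 degrees {unit} in {city}"  # stubbed; a real tool calls an API

TOOLS = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Execute a model-emitted tool call: look up the function by name
    and unpack its JSON-encoded arguments."""
    fn = TOOLS[tool_call["name"]]
    return fn(**json.loads(tool_call["arguments"]))

result = dispatch({"name": "get_weather", "arguments": '{"city": "Copenhagen"}'})
print(result)
```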
Session 9: Guardrails, Safety & Responsible AI
File: session-09-guardrails-safety.md
Prompt injection (direct and indirect), jailbreaking, content filtering, guardrail frameworks (NeMo Guardrails, Guardrails AI), PII detection and masking, hallucination detection and grounding checks, rate limiting and access control, red teaming, OWASP Top 10 for LLMs.
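A first-pass PII masking step can be as simple as labelled regex substitution; a sketch with toy patterns only (real systems layer NER-based detection, e.g. Microsoft Presidio, on top, since regexes miss many formats):

```python
import re

# Minimal PII masking: emails plus 8-digit (Danish-style) phone numbers.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{8}\b"),
}

def mask_pii(text: str) -> str:
    """Replace each match with a typed placeholder before the text
    reaches the LLM or the logs."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(mask_pii("Contact jane.doe@example.com or 12345678."))
```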
Session 10: Evaluations — Online & Offline
File: session-10-evaluations.md
Why evals matter (evaluation-driven development), offline evaluation (golden datasets, benchmarks, regression tests, CI/CD integration), online evaluation (A/B testing, human feedback, shadow testing, canary deployments), RAG-specific metrics (faithfulness, relevancy, context precision/recall), evaluation frameworks (DeepEval, RAGAS, LangSmith, Phoenix), LLM-as-judge.
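Context precision and recall have simple set-based forms when you hold labelled relevant chunks; frameworks like RAGAS compute LLM-judged variants, so treat this as the idealised version:

```python
def context_precision(retrieved: list[str], relevant: set[str]) -> float:
    """Share of retrieved chunks that are actually relevant (noise check)."""
    if not retrieved:
        return 0.0
    return sum(c in relevant for c in retrieved) / len(retrieved)

def context_recall(retrieved: list[str], relevant: set[str]) -> float:
    """Share of relevant chunks the retriever actually surfaced."""
    if not relevant:
        return 0.0
    return len(relevant & set(retrieved)) / len(relevant)

retrieved = ["c1", "c2", "c3", "c4"]   # what the retriever returned
relevant = {"c1", "c3", "c5"}          # labelled ground truth
print(context_precision(retrieved, relevant))  # 0.5
print(context_recall(retrieved, relevant))
```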
Session 11: LLMOps, Observability & Cost Management
File: session-11-llmops-observability.md
MLflow for LLM tracking, LangSmith tracing, Phoenix (Arize), OpenTelemetry for LLM apps, cost tracking and optimization, prompt versioning, latency monitoring (p50/p95/p99), model performance dashboards, drift detection, incident response for AI systems.
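Worth being able to explain why dashboards track p95/p99 rather than averages: one slow outlier barely moves the mean but dominates the tail. A nearest-rank percentile sketch over invented latency numbers:

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile, the convention behind p50/p95/p99 panels.
    (Monitoring backends differ on interpolation; nearest-rank is the
    simplest common choice.)"""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Ten request latencies in ms, including one timeout-like outlier.
latencies_ms = [120, 95, 110, 300, 105, 98, 2500, 115, 102, 130]
mean = sum(latencies_ms) / len(latencies_ms)
print(f"mean={mean:.0f}ms p50={percentile(latencies_ms, 50)}ms "
      f"p95={percentile(latencies_ms, 95)}ms")
```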
Session 12: Prompt Management & Dataset Management
File: session-12-prompt-dataset-management.md
Prompt versioning and registries, prompt templating (Jinja2), A/B testing prompts, dataset curation for evaluation, synthetic data generation, golden dataset management, DSPy (programmatic prompt optimization), annotation workflows.
Session 13: Vector Databases & Embeddings Deep Dive
File: session-13-vector-databases.md
Embedding models (OpenAI, Cohere, open-source), vector indexing algorithms (HNSW, IVF, PQ), vector database comparison (Qdrant, Pinecone, Weaviate, Chroma, pgvector, Milvus), hybrid search implementation, multi-tenancy, scaling strategies.
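Underneath every vector index is the exact search that HNSW and IVF approximate; a brute-force cosine top-k in plain Python, with toy 2-D vectors standing in for real embeddings:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def top_k(query: list[float], index: dict[str, list[float]], k: int = 2) -> list[str]:
    """Exact nearest-neighbour search; HNSW and IVF trade a little
    recall for speed by approximating exactly this ranking."""
    ranked = sorted(index, key=lambda doc_id: cosine(query, index[doc_id]),
                    reverse=True)
    return ranked[:k]

# Toy 2-D "embeddings" in place of 1536-dim model output.
index = {"doc_a": [1.0, 0.0], "doc_b": [0.7, 0.7], "doc_c": [0.0, 1.0]}
print(top_k([0.9, 0.1], index))
```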
Session 14: Performance Optimization — Latency, Cost & Caching
File: session-14-performance-optimization.md
Model routing (small vs large models), semantic caching (GPTCache), token budget management, streaming responses, batching, quantization (GGUF, GPTQ, AWQ), model distillation, async execution, connection pooling, edge deployment (Ollama, vLLM).
Session 15: Production Deployment & Infrastructure for AI
File: session-15-production-deployment.md
Containerizing AI apps (Docker best practices), Kubernetes for AI workloads, GPU sizing, autoscaling strategies, CI/CD for AI (evals in pipeline), API design (FastAPI, streaming, health checks), reliability patterns (circuit breakers, retries, fallbacks), Infrastructure as Code (Terraform, Helm).
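A circuit breaker is one reliability pattern worth coding from memory: after N consecutive failures it opens and fails fast to a fallback instead of hammering a degraded provider. A stripped-down sketch (timers and the half-open probe state are omitted for brevity):

```python
# Circuit breaker: after max_failures consecutive errors, stop calling
# the provider and fail fast to a fallback.
class CircuitBreaker:
    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self) -> bool:
        return self.failures >= self.max_failures

    def call(self, fn, fallback):
        if self.open:
            return fallback()      # open: skip the provider entirely
        try:
            result = fn()
            self.failures = 0      # any success closes the breaker
            return result
        except Exception:
            self.failures += 1
            return fallback()

def flaky():
    raise TimeoutError("provider down")  # stand-in for a degraded LLM API

breaker = CircuitBreaker(max_failures=2)
answers = [breaker.call(flaky, lambda: "cached reply") for _ in range(3)]
print(answers, breaker.open)  # third call never touches the provider
```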
Session 16: System Design & Behavioral Prep
File: session-16-system-design-behavioral.md
How to structure a 45-minute AI system design interview (5 phases), practice problems (email extraction system, customer support chatbot, document Q&A, AI agent platform, code review assistant), key dimensions to address, 8 STAR behavioral stories, common behavioral questions.
Session 17: Fine-Tuning — PEFT, LoRA & Training Pipelines
File: session-17-fine-tuning.md
When fine-tuning beats prompt engineering + RAG, full fine-tuning vs PEFT (LoRA, Prefix Tuning, Prompt Tuning), LoRA mechanics (low-rank decomposition, rank selection, target modules, merge), QLoRA (4-bit base, fp16 adapters), Hugging Face ecosystem (PEFT, TRL, SFTTrainer), data requirements, training pipeline design, catastrophic forgetting, SFT vs DPO vs RLHF, cost and infrastructure, provider vs self-host fine-tuning.
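The headline LoRA numbers are easy to derive: instead of updating a full d x k weight matrix, you freeze it and train two low-rank factors B (d x r) and A (r x k), applying W + BA. A quick parameter-count check with typical 7B-model dimensions:

```python
def lora_params(d: int, k: int, r: int) -> tuple[int, int]:
    """Trainable-parameter counts: full fine-tuning updates all d*k
    weights; LoRA trains only d*r + r*k factor weights."""
    return d * k, d * r + r * k

# One 4096 x 4096 attention projection at rank 8.
full, lora = lora_params(4096, 4096, 8)
print(f"full: {full:,}  lora: {lora:,}  ratio: {full // lora}x")
```

At rank 8 the trainable parameters per projection drop by 256x, which is why QLoRA can fine-tune a 7B model on a single consumer GPU.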
Session 18: AI-Assisted Development — Cursor, Claude Code & Modern AI Workflows
File: session-18-ai-coding-tools.md
AI-first development philosophy, Cursor (agent mode, MCP integration, rules files, multi-file editing), Claude Code (terminal-based agent, CLAUDE.md, extended thinking), GitHub Copilot (vs Cursor), MCP in IDEs, prompt engineering for code generation, AI for debugging and log analysis, building custom MCP servers for development, limitations and failure modes, interview angle for AI tool proficiency.
Session 19: Technical Leadership for Lead AI Engineer Roles
File: session-19-technical-leadership.md
Setting technical direction and standards, running design reviews for AI systems, building reusable platform components, mentoring engineers into AI development, cross-functional collaboration, hiring AI engineering candidates, managing technical debt in fast-moving AI, communicating with non-technical stakeholders, incident playbooks for AI systems, 8 lead-level STAR stories, common lead interview questions with model answers.
Quick Reference: Skills Matrix Across Target Jobs
| Skill | Implement | team.blue | emagine | Miro | TDC | Zendesk | Session |
|---|---|---|---|---|---|---|---|
| LLM Fundamentals | yes | yes | yes | yes | yes | yes | 1 |
| Prompt Engineering | yes | yes | yes | yes | yes | yes | 2 |
| RAG | yes | yes | yes | nice-to-have | yes | yes | 3, 4 |
| LangChain/LangGraph | yes | yes | yes | nice-to-have | yes | yes | 5 |
| Multi-Agent | yes | yes | yes | yes | yes | yes | 6 |
| Memory & Planning | - | - | - | - | yes | yes | 7 |
| Tool Calling / MCP | yes | yes | - | - | yes | yes | 8 |
| Guardrails/Safety | - | yes | - | yes | yes | yes | 9 |
| Evaluations | - | - | yes | yes | yes | yes | 10 |
| LLMOps/Observability | yes | yes | - | yes | yes | yes | 11 |
| Prompt Management | - | - | - | - | yes | - | 12 |
| Vector DBs | - | yes | yes | nice-to-have | - | yes | 13 |
| Performance Opt. | - | yes | - | yes | yes | yes | 14 |
| Production Deploy | yes | yes | yes | yes | yes | yes | 15 |
| System Design | yes | yes | yes | yes | yes | yes | 16 |
| Fine-Tuning (PEFT/LoRA) | - | - | yes | - | - | yes | 17 |
| AI Coding Tools | - | - | - | yes | - | - | 18 |
| Technical Leadership | yes | - | - | - | yes | - | 19 |
Suggested Study Schedule
| Week | Sessions | Focus |
|---|---|---|
| Week 1 | 1, 2 | Foundations: LLMs and Prompting |
| Week 2 | 3, 4 | RAG: Fundamentals and Advanced |
| Week 3 | 5, 6 | Agents: Frameworks and Multi-Agent |
| Week 4 | 7, 8 | Agents: Memory and Tools |
| Week 5 | 9, 10 | Safety and Evaluations |
| Week 6 | 11, 12 | Operations: Observability and Management |
| Week 7 | 13, 14 | Depth: Vector DBs and Performance |
| Week 8 | 15, 16 | Production and System Design |
| Week 9 | 17, 18 | Fine-Tuning and AI Development Tools |
| Week 10 | 19 | Technical Leadership (for Lead roles) |
Short on time? Prioritize Sessions 3, 5, 6, 9, 10, and 16 -- these cover the topics that appear most frequently across all 6 job descriptions.
Targeting Lead roles? Add Session 19 to your must-do list. Practice the STAR stories out loud.
Day-before-interview refresh? Read the Quick Fire Round from each session you've studied. That's 10-15 flashcards per session, covering the essentials in minutes.