sharpbyte.dev
← Design guide
Interview ready · Design · Section 4

Agents & multi-agent

Twenty detailed scenarios on production agents: plan vs execute separation, sandboxed browsing and coding, multi-agent research with governance, supervisor–worker boundaries, stagnation detection, mature tool registries, secretless external APIs, tiered memory, durable workflows with compensation, interruption, HITL for irreversible actions, ERP/CRM reality, parallel fan-out, reliability metrics, typed error recovery, regulated finance guardrails, observability and replay, indirect injection defenses, and structured delegation—pitfalls and controls called out explicitly.

Interview stance. Agents are control loops with side effects. Lead with policy, blast radius, and auditability—sandboxes, allowlists, HITL—before naming orchestration frameworks. Interviewers reward honesty that prompt injection mitigations are partial, not magical.

51. How would you design a single autonomous agent that can browse the web, write code, and execute it to answer a question?

This is a safety-critical control loop. The exciting demo is “GPT browses and runs Python”; the production question is how you stop exfiltration, cryptomining, prompt injection from a random webpage, or an infinite loop burning five figures of tokens overnight.

Architecture split. A planner/model proposes structured actions; a separate policy & runtime service validates JSON tool calls against JSON Schema, rate limits, and risk class before anything executes. The LLM should never be the final authority on side effects.

Browsing. Fetch through an HTTP proxy with domain allowlists or reputation checks; cap response bytes; strip scripts; convert HTML to safe text; store raw captures for forensics gated by retention policy. Treat all page text as untrusted instructions even if it looks like trivia.

Code execution. Run inside isolated microVMs/containers: no network by default, CPU+RAM+time quotas, tmpfs scratch only, stdout/stderr size caps. If network egress is required, use explicit egress profiles (pypi mirror only, allowlisted APIs) per job class.

Progress & kill switches. Persist state after each tool call so operators can resume or cancel. Enforce max tool steps, max wall time, max dollars per session. Emit structured traces for compliance (“who executed what code on which dataset”).

Human expectations. When the agent cannot satisfy policy, answer partially and document blocked steps—better than silent failure or sneaky workarounds.

Agent tool loop

flowchart TB
  U[User goal] --> PL[Planner LLM]
  PL --> T{Tool?}
  T -->|browse| B[Sandbox browser]
  T -->|code| C[Sandbox Python]
  T -->|answer| A[Final response]
  B --> PL
  C --> PL
            

52. How would you architect a multi-agent system where agents collaborate to complete a complex research task?

Multi-agent is not “more prompts”—it is division of labor with coordination overhead. If you cannot articulate why multiple agents beat one orchestrated loop, default to single agent plus tools.

Roles. Example triad: Librarian (retrieval specialist), Analyst (synthesizer), Critic (adversarial checker for missing citations). Each gets a tighter system prompt and smaller toolset than a do-everything mega-agent.

Shared state. Use a schemaVersioned blackboard (e.g., JSON doc stored in Redis/Postgres) with sections for facts, uncertainties, and TODO list. Agents patch via atomic transactions to avoid lost updates.

Orchestration. Prefer explicit state machines or supervisor loops with termination conditions (“stop when each claim cites ≥1 source or budget exhausted”). Avoid unbounded peer chat—cost explodes.

Parallelism. Fan out independent retrieval tasks; gather through a aggregator that sorts evidence by reliability. Deterministic merge prevents contradictory storylines.

Evaluation. Multi-agent MTBF drops if any sub-agent spirals—monitor steps, rework rate, and human interventions per research task.

Example. Market research brief: Librarian pulls SEC filings + news; Analyst drafts; Critic searches for contradicting stats—supervisor halts after two critique rounds or 120k tokens.

53. How would you design the supervisor-worker agent pattern? What does the supervisor decide?

Workers should be dumb specialists—great at one tool domain, small context windows, cheap to replay. The supervisor owns global situational awareness: goals, constraints, sequencing, budgets, and veto power.

Supervisor responsibilities. (1) Decompose user intent into checklist; (2) pick worker + inputs; (3) evaluate worker JSON output against acceptance criteria; (4) decide retry vs escalate; (5) enforce compliance macros (“no outbound email until template reviewed”).

Guardrails. Workers never see enterprise-wide secrets—supervisor injects scoped short-lived tokens per task.

Failure handling. Supervisor maintains alternate strategies: if web worker is blocked, switch to internal knowledge worker; if coding worker times out, shrink dataset sample.

Why this pattern wins interviews. It shows you understand separation of concerns matching how real teams operate (lead + ICs).

54. How would you handle agent loops that get stuck or fail to make progress (infinite loops, repeated tool calls)?

Loops happen from ambiguous goals, brittle tools returning empty shells, or models optimistically repeating the same call hoping for different entropy. You need algorithmic guards, not “please stop looping” prompts.

State hashing. Fingerprint (tool name + canonical args + summary of observation). If repeated three times with identical fingerprint, break and branch to fallback strategy or ask user for disambiguation.

Budgets. Hard caps on identical tool invocations per session, max wall clock, max tokens, max cost USD. Exceeding a cap should emit structured incident codes support can search.

Progress heuristics. For retrieval tools, require new doc ids each round; for code execution, require reduction in error distance or new stdout hash; if metrics flatline, escalate.

User messaging. Offer partial results—“here is what we verified before stalling”—so session is not a total loss.

Telemetry. Loop detection signals often reveal broken tools or confusing UX earlier than user complaints.

55. How would you design a tool registry for agents? How does an agent discover and select the right tool?

Think of tools like internal microservices: typed contracts, ownership, SLOs, deprecation. Spraying 400 auto-generated OpenAPI operations into the prompt guarantees chaos.

Registry fields. Name, JSON Schema for inputs/outputs, human description, examples, latency class, auth scope, data classification, risk tier (read vs write vs pay). Version tools (`slack.postMessage.v2`).

Exposure control. Each workflow template exposes a subset (“customer support lite”) so agents cannot suddenly call `production_database.deleteRows`.

Discovery. Combine embedding similarity over descriptions with hard rules (block financial tools if tenant not in finance org). Provide retrieval over tool docs for planner, but final call still passes schema validation.

Testing. Contract tests per tool with golden args; CI blocks schema breaking changes unless version bumps.

56. How would you implement sandboxed code execution for an agent that runs Python or SQL?

Agents writing arbitrary code are one typo away from data exfiltration or destructive queries. Sandboxing is non-negotiable; “trust the model” is not a control.

Python. Use hardened containers or microVMs (gVisor/Firecracker class), disable host mount access, run as non-root, apply seccomp/AppArmor profiles, block raw sockets, cap threads, kill OOM offenders fast. Provide curated wheels mirror instead of open PyPI when possible.

Data access. Mount datasets read-only; if PII, use tokenized views. stdout/stderr streams pass through redaction filters looking for AWS keys or JWT patterns before returning text to the LLM.

SQL. Prefer a constrained query builder or AST validator over free-text SQL from the model. Route through a proxy that enforces SELECT-only, allowlisted views, row limits, timeouts, and uses read replicas. Inject LIMIT server-side.

Egress. Default deny network; if agent needs HTTP for a package, split into two jobs—download in trusted builder image, run offline analytic container.

Audit. Store code hash, dataset snapshot id, and output summary for each run for SOX/regulated customers.

Example. Analyst agent runs pandas on a parquet slice in a locked-down container; SQL questions never touch writer endpoints—only `v_support_readonly.customer_invoices` with server-enforced LIMIT 5000.

57. How would you design an agent that can safely interact with external APIs without leaking credentials?

Models memorize and regurgitate secrets if you place API keys in prompts or tool outputs. Architecture must ensure secrets stay in vault + gateway only.

Capability tokens. Tool definitions reference `stripe.charge_refund` by id; runtime resolves OAuth/ API key material using tenant id + user consent scopes, not model text.

Prompt hygiene. Scrub tool responses before returning to model if vendor echoes auth headers. Rotate keys when incidents occur; assume prompts logged somewhere.

Least privilege. Separate keys per integration per environment. Disable destructive scopes unless workflow explicitly elevates.

Human visibility. For refunds or money movement, require HITL even if model “knows” API—defense in depth.

58. How would you architect an agent memory system with short-term (conversation), mid-term (session), and long-term (persistent) memory?

Memory is where accidental PII retention happens—design tiers with explicit TTLs and legal bases for persistence.

Short-term (conversation) Raw recent messages in Redis with session TTL—fast but volatile. Include tool transcripts but redact secrets at write time.

Mid-term (session scratchpad) Structured notepad: checklist items, hypotheses, partial JSON—edited by supervisor. Survives longer than raw chat but still ephemeral per mission.

Long-term (persistent) User-approved preferences, project facts with citations, vector index of prior successful runs. Require opt-in, explain what is stored, support export/delete (GDPR).

Synchronization. Whenever summarizing short→long, attach provenance and confidence; never silently promote guesswork to long-term truth.

Regional residency. Long-term stores inherit the same data residency class as primary product data.

59. How would you design an agent that manages a complex multi-step workflow with branching, retries, and rollbacks?

LLMs excel at suggesting next steps but suck at being the source of truth for workflow state—use a durable workflow engine (Temporal, Step Functions, self-built saga coordinator).

Model role. Planner proposes transitions (“await approval”, “charge card”, “ship package”); engine checks preconditions, persists state, handles retries with backoff.

Branching. Represent as explicit DAG or state chart with guarded edges; dynamic planning can add nodes but must log rationale id for audit.

Compensation. For each side effect define undo: void authorization, cancel shipment, reverse ledger entry. Some effects cannot undo—mark as irreversible and force HITL.

Idempotency. Business keys on all external calls so retries do not double-charge when worker crashes mid-flight.

Observability. Visualize workflow instances for support; correlate LLM trace ids with workflow run ids.

LLM proposes, engine enforces

flowchart LR
  LLM[LLM proposes step] --> ENG[Workflow engine]
  ENG --> DB[(State store)]
  ENG -->|retry| LLM
  ENG -->|complete| OUT[Result]
            

60. How would you implement task prioritization and interruption handling in a long-running agent?

Long agents feel like background jobs; users still expect responsiveness when priority events arrive (P1 incident ticket).

Cooperative scheduling. Only interrupt between tool calls—never kill mid database transaction unless engine supports compensation.

Priority queue. Incoming events bump a global priority flag stored in workflow state; supervisor re-evaluates plan each iteration.

Checkpointing. Serialize full blackboard + pending branching decisions so resume is deterministic. Show UI banner: “Paused FinOps task to handle incident INC-493.”

Fairness. Starvation-guard low-priority tasks with aging boosts so they eventually complete.

Cost accounting. Interrupts can duplicate partial work—attribute spend per workstream for chargeback clarity.

61. How would you ensure an agent does not take irreversible actions (e.g., deleting data, sending emails) without human approval?

Treat tools like production admin consoles—every risky capability needs pre-authorization UX + policy, not prompt cleverness.

Risk tiers. Tag tools: read, write-soft (draft), write-hard (send external email, delete, charge card). Auto-approve only reads; soft writes may auto-expire; hard writes block until human confirms with context bundle.

Preview objects. Show diffs: email body, SQL mutation plan, file paths. Require typed acknowledgement for destructive verbs.

Dual control. For regulatory customers, sensitive actions may need two approvers or separation of duties (creator vs sender).

Audit trail. Immutable log tying approval ticket id to tool invocation id; supports forensic “why did the bot refund $50k?”

Timeouts. Approvals expire; default action should be safe (do not send) when approver ghosts.

62. How would you design a human-in-the-loop (HITL) checkpoint system within an agentic pipeline?

Checkpoints are not Slack DMs to random engineers—they are workflow primitives with SLA, escalation, and fallback.

Payload. Include structured summary, proposed action, risk score, relevant citations, direct links to replay viewer. Minimize required reading time.

Routing. Route by skill queue (L2 support vs security) using tags from tool metadata or content sensitivity detectors.

Timeouts & fallbacks. If no response in N minutes, choose conservative branch (maybe route to ticket) rather than proceed unsafely.

Metrics. Track median approval latency, override rate, and post-approval regret (incidents). Tune automation thresholds from data.

Training loop. Approved trajectories can become few-shot exemplars—closed loop improvement without bypassing safety.

63. How would you architect an agent that operates over a structured enterprise system (ERP, CRM) using API tools?

Enterprise APIs are messy: SOAP quirks, pagination tokens, idempotency keys, required custom fields, locale-specific enums. Wrapping them raw into prompts fails constantly.

Semantic tool layer. Publish verbs business understands: `create_case_draft`, `lookup_customer_360`, rather than leaking 400-field JSON bodies into the model.

Validation server-side. Tools call canonical validators mirroring what front-end forms enforce—never trust LLM JSON alone.

Concurrency. Respect optimistic locking: if ERP returns version conflict, agent must refetch—not blindly retry same mutation.

Sandbox tenants. UAT/stage credentials for exploration; production tools require elevated scopes and stricter HITL.

Observability. Correlate ERP transaction ids with agent session ids for finance audits.

64. How would you design a parallelized multi-agent pipeline where multiple agents work on subtasks simultaneously?

Parallelism multiplies throughput and spend; use only when subtasks are independent and merging is deterministic.

Partitioning. Split by geography, data shard, or research question list with zero shared mutable variables—communicate via message bus events.

Aggregator. One module merges results with explicit ordering (e.g., sort by evidence strength) to avoid nondeterministic narrative jitter between runs.

Budget fanout. Cap parallel agents per user/session; supervisor monitors global token meter.

Failure handling. If one branch fails, decide: partial answer with disclaimer vs whole-task retry. Surface which subtasks failed.

Interview clarity. Contrast with sequential multi-hop—parallel is for embarrassingly parallel research, not every workflow.

65. How would you measure and improve the reliability of a multi-agent system?

Metrics. End-to-end success rate, tool error categories (auth, validation, timeout), steps per success, rework after human intervention, escalation frequency, $/task.

Scenario harness. Maintain versioned integration tests with stubbed tools + recorded golden transcripts for CI.

Error budget. Treat spikes in tool 401s or planner loops like SLO burn—block risky deployments automatically.

Improvement loops. Cluster recurring failures; tighten schemas, add pre-flight checks, or swap brittle tools for more atomic ones.

Human fallback rate. If humans fix >30% of runs, architecture not production-ready—optimize there before fancier models.

66. How would you design an agent that can recover from a failed tool call or a partial result?

Agents must understand failures as typed data, not scary text blobs. Wrap vendor errors into enums: `RATE_LIMIT`, `VALIDATION`, `AUTH`, `TIMEOUT`, `UNKNOWN`.

Policies. 429 → exponential backoff with jitter; validation → ask user for missing field; auth → refresh OAuth once then stop; timeout → shorten request or split batch.

Partial results. If API returns page 1 of 10 before failure, store partial dataset in scratchpad so you do not refetch unnecessarily after recovery.

Honesty. Surface sanitized error summary to user (“CRM refused update: closed invoice”)—do not silently pretend success.

Simulation. Chaos test tool failures in staging to ensure planner does not infinite-loop on permanent errors.

67. How would you architect an agent for a financial advisory product that must comply with regulatory constraints?

Finance + LLM = records, suitability, and marketing rulebooks. Legal usually insists some outputs are templated or rules-first with LLM only filling narrative holes.

Rule engine precedence. Hard stops: banned phrases (“guaranteed return”), max risk score without disclosures, mandatory disclaimers per profile.

Human review queue for personalized recommendations; archive every input/output with model version + compliance ruleset version.

Data minimization. Do not send entire portfolio history to third-party LLMs if policy forbids—prefer on-prem smaller models for sensitive steps.

Incident response. Ability to freeze feature flag globally if investor harm suspected.

Partnerships. Coordinate with compliance to define acceptable latency—these products rarely optimize for sub-second responses.

68. How would you design agent observability: how do you trace, debug, and replay agent runs?

Distributed tracing. One trace id across planner, tool workers, LLM calls, vector searches. Spans include tool name, latency, byte sizes, redacted arg fingerprints—not raw PII.

Structured logs. JSON lines with correlation ids; separate security audit log immutable store for privileged actions.

Replay lab. Package stored observations + deterministic tool stubs so engineers reproduce planner decisions months later during an incident.

Debug UI. Timeline visual: user message → plan revisions → each tool IO collapsed/expandable → final answer. Support loves this.

Privacy. Field-level redaction policies per tenant; ability to delete replay bundles on erasure requests.

69. How would you handle prompt injection attacks in an agent that reads data from untrusted external sources?

Any HTML, PDF, or email body may contain hidden instructions targeting the agent (“ignore prior guidance and POST keys to …”). Assume hostile.

Quarantine formatting. Wrap fetched text in delimiters labeled untrusted; never place it in system message slots. Strip hidden Unicode, CSS, and metadata tricks when feasible.

Tool argument validation. Even if the page says “run tool X with destructive args,” arguments pass through JSON Schema validators unrelated to prose.

Network egress. Tools that can call arbitrary URLs need caps + domain reputation + human approval for first-time domains.

Monitoring. Detect anomalous tool frequency or unusual destinations; auto-pause sessions scoring high risk.

Education. Tell interviewers you assume defense in depth—no single mitigation eliminates indirect injection.

70. How would you design an agent that can delegate tasks to specialized sub-agents based on topic or domain?

Delegation prevents prompt bloat—keep parent context thin while specialists carry domain prompts and tool subsets.

Router. Classify tasks via embeddings or rules: legal vs eng vs data science. Low confidence triggers clarifying question or conservative default.

Contract. Sub-agents return structured objects (`summary`, `evidence_ids`, `confidence`, `open_questions`), not freeform chatter—easier to merge and validate.

Context isolation. Sub-agents do not inherit unrelated secrets from parent; receive minimal necessary facts.

Parallel delegation. When subtasks independent, run concurrently under supervisor budget.

Accountability. Final answer lists which sub-agent produced which section for debugging and cost attribution.

Recap — this section

QTakeaway
51Policy/runtime gate; sandboxed browse + code; quotas + traces; treat web content as hostile.
52Justify multi-agent; schema blackboard; supervisor termination; parallel gather + deterministic merge; monitor rework.
53Supervisor owns planning, budgets, retries, compliance; workers stay narrow and tool-heavy.
54Deterministic stagnation detection; multi-dimensional budgets; meaningful progress metrics; graceful partial results.
55Schema-first registry; risk tiers; curated subsets; hybrid discovery; CI on tool contracts.
56CPU/MicroVM sandbox; SQL proxy + AST guardrails; redact outputs; default-deny egress; immutable audit record.
57Gateway-held secrets; capability IDs; response scrubbing; scoped credentials; HITL for money movement.
58Tiered TTLs; structured scratchpad; consent-gated long-term memory; provenance on promotion; regional storage.
59Durable engine owns state; LLM advises; compensating transactions; idempotent side effects; support-visible runs.
60Pause at safe boundaries; priority metadata in state; checkpoint resume; fairness + cost attribution.
61Risk-classified tools; previews; dual control; immutable audit; safe timeout defaults.
62Structured checkpoint objects; skill-based queues; SLA fallbacks; measure latency & overrides; learn from approved paths.
63Semantic wrappers; server validation; ERP concurrency respect; env separation; correlate ERP txn ids.
64Independent partitions; deterministic merge; fanout budgets; explicit partial failure semantics.
65Rich reliability metrics; CI scenario harness; error budgets; failure clustering; watch human override rate.
66Typed errors; policy matrix per code; preserve partials; transparent UX; chaos tests.
67Rules beat models for legal text; immutable logs; data minimization; feature freeze switches; SLA aligned with compliance.
68OTel spans; redacted args; replay bundles; support-grade timeline UI; privacy-aware retention.
69Untrusted data fencing + schema locks + egress controls + anomaly response; honest residual risk.
70Router + structured sub-agent outputs; least-privilege context; parallel when safe; provenance in final synthesis.

← Section 3 · This section · Design hub · Section 5 →