If you ask Claude Code to build a voice agent from scratch, it works about 2% of the time. With LiveKit's new Agent Skills, that jumps to 80%.

I just dropped a [full tutorial](https://youtu.be/92_H7pTf22k) where I build a complete customer support voice agent from a blank terminal — Claude Code does the heavy lifting, and I want to break down what happened and why this matters.

## What Are Agent Skills?

A few weeks ago, LiveKit released [Agent Skills](https://github.com/livekit/agent-skills) — installable instruction bundles that live inside your project and teach coding agents like Claude Code *how to think* about building voice agents.

This is different from the LiveKit MCP server I used in my last tutorial. MCP gives Claude Code access to documentation — it can look up method signatures and API references. But it has no opinion on architecture.

Agent Skills are the opinion. They encode rules like:

- **Don't trust model memory for API details** — always verify against live docs
- **Context bloat kills performance** — a voice agent with 50 tools and a 10,000 token system prompt will feel sluggish regardless of model speed
- **Keep tools scoped** — don't give the agent access to everything at every moment

Think of it this way: MCP is like handing someone a library card. Agent Skills are like handing them an architect.
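
As a rough illustration (this is my paraphrase of the idea, not the actual contents of the repo), a skill is essentially a markdown instruction file that the coding agent loads into context before it writes any code:

```markdown
<!-- Hypothetical sketch of a skill file, not LiveKit's actual format -->
---
name: building-voice-agents
description: How to architect LiveKit voice agents
---

- Never write plugin code from memory; verify against the live docs first.
- Keep the system prompt short; context bloat adds latency.
- Scope tools to the current stage of the conversation.
```

The point is that these rules ride along in the repo, so every coding agent that touches the project inherits the same architectural opinions.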

## What I Built

A customer support voice agent for a fake company called AcmeCo with:

- **Claude Sonnet 4.6** as the brain (swapped from the default GPT-5.3)
- **5 tools**: order lookup, order cancellation, return policy, shipping options, and agent transfer
- **Voice-optimized system prompt** — short sentences, no markdown, TTS-friendly
- **LiveKit Cloud sandbox** for live voice testing in the browser

The whole build took about 15 minutes with Claude Code doing the actual coding. I gave it three prompts:

1. Swap the LLM to Claude Sonnet via `livekit-plugins-anthropic`
2. Add five customer support tools with mock data
3. Update the system prompt for voice
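
To make prompt 2 concrete, here's a minimal sketch of what a couple of those tools might look like. All names and mock data below are mine, not the actual generated code; in the real agent these would be methods on an `Agent` subclass exposed to the LLM via LiveKit's function-tool decorator.

```python
# Hypothetical mock-data tools for the AcmeCo demo.
# In the real project, each of these would be registered as a
# function tool so the LLM can call it mid-conversation.

MOCK_ORDERS = {
    "A1001": {"item": "wireless keyboard", "status": "shipped"},
    "A1002": {"item": "USB-C dock", "status": "processing"},
}

def lookup_order(order_id: str) -> str:
    """Return a short, voice-friendly summary of an order."""
    order = MOCK_ORDERS.get(order_id.upper())
    if order is None:
        return f"I couldn't find an order with ID {order_id}."
    return f"Order {order_id}: {order['item']}, currently {order['status']}."

def cancel_order(order_id: str) -> str:
    """Cancel an order if it hasn't shipped yet."""
    order = MOCK_ORDERS.get(order_id.upper())
    if order is None:
        return f"I couldn't find an order with ID {order_id}."
    if order["status"] == "shipped":
        return "That order has already shipped, so I can't cancel it."
    order["status"] = "cancelled"
    return f"Done. Order {order_id} is cancelled."
```

Note the return values are plain sentences, not JSON — whatever a tool returns goes straight back into an LLM that's about to speak it aloud.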

Claude Code used `lk docs` commands to verify the current Anthropic plugin API before writing code — that's the Agent Skill in action, telling it to never guess from training data.

## The Part Nobody Talks About

Here's where it gets interesting. I threw the agent a messy multi-intent question — cancel my order, what's the return policy, transfer me to billing, do you have express shipping to New York — all at once.

It handled it perfectly. Five tools, four requests, one clean response.

So why am I not celebrating?

Because this was a demo with five tools and a four-sentence system prompt. In production, a real customer support agent has 15-20 tools: identity verification, troubleshooting, returns processing, appointment scheduling, escalation, upsell, warranty checks, payment processing, and more.

Your system prompt becomes a wall of text trying to cover every possible combination of what a customer might ask, in what order, at what stage of the conversation. And every tool is available at every moment — there's nothing stopping the agent from processing a return before it's even verified who the customer is.

That's not engineering. That's luck.

## The Fix: Scenario Branching

The fix isn't a bigger system prompt. It's giving each stage of the conversation its own instructions and its own tools.

- Verification only sees the identity tool
- Troubleshooting only sees diagnostic tools
- Returns only sees the return tool

And the LLM calls **transition functions** to move between stages — explicit edges in a graph, not vibes in a prompt.
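
A minimal sketch of the pattern, in plain Python (stage names, tool names, and the `Conversation` class are all hypothetical, just to show the shape):

```python
# Hypothetical scenario-branching skeleton: each stage carries its own
# instructions and its own scoped tools, and transition functions are
# the only way to move between stages.

STAGES = {
    "verification": {
        "instructions": "Verify the caller's identity before anything else.",
        "tools": ["verify_identity"],
        "transitions": {"to_returns", "to_troubleshooting"},
    },
    "returns": {
        "instructions": "Process the return for the verified customer.",
        "tools": ["process_return"],
        "transitions": {"to_verification"},
    },
    "troubleshooting": {
        "instructions": "Run diagnostics on the customer's device.",
        "tools": ["run_diagnostic"],
        "transitions": {"to_verification"},
    },
}

class Conversation:
    def __init__(self) -> None:
        self.stage = "verification"

    def available_tools(self) -> list[str]:
        # The LLM only ever sees the current stage's tools plus its
        # transition functions. From its point of view, nothing else exists.
        stage = STAGES[self.stage]
        return sorted(stage["tools"]) + sorted(stage["transitions"])

    def transition(self, name: str) -> None:
        # Explicit edges in a graph: an illegal transition is rejected
        # deterministically, not discouraged by a prompt.
        if name not in STAGES[self.stage]["transitions"]:
            raise ValueError(f"Illegal transition {name!r} from {self.stage}")
        self.stage = name.removeprefix("to_")
```

In the verification stage, `process_return` simply isn't in the tool list, so the failure mode from earlier — processing a return before identity is verified — becomes impossible by construction rather than unlikely by prompting.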

This is called **scenario branching**. It's a state machine where each node has scoped instructions, scoped tools, and deterministic transitions. I'm going deep on this in my next video — subscribe on [YouTube](https://www.youtube.com/@JamesAkapulu) so you don't miss it.

## Get Started

If you want to build this yourself:

1. Install the [LiveKit CLI](https://docs.livekit.io/intro/basics/cli/) (`brew install livekit-cli`)
2. Run `lk cloud auth` to authenticate
3. Run `lk agent init my-agent --template agent-starter-python` — this sets up your entire project with the Agent Skill already baked in
4. Open in Claude Code and start prompting

The Agent Skills repo is at [github.com/livekit/agent-skills](https://github.com/livekit/agent-skills) and the starter template is at [github.com/livekit-examples/agent-starter-python](https://github.com/livekit-examples/agent-starter-python).

Full 27-minute tutorial here: [Watch on YouTube](https://youtu.be/92_H7pTf22k)

— James
