Building a Fully Deployed Full-Stack AI Avatar Appointment Scheduler with Next.js and LiveKit
Imagine chatting with a friendly AI on your screen. It hears your voice, speaks back in a natural tone, and even books your next meeting without you lifting a finger. That's the power of this full-stack AI avatar app. We'll build it step by step using Next.js for the interface, LiveKit for the real-time voice agent, and n8n to handle real tasks like calendar entries. By the end, you'll have a live link to your own AI scheduler, ready for anyone to use.
(By the way, you can watch the full YouTube video here: https://youtu.be/ibDma8ZLlGo?si=u9e1tB1926g3Y67s)
Deconstructing the Full-Stack AI Architecture
This AI receptionist breaks down into three main parts that work together smoothly. The front end handles what users see and do. The back end powers the smart replies. And the automation workflow ties it all to everyday tools like your calendar.
The Front End: User Interface and Real-Time Connection
Your users start here. The Next.js app creates a simple web page where people talk to the AI. It sets up a live connection for voice and video chats. Once built, you deploy it on Vercel. That gives a public URL anyone can visit from their browser. No more local testing hassles.

The Back End: The LiveKit Python Voice Agent
This is the heart of the operation. A Python script runs on LiveKit Cloud and listens to user speech. It processes the words, crafts responses, and turns text into speech using ElevenLabs for natural voices. At the same time, it syncs the audio with a Tavus avatar for a lifelike face on screen. Low latency makes conversations feel instant.

The Automation Layer: Connecting AI to Real-World Actions via n8n
After the chat ends, magic happens. The front end sends a quick note to an n8n webhook. That starts a workflow to grab details from the talk. It pulls key info like names and times, then books the slot in Google Calendar. Users get real results from a casual conversation.
Phase 1: Setting Up the Next.js Front End
Let's kick things off with the user side. You need a solid base to connect everything later. This phase gets your interface running on your machine first.
Cloning and Initializing the Next.js Starter Repository

Search for "Next.js LiveKit" online. Grab the first link to the quick start guide. It points to a GitHub repo called agent-starter-react. Copy that repo link. In your code editor, make a new folder like "livekit-vercel-agent." Open a terminal there and run "git clone" with the link. That pulls in all the files. Switch into the "agent-starter-react" folder. Type "pnpm install" to grab packages. Then hit "pnpm dev" to launch it on localhost:3000. You'll see a basic chat screen ready for tweaks.
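Condensed into commands, the setup looks like this (assuming the repo lives under the livekit-examples GitHub org, which is where the quick start points):

```bash
mkdir livekit-vercel-agent && cd livekit-vercel-agent
git clone https://github.com/livekit-examples/agent-starter-react.git
cd agent-starter-react
pnpm install   # pull in the packages
pnpm dev       # serve the starter UI at http://localhost:3000
```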

Configuring LiveKit Credentials
Errors pop up at first because of missing keys. Head to livekit.io and start a new project. Go to settings, then API keys, and create one named "vercel-agent." Copy the URL, key, and secret. Back in your project, copy the .env.example to .env.local. Paste those values in. Save and restart "pnpm dev." Errors vanish. Now the app waits for a backend to connect. This setup lets your front end talk to LiveKit securely.
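For reference, the relevant .env.local entries look like this (placeholder values shown; paste the real ones from your LiveKit project's API keys page):

```bash
# .env.local
LIVEKIT_URL=wss://your-project.livekit.cloud
LIVEKIT_API_KEY=your_api_key
LIVEKIT_API_SECRET=your_api_secret
```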
Phase 2: Developing the Custom Python Voice Agent (Back End)
Shift to the brain now. The Python agent makes the AI respond like a pro. We'll customize it for scheduling talks and add a smooth voice.
Installing CLI Tools and Creating the Agent Directory
Make a new folder in your project root called "livekit-voice-agent" and cd into it. Install the LiveKit CLI with "brew install livekit-cli" on Mac. Skip the auth command since your keys are already set. Follow the Voice AI quick start docs: run "lk voice create" to set up the project, then "pip install -e ." to add the dependencies. Your pyproject.toml now lists all the tools you need.
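Put together, the commands from this step are:

```bash
mkdir livekit-voice-agent && cd livekit-voice-agent
brew install livekit-cli   # macOS; see the LiveKit docs for other platforms
lk voice create            # scaffold the voice agent project (per the quick start)
pip install -e .           # install the dependencies from pyproject.toml
```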
Integrating High-Quality Text-to-Speech with 11 Labs
The default voice is okay, but ElevenLabs sounds more human. Search for "ElevenLabs LiveKit plugin." Install it with "uv add livekit-agents[plugins] livekit-plugins-elevenlabs." Get an API key from elevenlabs.io: create an account and generate a key named "LK-vercel." Add "ELEVENLABS_API_KEY" to your .env.local. In agent.py, import the plugin and swap the TTS line to the ElevenLabs TTS class with a default voice ID. Test with "uv run agent.py console." Speak a question, and it replies with clear, lifelike audio.
Want a different accent? Go to the ElevenLabs voice library, pick one like Charlie (Australian), copy its ID, and plug it into the code. Run the console test again; the change sticks right away. This tweak boosts how real the AI feels during chats.
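In agent.py, the whole change is a couple of lines. Here's a minimal sketch, assuming the current livekit-agents AgentSession API; the voice ID is a placeholder for whichever voice you copied:

```python
from livekit.agents import AgentSession
from livekit.plugins import elevenlabs

session = AgentSession(
    # ...keep your existing STT and LLM configuration here...
    tts=elevenlabs.TTS(
        voice_id="YOUR_VOICE_ID",  # e.g. the ID copied for Charlie (Australian)
    ),
)
```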

Customizing the Agent Persona and Context
Edit the system prompt in agent.py. Change it from a generic helper to "You are James' AI appointment scheduler." Add the flow: ask for name and phone, offer slots tomorrow at 12 p.m. or 3 p.m., and book the choice. To fix date mix-ups, import datetime and timedelta, set today = datetime.now() and tomorrow = today + timedelta(days=1), format it, and add it to the prompt like f"Tomorrow is {tomorrow_date}." Test in the console. It now gives exact dates and follows the script tightly.
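A sketch of that date logic and prompt, with the persona text paraphrased from the flow above:

```python
from datetime import datetime, timedelta

# Compute tomorrow's date so the agent never has to guess it
tomorrow = (datetime.now() + timedelta(days=1)).strftime("%A, %B %d, %Y")

SYSTEM_PROMPT = f"""You are James' AI appointment scheduler.
Ask the caller for their name and phone number.
Offer two slots tomorrow ({tomorrow}): 12 p.m. or 3 p.m.
Confirm their choice and book it before ending the call.
"""
```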
Phase 3: Implementing the AI Avatar and Deployment
Add a face to match the voice. Then push everything online so it's always ready. No local runs needed after this.

Integrating the Tavus Virtual Avatar
Tavus brings avatars to life. In the LiveKit docs, find the Tavus plugin section. Install it with "uv add livekit-plugins-tavus." Get a key from platform.tavus.io: create one called "LK-vercel" and add "TAVUS_API_KEY" to .env.local. Use curl to make a persona: paste the command from the docs, swap in your key, and set the mode to "echo." It returns a persona ID. Pick a replica like "Carter" from the Tavus library and copy its ID. In agent.py, import the Tavus plugin and, after the agent session is initialized, start an avatar session with your replica and persona IDs (watch the indentation to avoid errors). Test with "uv run agent.py dev" while the front end runs, then click start call. The avatar appears and lip-syncs perfectly.
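Here's a sketch of that wiring, assuming the AvatarSession pattern the Tavus plugin currently exposes; both IDs are placeholders:

```python
from livekit.plugins import tavus

# After creating the AgentSession, attach the avatar before starting the session.
# ctx here is the JobContext passed into your agent's entrypoint.
avatar = tavus.AvatarSession(
    replica_id="YOUR_REPLICA_ID",   # e.g. the "Carter" replica from the Tavus library
    persona_id="YOUR_PERSONA_ID",   # the ID returned by the curl persona-creation call
)
await avatar.start(session, room=ctx.room)
```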
Deploying the Back End to LiveKit Cloud
Deployment is simple. From the agent folder, run "lk agent create --env .env.local." It uploads in under a minute. Check with "lk agent list"; your agent shows as live. Restart the front end with "pnpm dev," go to localhost:3000, and start a call. The avatar now chats from the cloud, no local backend needed. Free and always on.
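The two commands, for reference:

```bash
lk agent create --env .env.local   # upload the agent to LiveKit Cloud
lk agent list                      # confirm it shows as live
```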
Deploying the Next.js Front End to Vercel
Make a GitHub repo named "lk-vercel." In terminal, "git remote remove origin." Then "git remote add origin [repo url]," "git branch -M main," "git push -u origin main." Go to vercel.com, add new project, import the repo. Set build command to "pnpm build," install to "pnpm install." Add env vars: LIVEKIT_URL, LIVEKIT_API_KEY, LIVEKIT_API_SECRET. Deploy. Grab the live URL. Test start call. Full app works online for free.
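The git side of that, with a placeholder repo URL:

```bash
git remote remove origin
git remote add origin https://github.com/YOUR_USERNAME/lk-vercel.git
git branch -M main
git push -u origin main
```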

Phase 4: Bridging Conversation to Calendar Automation with n8n
Tie chats to actions. When calls end, transcripts trigger bookings. This makes the AI useful beyond talk.
Designing the n8n Workflow for Scheduling
Download the sample JSON from the guide or build the workflow yourself. Start with a Webhook node that accepts POST requests. Add a Set node to store the transcript. Use a Code node to map the messages: return only the array of user and AI texts, skipping timestamps. Feed that to an AI Agent node running OpenAI GPT-4o-mini with a prompt like: "Schedule based on this chat in my Google Calendar. Use the user's name in the title." Attach the Google Calendar tool, pick your email, and set the start and end times from the summary. Test runs show clean data flowing through; it creates events like "Appointment with Jim at 3 p.m."
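For the Code node, here's a minimal sketch of the mapping. The $input and json.body accessors are standard n8n, but the role and message field names are assumptions; check a real execution and match them to whatever your front end actually sends:

```javascript
// n8n Code node: keep only who said what, drop timestamps and metadata
const messages = $input.first().json.body.messages ?? [];
const transcript = messages.map((m) => ({
  role: m.role,      // "user" or "assistant"
  text: m.message,   // adjust to the actual field name in your payload
}));
return [{ json: { transcript } }];
```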
To import the JSON in n8n, click the three-dot menu and choose "Import from File." Activate the workflow and copy the production webhook URL.

Implementing the Transcript Capture in the Front End
Open components/session-view.tsx. Around line 166, where the AgentControlBar is rendered, add an onDisconnect handler. Use the useChatTranscription hook to get the messages. If messages.length > 0, POST them to the n8n URL with a JSON content-type header and {messages} as the body, logging any errors. Save, then add NEXT_PUBLIC_N8N_WEBHOOK_URL to .env.local with the URL you copied. Test locally: run "pnpm dev," start a call, chat, and end the call. Check the n8n executions; the transcript arrives and the calendar updates.
Push the changes: "git add components/session-view.tsx," "git commit -m 'Add n8n functionality'," "git push origin main." Vercel redeploys automatically.
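A sketch of that handler, using the hook and component names from this walkthrough (your starter version may differ slightly, so treat the exact names as assumptions):

```tsx
// components/session-view.tsx
// useChatTranscription comes from the starter's hooks
const { messages } = useChatTranscription();

const handleDisconnect = async () => {
  if (messages.length === 0) return;
  try {
    await fetch(process.env.NEXT_PUBLIC_N8N_WEBHOOK_URL!, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages }),
    });
  } catch (err) {
    console.error("Failed to send transcript to n8n:", err);
  }
};

// ...then pass it to the control bar:
// <AgentControlBar onDisconnect={handleDisconnect} />
```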
Final Testing and Verification
In the Vercel settings, add NEXT_PUBLIC_N8N_WEBHOOK_URL with the production value and redeploy. If Prettier errors block the build, run "pnpm prettier --write ." and push again. Visit the live URL in an incognito window. Start a call, give the name "Jim" and a phone number, and pick 3 p.m. End the call and watch n8n: the execution runs, the AI parses the transcript, and the calendar node books the slot. Check Google Calendar; the event is there with all the details. The full loop works end to end.

Conclusion: Your Fully Operational AI Assistant is Live
You now have a complete AI avatar appointment scheduler up and running. Next.js handles the clean front end, LiveKit powers the voice and avatar magic with ElevenLabs and Tavus, and n8n seals the deal by booking real slots. Share that Vercel link; friends or clients can use it anytime. Tweak the prompt for other tasks, like reminders or quotes, and build on this base for more AI helpers in your life. What's your next automation idea? Drop it in the comments.