
Build a multilingual voice agent that automatically switches languages

One of the most common questions developers ask when building voice AI applications is: "How do I detect what language the user is speaking and respond in that same language?" This tutorial walks you through building a voice agent that does exactly that.

You'll create a multilingual voice assistant using LiveKit Agents, Deepgram STT, OpenAI, and Rime TTS. The agent listens for the user's language, detects when they switch languages mid-conversation, and dynamically updates the TTS configuration to respond with a native-sounding voice in that language.

Try the demo live. For the full source code including the Next.js frontend, see the rime-multilingual-demo repository on GitHub. You can also watch a video demo of the multilingual agent in action.

What you'll build

By the end of this tutorial, you'll have a voice agent that:

  • Supports English, Hindi, Spanish, Arabic, French, Portuguese, German, Japanese, Hebrew, and Tamil
  • Automatically detects the language the user is speaking
  • Switches TTS language settings on the fly using a single Rime voice
  • Responds naturally in the detected language
  • Optionally syncs the current language to the frontend via participant attributes

The key technique involves overriding the STT node in your agent to intercept speech events, extract the detected language, and update the TTS configuration before the agent responds.

Prerequisites

Before you start, make sure you have:

  • Python 3.11 or later installed
  • uv package manager installed
  • A LiveKit Cloud account (free tier works)
  • API keys for the model providers used in this tutorial: Deepgram (STT), OpenAI (LLM), and Rime (TTS)

Step 1: Set up the project

Create a new directory and initialize the project:

mkdir rime-multilingual-agent
cd rime-multilingual-agent
uv init --bare

Step 2: Install dependencies

Install the LiveKit Agents framework and the packages you need:

uv add \
  "livekit>=1.0.23" \
  "livekit-agents[silero,turn-detector]>=1.3.12" \
  "livekit-plugins-noise-cancellation>=0.2.5" \
  "python-dotenv>=1.2.1"

This installs:

  • livekit-agents: The core agents framework with unified inference (STT, LLM, TTS)
  • silero: Voice Activity Detection (VAD)
  • turn-detector: Contextually-aware turn detection for natural conversations

STT, LLM, and TTS are configured via the framework's inference API using provider-prefixed models (e.g. deepgram/nova-3-general, openai/gpt-4o, rime/arcana). You supply the corresponding API keys in your environment.

Step 3: Configure environment variables

Create a .env file in your project directory:

LIVEKIT_API_KEY=<your_api_key>
LIVEKIT_API_SECRET=<your_api_secret>
LIVEKIT_URL=wss://<project-subdomain>.livekit.cloud

You can get your LiveKit credentials from the LiveKit Cloud dashboard under Settings > API Keys.

Step 4: Create the agent

Create a file named main.py and add the following code. I'll break down each section to explain what it does.

Import dependencies and configure logging

import logging
from typing import AsyncIterable
from dataclasses import dataclass
from dotenv import load_dotenv
from livekit.agents import (
    Agent,
    AgentServer,
    AgentSession,
    JobContext,
    JobProcess,
    MetricsCollectedEvent,
    ModelSettings,
    RoomOutputOptions,
    cli,
    metrics,
    stt,
    inference,
)
from livekit.plugins import silero
from livekit.plugins.turn_detector.multilingual import MultilingualModel
from livekit import rtc

logger = logging.getLogger("multilingual-agent")

load_dotenv()

Define language configurations

Next, create a dataclass to store TTS settings for each supported language. The current backend uses a single Rime voice (seraphina) and switches only the language code:

# Default configuration constants
DEFAULT_LANGUAGE = "eng"
DEFAULT_TTS_MODEL = "arcana"
DEFAULT_VOICE = "seraphina"


@dataclass
class LanguageConfig:
    """Configuration for TTS settings per language."""

    lang: str
    model: str = DEFAULT_TTS_MODEL

The LanguageConfig dataclass holds the Rime language code and model name. The framework uses a single voice across languages; Rime handles pronunciation per language.
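To see the default in action, here's a small standalone sketch (separate from main.py) showing that each entry only needs its language code, with the model filled in by the dataclass default:

```python
from dataclasses import dataclass

DEFAULT_TTS_MODEL = "arcana"


@dataclass
class LanguageConfig:
    """TTS settings for one language; model falls back to the shared Rime model."""

    lang: str
    model: str = DEFAULT_TTS_MODEL


# Only the language code varies per entry; the model comes from the default.
spanish = LanguageConfig(lang="spa")
print(spanish)  # LanguageConfig(lang='spa', model='arcana')
```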

Create the multilingual agent class

Now create the agent class that handles language detection and TTS switching:

class MultilingualAgent(Agent):
    """A multilingual voice agent that detects user language and responds accordingly."""

    # TTS config per language. Keys are Rime 3-letter codes. Voice is always seraphina.
    LANGUAGE_CONFIGS = {
        "eng": LanguageConfig(lang="eng"),
        "hin": LanguageConfig(lang="hin"),
        "spa": LanguageConfig(lang="spa"),
        "ara": LanguageConfig(lang="ara"),
        "fra": LanguageConfig(lang="fra"),
        "por": LanguageConfig(lang="por"),
        "ger": LanguageConfig(lang="ger"),
        "jpn": LanguageConfig(lang="jpn"),
        "heb": LanguageConfig(lang="heb"),
        "tam": LanguageConfig(lang="tam"),
    }

    # Display names for instructions. Keys match LANGUAGE_CONFIGS.
    LANGUAGE_DISPLAY_NAMES = {
        "eng": "English",
        "hin": "Hindi",
        "spa": "Spanish",
        "ara": "Arabic",
        "fra": "French",
        "por": "Portuguese",
        "ger": "German",
        "jpn": "Japanese",
        "heb": "Hebrew",
        "tam": "Tamil",
    }

    # STT returns ISO 639-1 (e.g. "en", "es") or locale (e.g. "en-US"). Map to Rime codes.
    STT_TO_RIME = {
        "en": "eng",
        "hi": "hin",
        "es": "spa",
        "ar": "ara",
        "fr": "fra",
        "pt": "por",
        "de": "ger",
        "ja": "jpn",
        "he": "heb",
        "ta": "tam",
    }

    SUPPORTED_LANGUAGES = list(LANGUAGE_CONFIGS.keys())

    def __init__(self) -> None:
        super().__init__(instructions=self._get_instructions())
        self._current_language = DEFAULT_LANGUAGE
        self._room: rtc.Room | None = None

    def _get_instructions(self) -> str:
        """Get agent instructions in a clean, maintainable format."""
        supported_languages = ", ".join(
            self.LANGUAGE_DISPLAY_NAMES[lang] for lang in self.SUPPORTED_LANGUAGES
        )
        return (
            "You are a voice assistant powered by Rime's text-to-speech technology. "
            "You are here to showcase Rime's natural, expressive, and multilingual voice capabilities. "
            "You respond in the same language the user speaks in. "
            f"You support {supported_languages}. "
            "If the user speaks in any other language, respond in English and politely let them know: "
            f"'I only support {supported_languages}. Please speak in one of these languages.' "
            "Keep your responses concise and to the point since this is a voice conversation. "
            "Do not use emojis, asterisks, markdown, or other special characters in your responses. "
            "You are curious, friendly, and have a sense of humor."
        )

The LANGUAGE_CONFIGS dictionary maps Rime 3-letter language codes to TTS config. STT_TO_RIME maps the ISO codes returned by Deepgram to those Rime codes. The instructions are built from LANGUAGE_DISPLAY_NAMES so the list of supported languages stays in sync.

Override the STT node

This is the core technique for detecting language changes. Override the stt_node method to intercept speech-to-text events and check for language changes:

    async def stt_node(
        self, audio: AsyncIterable[rtc.AudioFrame], model_settings: ModelSettings
    ) -> AsyncIterable[stt.SpeechEvent]:
        """
        Override STT node to detect language and update TTS configuration dynamically.

        This method intercepts speech events to detect language changes and updates
        the TTS settings to match the detected language for natural voice output.
        """
        default_stt = super().stt_node(audio, model_settings)

        async for event in default_stt:
            if self._is_transcript_event(event):
                await self._handle_language_detection(event)
            yield event

    def _is_transcript_event(self, event: stt.SpeechEvent) -> bool:
        """Check if event is a transcript event with language information."""
        return (
            event.type
            in [
                stt.SpeechEventType.INTERIM_TRANSCRIPT,
                stt.SpeechEventType.FINAL_TRANSCRIPT,
            ]
            and event.alternatives
        )

    async def _handle_language_detection(self, event: stt.SpeechEvent) -> None:
        """Update TTS from STT-detected language and sync to frontend via participant attributes."""
        detected_language = event.alternatives[0].language
        if not detected_language:
            return
        effective_language = self._update_tts_for_language(detected_language)
        if effective_language != self._current_language:
            self._current_language = effective_language
            await self._publish_language_update(effective_language)

    def _update_tts_for_language(self, language: str) -> str:
        """Update TTS configuration based on detected language.

        Returns the effective Rime language code (the one actually used for TTS).
        """
        base = language.split("-")[0].lower() if language else ""
        rime_lang = self.STT_TO_RIME.get(base, base) if base else DEFAULT_LANGUAGE
        effective_lang = rime_lang if rime_lang in self.LANGUAGE_CONFIGS else DEFAULT_LANGUAGE
        config = self.LANGUAGE_CONFIGS.get(effective_lang, self.LANGUAGE_CONFIGS[DEFAULT_LANGUAGE])
        logger.info(f"Updating TTS: detected={language} -> rime={effective_lang}")
        self.session.tts.update_options(
            model=f"rime/{config.model}",
            language=config.lang,
        )
        return effective_lang

    async def _publish_language_update(self, language_code: str) -> None:
        """Sync current language to the frontend via participant attributes (see LiveKit docs: participant attributes)."""
        if not self._room:
            return
        try:
            display_name = self.LANGUAGE_DISPLAY_NAMES.get(language_code, "English")
            await self._room.local_participant.set_attributes({"current_language": display_name})
        except Exception as e:
            logger.warning("Failed to publish language update: %s", e)

The stt_node method receives audio frames and yields speech events. By iterating through the default STT output and checking each event, you get the detected language from transcript events. When the language changes, _update_tts_for_language maps the STT language (e.g. en or en-US) to a Rime code, updates TTS with update_options(), and returns the effective language. _publish_language_update writes the current language to the room participant's attributes so a frontend can show it (see the full demo repo for an example UI).
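To isolate the change-detection behavior, here's a toy standalone sketch (the LanguageTracker class is hypothetical, for illustration only): the TTS configuration would be refreshed on every transcript, but the publish step fires only when the effective language actually changes, so the frontend isn't spammed with redundant attribute updates:

```python
class LanguageTracker:
    """Toy stand-in for the agent's language state (illustration only)."""

    def __init__(self, default: str = "eng") -> None:
        self.current = default
        self.published: list[str] = []

    def on_transcript(self, effective_lang: str) -> None:
        # TTS options would be updated here on every transcript; the
        # participant attribute is published only on an actual change.
        if effective_lang != self.current:
            self.current = effective_lang
            self.published.append(effective_lang)


tracker = LanguageTracker()
for lang in ["eng", "eng", "spa", "spa", "eng"]:
    tracker.on_transcript(lang)
print(tracker.published)  # ['spa', 'eng']
```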

Add the greeting

Override on_enter to publish the initial language and greet the user when they connect:

    async def on_enter(self) -> None:
        """Called when the agent session starts. Generate initial greeting."""
        await self._publish_language_update(self._current_language)
        self.session.generate_reply(
            instructions="Greet the user and introduce yourself as a voice assistant powered by Rime's text-to-speech technology. Ask how you can help them."
        )

Set up the server and entrypoint

The agent uses the AgentServer API: register a prewarm function and an RTC session entrypoint that configures the agent session:

def prewarm(proc: JobProcess) -> None:
    """Preload VAD model for faster startup."""
    proc.userdata["vad"] = silero.VAD.load()


server = AgentServer()
server.setup_fnc = prewarm


@server.rtc_session(agent_name="rime-multilingual-agent")
async def entrypoint(ctx: JobContext) -> None:
    """Main entry point for the multilingual agent worker."""
    ctx.log_context_fields = {"room": ctx.room.name}

    session = AgentSession(
        vad=ctx.proc.userdata["vad"],
        stt=inference.STT(model="deepgram/nova-3-general", language="multi"),
        llm=inference.LLM(model="openai/gpt-4o"),
        tts=inference.TTS(
            model=f"rime/{DEFAULT_TTS_MODEL}", voice=DEFAULT_VOICE, language=DEFAULT_LANGUAGE
        ),
        turn_detection=MultilingualModel(),
    )

    usage_collector = metrics.UsageCollector()

    @session.on("metrics_collected")
    def _on_metrics_collected(ev: MetricsCollectedEvent) -> None:
        metrics.log_metrics(ev.metrics)
        usage_collector.collect(ev.metrics)

    async def log_usage() -> None:
        summary = usage_collector.get_summary()
        logger.info(f"Usage summary: {summary}")

    ctx.add_shutdown_callback(log_usage)

    agent = MultilingualAgent()
    agent._room = ctx.room
    await session.start(
        agent=agent,
        room=ctx.room,
        room_output_options=RoomOutputOptions(transcription_enabled=True),
    )


if __name__ == "__main__":
    cli.run_app(server)

Configuration notes:

  • inference.STT with model="deepgram/nova-3-general" and language="multi" enables automatic language detection.
  • inference.LLM and inference.TTS use provider-prefixed models (openai/gpt-4o, rime/arcana).
  • MultilingualModel for turn detection works with multilingual STT for natural turn-taking.
  • The agent is given a reference to the room (agent._room = ctx.room) so it can publish language updates to participant attributes.

Step 5: Download model files

Before running the agent for the first time, download the required model files for the turn detector and Silero VAD:

uv run main.py download-files

Step 6: Run the agent

Start by running the agent in console mode so you can test the voice pipeline locally with your microphone and speakers:

uv run main.py console

Want a visual interface? Run the agent in dev mode (uv run main.py dev), then use the LiveKit Agents Playground. Open agents-playground.livekit.io, sign in with your LiveKit Cloud project, and create or join a room. Your agent will attach when dispatched (e.g. via LiveKit Cloud agent configuration). Use the playground's microphone and speaker to have a voice conversation and confirm language switching.

Development mode

Connect to LiveKit Cloud for internet-accessible testing:

uv run main.py dev

Production mode

Run in production:

uv run main.py start

How it works

The language detection flow works like this:

  1. User speaks in any supported language.
  2. Deepgram STT (with language="multi") transcribes the speech and detects the language.
  3. The overridden stt_node intercepts the speech event and reads the detected language.
  4. If the language changed, _update_tts_for_language maps the STT code to a Rime code and updates TTS via update_options().
  5. Optionally, _publish_language_update writes the current language to the participant's attributes for the frontend.
  6. The LLM receives the transcript and generates a response in context.
  7. Rime TTS synthesizes the response using the updated language setting.

The instructions tell the LLM to respond in the same language as the user; the TTS update makes the spoken output use the correct Rime language.

Summary

This tutorial covered how to build a multilingual voice agent that automatically detects and responds in the user's language. The key techniques include:

  • Overriding the stt_node to intercept speech events and detect language changes
  • Mapping STT language codes to Rime (or your TTS provider) and using update_options() to change TTS settings mid-conversation
  • Configuring Deepgram STT with multilingual mode for automatic language detection
  • Using the MultilingualModel turn detector for natural conversation flow
  • Optionally syncing the current language to a frontend via participant attributes

For more information, check out the LiveKit Agents documentation and the rime-multilingual-demo repository on GitHub.

Complete code

Here is the complete main.py file.

import logging
from typing import AsyncIterable
from dataclasses import dataclass
from dotenv import load_dotenv
from livekit.agents import (
    Agent,
    AgentServer,
    AgentSession,
    JobContext,
    JobProcess,
    MetricsCollectedEvent,
    ModelSettings,
    RoomOutputOptions,
    cli,
    metrics,
    stt,
    inference,
)
from livekit.plugins import silero
from livekit.plugins.turn_detector.multilingual import MultilingualModel
from livekit import rtc


logger = logging.getLogger("multilingual-agent")


load_dotenv()


# Default configuration constants
DEFAULT_LANGUAGE = "eng"
DEFAULT_TTS_MODEL = "arcana"
DEFAULT_VOICE = "seraphina"


@dataclass
class LanguageConfig:
    """Configuration for TTS settings per language."""

    lang: str
    model: str = DEFAULT_TTS_MODEL


class MultilingualAgent(Agent):
    """A multilingual voice agent that detects user language and responds accordingly."""

    # TTS config per language. Keys are Rime 3-letter codes. Voice is always seraphina.
    LANGUAGE_CONFIGS = {
        "eng": LanguageConfig(lang="eng"),
        "hin": LanguageConfig(lang="hin"),
        "spa": LanguageConfig(lang="spa"),
        "ara": LanguageConfig(lang="ara"),
        "fra": LanguageConfig(lang="fra"),
        "por": LanguageConfig(lang="por"),
        "ger": LanguageConfig(lang="ger"),
        "jpn": LanguageConfig(lang="jpn"),
        "heb": LanguageConfig(lang="heb"),
        "tam": LanguageConfig(lang="tam"),
    }

    LANGUAGE_DISPLAY_NAMES = {
        "eng": "English",
        "hin": "Hindi",
        "spa": "Spanish",
        "ara": "Arabic",
        "fra": "French",
        "por": "Portuguese",
        "ger": "German",
        "jpn": "Japanese",
        "heb": "Hebrew",
        "tam": "Tamil",
    }

    STT_TO_RIME = {
        "en": "eng",
        "hi": "hin",
        "es": "spa",
        "ar": "ara",
        "fr": "fra",
        "pt": "por",
        "de": "ger",
        "ja": "jpn",
        "he": "heb",
        "ta": "tam",
    }

    SUPPORTED_LANGUAGES = list(LANGUAGE_CONFIGS.keys())

    def __init__(self) -> None:
        super().__init__(instructions=self._get_instructions())
        self._current_language = DEFAULT_LANGUAGE
        self._room: rtc.Room | None = None

    def _get_instructions(self) -> str:
        """Get agent instructions in a clean, maintainable format."""
        supported_languages = ", ".join(
            self.LANGUAGE_DISPLAY_NAMES[lang] for lang in self.SUPPORTED_LANGUAGES
        )
        return (
            "You are a voice assistant powered by Rime's text-to-speech technology. "
            "You are here to showcase Rime's natural, expressive, and multilingual voice capabilities. "
            "You respond in the same language the user speaks in. "
            f"You support {supported_languages}. "
            "If the user speaks in any other language, respond in English and politely let them know: "
            f"'I only support {supported_languages}. Please speak in one of these languages.' "
            "Keep your responses concise and to the point since this is a voice conversation. "
            "Do not use emojis, asterisks, markdown, or other special characters in your responses. "
            "You are curious, friendly, and have a sense of humor."
        )

    async def stt_node(
        self, audio: AsyncIterable[rtc.AudioFrame], model_settings: ModelSettings
    ) -> AsyncIterable[stt.SpeechEvent]:
        """
        Override STT node to detect language and update TTS configuration dynamically.

        This method intercepts speech events to detect language changes and updates
        the TTS settings to match the detected language for natural voice output.
        """
        default_stt = super().stt_node(audio, model_settings)

        async for event in default_stt:
            if self._is_transcript_event(event):
                await self._handle_language_detection(event)
            yield event

    def _is_transcript_event(self, event: stt.SpeechEvent) -> bool:
        """Check if event is a transcript event with language information."""
        return (
            event.type
            in [
                stt.SpeechEventType.INTERIM_TRANSCRIPT,
                stt.SpeechEventType.FINAL_TRANSCRIPT,
            ]
            and event.alternatives
        )

    async def _handle_language_detection(self, event: stt.SpeechEvent) -> None:
        """Update TTS from STT-detected language and sync to frontend via participant attributes."""
        detected_language = event.alternatives[0].language
        if not detected_language:
            return
        effective_language = self._update_tts_for_language(detected_language)
        if effective_language != self._current_language:
            self._current_language = effective_language
            await self._publish_language_update(effective_language)

    def _update_tts_for_language(self, language: str) -> str:
        """Update TTS configuration based on detected language.

        Returns the effective Rime language code (the one actually used for TTS).
        """
        base = language.split("-")[0].lower() if language else ""
        rime_lang = self.STT_TO_RIME.get(base, base) if base else DEFAULT_LANGUAGE
        effective_lang = rime_lang if rime_lang in self.LANGUAGE_CONFIGS else DEFAULT_LANGUAGE
        config = self.LANGUAGE_CONFIGS.get(effective_lang, self.LANGUAGE_CONFIGS[DEFAULT_LANGUAGE])
        logger.info(f"Updating TTS: detected={language} -> rime={effective_lang}")
        self.session.tts.update_options(
            model=f"rime/{config.model}",
            language=config.lang,
        )
        return effective_lang

    async def _publish_language_update(self, language_code: str) -> None:
        """Sync current language to the frontend via participant attributes (see LiveKit docs: participant attributes)."""
        if not self._room:
            return
        try:
            display_name = self.LANGUAGE_DISPLAY_NAMES.get(language_code, "English")
            await self._room.local_participant.set_attributes({"current_language": display_name})
        except Exception as e:
            logger.warning("Failed to publish language update: %s", e)

    async def on_enter(self) -> None:
        """Called when the agent session starts. Generate initial greeting."""
        await self._publish_language_update(self._current_language)
        self.session.generate_reply(
            instructions="Greet the user and introduce yourself as a voice assistant powered by Rime's text-to-speech technology. Ask how you can help them."
        )


def prewarm(proc: JobProcess) -> None:
    """Preload VAD model for faster startup."""
    proc.userdata["vad"] = silero.VAD.load()


server = AgentServer()
server.setup_fnc = prewarm


@server.rtc_session(agent_name="rime-multilingual-agent")
async def entrypoint(ctx: JobContext) -> None:
    """Main entry point for the multilingual agent worker."""
    ctx.log_context_fields = {"room": ctx.room.name}

    session = AgentSession(
        vad=ctx.proc.userdata["vad"],
        stt=inference.STT(model="deepgram/nova-3-general", language="multi"),
        llm=inference.LLM(model="openai/gpt-4o"),
        tts=inference.TTS(
            model=f"rime/{DEFAULT_TTS_MODEL}", voice=DEFAULT_VOICE, language=DEFAULT_LANGUAGE
        ),
        turn_detection=MultilingualModel(),
    )

    usage_collector = metrics.UsageCollector()

    @session.on("metrics_collected")
    def _on_metrics_collected(ev: MetricsCollectedEvent) -> None:
        metrics.log_metrics(ev.metrics)
        usage_collector.collect(ev.metrics)

    async def log_usage() -> None:
        """Log usage summary on shutdown."""
        summary = usage_collector.get_summary()
        logger.info(f"Usage summary: {summary}")

    ctx.add_shutdown_callback(log_usage)

    agent = MultilingualAgent()
    agent._room = ctx.room
    await session.start(
        agent=agent,
        room=ctx.room,
        room_output_options=RoomOutputOptions(transcription_enabled=True),
    )


if __name__ == "__main__":
    cli.run_app(server)