How many times a day do you talk to a computer? We’re not referring to the exasperated exclamation you direct at your laptop when it overheats and crashes. We want you to think about the moments you speak to a device and it actually listens.
New integrations for Voice AI have arrived: Google's Gemini 2.0 Flash model, with seamless voice-to-voice conversation capabilities, and ElevenLabs' low-latency streaming speech synthesis are now available to Voximplant developers.
Voximplant now includes a native Cartesia Line / Agents connector that connects any Voximplant call to a Cartesia Line voice agent for real-time, speech-to-speech conversations—over PSTN, SIP, WebRTC, or WhatsApp Business Calling—without building custom media gateways or WebSocket streaming infrastructure.
Learn how a Voice AI Orchestration Platform connects LLMs, STT/TTS, turn‑taking, and telephony (PSTN, SIP, WebRTC) to build reliable real‑time voice agents. See benefits, architecture, and how Voximplant helps.
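The orchestration idea described above can be sketched as a simple turn loop: inbound call audio goes through STT, the transcript goes to an LLM, and the reply is synthesized back to the caller. The sketch below is a minimal, hypothetical illustration with stubbed components; the function names and types are assumptions for clarity, not Voximplant's actual API, and real turn-taking concerns (endpointing, barge-in) are omitted.

```python
# Minimal, hypothetical sketch of a voice-agent orchestration loop.
# All components are stubs; a real platform wires these to telephony
# media streams, real STT/TTS engines, and an LLM.

from dataclasses import dataclass


@dataclass
class Turn:
    user_text: str
    agent_text: str


def stt(audio_chunk: bytes) -> str:
    """Stub speech-to-text: pretend the audio decodes directly to text."""
    return audio_chunk.decode("utf-8")


def llm(history: list[Turn], user_text: str) -> str:
    """Stub LLM: a canned echo reply (history would condition a real model)."""
    return f"You said: {user_text}"


def tts(text: str) -> bytes:
    """Stub text-to-speech: pretend text encodes directly to audio."""
    return text.encode("utf-8")


def handle_turn(history: list[Turn], inbound_audio: bytes) -> bytes:
    """One conversational turn: STT -> LLM -> TTS.
    Turn-taking (endpointing, barge-in) is omitted for brevity."""
    user_text = stt(inbound_audio)
    agent_text = llm(history, user_text)
    history.append(Turn(user_text, agent_text))
    return tts(agent_text)


history: list[Turn] = []
audio_out = handle_turn(history, b"what are your opening hours?")
print(audio_out.decode())  # -> "You said: what are your opening hours?"
```

An orchestration platform's value is everything around this loop: keeping latency low, detecting when the caller has finished speaking, and bridging the loop onto PSTN, SIP, and WebRTC media.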
New Features in Voximplant Kit: update overview. We are constantly working to improve our product to make it easier to use and more effective for you. In this update, we have added several useful features. Here's what's new:
Voximplant now includes a native Deepgram module that connects any Voximplant call to Deepgram’s Voice Agent API for real-time, speech‑to‑speech conversations. You can stream audio from phone numbers, SIP trunks, WhatsApp, or WebRTC into Deepgram’s unified agent environment—combining STT, LLM reasoning, and TTS—and play responses via Voximplant’s serverless runtime with minimal latency.
Today, Ultravox announced that it is integrating Voximplant directly into its platform to provide SIP capabilities. The integration builds on Voximplant's deep telephony and Voice AI tooling.
Voximplant now includes a native Cartesia module for streaming, low-latency text-to-speech (TTS). You can use a single VoxEngine API to synthesize speech in real time, connect it to any call (PSTN, SIP, WebRTC, WhatsApp) and control playback from a Large Language Model (LLM) or other source, all inside VoxEngine.
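The benefit of LLM-controlled streaming TTS is that playback can begin before the model finishes generating. The sketch below illustrates that pattern only; the function names (`llm_tokens`, `synthesize_chunk`, `stream_reply_to_call`) are hypothetical stand-ins, not the actual VoxEngine or Cartesia API.

```python
# Hypothetical sketch of LLM-driven streaming TTS: synthesize speech
# chunk-by-chunk as tokens arrive, instead of waiting for the full reply.
# Names are illustrative assumptions, not Voximplant's real API.

from typing import Callable, Iterator


def llm_tokens() -> Iterator[str]:
    """Stub LLM that yields its reply token by token."""
    yield from ["Your ", "order ", "has ", "shipped."]


def synthesize_chunk(text: str) -> bytes:
    """Stub low-latency TTS: one audio chunk per text fragment."""
    return text.encode("utf-8")


def stream_reply_to_call(play: Callable[[bytes], None]) -> None:
    """Feed each token's audio to the call as soon as it is ready,
    so the caller hears speech before generation has finished."""
    for token in llm_tokens():
        play(synthesize_chunk(token))


chunks: list[bytes] = []
stream_reply_to_call(chunks.append)
print(b"".join(chunks).decode())  # -> "Your order has shipped."
```

In a real deployment, `play` would hand audio to the call's media stream; the design point is that time-to-first-audio depends on the first token, not the full response.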