voice.send()
The send() method streams audio data in real time to a voice provider for continuous processing. It is essential for real-time speech-to-speech conversations, allowing you to send microphone input directly to the AI service.
Usage Example
```typescript
import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";
import Speaker from "@mastra/node-speaker";
import { getMicrophoneStream } from "@mastra/node-audio";

const speaker = new Speaker({
  sampleRate: 24000, // Audio sample rate in Hz - 24 kHz matches the PCM audio used by the OpenAI Realtime API
  channels: 1, // Mono audio output (as opposed to stereo which would be 2)
  bitDepth: 16, // Bit depth for audio quality - CD quality standard (16-bit resolution)
});

// Initialize a real-time voice provider
const voice = new OpenAIRealtimeVoice({
  realtimeConfig: {
    model: "gpt-5.1-realtime",
    apiKey: process.env.OPENAI_API_KEY,
  },
});

// Connect to the real-time service
await voice.connect();

// Set up event listeners for responses
voice.on("writing", ({ text, role }) => {
  console.log(`${role}: ${text}`);
});

voice.on("speaker", (stream) => {
  stream.pipe(speaker);
});

// Get microphone stream (implementation depends on your environment)
const microphoneStream = getMicrophoneStream();

// Send audio data to the voice provider
await voice.send(microphoneStream);

// You can also send audio data as Int16Array
const audioBuffer = getAudioBuffer(); // Assume this returns Int16Array
await voice.send(audioBuffer);
```
Parameters
- `audioData` (`NodeJS.ReadableStream | Int16Array`): Audio data to send to the voice provider. Can be a readable stream (such as a microphone stream) or an `Int16Array` of audio samples.
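Before handing arbitrary data to send(), it can be useful to check that it actually has one of the two accepted shapes. The guard below is a hypothetical helper, not part of the Mastra API; it duck-types a Node readable stream via its `pipe` and `on` methods rather than depending on `@types/node`:

```typescript
// Minimal structural stand-in for NodeJS.ReadableStream.
interface ReadableLike {
  pipe: (...args: unknown[]) => unknown;
  on: (...args: unknown[]) => unknown;
}

type SendableAudio = ReadableLike | Int16Array;

// Hypothetical guard: returns true only for the two shapes send() accepts.
function isSendableAudio(data: unknown): data is SendableAudio {
  if (data instanceof Int16Array) return true;
  // Duck-type a readable stream: it exposes pipe() and on().
  return (
    typeof data === "object" &&
    data !== null &&
    typeof (data as ReadableLike).pipe === "function" &&
    typeof (data as ReadableLike).on === "function"
  );
}
```

Failing fast with a guard like this gives a clearer error than letting a provider reject malformed audio mid-stream.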
Return Value
Returns a Promise<void> that resolves when the audio data has been accepted by the voice provider.
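Because the promise rejects if the provider does not accept the audio, callers may want to retry transient failures. The wrapper below is a sketch and not part of `@mastra/voice`; it takes any send-like async function (e.g. `() => voice.send(audioBuffer)`) and retries it a bounded number of times:

```typescript
// Hypothetical helper: retry a send-like operation a few times before
// giving up, rethrowing the last error if all attempts fail.
async function sendWithRetry(
  send: () => Promise<void>,
  attempts = 3,
): Promise<void> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await send();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```

Whether retrying is appropriate depends on the provider; a dropped WebSocket connection usually requires calling connect() again rather than resending.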
Notes
- This method is only implemented by real-time voice providers that support speech-to-speech capabilities
- If called on a voice provider that does not support it, it will log a warning and resolve immediately
- connect() must be called before send() to establish the WebSocket connection
- Audio format requirements depend on the specific voice provider
- For continuous conversation, you typically call send() to stream user audio, then answer() to trigger the AI response
- The provider typically emits "writing" events with transcribed text as it processes the audio
- When the AI responds, the provider emits "speaker" events with the audio response
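Since audio format requirements vary by provider, raw samples often need converting before being sent. The sketch below shows two common preprocessing steps under stated assumptions: converting Float32 samples in [-1, 1] (as produced by the Web Audio API) to 16-bit PCM, and splitting the result into fixed-size frames. The frame size is illustrative; check your provider's documentation for its actual format:

```typescript
// Convert Float32 samples in [-1, 1] to 16-bit signed PCM,
// clamping out-of-range values to avoid integer wraparound.
function floatTo16BitPCM(samples: Float32Array): Int16Array {
  const pcm = new Int16Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    const s = Math.max(-1, Math.min(1, samples[i]));
    pcm[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return pcm;
}

// Split a PCM buffer into frames of at most frameSize samples,
// e.g. for providers that expect audio in fixed-size chunks.
function chunkPCM(pcm: Int16Array, frameSize: number): Int16Array[] {
  const frames: Int16Array[] = [];
  for (let offset = 0; offset < pcm.length; offset += frameSize) {
    frames.push(pcm.subarray(offset, Math.min(offset + frameSize, pcm.length)));
  }
  return frames;
}
```

Each resulting frame is an `Int16Array` and could be passed to send() directly, subject to the provider's sample rate and channel expectations.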