voice.answer()

The answer() method is used in real-time voice providers to trigger the AI to generate a response. It is particularly useful in speech-to-speech conversations where you need to explicitly signal the AI to respond after receiving user input.

Usage Example

import { OpenAIRealtimeVoice } from "@mastra/voice-openai-realtime";
import { getMicrophoneStream } from "@mastra/node-audio";
import Speaker from "@mastra/node-speaker";

const speaker = new Speaker({
  sampleRate: 24100, // Audio sample rate in Hz - standard for high-quality audio on MacBook Pro
  channels: 1, // Mono audio output (as opposed to stereo which would be 2)
  bitDepth: 16, // Bit depth for audio quality - CD quality standard (16-bit resolution)
});

// Initialize a real-time voice provider
const voice = new OpenAIRealtimeVoice({
  realtimeConfig: {
    model: "gpt-5.1",
    apiKey: process.env.OPENAI_API_KEY,
  },
  speaker: "alloy", // Default voice
});

// Connect to the real-time service
await voice.connect();

// Register event listener for responses
voice.on("speaker", (stream) => {
  // Handle audio response
  stream.pipe(speaker);
});

// Send user audio input
const microphoneStream = getMicrophoneStream();
await voice.send(microphoneStream);

// Trigger the AI to respond
await voice.answer();

Parameters


options?: Record<string, unknown>
  Provider-specific options for the response.
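Because the options object is provider-specific, the exact keys depend on the underlying service. The sketch below shows both call styles; the option key in the second call is purely illustrative, not a documented contract, so check your provider's documentation for the keys it actually accepts.

// Trigger a response with default behavior
await voice.answer();

// Hypothetical: pass provider-specific options through to the underlying service.
// The "instructions" key here is illustrative only.
await voice.answer({
  instructions: "Answer briefly and in a friendly tone.",
});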

Return Value

Returns a Promise<void> that resolves when the response has been triggered.

Notes

  • This method is only implemented by real-time voice providers that support speech-to-speech capabilities.
  • If called on a voice provider that does not support this functionality, it will log a warning and resolve immediately.
  • The response audio will typically be emitted through the "speaking" event rather than returned directly.
  • For providers that support it, you can use this method to send a specific response instead of having the AI generate one.
  • This method is commonly used together with send() to create conversational flows (see the sketch below).
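
The combination with send() can be sketched as a simple turn-taking loop. This is illustrative only: it reuses the connected voice instance and the getMicrophoneStream helper from the usage example above, and it glosses over how turn boundaries (when the user has finished speaking) would be detected in a real application.

// Illustrative turn-taking loop built on the instance from the usage example.
async function conversationTurn() {
  // Capture the user's audio for this turn
  const userAudio = getMicrophoneStream();

  // Stream the user's input to the provider
  await voice.send(userAudio);

  // Explicitly ask the provider to generate a response; the audio arrives
  // through the event listener registered earlier, not the return value
  await voice.answer();
}

// Run a few turns back to back; a real application would trigger each turn
// from voice activity detection or a push-to-talk control instead
for (let turn = 0; turn < 3; turn++) {
  await conversationTurn();
}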