
Meetings

Lira joins Google Meet calls as a real participant, listens in real time, and responds with natural speech when addressed by name.

How It Works

  1. Deploy — POST /lira/v1/bot/deploy with a meeting URL
  2. Join — Playwright bot navigates to Google Meet, joins the call
  3. Listen — Audio captured via WebRTC getUserMedia override, streamed to Nova Sonic
  4. Identify — Deepgram provides real-time speaker diarization with name attribution
  5. Respond — Nova Sonic generates speech responses, injected into meeting audio
  6. Summarize — GPT-4o-mini generates meeting summaries after the call
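The deploy step above can be sketched as a minimal request builder. The body field name `meeting_url`, the auth header, and the `buildDeployRequest` helper are assumptions for illustration; only the method and path come from this page.

```typescript
// Sketch of step 1: build the POST request that deploys the bot.
// The `meeting_url` body field and header set are assumed, not documented here.
interface DeployRequest {
  method: "POST";
  path: string;
  headers: Record<string, string>;
  body: string;
}

function buildDeployRequest(meetingUrl: string, jwt: string): DeployRequest {
  return {
    method: "POST",
    path: "/lira/v1/bot/deploy",
    headers: {
      Authorization: `Bearer ${jwt}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ meeting_url: meetingUrl }),
  };
}

// Example: deploy the bot to a Google Meet call
const req = buildDeployRequest("https://meet.google.com/abc-defg-hij", "<jwt>");
```

From there, steps 2–6 run server-side: the Playwright bot joins, audio streams to Nova Sonic, and the summary is generated after the call ends.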

Key Behaviors

| Behavior | Description |
| --- | --- |
| Wake word | Only responds when addressed by name (configurable); 4-layer detection |
| Speaker ID | Each transcript line is tagged with the speaker's real name |
| Physical mic | Voice commands click the actual mic button in Google Meet |
| Barge-in | Stops speaking immediately when interrupted |
| Auto-leave | Leaves after 45 seconds if alone in the meeting |
| Echo gate | Prevents Lira from hearing her own output |
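The wake-word behavior above can be illustrated with a much-simplified gate. The real system uses four detection layers; this single substring check, and the `shouldRespond` helper name, are illustrative only.

```typescript
// One-layer illustration of the wake-word gate: respond only when the
// bot's name appears in the transcript line. The production system
// layers four detectors; this sketch shows just the basic idea.
function shouldRespond(
  transcriptLine: string,
  wakeWord: string,
  wakeWordEnabled: boolean
): boolean {
  if (!wakeWordEnabled) return true; // gate off: respond to everything
  return transcriptLine.toLowerCase().includes(wakeWord.toLowerCase());
}
```

With the gate enabled, "Lira, summarize that" passes and "let's move on" does not; disabling the wake word (see Settings below) makes the bot respond to every line.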

Settings

Update AI settings mid-meeting via:

PUT /lira/v1/meetings/:id/settings
Authorization: Bearer <jwt>

{
  "personality": "challenger",
  "wake_word_enabled": false,
  "summary_mode": "detailed"
}
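A client-side sketch of the same call: the `buildSettingsRequest` helper and the partial-update semantics are assumptions; the endpoint, header, and field names come from the request above.

```typescript
// Sketch: build the PUT request shown above for a given meeting id.
// Partial updates (sending only changed fields) are assumed to be allowed.
type Settings = Partial<{
  personality: string;
  wake_word_enabled: boolean;
  summary_mode: "short" | "detailed";
}>;

function buildSettingsRequest(meetingId: string, settings: Settings, jwt: string) {
  return {
    method: "PUT" as const,
    url: `/lira/v1/meetings/${encodeURIComponent(meetingId)}/settings`,
    headers: {
      Authorization: `Bearer ${jwt}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(settings),
  };
}
```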

Summaries

Two summary modes:

| Mode | Output |
| --- | --- |
| short | 4–6 sentence overview |
| detailed | 400–700 word breakdown with per-person contributions |

GET /lira/v1/meetings/:id/summary?mode=detailed
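A small helper for building that URL; the `summaryUrl` function name is illustrative, and no default mode is assumed.

```typescript
// Sketch: build the summary endpoint URL for a given meeting and mode.
type SummaryMode = "short" | "detailed";

function summaryUrl(meetingId: string, mode: SummaryMode): string {
  return `/lira/v1/meetings/${encodeURIComponent(meetingId)}/summary?mode=${mode}`;
}
```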

Task Extraction

Lira automatically extracts action items from the meeting and creates tasks. Tasks can be synced to Linear, GitHub, Slack, or email.
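A hypothetical shape for an extracted task, for orientation only; the Tasks API defines the real schema, and every field name here is an assumption. The sync targets mirror the integrations listed above.

```typescript
// Hypothetical extracted-task shape. The Tasks API documents the real
// schema; these field names are illustrative assumptions.
interface ExtractedTask {
  title: string;
  assignee: string | null; // from speaker diarization, when known
  sync_target: "linear" | "github" | "slack" | "email" | null;
}

function toTask(title: string, assignee: string | null = null): ExtractedTask {
  return { title, assignee, sync_target: null };
}
```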

See Tasks API for details.