Voice AI Connect Sydney - Event Script

Duration: 60 minutes (12:15 PM - 1:15 PM)
Audience: ~40 tech executives (CTOs, CEOs)
Format: Peer forum, not a pitch

1. Welcome & Framing (3 min)

Alex or Mario:

"Thanks for coming. Quick logistics: this isn't a sales pitch. We're not demoing products or pushing you to sign anything. This is the first Voice AI Connect Sydney — a monthly forum where we talk honestly about what works and what breaks when you deploy voice AI in production.

We drew a good crowd because you're all dealing with the same problem: AI models are commoditized, but getting them to work reliably over a phone call is still black magic. Today we're unpacking why that is, what the real bottlenecks are, and how teams are solving them.

Format: we'll do some guided discussion, then I'll show you what we're seeing work in practice. Interrupt anytime. If you've shipped voice AI and hit these problems, share what you learned. If you're evaluating it now, ask the hard questions. Let's make this useful."

2. Icebreaker Poll (5 min)

Run live show-of-hands (no digital poll - keeps energy up)

Round 1: General AI Adoption

"How many of you have tried and/or are using at least one AI vendor? LLMs, turnkey offerings, voice AI platforms — all count."

(Most hands go up)

"Okay, keep your hands up if you've tried 2 vendors."

(Some hands drop)

"How about 3?" "4?" "5 or above?"

Callout: "Alright, so we've got some serious evaluators in the room. That's the first signal: everyone's experimenting, but a lot of vendor fatigue is already showing."

Round 2: AI in Production

"New question: raise your hand if LLM-generated output has gone straight to production at your company — no human review in between."

(Smaller group)

"Keep your hand up if you're comfortable with that."

(Laughs, some hands drop)

Callout: "Yeah, that's the tension. AI is powerful, but trust is still the blocker."

Round 3: AI Dev Tools

"By show of hands, how many of you are using a tool like Claude Code or Cursor or Copilot regularly?"

(Moderate number of hands)

Callout: "Good mix. So the tech sophistication is here — this isn't a beginner room."

Round 4: Voice AI Evaluation

"Now let's narrow to voice AI specifically. Raise your hand if you've evaluated a vendor in the last year for a voice AI pilot or PoC."

(Hands go up — probably 40-60% of room)

Progress through: 2 vendors → 3 vendors → More than 3

Callout: "Okay, so we've got some battle scars in the room. The fact that people are evaluating 3+ vendors tells me something: nobody's nailed this yet. You're not finding what you need on the first try."

Round 5: Voice AI in Production

"Alright, final question: raise your hand if your voice AI application is actually in production right now."

(Smaller subset — expect 20-30%)

"Keep your hand raised if it's handling thousands of calls a month."

(Very small group — maybe 3-5 people)

Callout: "There we go. That's the gap. Lots of experimentation, lots of pilots, but getting to production scale is where it gets hard. That's what we're here to talk about."

Quick Industry Check

"And just so I know who's in the room — quick shout-outs by industry. Healthcare? Fintech? Contact centers / customer support? E-commerce / logistics?"

(Note responses)

"Okay, good mix. Let's dig in."

3. Guided Discussion (30 min) — THE CORE

Split into 3 rounds of ~10 minutes each:

3.1 Stories: Success vs Failure (10 min)

Prompt: "Who's got a war story? Something that worked beautifully — or spectacularly didn't."

Facilitate:

If the room is quiet, seed with:

"I'll go first. We had a customer running voice AI for appointment confirmations. Worked perfectly in testing — 10 concurrent calls, sub-300ms latency, 95% accuracy. They launched to production: 500 calls/day. Within 2 hours, their media server crashed because they'd hit the rate limit on their STT provider, and the retry logic created a cascade failure. They didn't know until customers started complaining about dropped calls.

The AI model wasn't the problem. The infrastructure underneath it was. Anyone else hit something like this?"
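
The cascade in that story (synchronized retries amplifying a rate-limit hit) is usually tamed with jittered exponential backoff. A minimal Python sketch; the function names and the RuntimeError-as-429 stand-in are illustrative, not any vendor's API:

```python
import random
import time

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Full-jitter backoff: a random wait in [0, min(cap, base * 2**attempt)]
    so throttled callers don't re-synchronize into the same retry spike."""
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))

def call_with_retries(fn, max_attempts: int = 5, base: float = 0.5, cap: float = 30.0):
    """Call fn(), retrying on RuntimeError (a stand-in for a 429/rate-limit
    response) with jittered backoff instead of hammering the provider."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure, don't loop forever
            time.sleep(backoff_delay(attempt, base, cap))
```

The jitter is the part teams skip: without it, every blocked call retries at the same instant and the second wave is bigger than the first.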

3.2 Challenges: What Nobody Warns You About (10 min)

Prompt: "What's the hardest part that nobody warns you about?"

Guide toward: Latency, audio quality, vendor stitching, compliance, scaling, cost overruns

Drop this stat to spark discussion:

"67% of CIOs at the ADAPT conference yesterday said conversational AI hasn't delivered on its promise. Why do we think that is?"

Seed questions if discussion stalls:

  1. "How many of you budgeted $X per minute and production cost 3x that? What broke the cost model?"
  2. "Anyone dealt with the interruption problem? Customer talks over the AI and the whole conversation derails?"
  3. "Has anyone shipped something that worked in testing with 10 calls, then fell apart at 100? What broke first?"

When someone mentions latency:

"Right — and here's the thing nobody tells you: the LLM isn't usually the bottleneck. It's the carrier routing and audio transcoding. You've got 6-8 network hops before the LLM even sees the text."

When someone mentions vendor stitching:

"Yeah, and when a call fails at 2AM, whose fault is it? Twilio blames OpenAI, OpenAI blames ElevenLabs, and you're in the middle with an angry customer."

3.3 Insights: Wisdom Gained (10 min)

Prompt: "If you could go back to day one and redesign your voice AI stack, what would you change?"

This naturally surfaces:

Common regrets:

  1. "We should've load-tested at scale before launch"
  2. "We should've owned more of the stack"
  3. "We should've started with compliance in mind"
  4. "We underestimated audio quality impact"

Connect the dots:

"So I'm hearing three themes: latency, vendor fragmentation, and compliance. Those aren't separate problems — they're symptoms of the same root cause: most voice AI stacks weren't designed for production from day one. They were cobbled together from dev tools that work great at 10 calls/day but fall apart at 10,000."

4. Transition: Why Infrastructure Matters (10 min)

Start:

"So here's what I'm hearing from this room — and it's the same thing I hear from teams in NZ, Dubai, across APAC: the AI model isn't the problem. It's everything underneath it."

Show or draw this diagram:

TYPICAL MULTI-VENDOR STACK:

Customer → Carrier A → SIP Provider B → Media Server C 
  → STT (Deepgram) → LLM (OpenAI) → TTS (ElevenLabs) 
  → Media Server C → SIP Provider B → Carrier A → Customer

= 8+ network hops
= 600-900ms round-trip latency
= 4 vendors to coordinate when things break

SINGLE-STACK APPROACH:

Customer → Telnyx (carrier + media + STT + TTS + LLM) → Customer

= 2 network hops
= 180-300ms round-trip latency
= 1 vendor, 1 SLA, 1 throat to choke

Key talking point:

"Most teams don't realize they're paying a 400-600ms 'vendor tax' just in transport overhead. That's before the LLM even thinks. For conversational AI, that's the difference between 'this feels natural' and 'this feels broken.'"
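
The vendor-tax arithmetic can be sketched as a per-hop budget. The figures below are illustrative assumptions chosen to land inside the ranges on the diagram, not measurements of any vendor:

```python
# Rough per-hop latency budget (ms); illustrative numbers, not benchmarks.
MULTI_VENDOR_HOPS = {
    "carrier_in": 80, "sip_provider_in": 60, "media_server_in": 40,
    "stt": 120, "llm_first_token": 150, "tts": 100,
    "media_server_out": 40, "sip_provider_out": 60, "carrier_out": 80,
}

SINGLE_STACK_HOPS = {
    "carrier_in": 40,   # owned network: no wholesale handoff
    "inference": 150,   # STT + LLM first token + TTS, co-located
    "carrier_out": 40,
}

def round_trip_ms(hops: dict) -> int:
    """Sum a latency budget; a back-of-envelope way to compare stacks."""
    return sum(hops.values())
```

Note that the AI inference cost is similar in both budgets; the difference is almost entirely transport and handoffs, which is the point of the talking track above.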

Cover these four bottlenecks briefly (2-3 min each):

4.1 Carrier Routing & Latency

"Most voice providers don't own the carrier network. They lease routes from wholesale carriers. That means unpredictable latency — sometimes 200ms, sometimes 800ms — depending on time of day, carrier congestion, and routing path.

For batch calls (appointment reminders), who cares? For conversational AI, 800ms round-trip feels broken."

What matters:

  1. Whether your provider owns its routes or leases them from wholesale carriers
  2. Latency consistency across time of day and routing path, not just the median

4.2 Audio Quality & Codec Wars

"AI models are trained on clean audio. Real phone calls are not clean. You've got codec compression, packet loss, background noise, accents, crosstalk.

If your STT model can't handle Australian accents or someone calling from a noisy cafe, your AI agent sounds dumb even if the LLM is GPT-4."

Gotcha: "WebRTC demos use Opus (HD audio). Production phone calls use G.711 (compressed). Your demo sounds great because it's not real telephony. Ship it to production and you'll wonder why accuracy tanked."
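
The Opus-vs-G.711 gap is easy to demonstrate. G.711 mu-law squeezes each sample into 8 bits with a logarithmic companding curve; the sketch below implements the continuous form of that curve in pure Python. Real G.711 uses a segmented approximation of the curve, so treat this as illustrative:

```python
import math

MU = 255.0  # mu-law parameter used by the North American G.711 variant

def mulaw_encode(x: float) -> int:
    """Compress a sample in [-1.0, 1.0] to an unsigned 8-bit code."""
    x = max(-1.0, min(1.0, x))
    y = math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)
    return int(round((y + 1.0) / 2.0 * 255.0))

def mulaw_decode(code: int) -> float:
    """Expand an 8-bit code back to an approximate float sample."""
    y = code / 255.0 * 2.0 - 1.0
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)
```

Round-tripping any sample through encode/decode is lossy; that quantization noise, plus the 8 kHz sample rate, is what your STT model hears on every production call but never hears in an Opus-based WebRTC demo.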

4.3 Scaling from 10 to 10,000 Calls

"At 10 concurrent calls, everything works. At 100, you hit rate limits. At 1,000, your media server crashes. At 10,000, your LLM provider throttles you and your TTS queue backs up 5 seconds.

Most teams don't discover this until launch day."
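
One common mitigation is an admission gate that caps in-flight calls below your downstream rate limits, so overload degrades into queueing instead of cascading failures. A hedged asyncio sketch; the class and function names are illustrative:

```python
import asyncio

class ConcurrencyGate:
    """Cap in-flight work so downstream STT/TTS/LLM rate limits aren't tripped."""
    def __init__(self, max_inflight: int):
        self._sem = asyncio.Semaphore(max_inflight)
        self._inflight = 0
        self.peak = 0  # highest concurrency actually observed

    async def run(self, coro_fn, *args):
        async with self._sem:          # excess calls wait here instead of failing
            self._inflight += 1
            self.peak = max(self.peak, self._inflight)
            try:
                return await coro_fn(*args)
            finally:
                self._inflight -= 1

async def fake_call(i):                # stand-in for one voice-AI call leg
    await asyncio.sleep(0.01)
    return i

async def main():
    gate = ConcurrencyGate(max_inflight=5)
    results = await asyncio.gather(*(gate.run(fake_call, i) for i in range(20)))
    return gate.peak, results
```

Load-testing against the gate before launch tells you where the real ceiling is, which is the discovery most teams postpone until launch day.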

What breaks:

  1. STT/LLM rate limits around 100 concurrent calls
  2. Media server capacity around 1,000
  3. LLM throttling and TTS queue backpressure at 10,000

4.4 Multi-Vendor Complexity

"You're stitching together 5-7 vendors: telephony, STT, TTS, LLM, media server, monitoring, billing. When a call fails, whose fault is it? Each vendor points at the other. You're spending 40% of eng time debugging vendor integrations instead of improving your product."

What matters:

  1. A single point of accountability when a call fails
  2. Getting engineering time back from debugging vendor integrations

Close this section:

"So what does 'enterprise-grade voice AI' actually mean? Not marketing fluff — what should you demand from your stack?"

Requirement                      Why It Matters
Sub-300ms end-to-end latency     Anything above 500ms feels broken
99.99%+ uptime SLA               One hour of downtime = customer trust lost
Owned carrier network            Predictable routing, no wholesale middlemen
Single-vendor stack              Fewer handoffs, unified support
Real-time observability          Debug calls mid-conversation, not post-mortem
Compliance baked in              SOC2, HIPAA, PCI-DSS if needed

"If you're building voice AI in-house, this is your RFP checklist. If you're buying, don't let vendors hand-wave these. Make them prove it."

5. Demo + Show & Tell (10 min)

Natural trigger: Someone asks "How do YOU solve this?" or room energy pulls toward wanting to see proof.

Your response:

"Fair question. I can talk about it in the abstract, or I can just show you. Give me 90 seconds."

[Execute your prepared demo here]

Key callouts during/after demo:

  1. Response time: "Notice that felt natural? That's 180-250ms round-trip from Sydney. No offshore media servers, no vendor handoffs."
  2. Audio quality: "You heard it clearly on speaker in a noisy room. That's because we control the full stack — carrier to AI inference — so codecs don't degrade."
  3. Infrastructure view: "Here's the call log. See this latency graph? 220ms end-to-end. If I'd built this on [Competitor], it'd be 700-900ms because of carrier routing and vendor handoffs."

Close the demo:

"So that's the stack. Questions?"

Let them ask, then pivot back:

"Look, the point isn't 'use Telnyx.' The point is: if you're stitching together 4-5 vendors, you're paying a latency tax and a complexity tax. Whether you solve that with us or someone else, solve it before you scale to production. Otherwise you'll hit the same walls everyone in this room has hit."

6. Close (2 min)

Alex or Mario:

"We're doing this monthly. Next month's topic: [e.g., 'Interruption Handling in Real Conversations']. If there's something specific you want us to cover, let me know.

No pressure to use Telnyx. But if you want to dig deeper into what we discussed — latency benchmarks, architecture reviews, whatever — grab us after or shoot us an email. Otherwise, see you next month."

Hand out or email afterward:

Tone & Delivery Rules

✅ Do:

  1. Facilitate more than you talk; let the room carry the war stories
  2. Admit what's still hard, including for Telnyx
  3. Wait for the room to ask before showing the demo

❌ Don't:

  1. Pitch product or push sign-ups
  2. Name-and-shame specific competitors
  3. Brush past failures people share; they're the content

This positions you as: Experts who've seen the hard problems, honest brokers (not just selling), community builders (not just vendors).