[vtp-dev] Trying to understand how VTP fits in to the VoiceXML architecture...

Hi,

I'm a newbie to the whole VoiceXML thing, and it's been nearly impossible to find good tutorials or documentation to help me understand how this all works.

Am I correct in my understanding that simply running OpenVXML in a web server will not actually let me play voice files, etc.? That I actually need a voice browser, for example OpenVXI, to submit the VoiceXML to?

So, it would work like this:

1. User -[initiates contact]-> Voice Browser (e.g. OpenVXI)
2. Voice Browser (e.g. OpenVXI) -[sends contact to]-> OpenVXML Web Server
3. OpenVXML WS -[sends result to]-> Voice Browser (e.g. OpenVXI)
4. Voice Browser (e.g. OpenVXI) -[returns result to]-> User

Is that correct?
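If that picture is right, then I imagine step 3 just means the web server returns an ordinary VoiceXML document over HTTP for the voice browser to interpret. Here's a minimal sketch of what I'd expect such a response to look like (the form id and prompt text are made up by me, not taken from OpenVXML):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
  <form id="confirm">
    <block>
      <!-- The voice browser renders this prompt via TTS
           or a pre-recorded audio file -->
      <prompt>Thank you, your request has been received.</prompt>
    </block>
  </form>
</vxml>
```

So the web server never touches audio itself; it only generates markup like this, and the voice browser does the actual speech work.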

Also, does anyone have any guides or sample code for getting started with OpenVXI? I'd like to run a small test app on my desktop where I can speak into a microphone, have it validate my voice input, and play confirmation audio, all using VoiceXML. What would I need to get this running?

Thank you!
