<pre>Network Working Group D. Oran
Request for Comments: 4313 Cisco Systems, Inc.
Category: Informational December 2005
<span class="h1">Requirements for Distributed Control of</span>
<span class="h1">Automatic Speech Recognition (ASR),</span>
<span class="h1">Speaker Identification/Speaker Verification (SI/SV), and</span>
<span class="h1">Text-to-Speech (TTS) Resources</span>
Status of this Memo
This memo provides information for the Internet community. It does
not specify an Internet standard of any kind. Distribution of this
memo is unlimited.
Copyright Notice
Copyright (C) The Internet Society (2005).
Abstract
This document outlines the needs and requirements for a protocol to
control distributed speech processing of audio streams. By speech
processing, this document specifically means automatic speech
recognition (ASR), speaker recognition -- which includes both speaker
identification (SI) and speaker verification (SV) -- and
text-to-speech (TTS).  Other IETF protocols, such as the Session
Initiation Protocol (SIP) and the Real Time Streaming Protocol
(RTSP), address rendezvous and control for
generalized media streams. However, speech processing presents
additional requirements that none of the extant IETF protocols
address.
Table of Contents
<a href="#section-1">1</a>. Introduction ....................................................<a href="#page-3">3</a>
<a href="#section-1.1">1.1</a>. Document Conventions .......................................<a href="#page-3">3</a>
<a href="#section-2">2</a>. SPEECHSC Framework ..............................................<a href="#page-4">4</a>
<a href="#section-2.1">2.1</a>. TTS Example ................................................<a href="#page-5">5</a>
<a href="#section-2.2">2.2</a>. Automatic Speech Recognition Example .......................<a href="#page-6">6</a>
<a href="#section-2.3">2.3</a>. Speaker Identification example .............................<a href="#page-6">6</a>
<a href="#section-3">3</a>. General Requirements ............................................<a href="#page-7">7</a>
<a href="#section-3.1">3.1</a>. Reuse Existing Protocols ...................................<a href="#page-7">7</a>
<a href="#section-3.2">3.2</a>. Maintain Existing Protocol Integrity .......................<a href="#page-7">7</a>
<a href="#section-3.3">3.3</a>. Avoid Duplicating Existing Protocols .......................<a href="#page-7">7</a>
<a href="#section-3.4">3.4</a>. Efficiency .................................................<a href="#page-8">8</a>
<a href="#section-3.5">3.5</a>. Invocation of Services .....................................<a href="#page-8">8</a>
<a href="#section-3.6">3.6</a>. Location and Load Balancing ................................<a href="#page-8">8</a>
<span class="grey">Oran Informational [Page 1]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-2" ></span>
<span class="grey"><a href="./rfc4313">RFC 4313</a> Speech Services Control Requirements December 2005</span>
<a href="#section-3.7">3.7</a>. Multiple Services ..........................................<a href="#page-8">8</a>
<a href="#section-3.8">3.8</a>. Multiple Media Sessions ....................................<a href="#page-8">8</a>
<a href="#section-3.9">3.9</a>. Users with Disabilities ....................................<a href="#page-9">9</a>
3.10. Identification of Process That Produced Media or
Control Output ............................................<a href="#page-9">9</a>
<a href="#section-4">4</a>. TTS Requirements ................................................<a href="#page-9">9</a>
<a href="#section-4.1">4.1</a>. Requesting Text Playback ...................................<a href="#page-9">9</a>
<a href="#section-4.2">4.2</a>. Text Formats ...............................................<a href="#page-9">9</a>
<a href="#section-4.2.1">4.2.1</a>. Plain Text ..........................................<a href="#page-9">9</a>
<a href="#section-4.2.2">4.2.2</a>. SSML ................................................<a href="#page-9">9</a>
<a href="#section-4.2.3">4.2.3</a>. Text in Control Channel ............................<a href="#page-10">10</a>
<a href="#section-4.2.4">4.2.4</a>. Document Type Indication ...........................<a href="#page-10">10</a>
<a href="#section-4.3">4.3</a>. Control Channel ...........................................<a href="#page-10">10</a>
<a href="#section-4.4">4.4</a>. Media Origination/Termination by Control Elements .........<a href="#page-10">10</a>
<a href="#section-4.5">4.5</a>. Playback Controls .........................................<a href="#page-10">10</a>
<a href="#section-4.6">4.6</a>. Session Parameters ........................................<a href="#page-11">11</a>
<a href="#section-4.7">4.7</a>. Speech Markers ............................................<a href="#page-11">11</a>
<a href="#section-5">5</a>. ASR Requirements ...............................................<a href="#page-11">11</a>
<a href="#section-5.1">5.1</a>. Requesting Automatic Speech Recognition ...................<a href="#page-11">11</a>
<a href="#section-5.2">5.2</a>. XML .......................................................<a href="#page-11">11</a>
<a href="#section-5.3">5.3</a>. Grammar Requirements ......................................<a href="#page-12">12</a>
<a href="#section-5.3.1">5.3.1</a>. Grammar Specification ..............................<a href="#page-12">12</a>
<a href="#section-5.3.2">5.3.2</a>. Explicit Indication of Grammar Format ..............<a href="#page-12">12</a>
<a href="#section-5.3.3">5.3.3</a>. Grammar Sharing ....................................<a href="#page-12">12</a>
<a href="#section-5.4">5.4</a>. Session Parameters ........................................<a href="#page-12">12</a>
<a href="#section-5.5">5.5</a>. Input Capture .............................................<a href="#page-12">12</a>
<a href="#section-6">6</a>. Speaker Identification and Verification Requirements ...........<a href="#page-13">13</a>
<a href="#section-6.1">6.1</a>. Requesting SI/SV ..........................................<a href="#page-13">13</a>
<a href="#section-6.2">6.2</a>. Identifiers for SI/SV .....................................<a href="#page-13">13</a>
<a href="#section-6.3">6.3</a>. State for Multiple Utterances .............................<a href="#page-13">13</a>
<a href="#section-6.4">6.4</a>. Input Capture .............................................<a href="#page-13">13</a>
<a href="#section-6.5">6.5</a>. SI/SV Functional Extensibility ............................<a href="#page-13">13</a>
<a href="#section-7">7</a>. Duplexing and Parallel Operation Requirements ..................<a href="#page-13">13</a>
<a href="#section-7.1">7.1</a>. Full Duplex Operation .....................................<a href="#page-14">14</a>
<a href="#section-7.2">7.2</a>. Multiple Services in Parallel .............................<a href="#page-14">14</a>
<a href="#section-7.3">7.3</a>. Combination of Services ...................................<a href="#page-14">14</a>
<a href="#section-8">8</a>. Additional Considerations (Non-Normative) ......................<a href="#page-14">14</a>
<a href="#section-9">9</a>. Security Considerations ........................................<a href="#page-15">15</a>
<a href="#section-9.1">9.1</a>. SPEECHSC Protocol Security ................................<a href="#page-15">15</a>
<a href="#section-9.2">9.2</a>. Client and Server Implementation and Deployment ...........<a href="#page-16">16</a>
<a href="#section-9.3">9.3</a>. Use of SPEECHSC for Security Functions ....................<a href="#page-16">16</a>
<a href="#section-10">10</a>. Acknowledgements ..............................................<a href="#page-17">17</a>
<a href="#section-11">11</a>. References ....................................................<a href="#page-18">18</a>
<a href="#section-11.1">11.1</a>. Normative References .....................................<a href="#page-18">18</a>
<a href="#section-11.2">11.2</a>. Informative References ...................................<a href="#page-18">18</a>
<span class="grey">Oran Informational [Page 2]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-3" ></span>
<span class="grey"><a href="./rfc4313">RFC 4313</a> Speech Services Control Requirements December 2005</span>
<span class="h2"><a class="selflink" id="section-1" href="#section-1">1</a>. Introduction</span>
There are multiple IETF protocols for establishment and termination
of media sessions (SIP [<a href="#ref-6" title=""SIP: Session Initiation Protocol"">6</a>]), low-level media control (Media Gateway
Control Protocol (MGCP) [<a href="#ref-7" title=""Media Gateway Control Protocol (MGCP) Version 1.0"">7</a>] and Media Gateway Control (MEGACO)
[<a href="#ref-8" title=""Gateway Control Protocol Version 1"">8</a>]), and media record and playback (RTSP [<a href="#ref-9" title=""Real Time Streaming Protocol (RTSP)"">9</a>]). This document
focuses on requirements for one or more protocols to support the
control of network elements that perform Automated Speech Recognition
(ASR), speaker identification or verification (SI/SV), and rendering
text into audio, also known as Text-to-Speech (TTS). Many multimedia
applications can benefit from having automatic speech recognition
(ASR) and text-to-speech (TTS) processing available as a distributed,
network resource. This requirements document limits its focus to the
distributed control of ASR, SI/SV, and TTS servers.
There is a broad range of systems that can benefit from a unified
approach to control of TTS, ASR, and SI/SV. These include
environments such as Voice over IP (VoIP) gateways to the Public
Switched Telephone Network (PSTN), IP telephones, media servers, and
wireless mobile devices that obtain speech services via servers on
the network.
To date, there are a number of proprietary ASR and TTS APIs, as well
as two IETF documents that address this problem [<a href="#ref-13" title=""Service Location Protocol, Version 2"">13</a>], [<a href="#ref-14" title=""A DNS RR for specifying the location of services (DNS SRV)"">14</a>]. However,
there are serious deficiencies in the existing documents.  In
particular, they mix the semantics of existing protocols, yet are
close enough to those protocols to be confusing to the
implementer.
This document sets forth requirements for protocols to support
distributed speech processing of audio streams. For simplicity, and
to remove confusion with existing protocol proposals, this document
presents the requirements as being for a "framework" that addresses
the distributed control of speech resources. It refers to such a
framework as "SPEECHSC", for Speech Services Control.
<span class="h3"><a class="selflink" id="section-1.1" href="#section-1.1">1.1</a>. Document Conventions</span>
In this document, the key words "MUST", "MUST NOT", "REQUIRED",
"SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY",
and "OPTIONAL" are to be interpreted as described in <a href="./rfc2119">RFC 2119</a> [<a href="#ref-3" title=""Key words for use in RFCs to Indicate Requirement Levels"">3</a>].
<span class="grey">Oran Informational [Page 3]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-4" ></span>
<span class="grey"><a href="./rfc4313">RFC 4313</a> Speech Services Control Requirements December 2005</span>
<span class="h2"><a class="selflink" id="section-2" href="#section-2">2</a>. SPEECHSC Framework</span>
Figure 1 below shows the SPEECHSC framework for speech processing.
              +-------------+
              | Application |
              |   Server    |\
              +-------------+ \ SPEECHSC
       SIP, VoiceXML, /        \
           etc.      /          \
     +------------+ /            \    +-------------+
     |   Media    |/ SPEECHSC     \---| ASR, SI/SV, |
     | Processing |-------------------| and/or TTS  |
 RTP |   Entity   |        RTP        |   Server    |
=====|            |===================|             |
     +------------+                   +-------------+
Figure 1: SPEECHSC Framework
The "Media Processing Entity" is a network element that processes
media. It may be a pure media handler, or it may also have an
associated SIP user agent, VoiceXML browser, or other control entity.
The "ASR, SI/SV, and/or TTS Server" is a network element that
performs the back-end speech processing. It may generate an RTP
stream as output based on text input (TTS) or return recognition
results in response to an RTP stream as input (ASR, SI/SV). The
"Application Server" is a network element that instructs the Media
Processing Entity on what transformations to make to the media
stream. Those instructions may be established via a session protocol
such as SIP, or provided via a client/server exchange such as
VoiceXML. The framework allows either the Media Processing Entity or
the Application Server to control the ASR or TTS Server using
SPEECHSC as a control protocol, which accounts for the SPEECHSC
protocol appearing twice in the diagram.
Physical embodiments of the entities can reside in one physical
instance per entity, or some combination of entities. For example, a
VoiceXML [<a href="#ref-11" title=""Voice Extensible Markup Language (VoiceXML) Version 2.0"">11</a>] gateway may combine the ASR and TTS functions on the
same platform as the Media Processing Entity. Note that VoiceXML
gateways themselves are outside the scope of this protocol.
Likewise, one can combine the Application Server and Media Processing
Entity, as would be the case in an interactive voice response (IVR)
platform.
One can also decompose the Media Processing Entity into an entity
that controls media endpoints and entities that process media
directly. Such would be the case with a decomposed gateway using
MGCP or MEGACO. However, this decomposition is again orthogonal to
<span class="grey">Oran Informational [Page 4]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-5" ></span>
<span class="grey"><a href="./rfc4313">RFC 4313</a> Speech Services Control Requirements December 2005</span>
the scope of SPEECHSC. The following subsections provide a number of
example use cases of the SPEECHSC, one each for TTS, ASR, and SI/SV.
They are intended to be illustrative only, and not to imply any
restriction on the scope of the framework or to limit the
decomposition or configuration to that shown in the example.
<span class="h3"><a class="selflink" id="section-2.1" href="#section-2.1">2.1</a>. TTS Example</span>
This example illustrates a simple usage of SPEECHSC to provide a
Text-to-Speech service for playing announcements to a user on a phone
with no display for textual error messages. The example scenario is
shown below in Figure 2. In the figure, the VoIP gateway acts as
both the Media Processing Entity and the Application Server of the
SPEECHSC framework in Figure 1.
                                    +---------+
                                   _|   SIP   |
                                 _/ |  Server |
           +-----------+  SIP _/    +---------+
           |           |    _/
+-------+  |   VoIP    |  _/
| POTS  |__|  Gateway  |_/    RTP       +----------+
| Phone |  |  (SIP UA) |================| SPEECHSC |
+-------+  |           |\__             |   TTS    |
           +-----------+   \__          |  Server  |
                              \__       |          |
                     SPEECHSC    \______|          |
                                        +----------+
Figure 2: Text-to-Speech Example of SPEECHSC
The Plain Old Telephone Service (POTS) phone on the left attempts to
make a phone call. The VoIP gateway, acting as a SIP UA, tries to
establish a SIP session to complete the call, but gets an error, such
as a SIP "486 Busy Here" response. Without SPEECHSC, the gateway
would most likely just output a busy signal to the POTS phone.
However, with SPEECHSC access to a TTS server, it can provide a
spoken error message. The VoIP gateway therefore constructs a text
error string using information from the SIP messages, such as "Your
call to 978-555-1212 did not go through because the called party was
busy". It then can use SPEECHSC to establish an association with a
SPEECHSC server, open an RTP stream between itself and the server,
and issue a TTS request for the error message, which will be played
to the user on the POTS phone.
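To make the scenario concrete, the error announcement could be
expressed in the Speech Synthesis Markup Language (SSML) [1], the
format this framework assumes TTS servers accept (see Section 4.2.2).
The fragment below is purely illustrative and implies no SPEECHSC
protocol syntax; note also that "say-as" interpretation values such
as "telephone" are not normatively defined by SSML 1.0 itself, so
processor support varies:

   <?xml version="1.0"?>
   <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
          xml:lang="en-US">
     Your call to
     <say-as interpret-as="telephone">978-555-1212</say-as>
     did not go through because the called party was busy.
   </speak>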
<span class="grey">Oran Informational [Page 5]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-6" ></span>
<span class="grey"><a href="./rfc4313">RFC 4313</a> Speech Services Control Requirements December 2005</span>
<span class="h3"><a class="selflink" id="section-2.2" href="#section-2.2">2.2</a>. Automatic Speech Recognition Example</span>
This example illustrates a VXML-enabled media processing entity and
associated application server using the SPEECHSC framework to supply
an ASR-based user interface through an Interactive Voice Response
(IVR) system. The example scenario is shown below in Figure 3. The
VXML-client corresponds to the "media processing entity", while the
IVR application server corresponds to the "application server" of the
SPEECHSC framework of Figure 1.
                                  +------------+
                                  |    IVR     |
                                 _|Application |
                           VXML_/ +------------+
            +-----------+   __/
            |           |__/        +------------+
 PSTN Trunk |   VoIP    | SPEECHSC  |            |
============|  Gateway  |-----------|  SPEECHSC  |
            |(VXML voice|           |    ASR     |
            | browser)  |===========|   Server   |
            +-----------+    RTP    +------------+
Figure 3: Automatic Speech Recognition Example
In this example, users call into the service in order to obtain stock
quotes. The VoIP gateway answers their PSTN call. An IVR
application feeds VXML scripts to the gateway to drive the user
interaction. The VXML interpreter on the gateway directs the user's
media stream to the SPEECHSC ASR server and uses SPEECHSC to control
the ASR server.
When, for example, the user speaks the name of a stock in response to
an IVR prompt, the SPEECHSC ASR server attempts recognition of the
name, and returns the results to the VXML gateway. The VXML gateway,
following standard VXML mechanisms, informs the IVR Application of
the recognized result. The IVR Application can then do the
appropriate information lookup. The answer, of course, can be sent
back to the user using text-to-speech. This example does not show
this scenario, but it would work analogously to the scenario shown in
<a href="#section-2.1">Section 2.1</a>.
<span class="h3"><a class="selflink" id="section-2.3" href="#section-2.3">2.3</a>. Speaker Identification example</span>
This example illustrates using speaker identification to allow
voice-actuated login to an IP phone. The example scenario is shown
below in Figure 4. In the figure, the IP Phone acts as both the
"Media Processing Entity" and the "Application Server" of the
SPEECHSC framework in Figure 1.
<span class="grey">Oran Informational [Page 6]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-7" ></span>
<span class="grey"><a href="./rfc4313">RFC 4313</a> Speech Services Control Requirements December 2005</span>
+-----------+            +----------+
|           |    RTP     |          |
|    IP     |============| SPEECHSC |
|   Phone   |            |  SI/SV   |
|           |____________|  Server  |
|           |  SPEECHSC  |          |
+-----------+            +----------+
Figure 4: Speaker Identification Example
In this example, a user speaks into a SIP phone in order to get
"logged in" to that phone to make and receive phone calls using his
identity and preferences. The IP phone uses the SPEECHSC framework
to set up an RTP stream between the phone and the SPEECHSC SI/SV
server and to request verification. The SV server verifies the
user's identity and returns the result, including the necessary login
credentials, to the phone via SPEECHSC. The IP Phone may use the
identity directly to identify the user in outgoing calls, to fetch
the user's preferences from a configuration server, or to request
authorization from an Authentication, Authorization, and Accounting
(AAA) server, in any combination. Since this example uses SPEECHSC
to perform a security-related function, be sure to note the
associated material in <a href="#section-9">Section 9</a>.
<span class="h2"><a class="selflink" id="section-3" href="#section-3">3</a>. General Requirements</span>
<span class="h3"><a class="selflink" id="section-3.1" href="#section-3.1">3.1</a>. Reuse Existing Protocols</span>
To the extent feasible, the SPEECHSC framework SHOULD use existing
protocols.
<span class="h3"><a class="selflink" id="section-3.2" href="#section-3.2">3.2</a>. Maintain Existing Protocol Integrity</span>
In meeting the requirement of <a href="#section-3.1">Section 3.1</a>, the SPEECHSC framework
MUST NOT redefine the semantics of an existing protocol. Said
differently, we will not break existing protocols or cause
backward-compatibility problems.
<span class="h3"><a class="selflink" id="section-3.3" href="#section-3.3">3.3</a>. Avoid Duplicating Existing Protocols</span>
To the extent feasible, SPEECHSC SHOULD NOT duplicate the
functionality of existing protocols. For example, network
announcements using SIP [<a href="#ref-12" title=""Basic Network Media Services with SIP"">12</a>] and RTSP [<a href="#ref-9" title=""Real Time Streaming Protocol (RTSP)"">9</a>] already define how to
request playback of audio. The focus of SPEECHSC is new
functionality not addressed by existing protocols or extending
existing protocols within the strictures of the requirement in
<span class="grey">Oran Informational [Page 7]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-8" ></span>
<span class="grey"><a href="./rfc4313">RFC 4313</a> Speech Services Control Requirements December 2005</span>
<a href="#section-3.2">Section 3.2</a>. Where an existing protocol can be gracefully extended
to support SPEECHSC requirements, such extensions are acceptable
alternatives for meeting the requirements.
As a corollary to this, the SPEECHSC framework should not require a separate
protocol to perform functions that could be easily added into the
SPEECHSC protocol (like redirecting media streams, or discovering
capabilities), unless it is similarly easy to embed that protocol
directly into the SPEECHSC framework.
<span class="h3"><a class="selflink" id="section-3.4" href="#section-3.4">3.4</a>. Efficiency</span>
The SPEECHSC framework SHOULD employ protocol elements known to
result in efficient operation. Techniques to be considered include:
o Re-use of transport connections across sessions
o Piggybacking of responses on requests in the reverse direction
o Caching of state across requests
<span class="h3"><a class="selflink" id="section-3.5" href="#section-3.5">3.5</a>. Invocation of Services</span>
The SPEECHSC framework MUST be compliant with the IAB Open Pluggable
Edge Services (OPES) [<a href="#ref-4" title=""IAB Architectural and Policy Considerations for Open Pluggable Edge Services"">4</a>] framework. The applicability of the
SPEECHSC protocol will therefore be specified as occurring between
clients and servers, at least one of which is operating directly on
behalf of the user requesting the service.
<span class="h3"><a class="selflink" id="section-3.6" href="#section-3.6">3.6</a>. Location and Load Balancing</span>
To the extent feasible, the SPEECHSC framework SHOULD exploit
existing schemes for supporting service location and load balancing,
such as the Service Location Protocol [<a href="#ref-13" title=""Service Location Protocol, Version 2"">13</a>] or DNS SRV records [<a href="#ref-14" title=""A DNS RR for specifying the location of services (DNS SRV)"">14</a>].
Where such facilities are not deemed adequate, the SPEECHSC framework
MAY define additional load balancing techniques.
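As a purely hypothetical illustration (this document defines and
registers no SRV service label), a client could locate SPEECHSC
servers with a DNS SRV lookup of the following form, where the
priority and weight fields drive the load-balancing decision per
RFC 2782 [14]:

   _speechsc._tcp.example.com.  86400 IN SRV 10 60 9000 asr1.example.com.
   _speechsc._tcp.example.com.  86400 IN SRV 10 40 9000 asr2.example.com.

Both targets share priority 10, so a client distributes requests
between them in proportion to the weights 60 and 40.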
<span class="h3"><a class="selflink" id="section-3.7" href="#section-3.7">3.7</a>. Multiple Services</span>
The SPEECHSC framework MUST permit multiple services to operate on a
single media stream so that either the same or different servers may
be performing speech recognition, speaker identification or
verification, etc., in parallel.
<span class="h3"><a class="selflink" id="section-3.8" href="#section-3.8">3.8</a>. Multiple Media Sessions</span>
The SPEECHSC framework MUST allow a 1:N mapping between session and
RTP channels. For example, a single session may include an outbound
RTP channel for TTS, an inbound for ASR, and a different inbound for
SI/SV (e.g., if processed by different elements on the Media Resource
<span class="grey">Oran Informational [Page 8]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-9" ></span>
<span class="grey"><a href="./rfc4313">RFC 4313</a> Speech Services Control Requirements December 2005</span>
Element). Note: All of these can be described via SDP, so if SDP is
utilized for media channel description, this requirement is met "for
free".
<span class="h3"><a class="selflink" id="section-3.9" href="#section-3.9">3.9</a>. Users with Disabilities</span>
The SPEECHSC framework must have sufficient capabilities to address
the critical needs of people with disabilities. In particular, the
set of requirements set forth in <a href="./rfc3351">RFC 3351</a> [<a href="#ref-5" title=""User Requirements for the Session Initiation Protocol (SIP) in Support of Deaf, Hard of Hearing and Speech-impaired Individuals"">5</a>] MUST be taken into
account by the framework. It is also important that implementers of
SPEECHSC clients and servers be cognizant that some interaction
modalities of SPEECHSC may be inconvenient or simply inappropriate
for disabled users. Hearing-impaired individuals may find TTS of
limited utility. Speech-impaired users may be unable to make use of
ASR or SI/SV capabilities. Therefore, systems employing SPEECHSC
MUST provide alternative interaction modes or avoid the use of speech
processing entirely.
<span class="h3"><a class="selflink" id="section-3.10" href="#section-3.10">3.10</a>. Identification of Process That Produced Media or Control Output</span>
The client of a SPEECHSC operation SHOULD be able to ascertain via
the SPEECHSC framework what speech process produced the output. For
example, an RTP stream containing the spoken output of TTS should be
identifiable as TTS output, and the recognized utterance of ASR
should be identifiable as having been produced by ASR processing.
<span class="h2"><a class="selflink" id="section-4" href="#section-4">4</a>. TTS Requirements</span>
<span class="h3"><a class="selflink" id="section-4.1" href="#section-4.1">4.1</a>. Requesting Text Playback</span>
The SPEECHSC framework MUST allow a Media Processing Entity or
Application Server, using a control protocol, to request the TTS
Server to play back text as voice in an RTP stream.
<span class="h3"><a class="selflink" id="section-4.2" href="#section-4.2">4.2</a>. Text Formats</span>
<span class="h4"><a class="selflink" id="section-4.2.1" href="#section-4.2.1">4.2.1</a>. Plain Text</span>
The SPEECHSC framework MAY assume that all TTS servers are capable of
reading plain text.  For reading plain text, the framework MUST allow the
language and voicing to be indicated via session parameters. For
finer control over such properties, see [<a href="#ref-1" title=""Speech Synthesis Markup Language (SSML) Version 1.0"">1</a>].
<span class="h4"><a class="selflink" id="section-4.2.2" href="#section-4.2.2">4.2.2</a>. SSML</span>
The SPEECHSC framework MUST support Speech Synthesis Markup Language
(SSML) [<a href="#ref-1" title=""Speech Synthesis Markup Language (SSML) Version 1.0"">1</a>] <speak> basics, and SHOULD support other SSML tags.  The
framework assumes all TTS servers are capable of reading SSML
<span class="grey">Oran Informational [Page 9]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-10" ></span>
<span class="grey"><a href="./rfc4313">RFC 4313</a> Speech Services Control Requirements December 2005</span>
formatted text. Internationalization of TTS in the SPEECHSC
framework, including multi-lingual output within a single utterance,
is accomplished via SSML xml:lang tags.
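For example, the following illustrative SSML fragment (not a SPEECHSC
protocol element) requests mixed-language output within a single
utterance:

   <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
          xml:lang="en-US">
     The French word for cat is <s xml:lang="fr">chat</s>.
   </speak>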
<span class="h4"><a class="selflink" id="section-4.2.3" href="#section-4.2.3">4.2.3</a>. Text in Control Channel</span>
The SPEECHSC framework assumes all TTS servers accept text over the
SPEECHSC connection for reading over the RTP connection. The
framework assumes the server can accept text either "by value"
(embedded in the protocol) or "by reference" (e.g., by de-referencing
a Uniform Resource Identifier (URI) embedded in the protocol).
<span class="h4"><a class="selflink" id="section-4.2.4" href="#section-4.2.4">4.2.4</a>. Document Type Indication</span>
A document type specifies the syntax in which the text to be read is
encoded. The SPEECHSC framework MUST be capable of explicitly
indicating the document type of the text to be processed, as opposed
to forcing the server to infer the content by other means.
<span class="h3"><a class="selflink" id="section-4.3" href="#section-4.3">4.3</a>. Control Channel</span>
The SPEECHSC framework MUST be capable of establishing the control
channel between the client and server on a per-session basis, where a
session is loosely defined to be associated with a single "call" or
"dialog". The protocol SHOULD be capable of maintaining a long-lived
control channel for multiple sessions serially, and MAY be capable of
shorter time horizons as well, including as short as for the
processing of a single utterance.
<span class="h3"><a class="selflink" id="section-4.4" href="#section-4.4">4.4</a>. Media Origination/Termination by Control Elements</span>
The SPEECHSC framework MUST NOT require the controlling element
(application server, media processing entity) to accept or originate
media streams.  Media streams MAY originate from and terminate at the
controlled element (ASR, TTS, etc.).
<span class="h3"><a class="selflink" id="section-4.5" href="#section-4.5">4.5</a>. Playback Controls</span>
The SPEECHSC framework MUST support "VCR controls" for controlling
the playout of streaming media output from SPEECHSC processing, and
MUST allow for servers with varying capabilities to accommodate such
controls. The protocol SHOULD allow clients to state what controls
they wish to use, and for servers to report which ones they honor.
These capabilities include:
<span class="grey">Oran Informational [Page 10]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-11" ></span>
<span class="grey"><a href="./rfc4313">RFC 4313</a> Speech Services Control Requirements December 2005</span>
o The ability to jump in time to the location of a specific marker.
o The ability to jump in time, forwards or backwards, by a specified
amount of time. Valid time units MUST include seconds, words,
paragraphs, sentences, and markers.
o The ability to increase and decrease playout speed.
o The ability to fast-forward and fast-rewind the audio, where
snippets of audio are played as the server moves forwards or
backwards in time.
o The ability to pause and resume playout.
o The ability to increase and decrease playout volume.
These controls SHOULD be made easily available to users through the
client user interface and through per-user customization capabilities
of the client. This is particularly important for hearing-impaired
users, who will likely desire settings and control regimes different
from those that would be acceptable for non-impaired users.
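The control-negotiation idea above can be sketched as follows
(Python, purely illustrative; no capability names or exchange syntax
are defined by this document):

   # The client proposes the VCR controls it wishes to use; the server
   # answers with the subset it honors.  Only honored controls would
   # then be exposed in the client user interface.
   REQUESTED = {"pause", "resume", "jump-to-marker", "speed", "volume"}

   def honored_controls(server_supported):
       """Return the controls both client and server agree on."""
       return REQUESTED & set(server_supported)

   print(honored_controls(["pause", "resume", "volume"]))
   # -> {'pause', 'resume', 'volume'} (set ordering may vary)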
<span class="h3"><a class="selflink" id="section-4.6" href="#section-4.6">4.6</a>. Session Parameters</span>
The SPEECHSC framework MUST support the specification of session
parameters, such as language, prosody, and voicing.
<span class="h3"><a class="selflink" id="section-4.7" href="#section-4.7">4.7</a>. Speech Markers</span>
The SPEECHSC framework MUST accommodate speech markers, with
capability at least as flexible as that provided in SSML [<a href="#ref-1" title=""Speech Synthesis Markup Language (SSML) Version 1.0"">1</a>]. The
framework MUST further provide an efficient mechanism for reporting
that a marker has been reached during playout.
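SSML [1] expresses such markers with the <mark> element, as in the
illustrative fragment below; the framework would then need an
efficient way for the server to report when playout crosses the named
mark:

   <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
          xml:lang="en-US">
     Your flight departs at <mark name="gate-info"/> 7:45 PM from
     gate 12.
   </speak>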
<span class="h2"><a class="selflink" id="section-5" href="#section-5">5</a>. ASR Requirements</span>
<span class="h3"><a class="selflink" id="section-5.1" href="#section-5.1">5.1</a>. Requesting Automatic Speech Recognition</span>
The SPEECHSC framework MUST allow a Media Processing Entity or
Application Server to request the ASR Server to perform automatic
speech recognition on an RTP stream, returning the results over
SPEECHSC.
<span class="h3"><a class="selflink" id="section-5.2" href="#section-5.2">5.2</a>. XML</span>
The SPEECHSC framework assumes that all ASR servers support the
VoiceXML speech recognition grammar specification (SRGS) for speech
recognition [<a href="#ref-2" title=""Speech Recognition Grammar Specification Version 1.0"">2</a>].
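As an illustration tied to the stock-quote scenario of Section 2.2
(the grammar content itself is invented), a minimal SRGS XML grammar
looks like this:

   <grammar version="1.0" xmlns="http://www.w3.org/2001/06/grammar"
            xml:lang="en-US" root="stock" mode="voice">
     <rule id="stock" scope="public">
       <one-of>
         <item>cisco</item>
         <item>example corporation</item>
       </one-of>
     </rule>
   </grammar>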
<span class="grey">Oran Informational [Page 11]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-12" ></span>
<span class="grey"><a href="./rfc4313">RFC 4313</a> Speech Services Control Requirements December 2005</span>
<span class="h3"><a class="selflink" id="section-5.3" href="#section-5.3">5.3</a>. Grammar Requirements</span>
<span class="h4"><a class="selflink" id="section-5.3.1" href="#section-5.3.1">5.3.1</a>. Grammar Specification</span>
The SPEECHSC framework assumes all ASR servers are capable of
accepting grammar specifications either "by value" (embedded in the
protocol) or "by reference" (e.g., by de-referencing a URI embedded
in the protocol). The latter MUST allow the indication of a grammar
already known to, or otherwise "built in" to, the server. The
framework and protocol further SHOULD exploit the ability to store
and later retrieve by reference large grammars that were originally
supplied by the client.
<span class="h4"><a class="selflink" id="section-5.3.2" href="#section-5.3.2">5.3.2</a>. Explicit Indication of Grammar Format</span>
The SPEECHSC framework protocol MUST be able to explicitly convey the
grammar format in which the grammar is encoded and MUST be extensible
to allow for conveying new grammar formats as they are defined.
<span class="h4"><a class="selflink" id="section-5.3.3" href="#section-5.3.3">5.3.3</a>. Grammar Sharing</span>
The SPEECHSC framework SHOULD exploit sharing grammars across
sessions for servers that are capable of doing so.  This supports
applications with large grammars that are impractical to load
dynamically for each session.  An example is a city-country grammar
for a weather service.
service.
<span class="h3"><a class="selflink" id="section-5.4" href="#section-5.4">5.4</a>. Session Parameters</span>
The SPEECHSC framework MUST accommodate at a minimum all of the
protocol parameters currently defined in Media Resource Control
Protocol (MRCP) [<a href="#ref-10" title=""MRCP: Media Resource Control Protocol"">10</a>].  In addition, there SHOULD be a capability to
reset parameters within a session.
<span class="h3"><a class="selflink" id="section-5.5" href="#section-5.5">5.5</a>. Input Capture</span>
The SPEECHSC framework MUST support a method directing the ASR Server
to capture the input media stream for later analysis and tuning of
the ASR engine.
<span class="grey">Oran Informational [Page 12]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-13" ></span>
<span class="grey"><a href="./rfc4313">RFC 4313</a> Speech Services Control Requirements December 2005</span>
<span class="h2"><a class="selflink" id="section-6" href="#section-6">6</a>. Speaker Identification and Verification Requirements</span>
<span class="h3"><a class="selflink" id="section-6.1" href="#section-6.1">6.1</a>. Requesting SI/SV</span>
The SPEECHSC framework MUST allow a Media Processing Entity to
request the SI/SV Server to perform speaker identification or
verification on an RTP stream, returning the results over SPEECHSC.
<span class="h3"><a class="selflink" id="section-6.2" href="#section-6.2">6.2</a>. Identifiers for SI/SV</span>
The SPEECHSC framework MUST accommodate an identifier for each
verification resource and permit control of that resource by ID,
because voiceprint format and contents are vendor specific.
<span class="h3"><a class="selflink" id="section-6.3" href="#section-6.3">6.3</a>. State for Multiple Utterances</span>
The SPEECHSC framework MUST work with SI/SV servers that maintain
state to handle multi-utterance verification.
<span class="h3"><a class="selflink" id="section-6.4" href="#section-6.4">6.4</a>. Input Capture</span>
The SPEECHSC framework MUST support a method for capturing the input
media stream for later analysis and tuning of the SI/SV engine. The
framework may assume all servers are capable of doing so. In
addition, the framework assumes that the captured stream contains
enough timestamp context (e.g., the NTP time range from the RTP
Control Protocol (RTCP) packets, which corresponds to the RTP
timestamps of the captured input) to ascertain after the fact exactly
when the verification was requested.
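The timestamp correlation can be made concrete with a short sketch
(Python, illustrative only; the sample values are invented).  Given
one RTCP sender report, which binds an NTP wallclock time to an RTP
timestamp, any captured RTP timestamp can be mapped back to wallclock
time when the audio clock rate is known:

   def rtp_to_wallclock(rtp_ts, sr_ntp_seconds, sr_rtp_ts,
                        clock_rate=8000):
       """Map an RTP timestamp to NTP wallclock time using one RTCP
       sender report pairing (sr_ntp_seconds, sr_rtp_ts).  Assumes no
       timestamp wraparound and a known clock rate (8000 Hz is
       typical for narrowband telephony)."""
       elapsed = (rtp_ts - sr_rtp_ts) / clock_rate
       return sr_ntp_seconds + elapsed

   # The verification request was logged at RTP timestamp 163840; the
   # nearest sender report bound RTP timestamp 160000 to NTP time
   # 3838214400.0.
   print(rtp_to_wallclock(163840, 3838214400.0, 160000))
   # -> 3838214400.48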
<span class="h3"><a class="selflink" id="section-6.5" href="#section-6.5">6.5</a>. SI/SV Functional Extensibility</span>
The SPEECHSC framework SHOULD be extensible to additional functions
associated with SI/SV, such as prompting, utterance verification, and
retraining.
<span class="h2"><a class="selflink" id="section-7" href="#section-7">7</a>. Duplexing and Parallel Operation Requirements</span>
A very important property of an interactive speech-driven system is
that user perception of the quality of the interaction depends
strongly on the ability of the user to interrupt a prompt or rendered
TTS with speech.  Interrupting, or barging in on, the speech output
requires more than energy detection from the user's direction.
Many advanced systems halt the media towards the user by employing
the ASR engine to decide if an utterance is likely to be real speech,
as opposed to a cough, for example.
<span class="grey">Oran Informational [Page 13]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-14" ></span>
<span class="grey"><a href="./rfc4313">RFC 4313</a> Speech Services Control Requirements December 2005</span>
<span class="h3"><a class="selflink" id="section-7.1" href="#section-7.1">7.1</a>. Full Duplex Operation</span>
To achieve low latency between utterance detection and halting of
playback, many implementations combine the speaking and ASR
functions. The SPEECHSC framework MUST support such full-duplex
implementations.
<span class="h3"><a class="selflink" id="section-7.2" href="#section-7.2">7.2</a>. Multiple Services in Parallel</span>
Good spoken user interfaces typically depend upon the ease with which
the user can accomplish his or her task. When making use of speaker
identification or verification technologies, user interface
improvements often come from the combination of the different
technologies: simultaneous identity claim and verification (on the
same utterance), simultaneous knowledge and voice verification (using
ASR and verification simultaneously). Using ASR and verification on
the same utterance is in fact the only way to support rolling or
dynamically-generated challenge phrases (e.g., "say 51723"). The
SPEECHSC framework MUST support such parallel service
implementations.
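A minimal sketch of the acceptance logic for such a rolling challenge
(Python, illustrative only; the function names, threshold, and score
scale are assumptions rather than anything defined here):

   def verify_challenge(asr_transcript, challenge_digits, sv_score,
                        sv_threshold=0.8):
       """Accept only if the same utterance both matches the rolling
       challenge phrase (knowledge check, via ASR) and scores high
       enough against the claimed voiceprint (voice check, via SV)."""
       knows_phrase = asr_transcript.replace(" ", "") == challenge_digits
       is_speaker = sv_score >= sv_threshold
       return knows_phrase and is_speaker

   # The prompt was "say 51723"; the ASR result and SV score come from
   # the two services running in parallel on one utterance.
   print(verify_challenge("5 1 7 2 3", "51723", sv_score=0.91))  # True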
<span class="h3"><a class="selflink" id="section-7.3" href="#section-7.3">7.3</a>. Combination of Services</span>
Optionally, it is of interest for the SPEECHSC framework to support
more complex remote combination and control of speech engines:
o Combination in series of engines that may then act on the input or
output of ASR, TTS, or Speaker recognition engines. The control
MAY then extend beyond such engines to include other audio input
and output processing and natural language processing.
o Intermediate exchanges and coordination between engines.
o Remote specification of flows between engines.
These capabilities MAY benefit from service discovery mechanisms
(e.g., engines, properties, and states discovery).
<span class="h2"><a class="selflink" id="section-8" href="#section-8">8</a>. Additional Considerations (Non-Normative)</span>
The framework assumes that Session Description Protocol (SDP) will be
used to describe media sessions and streams. The framework further
assumes RTP carriage of media. However, since SDP can be used to
describe other media transport schemes (e.g., ATM) these could be
used if they provide the necessary elements (e.g., explicit
timestamps).
<span class="grey">Oran Informational [Page 14]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-15" ></span>
<span class="grey"><a href="./rfc4313">RFC 4313</a> Speech Services Control Requirements December 2005</span>
The working group will not be defining distributed speech recognition
(DSR) methods, as exemplified by the European Telecommunications
Standards Institute (ETSI) Aurora project. The working group will
not be recreating functionality available in other protocols, such as
SIP or SDP.
TTS looks very much like playing back a file. Extending RTSP looks
promising when one requires VCR controls or markers in the text
to be spoken. When one does not require VCR controls, SIP in a
framework such as Network Announcements [<a href="#ref-12" title=""Basic Network Media Services with SIP"">12</a>] works directly without
modification.
ASR has an entirely different set of characteristics. For barge-in
support, ASR requires real-time return of intermediate results.
Barring the discovery of a good reuse model for an existing protocol,
this will most likely become the focus of SPEECHSC.
<span class="h2"><a class="selflink" id="section-9" href="#section-9">9</a>. Security Considerations</span>
Protocols relating to speech processing must take security and
privacy into account. Many applications of speech technology deal
with sensitive information, such as the use of Text-to-Speech to read
financial information. Likewise, popular uses for automatic speech
recognition include executing financial transactions and shopping.
There are at least three aspects of speech processing security that
intersect with the SPEECHSC requirements -- securing the SPEECHSC
protocol itself, implementing and deploying the servers that run the
protocol, and ensuring that utilization of the technology for
providing security functions is appropriate. Each of these aspects
is discussed in the following subsections.  While some of these
considerations are, strictly speaking, out of scope of the protocol
itself, they will be carefully considered and accommodated during
protocol design, and will be called out as part of the applicability
statement accompanying the protocol specification(s). Privacy
considerations are discussed as well.
<span class="h3"><a class="selflink" id="section-9.1" href="#section-9.1">9.1</a>. SPEECHSC Protocol Security</span>
The SPEECHSC protocol MUST in all cases support authentication,
authorization, and integrity, and SHOULD support confidentiality.
For privacy-sensitive applications, the protocol MUST support
confidentiality. We envision that rather than providing
protocol-specific security mechanisms in SPEECHSC itself, the
resulting protocol will employ security machinery of either a
containing protocol or the transport on which it runs. For example,
we will consider solutions such as using Transport Layer Security
(TLS) for securing the control channel, and Secure Realtime Transport
<span class="grey">Oran Informational [Page 15]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-16" ></span>
<span class="grey"><a href="./rfc4313">RFC 4313</a> Speech Services Control Requirements December 2005</span>
Protocol (SRTP) for securing the media channel. Third-party
dependencies necessitating transitive trust will be minimized or
explicitly dealt with through the authentication and authorization
aspects of the protocol design.
<span class="h3"><a class="selflink" id="section-9.2" href="#section-9.2">9.2</a>. Client and Server Implementation and Deployment</span>
Given the possibly sensitive nature of the information carried,
SPEECHSC clients and servers need to take steps to ensure
confidentiality and integrity of the data and its transformations to
and from spoken form. In addition to these general considerations,
certain SPEECHSC functions, such as speaker verification and
identification, employ voiceprints whose privacy, confidentiality,
and integrity must be maintained. Similarly, the requirement to
support input capture for analysis and tuning can represent a privacy
vulnerability because user utterances are recorded and could be
either revealed or replayed inappropriately. Implementers must take
care to prevent the exploitation of any centralized voiceprint
database and the recorded material from which such voiceprints may be
derived. Specific actions that are recommended to minimize these
threats include:
o End-to-end authentication, confidentiality, and integrity
protection (like TLS) of access to the database to minimize the
exposure to external attack.
o Database protection measures such as read/write access control and
local login authentication to minimize the exposure to insider
threats.
o Copies of the database, especially ones that are maintained at
off-site locations, need the same protection as the operational
database.
Inappropriate disclosure of this data does not, as of the date of
this document, represent an exploitable threat, but quite possibly might in
the future. Specific vulnerabilities that might become feasible are
discussed in the next subsection. It is prudent to take measures
such as encrypting the voiceprint database and permitting access only
through programming interfaces enforcing adequate authorization
machinery.
<span class="h3"><a class="selflink" id="section-9.3" href="#section-9.3">9.3</a>. Use of SPEECHSC for Security Functions</span>
Either speaker identification or verification can be used directly as
an authentication technology. Authorization decisions can be coupled
with speaker verification in a direct fashion through
challenge-response protocols, or indirectly with speaker
identification through the use of access control lists or other
identity-based authorization mechanisms. When so employed, there are
<span class="grey">Oran Informational [Page 16]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-17" ></span>
<span class="grey"><a href="./rfc4313">RFC 4313</a> Speech Services Control Requirements December 2005</span>
additional security concerns that need to be addressed through the
use of protocol security mechanisms for clients and servers. For
example, the ability to manipulate the media stream of a speaker
verification request could inappropriately permit or deny access
based on impersonation, or simple garbling via noise injection,
making it critical to properly secure both the control and data
channels, as recommended above. The following issues specific to the
use of SI/SV for authentication should be carefully considered:
1. Theft of voiceprints or the recorded samples used to construct
them represents a future threat against the use of speaker
identification/verification as a biometric authentication
technology. A plausible attack vector (not feasible today) is to
use the voiceprint information as parametric input to a
text-to-speech synthesis system that could mimic the user's voice
accurately enough to match the voiceprint. Since it is not very
difficult to surreptitiously record reasonably large corpuses of
voice samples, the ability to construct voiceprints for input to
this attack would render the security of voice-based biometric
authentication, even using advanced challenge-response
techniques, highly vulnerable. Users of speaker verification for
authentication should monitor technological developments in this
area closely for such future vulnerabilities (much as users of
other authentication technologies should monitor advances in
factoring as a way to break asymmetric keying systems).
2. As with other biometric authentication technologies, a downside
to the use of speech identification is that revocation is not
possible. Once compromised, the biometric information can be
used in identification and authentication to other independent
systems.
3. Enrollment procedures can be vulnerable to impersonation if not
protected both by protocol security mechanisms and some
independent proof of identity. (Proof of identity may not be
needed in systems that only need to verify continuity of identity
since enrollment, as opposed to association with a particular
individual.)
Further discussion of the use of SI/SV as an authentication
technology, and some recommendations concerning advantages and
vulnerabilities, can be found in Chapter 5 of [<a href="#ref-15" title=""Who Goes There?: Authentication Through the Lens of Privacy"">15</a>].
<span class="h2"><a class="selflink" id="section-10" href="#section-10">10</a>. Acknowledgements</span>
Eric Burger wrote the original version of these requirements and has
continued to contribute actively throughout their development. He is
a co-author in all but formal authorship, and is instead acknowledged
here as it is preferable that working group co-chairs have
non-conflicting roles with respect to the progression of documents.
<span class="grey">Oran Informational [Page 17]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-18" ></span>
<span class="grey"><a href="./rfc4313">RFC 4313</a> Speech Services Control Requirements December 2005</span>
<span class="h2"><a class="selflink" id="section-11" href="#section-11">11</a>. References</span>
<span class="h3"><a class="selflink" id="section-11.1" href="#section-11.1">11.1</a>. Normative References</span>
[<a id="ref-1">1</a>] Walker, M., Burnett, D., and A. Hunt, "Speech Synthesis Markup
Language (SSML) Version 1.0", W3C
REC REC-speech-synthesis-20040907, September 2004.
[<a id="ref-2">2</a>] McGlashan, S. and A. Hunt, "Speech Recognition Grammar
Specification Version 1.0", W3C REC REC-speech-grammar-20040316,
March 2004.
[<a id="ref-3">3</a>] Bradner, S., "Key words for use in RFCs to Indicate Requirement
Levels", <a href="https://www.rfc-editor.org/bcp/bcp14">BCP 14</a>, <a href="./rfc2119">RFC 2119</a>, March 1997.
[<a id="ref-4">4</a>] Floyd, S. and L. Daigle, "IAB Architectural and Policy
Considerations for Open Pluggable Edge Services", <a href="./rfc3238">RFC 3238</a>,
January 2002.
[<a id="ref-5">5</a>] Charlton, N., Gasson, M., Gybels, G., Spanner, M., and A. van
Wijk, "User Requirements for the Session Initiation Protocol
(SIP) in Support of Deaf, Hard of Hearing and Speech-impaired
Individuals", <a href="./rfc3351">RFC 3351</a>, August 2002.
<span class="h3"><a class="selflink" id="section-11.2" href="#section-11.2">11.2</a>. Informative References</span>
[<a id="ref-6">6</a>] Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston, A.,
Peterson, J., Sparks, R., Handley, M., and E. Schooler, "SIP:
Session Initiation Protocol", <a href="./rfc3261">RFC 3261</a>, June 2002.
[<a id="ref-7">7</a>] Andreasen, F. and B. Foster, "Media Gateway Control Protocol
(MGCP) Version 1.0", <a href="./rfc3435">RFC 3435</a>, January 2003.
[<a id="ref-8">8</a>] Groves, C., Pantaleo, M., Ericsson, LM., Anderson, T., and T.
Taylor, "Gateway Control Protocol Version 1", <a href="./rfc3525">RFC 3525</a>,
June 2003.
[<a id="ref-9">9</a>] Schulzrinne, H., Rao, A., and R. Lanphier, "Real Time Streaming
Protocol (RTSP)", <a href="./rfc2326">RFC 2326</a>, April 1998.
[<a id="ref-10">10</a>] Shanmugham, S., Monaco, P., and B. Eberman, "MRCP: Media
Resource Control Protocol", Work in Progress.
<span class="grey">Oran Informational [Page 18]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-19" ></span>
<span class="grey"><a href="./rfc4313">RFC 4313</a> Speech Services Control Requirements December 2005</span>
[<a id="ref-11">11</a>] World Wide Web Consortium, "Voice Extensible Markup Language
(VoiceXML) Version 2.0", W3C Working Draft , April 2002,
<<a href="http://www.w3.org/TR/2002/WD-voicexml20-20020424/">http://www.w3.org/TR/2002/WD-voicexml20-20020424/</a>>.
[<a id="ref-12">12</a>] Burger, E., Ed., Van Dyke, J., and A. Spitzer, "Basic Network
Media Services with SIP", <a href="./rfc4240">RFC 4240</a>, December 2005.
[<a id="ref-13">13</a>] Guttman, E., Perkins, C., Veizades, J., and M. Day, "Service
Location Protocol, Version 2", <a href="./rfc2608">RFC 2608</a>, June 1999.
[<a id="ref-14">14</a>] Gulbrandsen, A., Vixie, P., and L. Esibov, "A DNS RR for
specifying the location of services (DNS SRV)", <a href="./rfc2782">RFC 2782</a>,
February 2000.
[<a id="ref-15">15</a>] Committee on Authentication Technologies and Their Privacy
Implications, National Research Council, "Who Goes There?:
Authentication Through the Lens of Privacy", Computer Science
and Telecommunications Board (CSTB), 2003,
<<a href="http://www.nap.edu/catalog/10656.html/">http://www.nap.edu/catalog/10656.html/</a>>.
Author's Address
David R. Oran
Cisco Systems, Inc.
7 Ladyslipper Lane
Acton, MA
USA
EMail: oran@cisco.com
<span class="grey">Oran Informational [Page 19]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-20" ></span>
<span class="grey"><a href="./rfc4313">RFC 4313</a> Speech Services Control Requirements December 2005</span>
Full Copyright Statement
Copyright (C) The Internet Society (2005).
This document is subject to the rights, licenses and restrictions
contained in <a href="https://www.rfc-editor.org/bcp/bcp78">BCP 78</a>, and except as set forth therein, the authors
retain all their rights.
This document and the information contained herein are provided on an
"AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS
OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET
ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE
INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Intellectual Property
The IETF takes no position regarding the validity or scope of any
Intellectual Property Rights or other rights that might be claimed to
pertain to the implementation or use of the technology described in
this document or the extent to which any license under such rights
might or might not be available; nor does it represent that it has
made any independent effort to identify any such rights. Information
on the procedures with respect to rights in RFC documents can be
found in <a href="https://www.rfc-editor.org/bcp/bcp78">BCP 78</a> and <a href="https://www.rfc-editor.org/bcp/bcp79">BCP 79</a>.
Copies of IPR disclosures made to the IETF Secretariat and any
assurances of licenses to be made available, or the result of an
attempt made to obtain a general license or permission for the use of
such proprietary rights by implementers or users of this
specification can be obtained from the IETF on-line IPR repository at
<a href="http://www.ietf.org/ipr">http://www.ietf.org/ipr</a>.
The IETF invites any interested party to bring to its attention any
copyrights, patents or patent applications, or other proprietary
rights that may cover technology that may be required to implement
this standard. Please address the information to the IETF at ietf-
ipr@ietf.org.
Acknowledgement
Funding for the RFC Editor function is currently provided by the
Internet Society.
</pre>