/**********
This library is free software; you can redistribute it and/or modify it under
the terms of the GNU Lesser General Public License as published by the
Free Software Foundation; either version 2.1 of the License, or (at your
option) any later version. (See <http://www.gnu.org/copyleft/lesser.html>.)
This library is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for
more details.
You should have received a copy of the GNU Lesser General Public License
along with this library; if not, write to the Free Software Foundation, Inc.,
59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
**********/
// "liveMedia"
// Copyright (c) 1996-2005 Live Networks, Inc. All rights reserved.
// RTP sink for AMR audio (RFC 3267)
// Implementation
// NOTE: At present, this is just a limited implementation, supporting:
// octet-alignment only; no interleaving; no frame CRC; no robust-sorting.
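// In octet-aligned mode (RFC 3267), each RTP payload that we generate consists of
// a 1-byte payload header (a 4-bit CMR field plus 4 reserved bits), followed by a
// 1-byte table-of-contents (TOC) entry for each frame, followed by the frame data.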
#include "AMRAudioRTPSink.hh"
#include "AMRAudioSource.hh"
AMRAudioRTPSink*
AMRAudioRTPSink::createNew(UsageEnvironment& env, Groupsock* RTPgs,
unsigned char rtpPayloadFormat,
Boolean sourceIsWideband,
unsigned numChannelsInSource) {
return new AMRAudioRTPSink(env, RTPgs, rtpPayloadFormat,
sourceIsWideband, numChannelsInSource);
}
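// For illustration only (the payload type and variable names here are example
// values, not part of this library), a narrowband, mono sink might be created as:
//   AMRAudioRTPSink* sink
//     = AMRAudioRTPSink::createNew(env, rtpGroupsock, 96, False, 1);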
AMRAudioRTPSink
::AMRAudioRTPSink(UsageEnvironment& env, Groupsock* RTPgs,
unsigned char rtpPayloadFormat,
Boolean sourceIsWideband, unsigned numChannelsInSource)
: AudioRTPSink(env, RTPgs, rtpPayloadFormat,
sourceIsWideband ? 16000 : 8000,
sourceIsWideband ? "AMR-WB": "AMR",
numChannelsInSource),
fSourceIsWideband(sourceIsWideband), fAuxSDPLine(NULL) {
}
AMRAudioRTPSink::~AMRAudioRTPSink() {
delete[] fAuxSDPLine;
}
Boolean AMRAudioRTPSink::sourceIsCompatibleWithUs(MediaSource& source) {
// Our source must be an AMR audio source:
if (!source.isAMRAudioSource()) return False;
// Also, the source must be wideband iff we asked for this:
AMRAudioSource& amrSource = (AMRAudioSource&)source;
if ((amrSource.isWideband()^fSourceIsWideband) != 0) return False;
// Also, the source must have the same number of channels that we
// specified. (It could, in principle, have more, but we don't
// support that.)
if (amrSource.numChannels() != numChannels()) return False;
// Also, because our current implementation outputs only one frame in
// each RTP packet, for multi-channel audio each 'frame-block' will be
// split over multiple RTP packets. This may violate the spec, so warn
// about it:
if (amrSource.numChannels() > 1) {
envir() << "AMRAudioRTPSink: Warning: Input source has " << amrSource.numChannels()
<< " audio channels. In the current implementation, the multi-frame frame-block will be split over multiple RTP packets\n";
}
return True;
}
void AMRAudioRTPSink::doSpecialFrameHandling(unsigned fragmentationOffset,
unsigned char* frameStart,
unsigned numBytesInFrame,
struct timeval frameTimestamp,
unsigned numRemainingBytes) {
// If this is the 1st frame in the 1st packet, set the RTP 'M' (marker)
// bit (because this is considered the start of a talk spurt):
if (isFirstPacket() && isFirstFrameInPacket()) {
setMarkerBit();
}
// If this is the first frame in the packet, set the 1-byte payload
// header (using CMR 15)
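// (A CMR of 15 means that no specific codec mode is being requested of the
// sender; the remaining 4 bits of this byte are reserved and are set to zero,
// which gives the value 0xF0.)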
if (isFirstFrameInPacket()) {
u_int8_t payloadHeader = 0xF0;
setSpecialHeaderBytes(&payloadHeader, 1, 0);
}
// Set the TOC field for the current frame, based on the "FT" and "Q"
// values from our source:
AMRAudioSource* amrSource = (AMRAudioSource*)fSource;
u_int8_t toc = amrSource->lastFrameHeader();
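// (In octet-aligned mode, a TOC entry is a single byte: the "F" (follow) bit in
// bit 7, the 4-bit "FT" (frame type) field in bits 6-3, the "Q" (frame quality)
// bit in bit 2, and two padding bits.)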
// Clear the "F" bit, because we're the last frame in this packet: #####
toc &=~ 0x80;
setSpecialHeaderBytes(&toc, 1, 1+numFramesUsedSoFar());
// Important: Also call our base class's doSpecialFrameHandling(),
// to set the packet's timestamp:
MultiFramedRTPSink::doSpecialFrameHandling(fragmentationOffset,
frameStart, numBytesInFrame,
frameTimestamp,
numRemainingBytes);
}
Boolean AMRAudioRTPSink
::frameCanAppearAfterPacketStart(unsigned char const* /*frameStart*/,
unsigned /*numBytesInFrame*/) const {
// For now, pack only one AMR frame into each outgoing RTP packet:
return False;
}
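// Note that, because of the above, each outgoing RTP packet carries exactly one
// 20-ms AMR (or AMR-WB) audio frame.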
unsigned AMRAudioRTPSink::specialHeaderSize() const {
// For now, because we're packing only one frame per packet,
// there's just a 1-byte payload header, plus a 1-byte TOC entry:
return 2;
}
char const* AMRAudioRTPSink::auxSDPLine() {
if (fAuxSDPLine == NULL) {
// Generate a "a=fmtp:" line with "octet-aligned=1"
// (That is the only non-default parameter.)
char buf[100];
sprintf(buf, "a=fmtp:%d octet-align=1\r\n", rtpPayloadType());
delete[] fAuxSDPLine; fAuxSDPLine = strDup(buf);
}
return fAuxSDPLine;
}
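// For example (assuming that the dynamic RTP payload type 96 were used), the
// resulting SDP attribute line would be:
//   a=fmtp:96 octet-align=1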