I'm working on an iOS project where I'm receiving packets of Opus audio data and attempting to play them using AVSampleBufferAudioRenderer. Right now I'm using Opus's own decoder, so ultimately I just need to get the decoded PCM packets to play. The whole process from top to bottom isn't suuuper well documented, but I think I'm getting close. Here's the code I'm working with so far (edited down, and with some hardcoded values for simplicity).
#import <AVFoundation/AVFoundation.h>
#include <SDL.h>             // for SDL_malloc
#include <opus_multistream.h>

static AVSampleBufferAudioRenderer* audioRenderer;
static AVSampleBufferRenderSynchronizer* renderSynchronizer;
static OpusMSDecoder* opusDecoder;
static void* decodedPacketBuffer;

int samplesPerFrame = 240;
int channelCount = 2;
int sampleRate = 48000;
int streams = 1;
int coupledStreams = 1;
char mapping[8] = {'\0', '\x01', '\0', '\0', '\0', '\0', '\0', '\0'};

// called when the stream is about to start
void AudioInit()
{
    renderSynchronizer = [[AVSampleBufferRenderSynchronizer alloc] init];
    audioRenderer = [[AVSampleBufferAudioRenderer alloc] init];
    [renderSynchronizer addRenderer:audioRenderer];

    int decodedPacketSize = samplesPerFrame * sizeof(short) * channelCount; // 240 samples per frame * 2 channels
    decodedPacketBuffer = SDL_malloc(decodedPacketSize);

    int err;
    opusDecoder = opus_multistream_decoder_create(sampleRate,     // 48000
                                                  channelCount,   // 2
                                                  streams,        // 1
                                                  coupledStreams, // 1
                                                  (const unsigned char*)mapping,
                                                  &err);

    renderSynchronizer.rate = 1.0;
}
// called every X milliseconds with a new packet of audio data to play, IF there's audio. (while testing, X = 5)
void AudioDecodeAndPlaySample(char* sampleData, int sampleLength)
{
    // decode the packet from Opus to (I think??) interleaved Linear PCM
    int numSamples;
    numSamples = opus_multistream_decode(opusDecoder,
                                         (unsigned char *)sampleData,
                                         sampleLength,
                                         (short*)decodedPacketBuffer,
                                         samplesPerFrame, // 240
                                         0);

    int bufferSize = sizeof(short) * numSamples * channelCount; // 240 samples * 2 channels
    // LPCM stream description
    AudioStreamBasicDescription asbd = {
        .mFormatID = kAudioFormatLinearPCM,
        .mFormatFlags = kLinearPCMFormatFlagIsSignedInteger,
        .mBytesPerPacket = bufferSize,
        .mFramesPerPacket = numSamples, // 240
        .mBytesPerFrame = bufferSize / numSamples,
        .mChannelsPerFrame = channelCount, // 2
        .mBitsPerChannel = 16,
        .mSampleRate = sampleRate // 48000
    };
    // audio format description wrapper around the asbd
    CMAudioFormatDescriptionRef audioFormatDesc;
    OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault,
                                                     &asbd,
                                                     0,    // no channel layout
                                                     NULL,
                                                     0,    // no magic cookie
                                                     NULL,
                                                     NULL, // no extensions
                                                     &audioFormatDesc);
    // data block to store the decoded packet in
    CMBlockBufferRef blockBuffer;
    status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                                decodedPacketBuffer, // wrap the decoded PCM without copying
                                                bufferSize,
                                                kCFAllocatorNull,    // don't free the buffer when done
                                                NULL,
                                                0,
                                                bufferSize,
                                                0,
                                                &blockBuffer);
    // data block converted into a sample buffer
    CMSampleBufferRef sampleBuffer;
    status = CMAudioSampleBufferCreateReadyWithPacketDescriptions(kCFAllocatorDefault,
                                                                  blockBuffer,
                                                                  audioFormatDesc,
                                                                  numSamples,
                                                                  kCMTimeZero, // PTS – is zero right here?
                                                                  NULL,
                                                                  &sampleBuffer);
    // queueing the sample buffer onto the audio renderer
    [audioRenderer enqueueSampleBuffer:sampleBuffer];

    // balance the creates above (the renderer retains what it still needs)
    CFRelease(sampleBuffer);
    CFRelease(blockBuffer);
    CFRelease(audioFormatDesc);
}
The AudioDecodeAndPlaySample function comes from the library I'm working with and, as the comment says, is called with a packet of roughly 5 ms' worth of samples at a time (240 frames / 48000 Hz = 5 ms). Important to note: it doesn't get called at all if there's silence.
There are plenty of places here where I could be wrong – I think I'm correct that the Opus decoder (docs here) decodes into interleaved Linear PCM, and I hope I'm building the AudioStreamBasicDescription correctly. I'm definitely not sure what to do with the PTS (presentation timestamp) in CMAudioSampleBufferCreateReadyWithPacketDescriptions – I've put in zero hoping that it'll just play as soon as possible, but I don't know whether I've got that right.
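
Two related guesses I keep going back and forth on, for anyone answering: the Core Audio docs describe uncompressed LPCM as having exactly one frame per packet (not the 240 I'm passing above), and I suspect the PTS is supposed to advance with each buffer rather than stay at zero. Here's a sketch of what I mean – the totalFramesEnqueued counter is my own invention, not something from the library:

// How I read the docs: interleaved 16-bit LPCM with one frame per packet,
// where a frame is one sample for every channel.
AudioStreamBasicDescription lpcmDesc = {
    .mFormatID         = kAudioFormatLinearPCM,
    .mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    .mSampleRate       = 48000,
    .mChannelsPerFrame = 2,
    .mBitsPerChannel   = 16,
    .mFramesPerPacket  = 1,                 // uncompressed audio: always 1
    .mBytesPerFrame    = 2 * sizeof(short), // channels * bytes per sample = 4
    .mBytesPerPacket   = 2 * sizeof(short)  // same as mBytesPerFrame when frames per packet is 1
};

// My guess at the PTS: a running frame count, so each buffer starts where
// the previous one ended (and could jump forward across silent gaps).
static int64_t totalFramesEnqueued = 0;
CMTime pts = CMTimeMake(totalFramesEnqueued, 48000); // frame index over the sample rate
totalFramesEnqueued += numSamples;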
I've run this code with error checking everywhere (edited out here for simplicity), and I don't get any errors until after [audioRenderer enqueueSampleBuffer:sampleBuffer], when audioRenderer.error reports an unknown error. So clearly it's unhappy with whatever I'm giving it. Most code examples of enqueueSampleBuffer I've seen have it wrapped in requestMediaDataWhenReady with a dispatch queue, which I've also tried, to no avail. (I believe that's more good practice than essential to functioning, so I'm just trying to get the simplest case working first; if it does turn out to be essential, I can drop it back in.)
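
For reference, here's the shape of the requestMediaDataWhenReady version I tried, as a sketch – DequeuePendingSampleBuffer() is a hypothetical helper that would pull buffers my decode callback has stashed, instead of enqueueing directly:

static dispatch_queue_t audioQueue;

void StartFeedingRenderer()
{
    // the queue name is arbitrary; the renderer invokes the block on this queue
    audioQueue = dispatch_queue_create("audio-enqueue", DISPATCH_QUEUE_SERIAL);
    [audioRenderer requestMediaDataWhenReadyOnQueue:audioQueue usingBlock:^{
        // called whenever the renderer wants more data; feed it until it's full
        while (audioRenderer.readyForMoreMediaData) {
            CMSampleBufferRef buffer = DequeuePendingSampleBuffer(); // hypothetical
            if (buffer == NULL) {
                break; // nothing decoded yet
            }
            [audioRenderer enqueueSampleBuffer:buffer];
            CFRelease(buffer);
        }
    }];
}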
Feel free to answer in Swift if you're more comfortable with it – I can work with either. (I'm stuck with Objective-C here, like it or not. 🙂)