
Media (1)
-
Somos millones 1
21 July 2014, by
Updated: June 2015
Language: French
Type: Video
Other articles (99)
-
Improving the base version
13 September 2013
Nicer multiple selection
The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
To use it, simply enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by activating Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...) -
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out. -
About documents
21 June 2013, by
What can you do when a document fails processing, or when the result does not match expectations?
Document stuck in the processing queue?
Here is an ordered, empirical list of actions you can try to unblock the situation: restart processing of the document that fails; retry inserting the document on the MédiaSPIP site; for a video or audio file, rework the media with an editor or a transcoder; convert the document into another format (...)
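For the transcoding step, a typical approach (an illustrative example only; input.mov and output.mp4 are placeholder names, and the exact codecs and settings depend on what the MédiaSPIP instance accepts) is to re-encode the file into a widely supported H.264/AAC MP4 with ffmpeg before uploading it again:
ffmpeg -i input.mov -c:v libx264 -preset medium -crf 23 -c:a aac -strict experimental -b:a 128k -movflags +faststart output.mp4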
On other sites (8096)
-
ffmpeg / MP4Box MPEG-DASH plays only a few segments
30 October 2015, by Idris
Need help debugging the DASH segment files.
The input was an MP4 with these details. It was recorded from a video camera; the camera output was MKV, and we converted it to MP4 after editing the audio in Adobe.
- Size: 7.51 GB
- Frame rate: 25 frames/second
- Data rate: 25326 kbps
- Total bitrate: 25525 kbps
I converted this to another MP4 with these commands:
ffmpeg -i "input.mp4" -s 1280x720 -c:v libx264 -b:v 750k -bf 2 -g 75 -sc_threshold 0 -an video_1280x720_750k.mp4
ffmpeg -i "input.mp4" -c:a aac -strict experimental -b:a 96k -ar 32000 -vn audio_96k.mp4The output video has
- fps : 25
- Data rate : 761kbps
- bitrate : 761kbps
Then I created the segmented DASH output with MP4Box:
MP4Box -dash 10000 -frag 10000 -rap -segment-name video_0_1280000\segment_ video_1280x720_750k.mp4
MP4Box -dash 3000 -frag 10000 -rap -segment-name audio_0_96000\segment_ audio_96k.mp4
The generated MPD was validated online and it's fine.
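It may also be worth generating both representations in a single MP4Box run, so that audio and video share one MPD and the same segmentation parameters (a suggestion based on common MP4Box usage, not something from the original post; stream.mpd is a placeholder name):
MP4Box -dash 10000 -frag 10000 -rap -out stream.mpd video_1280x720_750k.mp4#video audio_96k.mp4#audio
This avoids having to combine two separately generated MPDs by hand, which can easily produce mismatched segment durations (note the -dash 10000 vs -dash 3000 above).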
UPDATE! Here is the MPD file:
<?xml version="1.0"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" minBufferTime="PT1.500S" type="static" mediaPresentationDuration="PT0H2M0.000S" maxSegmentDuration="PT0H0M10.000S" profiles="urn:mpeg:dash:profile:full:2011">
 <ProgramInformation moreInformationURL="http://gpac.sourceforge.net"/>
 <Period duration="PT0H2M0.000S">
  <AdaptationSet segmentAlignment="true" lang="eng">
   <Representation mimeType="audio/mp4" codecs="mp4a.40.2" audioSamplingRate="32000" startWithSAP="1" bandwidth="98434">
    <AudioChannelConfiguration schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011" value="2"/>
    <SegmentList timescale="32000" duration="319999">
     <Initialization sourceURL="audio_0_96000/segment_init.mp4"/>
     <SegmentURL media="audio_0_96000/segment_1.m4s"/>
     <SegmentURL media="audio_0_96000/segment_2.m4s"/>
     <SegmentURL media="audio_0_96000/segment_3.m4s"/>
     <SegmentURL media="audio_0_96000/segment_4.m4s"/>
     <SegmentURL media="audio_0_96000/segment_5.m4s"/>
     <SegmentURL media="audio_0_96000/segment_6.m4s"/>
     <SegmentURL media="audio_0_96000/segment_7.m4s"/>
     <SegmentURL media="audio_0_96000/segment_8.m4s"/>
     <SegmentURL media="audio_0_96000/segment_9.m4s"/>
     <SegmentURL media="audio_0_96000/segment_10.m4s"/>
     <SegmentURL media="audio_0_96000/segment_11.m4s"/>
     <SegmentURL media="audio_0_96000/segment_12.m4s"/>
     <SegmentURL media="audio_0_96000/segment_13.m4s"/>
    </SegmentList>
   </Representation>
  </AdaptationSet>
  <AdaptationSet segmentAlignment="true" maxWidth="1280" maxHeight="720" maxFrameRate="25" par="16:9" lang="eng">
   <Representation mimeType="video/mp4" codecs="avc3.64001f" width="1280" height="720" frameRate="25" sar="1:1" startWithSAP="1" bandwidth="764668">
    <SegmentList timescale="12800" duration="125866">
     <Initialization sourceURL="video_0_1280000/segment_init.mp4"/>
     <SegmentURL media="video_0_1280000/segment_1.m4s"/>
     <SegmentURL media="video_0_1280000/segment_2.m4s"/>
     <SegmentURL media="video_0_1280000/segment_3.m4s"/>
     <SegmentURL media="video_0_1280000/segment_4.m4s"/>
     <SegmentURL media="video_0_1280000/segment_5.m4s"/>
     <SegmentURL media="video_0_1280000/segment_6.m4s"/>
     <SegmentURL media="video_0_1280000/segment_7.m4s"/>
     <SegmentURL media="video_0_1280000/segment_8.m4s"/>
     <SegmentURL media="video_0_1280000/segment_9.m4s"/>
     <SegmentURL media="video_0_1280000/segment_10.m4s"/>
     <SegmentURL media="video_0_1280000/segment_11.m4s"/>
     <SegmentURL media="video_0_1280000/segment_12.m4s"/>
     <SegmentURL media="video_0_1280000/segment_13.m4s"/>
    </SegmentList>
   </Representation>
  </AdaptationSet>
 </Period>
</MPD>
I played the video through dash.js. I believe it just plays the initial segment and then errors out with MEDIA_ERR_DECODE, MEDIA_ERR_SRC_NOT_SUPPORTED, or some message saying the start was not found.
Through Chrome debugging I see that at least 4 segments are loading correctly. I am not sure what is going on.
Any help in debugging the issue is really appreciated. I can't tell whether this is a problem with the file, ffmpeg, MP4Box, or Chrome.
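One way to sanity-check the generated segments outside the browser (a debugging suggestion using standard tools, not part of the original post; check.mp4 is a scratch file name) is to concatenate the initialization segment with one of the later media segments and run ffprobe on the result. If ffprobe cannot parse the concatenation, the problem is in the segments themselves rather than in dash.js or Chrome:
cat video_0_1280000/segment_init.mp4 video_0_1280000/segment_3.m4s > check.mp4
ffprobe check.mp4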
Output from the Chrome debugging tool:
[dash.js 1.5.1] new MediaPlayer instance has been created
dash.all.js:11 Playback initiated!
dash.all.js:11 Parsing complete: ( xml2json: 5ms, objectiron: 10ms, total: 0.015s)
dash.all.js:11 Manifest has been refreshed at Mon Oct 26 2015 10:19:22 GMT-0400 (Eastern Daylight Time)[1445869162092]
dash.all.js:11 SegmentTimeline detected using calculated Live Edge Time
dash.all.js:11 MediaSource is open!
dash.all.js:11 [object Event]
dash.all.js:11 Duration successfully set to: 120
dash.all.js:11 Added 0 inline events
dash.all.js:11 video codec: video/mp4;codecs="avc3.64001f"
dash.all.js:11 [video] stop
dash.all.js:11 audio codec: audio/mp4;codecs="mp4a.40.2"
dash.all.js:11 [audio] stop
dash.all.js:11 No text data.
dash.all.js:11 No fragmentedText data.
dash.all.js:11 No muxed data.
dash.all.js:11 [video] start
dash.all.js:11 [video] Getting the request for time: 0
dash.all.js:11 [video] Index for time 0 is 0
dash.all.js:11 [video] SegmentList: 0 / 120
dash.all.js:11 [audio] start
dash.all.js:11 [audio] Getting the request for time: 0
dash.all.js:11 [audio] Index for time 0 is 0
dash.all.js:11 [audio] SegmentList: 0 / 120
dash.all.js:11 [video] Getting the request for time: 9.83328125
dash.all.js:11 [video] Index for time 9.83328125 is 0
dash.all.js:11 [video] SegmentList: 0 / 120
dash.all.js:11 [video] SegmentList: 9.83328125 / 120
dash.all.js:11 [audio] Getting the request for time: 9.99996875
dash.all.js:11 [audio] Index for time 9.99996875 is 0
dash.all.js:11 [audio] SegmentList: 0 / 120
dash.all.js:11 [audio] SegmentList: 9.99996875 / 120
dash.all.js:11 loaded audio:Media Segment:0 (200, 20ms, 6ms)
dash.all.js:11 loaded video:Media Segment:0 (200, 153ms, 43ms)
dash.all.js:11 loaded video:Initialization Segment:NaN (200, 0ms, 32ms)
dash.all.js:11 [video] Initialization finished loading
dash.all.js:11 loaded audio:Initialization Segment:NaN (200, 0ms, 34ms)
dash.all.js:11 [audio] Initialization finished loading
dash.all.js:11 [video] Getting the request for time: 19.6665625
dash.all.js:11 [video] Index for time 19.6665625 is 1
dash.all.js:11 [video] SegmentList: 9.83328125 / 120
dash.all.js:11 [video] SegmentList: 19.6665625 / 120
dash.all.js:11 [audio] Getting the request for time: 19.9999375
dash.all.js:11 [audio] Index for time 19.9999375 is 1
dash.all.js:11 [audio] SegmentList: 9.99996875 / 120
dash.all.js:11 [audio] SegmentList: 19.9999375 / 120
dash.all.js:11 [video] Stalling Buffer
dash.all.js:11 [video] Waiting for more buffer before starting playback.
dash.all.js:11 [video] Getting the request for time: 0
dash.all.js:11 [video] Index for time 0 is 0
dash.all.js:11 [audio] Stalling Buffer
dash.all.js:11 [audio] Waiting for more buffer before starting playback.
dash.all.js:11 [audio] Getting the request for time: 0
dash.all.js:11 [audio] Index for time 0 is 0
dash.all.js:11 <video> loadedmetadata
dash.all.js:11 Starting playback at offset: 0
dash.all.js:11 [video] Getting the request for time: 29.499843750000004
dash.all.js:11 [video] Index for time 29.499843750000004 is 2
dash.all.js:11 [video] SegmentList: 19.6665625 / 120
dash.all.js:11 [video] SegmentList: 29.499843750000004 / 120
dash.all.js:11 [video] Got enough buffer to start.
dash.all.js:11 [video] seek: 0
dash.all.js:11 [audio] Getting the request for time: 29.999906250000002
dash.all.js:11 [audio] Index for time 29.999906250000002 is 2
dash.all.js:11 [audio] SegmentList: 19.9999375 / 120
dash.all.js:11 [audio] SegmentList: 29.999906250000002 / 120
dash.all.js:11 [audio] Got enough buffer to start.
dash.all.js:11 [audio] seek: 0
dash.all.js:11 loaded audio:Media Segment:9.99996875 (200, 67ms, 24ms)
dash.all.js:11 loaded video:Media Segment:9.83328125 (200, 71ms, 31ms)
dash.all.js:11 [audio] Buffered Range: 0.032 - 9.984
dash.all.js:11 [audio] Getting the request for time: 0
dash.all.js:11 [audio] Index for time 0 is 0
dash.all.js:11 Start Event Controller
dash.all.js:11 [audio] Buffered Range: 0.032 - 19.999968
dash.all.js:11 [audio] Getting the request for time: 0
dash.all.js:11 [audio] Index for time 0 is 0
dash.all.js:11 <video> play
dash.all.js:11 [video] start
dash.all.js:11 [video] Getting the request for time: 0
dash.all.js:11 [video] Index for time 0 is 0
dash.all.js:11 [video] SegmentList: 0 / 120
dash.all.js:11 [video] SegmentList: 9.83328125 / 120
dash.all.js:11 [video] SegmentList: 19.6665625 / 120
dash.all.js:11 [audio] start
dash.all.js:11 <video> playing
dash.all.js:11 [video] Buffered Range: 0 - 9
dash.all.js:11 [video] Getting the request for time: 0
dash.all.js:11 [video] Index for time 0 is 0
dash.all.js:11 Do seek: 0.032
dash.all.js:11 <video> seek
dash.all.js:11 [video] Getting the request for time: 29.499843750000004
dash.all.js:11 [video] Index for time 29.499843750000004 is 2
dash.all.js:11 [video] SegmentList: 19.6665625 / 120
dash.all.js:11 [video] SegmentList: 29.499843750000004 / 120
dash.all.js:11 [video] seek: 0.032
dash.all.js:11 [audio] seek: 0.032
dash.all.js:11 [video] Getting the request for time: 9
dash.all.js:11 [video] Index for time 9 is 0
dash.all.js:11 [video] SegmentList: 0 / 120
dash.all.js:11 [video] SegmentList: 9.83328125 / 120
dash.all.js:11 [video] SegmentList: 19.6665625 / 120
dash.all.js:11 [video] SegmentList: 29.499843750000004 / 120
dash.all.js:11 [video] Buffered Range: 0 - 18
dash.all.js:11 [video] Getting the request for time: 0
dash.all.js:11 [video] Index for time 0 is 0
dash.all.js:11 loaded video:Media Segment:19.6665625 (200, 42ms, 33ms)
dash.all.js:11 <video> seeked
dash.all.js:11 Start Event Controller
dash.all.js:11 <video> playing
dash.all.js:11 [video] Buffered Range: 0 - 28
dash.all.js:11 [video] Getting the request for time: 0
dash.all.js:11 [video] Index for time 0 is 0
dash.all.js:11 [audio] Getting the request for time: 19.999968
dash.all.js:11 [audio] Index for time 19.999968 is 1
dash.all.js:11 [audio] SegmentList: 9.99996875 / 120
dash.all.js:11 [audio] SegmentList: 19.9999375 / 120
dash.all.js:11 [audio] Getting the request for time: 29.999906250000002
dash.all.js:11 [audio] Index for time 29.999906250000002 is 2
dash.all.js:11 [audio] SegmentList: 19.9999375 / 120
dash.all.js:11 [audio] SegmentList: 29.999906250000002 / 120
dash.all.js:11 loaded audio:Media Segment:19.9999375 (200, 102ms, 2ms)
dash.all.js:11 [audio] Buffered Range: 0.032 - 29.983968
dash.all.js:11 [audio] Getting the request for time: 0
dash.all.js:11 [audio] Index for time 0 is 0
dash.all.js:11 loaded audio:Media Segment:29.999906250000002 (200, 26ms, 2ms)
dash.all.js:11 [audio] Buffered Range: 0.032 - 39.999968
dash.all.js:11 [audio] Getting the request for time: 0
dash.all.js:11 [audio] Index for time 0 is 0
dash.all.js:11 loaded video:Media Segment:29.499843750000004 (200, 47ms, 7ms)
dash.all.js:11 Video Element Error: MEDIA_ERR_DECODE
dash.all.js:11 [video] stop
dash.all.js:11 [audio] stop
dash.all.js:11 Video Element Error: MEDIA_ERR_SRC_NOT_SUPPORTED
dash.all.js:11 <video> play
-
how to host live stream video
21 August 2015, by Vhanjan
Using JW Player with HTML5, users can watch videos online.
You just point the source at the location of the video file, and users can play the video, as easy as that. I also tried a live stream source; as I remember it was rts://sampledomain.com/video.mp4,
and JW Player played that MP4 file without trouble. My question is:
how can I host this kind of file ("rts://sampledomain.com/video.mp4") using my web camera or hypercam3 as the video recorder, and send it to JW Player so that users can play the live stream in the browser? Any hint is appreciated, and thank you very much for step-by-step tutorials.
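A common way to produce such a live source (a rough sketch only, not tested against this setup; the device name "USB Camera" and the rtmp://yourserver/live/stream URL are placeholders) is to capture the webcam with ffmpeg, on Windows via DirectShow, and push it to a streaming server that JW Player can then play instead of a static MP4 file:
ffmpeg -f dshow -i video="USB Camera" -c:v libx264 -pix_fmt yuv420p -preset veryfast -tune zerolatency -an -f flv rtmp://yourserver/live/stream
JW Player would then be pointed at the stream URL exposed by that server (RTMP or HLS) rather than at a file path.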
-
need help configuring ffmpeg to decode raw AAC with android ndk
24 October 2016, by Matt Wolfe
I've got an Android app that gets raw AAC bytes from an external device, and I want to decode that data, but I can't seem to get the decoder to work, yet ffmpeg decodes an MP4 file that contains the same audio data just fine (verified with isoviewer). Recently I was able to get this ffmpeg library on android to decode video frames from the same external device, but audio won't work.
Here is the ffmpeg output for the file with the same data:
$ ffmpeg -i Video_2000-01-01_0411.mp4
ffmpeg version 2.6.1 Copyright (c) 2000-2015 the FFmpeg developers
built with Apple LLVM version 6.0 (clang-600.0.57) (based on LLVM 3.5svn)
configuration: --prefix=/usr/local/Cellar/ffmpeg/2.6.1 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libx264 --enable-libmp3lame --enable-libvo-aacenc --enable-libxvid --enable-vda
libavutil 54. 20.100 / 54. 20.100
libavcodec 56. 26.100 / 56. 26.100
libavformat 56. 25.101 / 56. 25.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 11.102 / 5. 11.102
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'AXON_Flex_Video_2000-01-01_0411.mp4':
Metadata:
major_brand : mp42
minor_version : 1
compatible_brands: isom3gp43gp5
Duration: 00:00:15.73, start: 0.000000, bitrate: 1134 kb/s
Stream #0:0(eng): Audio: aac (LC) (mp4a / 0x6134706D), 8000 Hz, mono, fltp, 40 kb/s (default)
Metadata:
handler_name : soun
Stream #0:1(eng): Video: mpeg4 (Simple Profile) (mp4v / 0x7634706D), yuv420p, 640x480 [SAR 1:1 DAR 4:3], 1087 kb/s, 29.32 fps, 26.58 tbr, 90k tbn, 1k tbc (default)
Metadata:
handler_name : vide
Here is my NDK code for setting up and decoding the audio:
jint ffmpeg_init(JNIEnv * env, jobject this) {
    audioCodec = avcodec_find_decoder(AV_CODEC_ID_AAC);
    if (!audioCodec) {
        LOGE("audio codec %d not found", AV_CODEC_ID_AAC);
        return -1;
    }
    audioContext = avcodec_alloc_context3(audioCodec);
    if (!audioContext) {
        LOGE("Could not allocate codec context");
        return -1;
    }
    int openRet = avcodec_open2(audioContext, audioCodec, NULL);
    if (openRet < 0) {
        LOGE("Could not open codec, error:%d", openRet);
        return -1;
    }
    audioContext->sample_rate = 8000;
    audioContext->channel_layout = AV_CH_LAYOUT_MONO;
    audioContext->profile = FF_PROFILE_AAC_LOW;
    audioContext->bit_rate = 48 * 1024;
    audioContext->sample_fmt = AV_SAMPLE_FMT_FLTP;
    // unsigned char extradata[] = {0x15, 0x88};
    // audioContext->extradata = extradata;
    // audioContext->extradata_size = sizeof(extradata);
    audioFrame = av_frame_alloc();
    if (!audioFrame) {
        LOGE("Could not create audio frame");
        return -1;
    }
}
jint ffmpeg_decodeAudio(JNIEnv *env, jobject this, jbyteArray aacData, jbyteArray output, int offset, int len) {
    LOGI("ffmpeg_decodeAudio()");
    char errbuf[128];
    AVPacket avpkt = {0};
    av_init_packet(&avpkt);
    LOGI("av_init_packet()");
    int error, got_frame;
    uint8_t* buffer = (uint8_t *) (*env)->GetByteArrayElements(env, aacData, 0);
    uint8_t* copy = av_malloc(len);
    memcpy(copy, &buffer[offset], len);
    av_packet_from_data(&avpkt, copy, len);
    if ((error = avcodec_decode_audio4(audioContext, audioFrame, &got_frame, &avpkt)) < 0) {
        ffmpeg_log_error(error);
        av_free_packet(&avpkt);
        return error;
    }
    if (got_frame) {
        LOGE("Copying audioFrame->extended_data to output jbytearray, linesize[0]:%d", audioFrame->linesize[0]);
        (*env)->SetByteArrayRegion(env, output, 0, audioFrame->linesize[0], *audioFrame->extended_data);
    }
    return 0;
}
As you can see, I've got an init function that opens the decoder and creates the context; these things all work fine, without error. However, when I call avcodec_decode_audio4 I get an error:
FFMPEG error : -1094995529, Invalid data found when processing input
I've tried all sorts of combinations of AVCodecContext properties. I'm not sure which ones I need to set for the decoder to do its job, but from reading online I should only need to set the channel layout and the sample_rate (which I've tried by themselves). I've also tried setting the extradata/extradata_size parameters to values that should match the stream's settings, per: http://wiki.multimedia.cx/index.php?title=MPEG-4_Audio
But no luck. Since the device we're getting packets from sends AAC data that has no sound at the beginning (but consists of valid packets), I've tried sending just those, since they definitely should decode correctly.
Here is an example of one of the initial audio packets, which contain silence:
010c9eb43f21f90fc87e46fff10a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5dffe214b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4bbd1c429696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696978
Note that the data shown above is just a hex encoding of the data that I'm putting into the AVPacket, as sent from the external device to the Android application. My application doesn't have direct access to the file, though, so I need to decode the raw frames/samples as I get them. When I look at the audio track data in isoviewer I can see that the audio track's first sample is the same data I got from the device containing that file (so the external device is just sending me the sample's raw data). I believe this data can be derived by reading the stsz (sample size) and stco (chunk offset) boxes and pulling the samples out of the mdat box of the file.
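Since these are raw AAC access units with no container framing, one commonly used option (a hedged sketch only, not confirmed to fix this particular stream; make_adts_header is a hypothetical helper) is to prepend a 7-byte ADTS header to each frame before putting it in the AVPacket, so the decoder can read the profile, sample rate and channel count from the bitstream itself. The values below assume AAC-LC, 8000 Hz, mono, matching the stream described above:

#include <stdint.h>

/* Hypothetical helper: build a 7-byte ADTS header for one raw AAC frame.
 * aac_len is the length of the raw frame in bytes, not counting the header.
 * Assumes AAC-LC (object type 2), 8000 Hz (sampling index 11), mono. */
static void make_adts_header(uint8_t hdr[7], int aac_len)
{
    const int profile   = 1;              /* ADTS stores the audio object type minus 1: AAC LC = 2 - 1 */
    const int freq_idx  = 11;             /* samplingFrequencyIndex 11 = 8000 Hz */
    const int chan_cfg  = 1;              /* 1 = mono */
    const int frame_len = aac_len + 7;    /* ADTS frame length includes the 7-byte header */

    hdr[0] = 0xFF;                                                 /* syncword, high 8 bits */
    hdr[1] = 0xF1;                                                 /* syncword low, MPEG-4, layer 0, no CRC */
    hdr[2] = (profile << 6) | (freq_idx << 2) | (chan_cfg >> 2);
    hdr[3] = ((chan_cfg & 0x3) << 6) | ((frame_len >> 11) & 0x3);
    hdr[4] = (frame_len >> 3) & 0xFF;
    hdr[5] = ((frame_len & 0x7) << 5) | 0x1F;                      /* plus high bits of buffer fullness (0x7FF) */
    hdr[6] = 0xFC;                                                 /* low bits of buffer fullness, 1 raw data block */
}

The 7-byte header followed by the raw frame would then be what goes into the AVPacket; with ADTS framing the AAC decoder can usually work without any extradata.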
Also, isoviewer shows the esds box as having the following:
ESDescriptor{esId=0, streamDependenceFlag=0, URLFlag=0, oCRstreamFlag=0, streamPriority=0, URLLength=0, URLString='null', remoteODFlag=0, dependsOnEsId=0, oCREsId=0, decoderConfigDescriptor=DecoderConfigDescriptor{objectTypeIndication=64, streamType=5, upStream=0, bufferSizeDB=513, maxBitRate=32000, avgBitRate=32000, decoderSpecificInfo=null, audioSpecificInfo=AudioSpecificConfig{configBytes=1588, audioObjectType=2 (AAC LC), samplingFrequencyIndex=11 (8000), samplingFrequency=0, channelConfiguration=1, syncExtensionType=0, frameLengthFlag=0, dependsOnCoreCoder=0, coreCoderDelay=0, extensionFlag=0, layerNr=0, numOfSubFrame=0, layer_length=0, aacSectionDataResilienceFlag=false, aacScalefactorDataResilienceFlag=false, aacSpectralDataResilienceFlag=false, extensionFlag3=0}, configDescriptorDeadBytes=, profileLevelIndicationDescriptors=[[]]}, slConfigDescriptor=SLConfigDescriptor{predefined=2}}
And the binary is this:
00 00 00 30 65 73 64 73 00 00 00 00 03 80 80 80
1f 00 00 00 04 80 80 80 14 40 15 00 02 01 00 00
7d 00 00 00 7d 00 05 80 80 80 02 15 88 06 01 02
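Based on the AudioSpecificConfig shown above (configBytes=1588, i.e. the two bytes 0x15 0x88 for AAC-LC, 8 kHz, mono), another thing worth trying, sketched here against the libavcodec 56 API used in the code above and not verified as a fix, is to set the decoder parameters and the extradata before avcodec_open2 is called, since the codec reads them at open time, and to allocate the extradata with av_mallocz rather than pointing it at a stack array:

audioCodec = avcodec_find_decoder(AV_CODEC_ID_AAC);
audioContext = avcodec_alloc_context3(audioCodec);

/* Fill in stream parameters before avcodec_open2(), not after it. */
audioContext->sample_rate    = 8000;
audioContext->channels       = 1;
audioContext->channel_layout = AV_CH_LAYOUT_MONO;

/* AudioSpecificConfig from the esds box: 0x15 0x88 (AAC LC, 8 kHz, mono),
 * heap-allocated with the padding libavcodec expects. */
audioContext->extradata = av_mallocz(2 + FF_INPUT_BUFFER_PADDING_SIZE);
audioContext->extradata[0] = 0x15;
audioContext->extradata[1] = 0x88;
audioContext->extradata_size = 2;

int openRet = avcodec_open2(audioContext, audioCodec, NULL);
if (openRet < 0) {
    LOGE("Could not open codec, error:%d", openRet);
    return -1;
}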