
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (87)
-
Contribute to translation
13 April 2011
You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, which lets it spread to new linguistic communities.
To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
MediaSPIP is currently available in French and English (...)
-
Sites built with MediaSPIP
2 May 2011, by
This page presents a few of the sites running MediaSPIP.
You can of course add your own via the form at the bottom of the page.
-
General document management
13 May 2011, by
MediaSPIP never modifies the original document that is put online.
For each uploaded document it performs two successive operations: it creates an additional version that can easily be viewed online, while leaving the original downloadable in case the original document cannot be read in a web browser; and it retrieves the metadata of the original document to describe the file textually.
The tables below explain what MediaSPIP can do (...)
On other sites (7236)
-
How to adjust MPEG-2 TS start time with ffmpeg?
29 June 2015, by Maxim Kornienko
I'm writing a simple HLS (HTTP Live Streaming) Java server to live-cast (really live, not on demand) a screen show plus voice. I constantly receive chunks of image frames and audio samples as input to my service and produce MPEG-2 TS files plus an m3u8 playlist page as output. The workflow is the following:
- Collect (buffer) source video frames and audio for a certain period of time
- Convert the series of video frames to an H.264-encoded video file
- Convert the audio samples to an MP3 audio file
- Merge them into a .ts file with the following ffmpeg command:
ffmpeg -i audio.mp3 -i video.mp4 -f mpegts -c:a copy -c:v copy -vprofile main -level:v 4.0 -vbsf h264_mp4toannexb -flags -global_header segment.ts
- Publish several .ts files in an m3u8 playlist, as sketched below.
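For reference (not part of the original question), a live m3u8 playlist is plain text of roughly this shape; the segment names and durations below are illustrative, matching the roughly 10-second segments shown later:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:11
#EXT-X-MEDIA-SEQUENCE:26
#EXTINF:10.02,
26.ts
#EXTINF:10.02,
27.ts
#EXTINF:10.02,
28.ts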
The problem is that the resulting playlist breaks off after the first segment is played. VLC logs the following errors:
freetype error: Breaking unbreakable line
ts error: libdvbpsi (PSI decoder): TS discontinuity (received 0, expected 4) for PID 17
ts error: libdvbpsi (PSI decoder): TS duplicate (received 0, expected 1) for PID 0
ts error: libdvbpsi (PSI decoder): TS duplicate (received 0, expected 1) for PID 4096
core error: ES_OUT_SET_(GROUP_)PCR is called too late (pts_delay increased to 1000 ms)
core error: ES_OUT_RESET_PCR called
core error: Could not convert timestamp 185529572000
ts error: libdvbpsi (PSI decoder): TS discontinuity (received 0, expected 4) for PID 17
ts error: libdvbpsi (PSI decoder): TS duplicate (received 0, expected 1) for PID 0
ts error: libdvbpsi (PSI decoder): TS duplicate (received 0, expected 1) for PID 4096
core error: ES_OUT_SET_(GROUP_)PCR is called too late (jitter of 8653 ms ignored)
core error: Could not get display date for timestamp 0
core error: Could not convert timestamp 185538017000
core error: Could not convert timestamp 185538267000
core error: Could not convert timestamp 185539295977
...

I guess the reason is that the start times of the segments do not belong to one continuous stream, but it's impossible to concatenate and re-segment (with ffmpeg -f segment) the whole stream each time a new chunk is added. I tried adding the #EXT-X-DISCONTINUITY tag to the playlist as suggested here, but it didn't help. When I ffprobe the segments I get:

Input #0, mpegts, from '26.ts':
Duration: 00:00:10.02, start: 1.876978, bitrate: 105 kb/s
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p, 640x640, 4 fps, 4 tbr, 90k tbn, 8 tbc
Stream #0:1[0x101]: Audio: mp3 ([3][0][0][0] / 0x0003), 48000 Hz, mono, s16p, 64 kb/s

The start value in the line
Duration: 00:00:10.02, start: 1.876978, bitrate: 105 kb/s
is more or less the same for all segments.
When I check segments from proven-to-work playlists (like http://vevoplaylist-live.hls.adaptive.level3.net/vevo/ch1/appleman.m3u8), they all have a different start value for each segment, for example:

Input #0, mpegts, from 'segm150518140104572-424570.ts':
Duration: 00:00:06.17, start: 65884.808689, bitrate: 479 kb/s
Program 257
Stream #0:0[0x20]: Video: h264 (Constrained Baseline) ([27][0][0][0] / 0x001B), yuv420p, 320x180 [SAR 1:1 DAR 16:9], 30 fps, 29.97 tbr, 90k tbn, 60 tbc
Stream #0:1[0x21]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 115 kb/s
Stream #0:2[0x22]: Data: timed_id3 (ID3 / 0x20334449)

and the next one after it:
Input #0, mpegts, from 'segm150518140104572-424571.ts':
Duration: 00:00:06.22, start: 65890.814689, bitrate: 468 kb/s
Program 257
Stream #0:0[0x20]: Video: h264 (Constrained Baseline) ([27][0][0][0] / 0x001B), yuv420p, 320x180 [SAR 1:1 DAR 16:9], 30 fps, 29.97 tbr, 90k tbn, 60 tbc
Stream #0:1[0x21]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 124 kb/s
Stream #0:2[0x22]: Data: timed_id3 (ID3 / 0x20334449)differ in the way that start time of
segm150518140104572-424571.ts
is equal to start time + duration ofsegm150518140104572-424570.ts
.How could this start value be adjusted with
ffmpeg
? Or maybe my whole aproach is wrong ? Unfortunately I couldn’t find on the internet working example of live (not on demand) video service implemented with ffmepg. -
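One avenue worth noting here (an editor's sketch, not from the original question): ffmpeg has an -output_ts_offset output option that shifts all output timestamps by a given duration, so each new segment could be muxed with an offset equal to the total duration of the segments already published. A minimal sketch, assuming ffmpeg 2.3 or later and the file names used above:

ffmpeg -i audio.mp3 -i video.mp4 -c:a copy -c:v copy -vbsf h264_mp4toannexb -f mpegts -output_ts_offset 10.02 27.ts

Here 10.02 is the accumulated duration (in seconds) of the previously generated segments; whether this fully satisfies VLC's PCR continuity checks would still need to be verified.
-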
Instagram Live API using Graph API
16 August 2020, by Deepak Sharma
I see Facebook has a new Graph API for live video. But I am not sure if it can be used to go live on Instagram as well. I see third-party tools such as Yellow Duck being able to go live on Instagram. Not only that, a lot of software supports streaming to any destination just by using an RTMP link. So does that mean any service that can generate an RTMP stream can broadcast to Instagram (with/without logging in to Instagram)? How does Instagram Live work if one can generate an RTMP stream? Finally, if I can generate an RTMP/RTMPS stream locally on my desktop or phone using the ffmpeg libraries, can I stream to Instagram?
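For context (an editor's illustration, not from the original question): producing an RTMP/RTMPS stream with the ffmpeg command-line tool is straightforward; the ingest URL and stream key below are placeholders, since Instagram does not publicly document an RTMP ingest endpoint:

ffmpeg -re -i input.mp4 -c:v libx264 -preset veryfast -c:a aac -f flv "rtmps://ingest.example.com:443/rtmp/STREAM-KEY"

Whether Instagram accepts such a stream outside its own apps is exactly what the question is asking.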


-
JavaCPP FFMpeg to JavaSound
8 August 2020, by TW2
I have a problem getting audio to play using the JavaCPP FFmpeg library. I don't know how to pass the decoded audio to Java Sound, and I also don't know whether my code is correct.


Let's look at the most important parts of my code (video is OK, so I leave it out):


The variables:


//==========================================================================
// FFMpeg 4.x - Video and Audio
//==========================================================================

private final AVFormatContext pFormatCtx = new AVFormatContext(null);
private final AVDictionary OPTIONS_DICT = null;
private AVPacket pPacket = new AVPacket();
 
//==========================================================================
// FFMpeg 4.x - Audio
//==========================================================================
 
private AVCodec pAudioCodec;
private AVCodecContext pAudioCodecCtx;
private final List<StreamInfo> audioStreams = new ArrayList<>();
private int audio_data_size;
private final BytePointer audio_data = new BytePointer(0);
private int audio_ret;
private AVFrame pAudioDecodedFrame = null;
private AVCodecParserContext pAudioParser;
private SwrContext audio_swr_ctx = null;


Then I call the prepare functions in this order:


private void prepareFirst() throws Exception{
 oldFile = file;
 
 // Initialize packet and check for error
 pPacket = av_packet_alloc();
 if(pPacket == null){
 throw new Exception("ALL: Couldn't allocate packet");
 }

 // Open video file
 if (avformat_open_input(pFormatCtx, file.getPath(), null, null) != 0) {
 throw new Exception("ALL: Couldn't open file");
 }

 // Retrieve stream information
 if (avformat_find_stream_info(pFormatCtx, (PointerPointer)null) < 0) {
 throw new Exception("ALL: Couldn't find stream information");
 }

 // Dump information about file onto standard error
 av_dump_format(pFormatCtx, 0, file.getPath(), 0);

 // Find the first audio/video stream
 for (int i = 0; i < pFormatCtx.nb_streams(); i++) {
 switch(pFormatCtx.streams(i).codecpar().codec_type()){
 case AVMEDIA_TYPE_VIDEO -> videoStreams.add(new StreamInfo(i, pFormatCtx.streams(i)));
 case AVMEDIA_TYPE_AUDIO -> audioStreams.add(new StreamInfo(i, pFormatCtx.streams(i)));
 }
 }
 
 if(videoStreams.isEmpty() && type != PlayType.AudioOnly){
 throw new Exception("Didn't find a video stream");
 }
 if(audioStreams.isEmpty() && type != PlayType.VideoOnly){
 throw new Exception("Didn't find an audio stream");
 }
}

private void prepareAudio() throws Exception{
 //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 // AUDIO
 //------------------------------------------------------------------

 if(audioStreams.isEmpty() == false){
 //===========================
 //------------
 
// // Let's search for AVCodec
// pAudioCodec = avcodec_find_decoder(pFormatCtx.streams(audioStreams.get(0).getStreamIndex()).codecpar().codec_id());
// if (pAudioCodec == null) {
// throw new Exception("AUDIO: Unsupported codec or not found!");
// }
//
// // Let's alloc AVCodecContext
// pAudioCodecCtx = avcodec_alloc_context3(pAudioCodec);
// if (pAudioCodecCtx == null) { 
// throw new Exception("AUDIO: Unallocated codec context or not found!");
// }
 
 // Get a pointer to the codec context for the audio stream
 pAudioCodecCtx = pFormatCtx.streams(audioStreams.get(0).getStreamIndex()).codec();

 // Find the decoder for the audio stream
 pAudioCodec = avcodec_find_decoder(pAudioCodecCtx.codec_id());
 if (pAudioCodec == null) {
 throw new Exception("AUDIO: Unsupported codec or not found!");
 }

 //===========================
 //------------

 /* open it */
 if (avcodec_open2(pAudioCodecCtx, pAudioCodec, OPTIONS_DICT) < 0) {
 throw new Exception("AUDIO: Could not open codec");
 }

 pAudioDecodedFrame = av_frame_alloc();
 if (pAudioDecodedFrame == null){
 throw new Exception("AUDIO: DecodedFrame allocation failed");
 }

 audio_swr_ctx = swr_alloc_set_opts(
 null, // existing Swr context or NULL
 AV_CH_LAYOUT_STEREO, // output channel layout (AV_CH_LAYOUT_*)
 AV_SAMPLE_FMT_S16, // output sample format (AV_SAMPLE_FMT_*).
 44100, // output sample rate (frequency in Hz)
 pAudioCodecCtx.channels(), // input channel layout (AV_CH_LAYOUT_*)
 pAudioCodecCtx.sample_fmt(), // input sample format (AV_SAMPLE_FMT_*).
 pAudioCodecCtx.sample_rate(), // input sample rate (frequency in Hz)
 0, // logging level offset
 null // parent logging context, can be NULL
 );
 
 // Note: swr_init() returns a negative AVERROR on failure; the return value is not checked here
 swr_init(audio_swr_ctx);
 
 av_samples_fill_arrays(
 pAudioDecodedFrame.data(), // audio_data,
 pAudioDecodedFrame.linesize(), // linesize
 audio_data, // buf
 (int)AV_CH_LAYOUT_STEREO, // nb_channels
 44100, // nb_samples
 AV_SAMPLE_FMT_S16, // sample_fmt
 0 // align
 );
 
 }
 
 // Audio treatment end ---------------------------------------------
 //==================================================================
}



And then, when I launch the thread:


private void doPlay() throws Exception{
 av_init_packet(pPacket);

 // Read frames
 while (av_read_frame(pFormatCtx, pPacket) >= 0) {
 if (type != PlayType.AudioOnly && pPacket.stream_index() == videoStreams.get(0).getStreamIndex()) {
 // Is this a packet from the video stream?
 decodeVideo();
 renewPacket();
 }

 if (type != PlayType.VideoOnly && pPacket.stream_index() == audioStreams.get(0).getStreamIndex()) {
 // Is this a packet from the audio stream?
 if(pPacket.size() > 0){
 decodeAudio();
 renewPacket();
 }
 }
 }
}

private void renewPacket(){
 // Free the packet that was allocated by av_read_frame
 av_packet_unref(pPacket);

 pPacket.data(null);
 pPacket.size(0);
 av_init_packet(pPacket);
}



And again, this is where I fail to read the audio:


private void decodeAudio() throws Exception{

 do {
 audio_ret = avcodec_send_packet(pAudioCodecCtx, pPacket);
 } while(audio_ret == AVERROR_EAGAIN());
 System.out.println("packet sent return value: " + audio_ret);

 if(audio_ret == AVERROR_EOF || audio_ret == AVERROR_EINVAL()) {
 StringBuilder sb = new StringBuilder();
 Formatter formatter = new Formatter(sb, Locale.US);
 formatter.format("AVERROR(EAGAIN): %d, AVERROR_EOF: %d, AVERROR(EINVAL): %d\n", AVERROR_EAGAIN(), AVERROR_EOF, AVERROR_EINVAL());
 formatter.format("Audio frame getting error (%d)!\n", audio_ret);
 throw new Exception(sb.toString());
 }

 // Fetch a decoded frame: returns 0 on success or AVERROR(EAGAIN) when more input packets are needed
 audio_ret = avcodec_receive_frame(pAudioCodecCtx, pAudioDecodedFrame);
 System.out.println("frame received return value: " + audio_ret);

 audio_data_size = av_get_bytes_per_sample(AV_SAMPLE_FMT_S16);

 if (audio_data_size < 0) {
 /* This should not occur, checking just for paranoia */
 throw new Exception("Failed to calculate data size");
 }
 
 double frame_nb = 44100d / pAudioCodecCtx.sample_rate() * pAudioDecodedFrame.nb_samples();
 long out_count = Math.round(Math.floor(frame_nb));

 int out_samples = swr_convert(
 audio_swr_ctx, // allocated Swr context with parameters set
 audio_data, // output buffer; must be large enough for out_count samples per channel
 (int)out_count, // amount of space available for output, in samples per channel
 pAudioDecodedFrame.data(0), // input buffers
 pAudioDecodedFrame.nb_samples() // number of input samples available per channel
 );
 
 if (out_samples < 0) {
 throw new Exception("AUDIO: Error while converting");
 }
 
 int dst_bufsize = av_samples_get_buffer_size(
 pAudioDecodedFrame.linesize(), 
 (int)AV_CH_LAYOUT_STEREO, 
 out_samples,
 AV_SAMPLE_FMT_S16,
 1
 );
 
 AudioFormat audioFormat = new AudioFormat(
 pAudioDecodedFrame.sample_rate(),
 16,
 2, 
 true, 
 false
 );
 
 BytePointer bytePointer = pAudioDecodedFrame.data(0);
 ByteBuffer byteBuffer = bytePointer.asBuffer();

 byte[] bytes = new byte[byteBuffer.remaining()];
 byteBuffer.get(bytes);
 
 try (SourceDataLine sdl = AudioSystem.getSourceDataLine(audioFormat)) {
 sdl.open(audioFormat); 
 sdl.start();
 sdl.write(bytes, 0, bytes.length);
 sdl.drain();
 sdl.stop();
 } catch (LineUnavailableException ex) {
 Logger.getLogger(AVEntry.class.getName()).log(Level.SEVERE, null, ex);
 } 
}



Do you have an idea?
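For comparison (an editor's sketch, not from the original post), here is a minimal, self-contained Java Sound player for 16-bit stereo PCM at 44100 Hz that keeps the SourceDataLine open across frames instead of opening, draining, and closing it for every decoded frame; all names are illustrative:

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;

public class PcmPlayer implements AutoCloseable {
    private final SourceDataLine line;

    public PcmPlayer() throws LineUnavailableException {
        // 44100 Hz, 16-bit, stereo, signed, little-endian PCM
        AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
        line = AudioSystem.getSourceDataLine(format);
        line.open(format);
        line.start();
    }

    // Call once per decoded/resampled frame; write() blocks until the data is consumed
    public void play(byte[] pcm, int length) {
        line.write(pcm, 0, length);
    }

    @Override
    public void close() {
        line.drain(); // let buffered audio finish playing
        line.close();
    }
}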