
Other articles (47)
-
Sites built with MediaSPIP
2 May 2011. This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page. -
HTML5 audio and video support
13 April 2011. MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
From upload to the final video [standalone version]
31 January 2010. The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)
On other sites (6643)
-
Rotating a video during encoding with ffmpeg and libav API results in half of video corrupted
11 May 2020, by Daniel Kobe. I'm using the C API for ffmpeg/libav to rotate a vertically filmed iPhone video during the encoding step. There are other questions asking how to do a similar thing, but they all use the CLI tool.



So far I was able to figure out how to use AVFilter to rotate the video, based on this example: https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/filtering_video.c


The problem is that half the output file is corrupt.




Here is the code for my encoding logic. It's written in Go, using cgo to interface with the C API.



// Encode encodes an AVFrame and returns the resulting packet data.
func Encode(enc Encoder, frame *C.AVFrame) (*EncodedFrame, error) {
    ctx := enc.Context()

    if ctx.buffersrcctx == nil {
        // Initialize the filter graph on first use.
        outputs := C.avfilter_inout_alloc()
        inputs := C.avfilter_inout_alloc()
        m_pFilterGraph := C.avfilter_graph_alloc()

        buffersrc := C.avfilter_get_by_name(C.CString("buffer"))
        argsStr := fmt.Sprintf("video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
            ctx.avctx.width, ctx.avctx.height, ctx.avctx.pix_fmt,
            ctx.avctx.time_base.num, ctx.avctx.time_base.den,
            ctx.avctx.sample_aspect_ratio.num, ctx.avctx.sample_aspect_ratio.den)
        Log.Info.Println(argsStr)
        args := C.CString(argsStr)
        ret := C.avfilter_graph_create_filter(&ctx.buffersrcctx, buffersrc, C.CString("my_buffersrc"), args, nil, m_pFilterGraph)
        if ret < 0 {
            Log.Info.Printf("\nproblem creating buffer source: %v\n", AVError(ret).Error())
        }

        buffersink := C.avfilter_get_by_name(C.CString("buffersink"))
        ret = C.avfilter_graph_create_filter(&ctx.buffersinkctx, buffersink, C.CString("my_buffersink"), nil, nil, m_pFilterGraph)
        if ret < 0 {
            Log.Info.Printf("\nproblem creating buffer sink: %v\n", AVError(ret).Error())
        }

        /*
         * Set the endpoints for the filter graph. The filter graph will
         * be linked to the graph described by the filter string.
         *
         * The buffer source output must be connected to the input pad of
         * the first filter in the string; since the first filter's input
         * label is not specified, it defaults to "in".
         */
        outputs.name = C.av_strdup(C.CString("in"))
        outputs.filter_ctx = ctx.buffersrcctx
        outputs.pad_idx = 0
        outputs.next = nil

        /*
         * The buffer sink input must be connected to the output pad of
         * the last filter in the string; since the last filter's output
         * label is not specified, it defaults to "out".
         */
        inputs.name = C.av_strdup(C.CString("out"))
        inputs.filter_ctx = ctx.buffersinkctx
        inputs.pad_idx = 0
        inputs.next = nil

        ret = C.avfilter_graph_parse_ptr(m_pFilterGraph, C.CString("transpose=clock,scale=-2:1080"),
            &inputs, &outputs, nil)
        if ret < 0 {
            Log.Info.Printf("\nproblem with avfilter_graph_parse: %v\n", AVError(ret).Error())
        }

        ret = C.avfilter_graph_config(m_pFilterGraph, nil)
        if ret < 0 {
            Log.Info.Printf("\nproblem with graph config: %v\n", AVError(ret).Error())
        }
    }

    filteredFrame := C.av_frame_alloc()

    /* push the decoded frame into the filtergraph */
    ret := C.av_buffersrc_add_frame_flags(ctx.buffersrcctx, frame, C.AV_BUFFERSRC_FLAG_KEEP_REF)
    if ret < 0 {
        Log.Error.Printf("\nerror while feeding the filter graph, err = %v\n", AVError(ret).Error())
        return nil, errors.New(ErrorFFmpegCodecFailure)
    }

    /* pull filtered frames from the filtergraph */
    for {
        ret = C.av_buffersink_get_frame(ctx.buffersinkctx, filteredFrame)
        if ret == C.AVERROR_EAGAIN || ret == C.AVERROR_EOF {
            break
        }
        if ret < 0 {
            Log.Error.Printf("\ncouldn't find a frame, err = %v\n", AVError(ret).Error())
            return nil, errors.New(ErrorFFmpegCodecFailure)
        }

        filteredFrame.pts = frame.pts
        frame = filteredFrame
        defer C.av_frame_free(&filteredFrame)
    }

    if frame != nil {
        frame.pict_type = 0 // reset pict type for the encoder
        if C.avcodec_send_frame(ctx.avctx, frame) != 0 {
            Log.Error.Printf("%+v\n", StackErrorf("codec error, could not send frame"))
            return nil, errors.New(ErrorFFmpegCodecFailure)
        }
    }

    for {
        ret := C.avcodec_receive_packet(ctx.avctx, ctx.pkt)
        if ret == C.AVERROR_EAGAIN {
            break
        }
        if ret == C.AVERROR_EOF {
            return nil, fmt.Errorf("EOF")
        }
        if ret < 0 {
            Log.Error.Printf("%+v\n", StackErrorf("codec error, receiving packet"))
            return nil, errors.New(ErrorFFmpegCodecFailure)
        }

        data := C.GoBytes(unsafe.Pointer(ctx.pkt.data), ctx.pkt.size)
        return &EncodedFrame{data, int64(ctx.pkt.pts), int64(ctx.pkt.dts),
            (ctx.pkt.flags & C.AV_PKT_FLAG_KEY) != 0}, nil
    }

    return nil, nil
}




It seems like I need to do something with the scaling here but I'm struggling to find helpful information online.
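A plausible culprit, sketched here under assumptions the question does not confirm: transpose=clock swaps width and height, and scale=-2:1080 changes them again, so the frames coming out of the graph no longer match the geometry ctx.avctx was opened with, and an encoder fed mismatched planes produces exactly this kind of partly corrupted picture. Once avfilter_graph_config succeeds, the sink can report the real output geometry (field names follow the code above):

outW := C.av_buffersink_get_w(ctx.buffersinkctx)
outH := C.av_buffersink_get_h(ctx.buffersinkctx)
if ctx.avctx.width != outW || ctx.avctx.height != outH {
    // The encoder was opened for the source geometry, but the graph
    // emits rotated/scaled frames; reopen the encoder with outW x outH
    // (and the sink's pix_fmt) before sending it any filtered frames.
    Log.Info.Printf("filter output %dx%d != encoder %dx%d\n",
        outW, outH, ctx.avctx.width, ctx.avctx.height)
}

Separately, filteredFrame is reused across av_buffersink_get_frame calls without being released; calling C.av_frame_unref(filteredFrame) at the top of each loop iteration avoids leaking the previous iteration's buffers.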


-
Does PTS have to start at 0?
5 July 2018, by stevendesu. I've seen a number of questions regarding video PTS values not starting at zero, or asking how to make them start at zero. I'm aware that using ffmpeg I can do something like
ffmpeg -i <video> -vf "setpts=PTS-STARTPTS" <output>
to fix this kind of thing. However, it's my understanding that PTS values don't have to start at zero. For instance, if you join a live stream, odds are it has been going on for an hour and the PTS is already somewhere around 3600000+, but your video player faithfully displays everything just fine. Therefore I would expect there to be no problem if I intentionally created a video with a PTS value starting at, say, the current wall-clock time.
I want to send a live stream using ffmpeg, but embed the current time into the stream. This can be used both for latency calculation while the stream is live, and later to determine when the stream was originally aired. From my understanding of PTS, something as simple as this should probably work:
ffmpeg -i video.flv -vf "setpts=RTCTIME" rtmp://<output>
When I try this, however, ffmpeg outputs the following:
frame= 93 fps= 20 q=-1.0 Lsize= 9434kB time=535020:39:58.70 bitrate= 0.0kbits/s speed=1.35e+11x
Note the extremely large value for "time", the bitrate (0.0 kbits/s), and the speed (135000000000x!).
At first I thought the issue might be my timebase, so I tried the following:
ffmpeg -i video.flv -vf "settb=1/1K,setpts=RTCTIME/1K" rtmp://<output>
This puts everything in terms of milliseconds (1 PTS = 1 ms), but I had the same issue (massive time, zero bitrate, and massive speed).
Am I misunderstanding something about PTS? Is it not allowed to start at non-zero values? Or am I just doing something wrong?
Update
After reviewing @Gyan's answer, I formatted my command like so:
ffmpeg -re -i video.flv -vf "settb=1/1K, setpts=(RTCTIME-RTCSTART)/1K" -output_ts_offset $(date +%s.%N) rtmp://<output>
This way the PTS values would match up to "milliseconds since the stream started" and would be offset by the start time of the stream (theoretically making PTS = timestamp on the server).
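As a sanity check on what those flags are meant to produce, the intended arithmetic can be sketched in a few lines of Go (illustrative only, not part of the original commands):

package main

import (
    "fmt"
    "time"
)

func main() {
    // setpts=(RTCTIME-RTCSTART)/1K yields PTS in milliseconds since the
    // stream started; -output_ts_offset then shifts the muxed timestamps
    // by the wall-clock time at which the stream began.
    start := time.Now() // plays the role of RTCSTART / $(date +%s.%N)

    // ... later, for each frame:
    pts := time.Since(start).Milliseconds() // (RTCTIME - RTCSTART) / 1K
    serverTime := start.UnixMilli() + pts   // after -output_ts_offset

    fmt.Printf("pts=%d ms, absolute=%d ms since the epoch\n", pts, serverTime)
}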
With that command, it looked like it was encoding better:
frame= 590 fps=7.2 q=22.0 size= 25330kB time=00:01:21.71 bitrate=2539.5kbits/s dup=0 drop=1350 speed= 1x
Bitrate was now correct, time was accurate, and speed was not outrageous. The frame rate was still a bit off, though (the source video is 24 fps but it's reporting 7.2 frames per second).
When I tried watching the stream from the other end, the video was out of sync with the audio and played at about double normal speed for a while; then the video froze and the audio continued without it.
Furthermore, when I dumped the stream to a file (ffmpeg -i rtmp://<output> dump.mp4) and looked at the PTS timestamps with ffprobe (ffprobe -show_entries packet=codec_type,pts dump.mp4 | grep "video" -B 1 -A 2), the timestamps didn't seem to show server time at all:
...
--
[PACKET]
codec_type=video
pts=131072
[/PACKET]
[PACKET]
codec_type=video
pts=130048
[/PACKET]
--
[PACKET]
codec_type=video
pts=129536
[/PACKET]
[PACKET]
codec_type=video
pts=130560
[/PACKET]
--
[PACKET]
codec_type=video
pts=131584
[/PACKET]
Is the problem just an incompatibility with RTMP?
Update 2
I've removed the video filter and I'm now encoding like so:
ffmpeg -re -i video.flv -output_ts_offset $(date +%s.%N) rtmp://<output>
This is encoding correctly:
frame= 910 fps= 23 q=25.0 size= 12027kB time=00:00:38.97 bitrate=2528.2kbits/s speed=0.981x
In order to verify that the PTS values are correct, I'm dumping the output to a file like so:
ffmpeg -i rtmp://<output> -copyts -write_tmcd 0 dump.mp4
I tried saving it as dump.flv (since it's RTMP), however this threw the error:
[flv @ 0x5600f24b4620] Audio codec mp3 not compatible with flv
This is a bit weird since the audio isn't mp3-encoded (it's Speex) - but whatever.
While dumping this file, the following error pops up repeatedly:
frame= 1 fps=0.0 q=0.0 size= 0kB time=00:00:09.21 bitrate= 0.0kbits/s dup=0 dr
43090023 frame duplication too large, skipping
43090027 frame duplication too large, skipping
Last message repeated 3 times
43090031 frame duplication too large, skipping
Last message repeated 3 times
43090035 frame duplication too large, skipping
Playing the resulting video in VLC plays an audio stream but displays no video. I then attempt to probe this video with ffprobe to look at the video PTS values:
ffprobe -show_entries packet=codec_type,pts dump.mp4 | grep "video" -B 1 -A 2
This returns only a single video frame, whose PTS is not large like I would expect:
[PACKET]
codec_type=video
pts=1020
[/PACKET]
This has been a surprisingly difficult task.
-
ffmpeg issue with Jupyter notebook
16 November 2022, by Nanda. I am trying to extract frames from a video file using ffmpeg in Python. I installed ffmpeg using Homebrew and ffmpeg-python via Anaconda Navigator. Yet when I call ffmpeg in a Jupyter notebook as follows


!ffmpeg -i "$file" "$rootdir"/"$folder_name"/frame%04d.png



I get an error saying


zsh:1: command not found: ffmpeg



I clearly see ffmpeg in my /usr/local/bin. Can someone please assist me in sorting this out? I am able to use ffmpeg in Google Colab, though.
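One common cause: a Jupyter kernel launched from the Anaconda Navigator GUI does not necessarily inherit the login shell's PATH, so /usr/local/bin can be invisible to the ! shell even though the binary is there. What zsh does before printing "command not found" is simply walk the PATH entries; a minimal, self-contained sketch of that lookup, in Go to match the code earlier on this page:

package main

import (
    "fmt"
    "os"
    "os/exec"
)

func main() {
    // exec.LookPath walks the PATH environment variable looking for an
    // executable named "ffmpeg", exactly as the shell does before
    // reporting "command not found".
    path, err := exec.LookPath("ffmpeg")
    if err != nil {
        fmt.Println("ffmpeg not found; PATH =", os.Getenv("PATH"))
        return
    }
    fmt.Println("found ffmpeg at", path)
}

Inside the notebook, the equivalent check is to print the kernel's PATH and confirm whether /usr/local/bin appears in it.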