
Advanced search
Media (91)
-
Spitfire Parade - Crisis
15 May 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Wired NextMusic
14 May 2011, by
Updated: February 2012
Language: English
Type: Video
-
Video d’abeille en portrait
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
-
Sintel MP4 Surround 5.1 Full
13 May 2011, by
Updated: February 2012
Language: English
Type: Video
-
Carte de Schillerkiez
13 May 2011, by
Updated: September 2011
Language: English
Type: Text
-
Publier une image simplement
13 April 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (68)
-
Organize by category
17 May 2013, by
In MediaSPIP, a section has two names: category ("catégorie") and section ("rubrique").
The various documents stored in MediaSPIP can be filed under different categories. You can create a category by clicking "publier une catégorie" in the "publier" menu at the top right (after logging in). A category can itself be placed inside another category, which means you can build a whole tree of categories.
The next time you publish a document, the newly created category will be offered (...) -
Retrieving information from the master site when installing an instance
26 November 2010, by
Purpose
On the main site, a pooled (mutualised) instance is defined by several things: its data in the spip_mutus table; its logo; and its main author (the id_admin in the spip_mutus table, corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to finalise the creation of the instance.
It can therefore be quite sensible to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...) -
The plugin: Podcasts.
14 July 2010, by
The problem with podcasting is, once again, one that exposes the lack of standardisation in data transport on the Internet.
Two interesting formats exist: the one developed by Apple, heavily geared towards iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "open" and is notably backed by Yahoo and the Miro application.
File types supported in feeds
Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg; .m4a audio/x-m4a; .mp4 (...)
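The extension/MIME pairs in the excerpt can be sketched as a small lookup table. A minimal sketch: the map only contains the pairs the text names (.mp3 and .m4a), and `feedMIME` is our own helper, not part of any plugin.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// appleFeedTypes maps the file extensions named in the text to the
// MIME types Apple's feed format expects for them. Only the pairs
// spelled out above are included; the real list is longer.
var appleFeedTypes = map[string]string{
	".mp3": "audio/mpeg",
	".m4a": "audio/x-m4a",
}

// feedMIME returns the MIME type to declare in the feed for a file,
// or "" if the extension is not in the allowed list.
func feedMIME(name string) string {
	return appleFeedTypes[filepath.Ext(name)]
}

func main() {
	fmt.Println(feedMIME("episode.mp3")) // audio/mpeg
	fmt.Println(feedMIME("episode.ogg")) // empty string: not allowed
}
```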
On other sites (2425)
-
Availability of WebM (VP8) Video Hardware IP Designs
10 January 2011, by noreply@blogger.com (John Luther)
Hello from the frigid city of Oulu, in the far north of Finland. Our WebM hardware development team, formerly part of On2 Technologies, is now up to speed and working hard on a number of video efforts for WebM.
- VP8 (the video codec used in WebM) hardware decoder IP is available from Google for semiconductor companies who want to support high-quality WebM playback in their chipsets.
- The Oulu team will release the first VP8 video hardware encoder IP in the first quarter of 2011. We have the IP running in an FPGA environment, and rigorous testing is underway. Once all features have been tested and implemented, the encoder will be launched as well.
WebM video hardware IPs are implemented and delivered as RTL (VHDL/Verilog) source code; RTL is the register-transfer-level hardware description style used for creating digital circuit designs. The code is based on the Hantro brand video IP from On2, which has been successfully deployed by numerous chipset companies around the world. Our designs support VP8 up to 1080p resolution and can run at 30 or 60 fps, depending on the foundry process and hardware clock frequency.
The WebM/VP8 hardware decoder implementation has already been licensed to over twenty partners and is proven in silicon. We expect the first commercial chips to integrate our VP8 decoder IP to be available in the first quarter of 2011. For example, Chinese semiconductor maker Rockchip last week demonstrated full WebM hardware playback on their new RK29xx series processor at CES in Las Vegas (video below).
Note: to view the video in WebM format, ensure that you’ve enrolled in the YouTube HTML5 trial and are using a WebM-compatible browser. You can also view the video on YouTube.
Hardware implementations of the VP8 encoder also bring exciting possibilities for WebM in portable devices. Not only can hardware-accelerated devices play high-quality WebM content, but hardware encoding also enables high-resolution, real-time video communications apps on the same devices. For example, when VP8 video encoding is fully off-loaded to a hardware accelerator, you can run 720p or even 1080p video conferencing at full framerate on a portable device with minimal battery use.
The WebM hardware video IP team will be focusing on further developing the VP8 hardware designs while also helping our semiconductor partners to implement WebM video compression in their chipsets. If you have any questions, please visit our Hardware page.
Happy New Year to the WebM community!
Jani Huoponen, Product Manager
Aki Kuusela, Engineering Manager
-
Does PTS have to start at 0?
5 July 2018, by stevendesu
I’ve seen a number of questions regarding video PTS values not starting at zero, or asking how to make them start at zero. I’m aware that using ffmpeg I can do something like
ffmpeg -i <video> -vf "setpts=PTS-STARTPTS" <output>
to fix this kind of thing. However, it’s my understanding that PTS values don’t have to start at zero. For instance, if you join a live stream then odds are it has been going on for an hour and the PTS is already somewhere around 3600000+, but your video player faithfully displays everything just fine. Therefore I would expect there to be no problem if I intentionally created a video with a PTS value starting at, say, the current wall clock time.
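Since everything here turns on timebases, the "one hour in, PTS around 3600000" arithmetic is worth making explicit. The sketch below is in the spirit of FFmpeg's av_rescale_q (the real function uses overflow-safe arithmetic and configurable rounding); `Timebase` and `rescale` are our own names:

```go
package main

import "fmt"

// A Timebase expresses one PTS tick as Num/Den seconds, mirroring
// FFmpeg's AVRational time_base.
type Timebase struct{ Num, Den int64 }

// rescale converts a tick count from one timebase to another using
// plain truncating integer arithmetic (av_rescale_q additionally
// guards against overflow and supports rounding modes).
func rescale(pts int64, from, to Timebase) int64 {
	return pts * from.Num * to.Den / (from.Den * to.Num)
}

func main() {
	ms := Timebase{1, 1000}       // 1 tick = 1 ms, as with settb=1/1K
	mpeg := Timebase{1, 90000}    // the common 90 kHz MPEG timebase

	// One hour into a live stream: ~3600000 ms, as in the text above.
	oneHour := int64(3_600_000)
	fmt.Println(rescale(oneHour, ms, mpeg)) // 324000000
}
```

A player only cares about the spacing between successive PTS values in whatever timebase the stream declares, which is why a non-zero origin is harmless in itself.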
I want to send a live stream using ffmpeg, but embed the current time into the stream. This can be used both for latency calculation while the stream is live, and later to determine when the stream was originally aired. From my understanding of PTS, something as simple as this should probably work:
ffmpeg -i video.flv -vf "setpts=RTCTIME" rtmp://<output>
When I try this, however, ffmpeg outputs the following:
frame= 93 fps= 20 q=-1.0 Lsize= 9434kB time=535020:39:58.70 bitrate= 0.0kbits/s speed=1.35e+11x
Note the extremely large value for "time", the bitrate (0.0 kbits/s), and the speed (1.35e+11x, i.e. 135000000000x!)
At first I thought the issue might be my timebase, so I tried the following:
ffmpeg -i video.flv -vf "settb=1/1K,setpts=RTCTIME/1K" rtmp://<output>
This puts everything in terms of milliseconds (1 PTS = 1 ms), but I had the same issue (massive time, zero bitrate, and massive speed).
Am I misunderstanding something about PTS? Is it not allowed to start at non-zero values? Or am I just doing something wrong?
Update
After reviewing @Gyan’s answer, I formatted my command like so:
ffmpeg -re -i video.flv -vf "settb=1/1K, setpts=(RTCTIME-RTCSTART)/1K" -output_ts_offset $(date +%s.%N) rtmp://<output>
This way the PTS values would match up to "milliseconds since the stream started" and would be offset by the start time of the stream (theoretically making PTS = timestamp on the server).
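For reference: RTCTIME and RTCSTART are wall-clock values in microseconds, so with settb=1/1K (a 1/1000 timebase) the expression (RTCTIME-RTCSTART)/1K yields milliseconds since encoding started. A sketch of that arithmetic with made-up wall-clock values (`elapsedPTS` is our name, not an FFmpeg symbol):

```go
package main

import "fmt"

// elapsedPTS mirrors the filter expression (RTCTIME-RTCSTART)/1K:
// rtctimeUs and rtcstartUs are wall-clock times in microseconds, and
// the result is a PTS in a 1/1000 (millisecond) timebase.
func elapsedPTS(rtctimeUs, rtcstartUs int64) int64 {
	return (rtctimeUs - rtcstartUs) / 1000
}

func main() {
	start := int64(1_530_000_000_000_000) // hypothetical stream start, in microseconds
	now := start + 81_710_000             // 81.71 s later, matching time=00:01:21.71 below
	fmt.Println(elapsedPTS(now, start))   // 81710
}
```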
This looked like it was encoding better:
frame= 590 fps=7.2 q=22.0 size= 25330kB time=00:01:21.71 bitrate=2539.5kbits/s dup=0 drop=1350 speed= 1x
Bitrate was now correct, time was accurate, and speed was not outrageous. The frames per second was still a bit off, though (the source video is 24 fps but it’s reporting 7.2 frames per second).
When I tried watching the stream from the other end, the video was out of sync with the audio and played at about double normal speed for a while, then the video froze and the audio continued without it.
Furthermore, when I dumped the stream to a file:
ffmpeg -i rtmp://<output> dump.mp4
and looked at the PTS timestamps with ffprobe:
ffprobe -show_entries packet=codec_type,pts dump.mp4 | grep "video" -B 1 -A 2
the timestamps didn’t seem to show server time at all:
...
--
[PACKET]
codec_type=video
pts=131072
[/PACKET]
[PACKET]
codec_type=video
pts=130048
[/PACKET]
--
[PACKET]
codec_type=video
pts=129536
[/PACKET]
[PACKET]
codec_type=video
pts=130560
[/PACKET]
--
[PACKET]
codec_type=video
pts=131584
[/PACKET]

Is the problem just an incompatibility with RTMP?
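One detail worth separating from the RTMP question: ffprobe lists packets in file (decode) order, so when B-frames are present the pts column is not monotonic, exactly as in the dump above (131072, 130048, 129536, ...). Sorting the values recovers presentation order; a quick sketch over the dumped values (`presentationOrder` is our helper name):

```go
package main

import (
	"fmt"
	"sort"
)

// presentationOrder returns a sorted copy of pts values, turning the
// decode-order listing from ffprobe into presentation order.
func presentationOrder(pts []int64) []int64 {
	out := append([]int64(nil), pts...)
	sort.Slice(out, func(i, j int) bool { return out[i] < out[j] })
	return out
}

func main() {
	// The video pts values from the ffprobe dump above, in the order
	// the packets appear in the file (decode order).
	decodeOrder := []int64{131072, 130048, 129536, 130560, 131584}

	// In presentation order the timestamps are evenly spaced (512 apart).
	fmt.Println(presentationOrder(decodeOrder)) // [129536 130048 130560 131072 131584]
}
```

This explains the non-monotonic listing, though not by itself why the absolute values don't reflect server time.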
Update 2
I’ve removed the video filter and I’m now encoding like so:
ffmpeg -re -i video.flv -output_ts_offset $(date +%s.%N) rtmp://<output>
This is encoding correctly:
frame= 910 fps= 23 q=25.0 size= 12027kB time=00:00:38.97 bitrate=2528.2kbits/s speed=0.981x
In order to verify that the PTS values are correct, I’m dumping the output to a file like so:
ffmpeg -i rtmp://<output> -copyts -write_tmcd 0 dump.mp4
I tried saving it as dump.flv (since it’s RTMP), however this threw the error:
[flv @ 0x5600f24b4620] Audio codec mp3 not compatible with flv
This is a bit weird since the audio isn’t mp3-encoded (it’s Speex) - but whatever.
While dumping this file, the following error pops up repeatedly:
frame= 1 fps=0.0 q=0.0 size= 0kB time=00:00:09.21 bitrate= 0.0kbits/s dup=0 dr
43090023 frame duplication too large, skipping
43090027 frame duplication too large, skipping
Last message repeated 3 times
43090031 frame duplication too large, skipping
Last message repeated 3 times
43090035 frame duplication too large, skipping

Playing the resulting video in VLC plays an audio stream but displays no video. I then attempted to probe this video with ffprobe to look at the video PTS values:
ffprobe -show_entries packet=codec_type,pts dump.mp4 | grep "video" -B 1 -A 2
This returns only a single video frame whose PTS is not large like I would expect:
[PACKET]
codec_type=video
pts=1020
[/PACKET]

This has been a surprisingly difficult task.
-
Rotating a video during encoding with ffmpeg and libav API results in half of video corrupted
11 May 2020, by Daniel Kobe
I’m using the C API for ffmpeg/libav to rotate a vertically filmed iPhone video during the encoding step. There are other questions asking how to do a similar thing, but they are all using the CLI tool to do so.



So far I was able to figure out how to use the AVFilter API to rotate the video, based off this example: https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/filtering_video.c


The problem is that half the output file is corrupt.




Here is the code for my encoding logic. It’s written in Go, using CGO to interface with the C API.



// Encode encodes an AVFrame and returns it
func Encode(enc Encoder, frame *C.AVFrame) (*EncodedFrame, error) {
    ctx := enc.Context()

    if ctx.buffersrcctx == nil {
        // initialize filter
        outputs := C.avfilter_inout_alloc()
        inputs := C.avfilter_inout_alloc()
        m_pFilterGraph := C.avfilter_graph_alloc()
        buffersrc := C.avfilter_get_by_name(C.CString("buffer"))
        argsStr := fmt.Sprintf("video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d", ctx.avctx.width, ctx.avctx.height, ctx.avctx.pix_fmt, ctx.avctx.time_base.num, ctx.avctx.time_base.den, ctx.avctx.sample_aspect_ratio.num, ctx.avctx.sample_aspect_ratio.den)
        Log.Info.Println("yakotest")
        Log.Info.Println(argsStr)
        args := C.CString(argsStr)
        ret := C.avfilter_graph_create_filter(&ctx.buffersrcctx, buffersrc, C.CString("my_buffersrc"), args, nil, m_pFilterGraph)
        if ret < 0 {
            Log.Info.Printf("\n problem creating filter %v\n", AVError(ret).Error())
        }

        buffersink := C.avfilter_get_by_name(C.CString("buffersink"))
        ret = C.avfilter_graph_create_filter(&ctx.buffersinkctx, buffersink, C.CString("my_buffersink"), nil, nil, m_pFilterGraph)
        if ret < 0 {
            Log.Info.Printf("\n problem creating filter %v\n", AVError(ret).Error())
        }

        /*
         * Set the endpoints for the filter graph. The filter_graph will
         * be linked to the graph described by filters_descr.
         */

        /*
         * The buffer source output must be connected to the input pad of
         * the first filter described by filters_descr; since the first
         * filter input label is not specified, it is set to "in" by
         * default.
         */
        outputs.name = C.av_strdup(C.CString("in"))
        outputs.filter_ctx = ctx.buffersrcctx
        outputs.pad_idx = 0
        outputs.next = nil

        /*
         * The buffer sink input must be connected to the output pad of
         * the last filter described by filters_descr; since the last
         * filter output label is not specified, it is set to "out" by
         * default.
         */
        inputs.name = C.av_strdup(C.CString("out"))
        inputs.filter_ctx = ctx.buffersinkctx
        inputs.pad_idx = 0
        inputs.next = nil

        ret = C.avfilter_graph_parse_ptr(m_pFilterGraph, C.CString("transpose=clock,scale=-2:1080"),
            &inputs, &outputs, nil)
        if ret < 0 {
            Log.Info.Printf("\n problem with avfilter_graph_parse %v\n", AVError(ret).Error())
        }

        ret = C.avfilter_graph_config(m_pFilterGraph, nil)
        if ret < 0 {
            Log.Info.Printf("\n problem with graph config %v\n", AVError(ret).Error())
        }
    }

    filteredFrame := C.av_frame_alloc()

    /* push the decoded frame into the filtergraph */
    ret := C.av_buffersrc_add_frame_flags(ctx.buffersrcctx, frame, C.AV_BUFFERSRC_FLAG_KEEP_REF)
    if ret < 0 {
        Log.Error.Printf("\nError while feeding the filter graph, err = %v\n", AVError(ret).Error())
        return nil, errors.New(ErrorFFmpegCodecFailure)
    }

    /* pull filtered frames from the filtergraph */
    for {
        ret = C.av_buffersink_get_frame(ctx.buffersinkctx, filteredFrame)
        if ret == C.AVERROR_EAGAIN || ret == C.AVERROR_EOF {
            break
        }
        if ret < 0 {
            Log.Error.Printf("\nCouldn't find a frame, err = %v\n", AVError(ret).Error())
            return nil, errors.New(ErrorFFmpegCodecFailure)
        }

        filteredFrame.pts = frame.pts
        frame = filteredFrame
        defer C.av_frame_free(&filteredFrame)
    }

    if frame != nil {
        frame.pict_type = 0 // reset pict type for the encoder
        if C.avcodec_send_frame(ctx.avctx, frame) != 0 {
            Log.Error.Printf("%+v\n", StackErrorf("codec error, could not send frame"))
            return nil, errors.New(ErrorFFmpegCodecFailure)
        }
    }

    for {
        ret := C.avcodec_receive_packet(ctx.avctx, ctx.pkt)
        if ret == C.AVERROR_EAGAIN {
            break
        }
        if ret == C.AVERROR_EOF {
            return nil, fmt.Errorf("EOF")
        }
        if ret < 0 {
            Log.Error.Printf("%+v\n", StackErrorf("codec error, receiving packet"))
            return nil, errors.New(ErrorFFmpegCodecFailure)
        }

        data := C.GoBytes(unsafe.Pointer(ctx.pkt.data), ctx.pkt.size)
        return &EncodedFrame{data, int64(ctx.pkt.pts), int64(ctx.pkt.dts),
            (ctx.pkt.flags & C.AV_PKT_FLAG_KEY) != 0}, nil
    }

    return nil, nil
}




It seems like I need to do something with the scaling here, but I’m struggling to find helpful information online.
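One direction worth checking (an assumption on our part, not a confirmed diagnosis): after transpose=clock,scale=-2:1080 the frames leaving the buffersink have swapped and rescaled dimensions, and an encoder context still configured with the pre-filter width/height would read those buffers with the wrong geometry, which can look like a half-corrupted picture. The expected post-filter dimensions can be computed ahead of time; the even-rounding used for scale=-2 below is our reading of the filter's behavior, and both function names are ours:

```go
package main

import "fmt"

// transposeClock swaps width and height, as transpose=clock does
// (a 90-degree clockwise rotation).
func transposeClock(w, h int) (int, int) { return h, w }

// scaleToHeight mirrors scale=-2:1080: the height is forced to
// targetH and the width keeps the aspect ratio, rounded up to an
// even number (the -2 in the filter expression).
func scaleToHeight(w, h, targetH int) (int, int) {
	outW := w * targetH / h
	outW += outW % 2 // bump odd widths up to the next even value
	return outW, targetH
}

func main() {
	// A vertically filmed 1080x1920 iPhone clip, as in the question.
	w, h := transposeClock(1080, 1920) // -> 1920x1080
	w, h = scaleToHeight(w, h, 1080)   // -> 1920x1080 (already at the target)
	fmt.Println(w, h)                  // 1920 1080
	// The encoder context would need these output dimensions, not the
	// 1080x1920 of the frames going into the filter graph.
}
```

In the libav API the post-filter geometry can be read back from the buffersink (e.g. via av_buffersink_get_w/av_buffersink_get_h after avfilter_graph_config) rather than computed by hand; the sketch above only illustrates why the two geometries differ.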