Tag: ogv

Other articles (69)

  • List of compatible distributions

    26 April 2011

    The table below is the list of Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution name   Version name            Version number
    Debian              Squeeze                 6.x.x
    Debian              Wheezy                  7.x.x
    Debian              Jessie                  8.x.x
    Ubuntu              The Precise Pangolin    12.04 LTS
    Ubuntu              The Trusty Tahr         14.04
    If you want to help us improve this list, you can give us access to a machine running a distribution not mentioned above, or send the necessary fixes to add (...)

  • MediaSPIP Core: Configuration

    9 November 2010

    By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin to work): a page for the general configuration of the skeletons; a page for the configuration of the site's home page; a page for the configuration of the sections.
    It also provides an additional page, which only appears when certain plugins are enabled, for controlling their display and specific features (...)

  • Managing creation and editing rights for objects

    8 February 2011

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, adjustable in the form template management; adding notes to articles; adding captions and annotations to images;

On other sites (5092)

  • Decoding RIMM streaming file format

    10 September 2011, by Thomas

    I want to decode the video (visual) frames within a BlackBerry RIMM file. So far I have a parser, and some corresponding container documentation from RIM.

    The video codec is H264 and is explicitly set on the device using one of the video.encodings properties. However, FFmpeg is not able to decode the frames and this is driving me nuts.

    Edit 1: The issue seems to be the lack of SPS and PPS in the frames, and artificially inserting them has proven unsuccessful so far (all-grey image). The BlackBerry 9700 sends

    0x00 0x00 0x?? 0x?? 0xType

    where Type is a NAL unit type according to Table 7-1 in the H264 spec (I and P frames). We believe the 0x?? 0x?? bytes represent the size of the frame; however, the size does not always correspond to the size found by the parser (the parser seems to be working correctly).
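
    A minimal sketch of one way to test this theory, assuming the missing parameter sets can be recovered (for example from an MP4 recorded by the same device) and the length-prefixed framing has been rewritten into Annex B 0x00000001 start codes; the file names sps_pps.264 and frames_annexb.264 are hypothetical:

    # prepend the recovered SPS/PPS to the rewritten frames and let FFmpeg's
    # raw H.264 demuxer try to pick the stream up
    cat sps_pps.264 frames_annexb.264 > candidate.264
    ffmpeg -f h264 -i candidate.264 -frames:v 30 test_%03d.png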

    I have a Windows decoder codec from BlackBerry, called mc_demux_mp2_ds.ax, and can play some MPEG-4 files captured the same way, but it is a Windows-only binary. The H264 files will not play either way. I am aware of previous attempts. The capture URL for javax.microedition.media.Manager is

    encoding=video-3gpp_width=176_height=144_video_codec=H264_audio_codec=AAC

    and I am writing to an output stream. Some example files here.

    Edit 2: Turns out that about 3-4 of the 12-15 available video capture modes fail outright and refuse to output data, even in the simplest of test applications. So any working solution should implement MPEG-4, H264 and H263, with both AMR and AAC, to have fallback alternatives when one sound codec and/or resolution fails. Reboots, hangs and whatnot litter the BlackBerry video implementation and vary from firmware to firmware; total suckage.

  • ffmpeg: add a data size threshold for muxing queue size

    15 October 2020, by Jan Ekström

    This way the old behavior, based on a maximum queue size limit, is
    kept for streams where each individual packet is large, while for
    streams with smaller packets more packets can be buffered (the
    current default is 50 megabytes per stream).

    By way of explanation: by default, ffmpeg copies packets from before
    the appointed seek point/start time and puts them into the local
    muxing queue. Previously, that queue was much less likely to be
    utilized, since the encoder (and thus the output stream) was
    initialized as soon as the filter chain was initialized.

    Now, since we will be pushing the encoder initialization to when the
    first AVFrame is decoded and filtered - which only happens after
    the exact seek point is hit as packets are ignored until then -
    this queue will be seeing much more usage.

    In layman's terms, this attempts to fix cases where:
    - seek point ends up being 5 seconds before requested time.
    - audio is set to copy, and thus immediately begins filling the
    muxing queue.
    - video is being encoded, and thus all received packets are skipped
    until the requested time is hit.
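
    As an illustration (not part of the commit itself), a command of the following shape hits this path; the file names and seek position are made up:

    # input seeking lands on a keyframe somewhat before the requested time;
    # the copied audio packets go straight into the muxing queue, while the
    # video encoder is not initialized until the exact start time is reached
    ffmpeg -ss 00:05:00 -i input.mkv -c:v libx264 -c:a copy output.mkv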

    • [DH] doc/ffmpeg.texi
    • [DH] fftools/ffmpeg.c
    • [DH] fftools/ffmpeg.h
    • [DH] fftools/ffmpeg_opt.c
  • FFMPEG H264 with custom overlay per frame

    4 October 2020, by La bla bla

    We have a stream that is stored in the cloud (Amazon S3) as individual H264 frames. The frames are stored as framexxxxxx.264; the numbering doesn't start from 0 but rather from some larger number, say 1000 (so, frame001000.264).

    


    The goal is to create an mp4 clip which is either a timelapse or just faster for inspection and other checking (much faster, compressing around 3 hours of video down to < 20 minutes). This also requires that we overlay the frame number (the filename) on the frame itself.

    At first I was creating a timelapse by pulling from S3 only the keyframes (I-frames? still rather new to codecs & stuff), overlaying the filename on them, and saving as png (which probably isn't needed, but that's what I did) using the following command (used inside a python script):

    ffmpeg -y -i {h264_name} -vf "scale=1920:-1, drawtext=fontfile=/usr/share/fonts/truetype/ubuntu-font-family/Ubuntu-B.ttf:fontsize=34:text={txt}:fontcolor=white:x=50:y=50:bordercolor=black:borderw=2" -c:a copy -pix_fmt yuv420p {basename}.png

    after this I combined all the frames by using python to rename the lowest-numbered frame to 0.png and incrementing from there (so the sequence would be continuous; because I only used keyframes, the numbers originally weren't sequential) and running

    ffmpeg -y -f image2 -i %d.png -r {self.params.fps} -vcodec libx264 -crf {self.params.crf} -pix_fmt yuv420p {out_file}

    and this worked great, but the difference between keyframes was too long to allow for proper inspection

    so now for the question(s)

    Since I know frames that are not keyframes (P-frames?) can't be used alone by ffmpeg, the method of overlaying the file name and converting to png (or keeping it as h264, same thing) won't work, or at least I couldn't find a way to make it work; maybe there's a way to specify a frame's keyframe? How can one overlay the filename (and not the frame number, as shown here for example)?

    Also, is it possible to skip some P-frames between the keyframes? (So if there is a keyframe every 30 frames, we would take a keyframe, a frame 15 frames later, and then the next keyframe.)
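
    As a hedged sketch of the frame-skipping part only: P-frames still have to go through the decoder together with the keyframe they depend on, but once decoded, the select filter can drop all but every Nth frame; the input name and the interval of 15 below are assumptions taken from the question.

    # decode everything, keep every 15th decoded frame, regenerate timestamps
    ffmpeg -f h264 -i all_frames.264 -vf "select='not(mod(n,15))',setpts=N/FRAME_RATE/TB" -pix_fmt yuv420p skipped.mp4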

    I thought about using ffmpeg's pipe option to feed it with the files as they're being downloaded, but I'm not sure if I can specify drawtext this way
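
    For what it's worth, drawtext is not tied to file input; here is a rough sketch assuming the downloaded .264 frames concatenate into one decodable raw Annex B stream (the +1000 offset and the font path are taken from the question, and the zero-padding width of 6 is an assumption):

    # pipe concatenated raw H.264 into ffmpeg and burn in the original frame number
    cat frame*.264 | ffmpeg -f h264 -i - -vf "scale=1920:-1,drawtext=fontfile=/usr/share/fonts/truetype/ubuntu-font-family/Ubuntu-B.ttf:fontsize=34:text='%{eif\:n+1000\:d\:6}':fontcolor=white:x=50:y=50:bordercolor=black:borderw=2" -pix_fmt yuv420p out.mp4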

    Also, is there another alternative that can achieve this? (At first I was converting to png, using python and OpenCV to add the filename and then merging the pngs to mp4, but then I found drawtext can do that in a single command, so I used it.)
