Media (0)

No media matching your criteria is available on the site.

Other articles (102)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • Libraries and binaries specific to video and audio processing

    31 January 2010, by

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries FFMpeg: the main encoder; it transcodes almost every type of video and audio file into formats readable on the Internet. See this tutorial for its installation; Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
    Additional and optional binaries flvtool2: (...)

  • Automatic backup of SPIP channels

    1 April 2010, by

    As part of setting up an open platform, it is important for hosting providers to have fairly regular backups in order to cope with any problem that might arise.
    To carry out this task we rely on two SPIP plugins: Saveauto, which makes a regular backup of the database in the form of a mysql dump (usable in phpmyadmin); mes_fichiers_2, which creates a zip archive of the site’s important data (the documents, the elements (...)

On other sites (6828)

  • FFMPEG: FLV header for AVPacket

    25 May 2012, by victor kulichkin

    I use FFmpeg code in my app, where I need to obtain FLV packets. For this I use avcodec_encode_video2(). My problem is that this function creates an AVPacket that does not carry the full FLV format, only its body; I still need its header. Normally another function, av_write_frame(), adds it, but I cannot use av_write_frame() in my app because it does not fit my requirements. Does anybody know of a function in the ffmpeg library that could add the FLV header to the packets created by avcodec_encode_video2()?
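
    There is no libavcodec call that adds FLV framing: the FLV file header and the per-packet tag headers are written by the FLV muxer in libavformat. If av_write_frame() does not fit only because it normally targets a file or URL, one workaround is to keep the muxer but point it at a memory callback through a custom AVIOContext. This is a sketch only, written against the older API generation that avcodec_encode_video2() belongs to; the callback body and buffer size are placeholders.

    #include <libavformat/avformat.h>

    /* Receives the FLV-framed bytes (file header + tag headers + payload)
     * emitted by the muxer; forward them to the application as needed. */
    static int write_flv_bytes(void *opaque, uint8_t *buf, int buf_size)
    {
        /* e.g. append buf[0..buf_size) to an app-owned buffer (omitted) */
        return buf_size;
    }

    /* Sketch: open an FLV muxer whose output goes to memory, not a file.
     * Error checking omitted; av_register_all() is assumed to have been
     * called once (older FFmpeg). */
    static AVFormatContext *open_flv_in_memory(AVCodecContext *enc)
    {
        AVFormatContext *ofmt = NULL;
        avformat_alloc_output_context2(&ofmt, NULL, "flv", NULL);

        unsigned char *iobuf = av_malloc(4096);
        ofmt->pb = avio_alloc_context(iobuf, 4096, 1 /* write */, NULL,
                                      NULL, write_flv_bytes, NULL);

        AVStream *st = avformat_new_stream(ofmt, NULL);
        /* fill the stream parameters (codec id, size, time base) from
         * the encoder context 'enc' here */

        avformat_write_header(ofmt, NULL);   /* writes the FLV file header */
        return ofmt;
    }

    /* Then, for every AVPacket produced by avcodec_encode_video2(), call
     * av_interleaved_write_frame(ofmt, &pkt), and av_write_trailer(ofmt)
     * at the end; each call hands complete FLV-framed data to
     * write_flv_bytes(). */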

  • How to create a buffer for video streaming

    4 June 2014, by John Simpson

    I am developing an Android video player that can play an RTSP stream. I use ffmpeg in the JNI part to fetch and decode the stream. For now, the player can play and then pause the video stream. The next step is to create a buffer for the player so that when the user pauses the video, the player can still load the next several seconds of the stream.

    Is there any good documentation on how to create a buffer for video streaming in the proper way?

    My plan is to create an array of packets. When the array is full, the player calls

    av_read_pause();

    to stop buffering. When the array has free space, the player will call

    av_read_play();

    to continue buffering. There is a read_thread for getting packets from the buffer and the decode the packets. The read_thread will stop (resume), when user pauses (resume) video.

    Can this plan work?
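
    In outline this is the bounded packet queue that ffmpeg-based players such as ffplay keep between a read (demux) thread and a decode thread, so the plan is workable in principle; the main thing to verify is how your particular RTSP source reacts to av_read_pause()/av_read_play(), since some players simply stop calling av_read_frame() and let TCP back-pressure throttle the server instead. Below is a rough sketch of the demux side only; the queue size, names, and threading layout are illustrative assumptions, not from the question.

    #include <pthread.h>
    #include <libavformat/avformat.h>

    #define QUEUE_MAX 256                      /* illustrative limit */

    typedef struct PacketQueue {
        AVPacket pkt[QUEUE_MAX];
        int count, read_idx, write_idx;
        pthread_mutex_t lock;
        pthread_cond_t not_full, not_empty;
    } PacketQueue;

    /* Demux side: keep filling the queue; when it is full, let the RTSP
     * session idle with av_read_pause() and resume once the decode thread
     * has made room (it signals not_full after consuming a packet). */
    static void buffer_loop(AVFormatContext *ic, PacketQueue *q)
    {
        AVPacket pkt;
        while (av_read_frame(ic, &pkt) >= 0) {
            pthread_mutex_lock(&q->lock);
            if (q->count == QUEUE_MAX) {
                av_read_pause(ic);                 /* stop buffering   */
                while (q->count == QUEUE_MAX)
                    pthread_cond_wait(&q->not_full, &q->lock);
                av_read_play(ic);                  /* resume buffering */
            }
            q->pkt[q->write_idx] = pkt;            /* queue takes ownership */
            q->write_idx = (q->write_idx + 1) % QUEUE_MAX;
            q->count++;
            pthread_cond_signal(&q->not_empty);
            pthread_mutex_unlock(&q->lock);
        }
    }

    /* The decode thread pops with the mirror-image wait on not_empty and
     * frees each packet after decoding (av_free_packet() in that API
     * generation); while the user has playback paused it simply stops
     * popping, so the demux side keeps pre-buffering until QUEUE_MAX
     * packets are queued. */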

  • Gallery of VP8 Encoding Naivete

    15 October 2010, by Multimedia Mike — VP8

    I’ve been toiling away as a multimedia technology generalist for so long that it’s easy for me to forget that not everyone is as versed in the minutiae of the domain as I am. But I recently experienced what it’s like to be such an outsider when I posted about my toy VP8 encoder, expressing that it’s one of the hardest things I have ever tried to do. I heard from a number of people who do have extensive experience in video encoding, particularly with the H.264 and VP8 codecs. Their reactions were predictable: What’s so hard? Look, you might be a little too immersed in the area to really understand a relative beginner’s perspective.

    And to all the people who suggested that I should get the encoder into FFmpeg ASAP: Are you crazy?! Did you see what the first pass of the encoder produced? Do you have lower standards than even I do?

    [image: the source image for the experiment]

    Not Giving Up
    I worked a little more on the toy encoder. Remember that the above image is what I’m hoping to encode somewhat faithfully for this experiment. In my first pass, I attempted vertical prediction for all planes. For my next pass, I forced the chroma planes to mid-level (which results in a greyscale image) and played with the 16×16 luma prediction modes. When implementing an extremely naive algorithm to decide which 16×16 prediction mode would be the best for a particular block, this is what the program produced:

    [image: output of the naive mode-decision pass]

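    The post does not spell out the decision rule, but a naive one of the kind described could be: build each candidate 16×16 prediction from the reconstructed pixels above and to the left of the block, score it with a sum of absolute differences against the source, and keep the cheapest. The sketch below is illustrative plain C (no VP8 data structures, edge blocks ignored) covering the DC, vertical, and horizontal modes; TrueMotion is sketched after its description further down.

    #include <limits.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Candidate 16x16 luma intra modes handled in this sketch
     * (VP8 also has TrueMotion, shown further below). */
    enum { PRED_DC, PRED_V, PRED_H, NUM_PRED };

    /* Build one candidate prediction from the 16 reconstructed pixels
     * above the block and the 16 to its left. */
    static void predict16x16(uint8_t pred[16][16], int mode,
                             const uint8_t above[16], const uint8_t left[16])
    {
        int r, c, dc = 0;
        for (r = 0; r < 16; r++)
            dc += above[r] + left[r];
        dc = (dc + 16) >> 5;            /* rounded average of 32 neighbours */

        for (r = 0; r < 16; r++)
            for (c = 0; c < 16; c++)
                pred[r][c] = (mode == PRED_DC) ? dc :
                             (mode == PRED_V)  ? above[c] : left[r];
    }

    /* Naive mode decision: keep whichever prediction has the smallest
     * sum of absolute differences against the source block. */
    static int pick_mode16x16(const uint8_t src[16][16],
                              const uint8_t above[16], const uint8_t left[16])
    {
        uint8_t pred[16][16];
        int mode, best_mode = PRED_DC, best_sad = INT_MAX;

        for (mode = 0; mode < NUM_PRED; mode++) {
            int r, c, sad = 0;
            predict16x16(pred, mode, above, left);
            for (r = 0; r < 16; r++)
                for (c = 0; c < 16; c++)
                    sad += abs(src[r][c] - pred[r][c]);
            if (sad < best_sad) {
                best_sad = sad;
                best_mode = mode;
            }
        }
        return best_mode;
    }
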
    For fun, here is what the image encodes to when forcing various prediction modes:

    I think the DC-only prediction mode actually looks a little better than the image that the naive algorithm produced:

    [image: DC-only 16×16 prediction]

    Vertical 16×16 prediction, similar to the image from the last post (just in black and white):

    [image: vertical 16×16 prediction]

    Horizontal 16×16 prediction:

    [image: horizontal 16×16 prediction]

    This is the 16×16 prediction mode unique to VP8, the TrueMotion mode (based on On2/Duck’s very first video codec):

    [image: TrueMotion 16×16 prediction]

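    The TrueMotion rule itself is compact: each predicted pixel is its left neighbour plus its above neighbour minus the above-left corner pixel, clamped to 0..255. In the same illustrative style as the earlier sketch:

    /* TrueMotion 16x16 prediction: left + above - above-left corner,
     * clamped to the 8-bit range.  Uses the same reconstructed
     * neighbour arrays as the earlier sketch. */
    static void predict16x16_tm(uint8_t pred[16][16],
                                const uint8_t above[16], const uint8_t left[16],
                                uint8_t above_left)
    {
        int r, c;
        for (r = 0; r < 16; r++)
            for (c = 0; c < 16; c++) {
                int v = left[r] + above[c] - above_left;
                pred[r][c] = v < 0 ? 0 : v > 255 ? 255 : v;
            }
    }
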
    Wow, these encodings really bring down the cheerful tone of the original image.

    Next Steps
    I have little reason to believe that I am encoding and subsequently reconstructing the image correctly (i.e., error is likely propagating through the entire encoding). If I have time, the next step is to validate my reconstruction against the encoder. Then I need to get the entropy considerations correct so that I actually get some compression out of this format.