
Media (0)


No media matching your criteria is available on the site.

Other articles (27)

  • Accepted formats

    28 January 2010

    The following commands give information about the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use:
    h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
    m4v: raw MPEG-4 video format
    flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
    Theora
    wmv:
    Possible output video formats
    At first we (...)

  • Uploading media and themes via FTP

    31 May 2013

    The MediaSPIP tool also handles media transferred via FTP. If you prefer to upload this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client.
    From the start, you will find the following directories in your FTP space:
    config/: the site's configuration directory
    IMG/: media already processed and online on the site
    local/: the site's cache directory
    themes/: custom themes and stylesheets
    tmp/: working directory (...)

  • MediaSPIP themes

    4 June 2013

    Three themes are provided with MediaSPIP out of the box. Users can add further themes as needed.
    MediaSPIP themes
    Three themes were initially developed for MediaSPIP:
    * SPIPeo: the default MediaSPIP theme. It highlights the site's presentation and the most recent media documents (the sort order can be changed: title, popularity, date).
    * Arscenic: the theme used on the project's official site, notable for a red banner at the top of the page. The structure (...)

On other sites (4261)

  • How to improve video player processing using Qt and FFmpeg?

    13 September 2016, by Eric Menezes

    A while ago, I started developing a video player/analyser. Since it is also an analyser, the application must keep both the next frames and the previous ones in its buffer. That's where the complication begins.

    For that, we use a VideoProducer that decodes frames and audio from the video (using ffmpeg) and adds them to a buffer, from which the video and audio consumers retrieve the objects (VideoFrame and AudioChunk). For this job we have several QThreads: one producer, two consumers and (the biggest troublemaker) two workers that retrieve objects from the producer's buffer and insert them into a circular buffer (needed for the previous frames). These workers exist because of the backwards buffering job (the player must also play backwards).

    The player now runs, but not well: it is noticeably losing performance. I have tried things like removing the producer buffer and using only the circular one. Still, some questions remain:

    • Should I keep using QThread with a reimplemented run()? I have read that it works better with signals & slots (see the sketch after the architecture summary below);

    • If signals & slots are worth it, does the producer still need to reimplement QThread::run()?

    • Considering that the buffer must hold some previous frames and that low-quality videos will be played, is this design (VideoProducer inserts objects into a buffer; AudioConsumer and FrameConsumer retrieve those objects from the buffer and display/play them) the best approach?

    • What is the best way to sync audio and video? Syncing on the audio pts works well, but problems still appear sometimes; and

    • For buffering backwards, ffmpeg does not deliver frames in that order, so I need to seek back, decode older frames, reorder them and prepend them to the buffer. This job is done by the workers: another QThread keeps consuming from the producer's buffer and, when buffering backwards, requests the seek and does the reordering. I can only guess that this is bad, and I assume the reordering should be done at the producer level. Is there a better way to do this?

    I know it’s a lot of questions, and I’m sorry for that, but I don’t know where to find these answers.

    Thanks for helping.

    For better understanding, here's how it's been done:

    • VideoProducer -> Decoder QThread. Runs in a loop decoding and enqueuing frames into a Buffer.

    • FrameConsumer -> Video consumer. Retrieves frames from the frame CircularBuffer in a loop on another QThread, displays the frame and sleeps a few milliseconds based on the video fps and the AudioConsumer clock.

    • AudioConsumer -> Audio consumer and the video's clock. Works with signals, using QAudioOutput::notify() to retrieve chunks of audio from the audio CircularBuffer and insert them into the QAudioOutput buffer. When the first frame is decoded, its pts is used to start the clock (if a seek has been requested, the next audio frame marks the clock's start time).

    • Worker -> One per stream (audio and video). A QThread running in a loop (run() reimplemented), retrieving objects from the Buffer and inserting them (backwards or forwards) into the CircularBuffer.

    And other ones that manage the UI, filters and some operations on frames/chunks...
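
    To illustrate the signals & slots question above, here is a minimal C++/Qt sketch of the worker-object pattern (moveToThread) that the Qt documentation recommends instead of subclassing QThread. DecoderWorker, decodeLoop and frameReady are hypothetical names, not the poster's actual classes:

    // worker.h -- worker-object pattern (moveToThread); a sketch only
    #include <QObject>
    #include <QThread>

    class DecoderWorker : public QObject
    {
        Q_OBJECT
    public slots:
        void decodeLoop()
        {
            while (!QThread::currentThread()->isInterruptionRequested()) {
                // ... decode one frame with FFmpeg and push it into the buffer ...
                emit frameReady();
            }
        }
    signals:
        void frameReady();
    };

    // Usage -- no run() reimplementation needed:
    //   QThread thread;
    //   auto *worker = new DecoderWorker;
    //   worker->moveToThread(&thread);
    //   QObject::connect(&thread, &QThread::started,
    //                    worker, &DecoderWorker::decodeLoop);
    //   thread.start();
    //   // later: thread.requestInterruption(); thread.quit(); thread.wait();

    With this pattern the QThread stays a plain event-loop container, and cross-thread communication happens through queued signal/slot connections instead of shared state.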

  • AForge FFMPEG Video operation issue

    9 January 2017, by Berkay

    I have a problem, actually an issue, with making a video out of images (captured frames). I am using nVLC to capture my IP camera's video stream. I manage to grab the frames and show them in my application whenever a new frame arrives. But I also need to encode them whenever a user wants to save the recording, and the recording should be long, around 1-2 hours.

    So I decided to use the AForge library, which has a class called VideoFileWriter that does what I want. But here is the problem: I receive 24-25 frames per second and draw them on screen, which already adds some overhead to my system (especially when I am streaming 6 IP cameras). My CPU sits at around 50-60% and I use 500-550 MB of RAM while streaming 6 IP cameras and drawing to the screen.

    So I came up with two ideas:

    • I can process (write to the video file) each frame as I capture it (if the user has started recording).
    • Or I can store the Bitmaps (captured frames) once the user starts recording, and process that list (an array, a List<>, or some other container) later.

    The first approach will kill my PC, because disk reads and writes are very expensive operations, so this puts me in a very bad position.

    The second approach will kill my RAM, because I may end up with 3600 * 25 = 90,000 Bitmaps per hour in each list. But it still seems the better choice to me.

    Does anyone have any advice on this topic?

  • Fire events at specific timestamps during video playback

    27 October 2017, by Simon

    I'm using a Raspberry Pi 3 running Raspbian. I need to play a video file over HDMI, and I need events to be fired at specific timecodes during playback. The events are simple write operations to the GPIO. My problem is: what approach should I use to do this?

    My first approach was to use OpenCV (Python) and VideoCapture(), but the Raspberry Pi is too slow and my FPS is very low (I need at least 25 FPS at 1080p).

    So now I'm looking into other solutions: GStreamer, FFmpeg, omxplayer. I have read their documentation, but I can't figure out which tool to use for this job.
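
    One possible direction (a minimal, untested C++ sketch, not a definitive answer): let GStreamer's playbin handle decoding and HDMI output, and poll the playback position from the controlling program, writing to the GPIO through sysfs when a timecode is reached. The file path, GPIO pin and trigger time below are placeholders:

    #include <gst/gst.h>
    #include <fstream>

    int main(int argc, char *argv[])
    {
        gst_init(&argc, &argv);

        // playbin picks the demuxer, decoder and video sink automatically.
        GstElement *pipeline =
            gst_parse_launch("playbin uri=file:///home/pi/video.mp4", NULL);
        gst_element_set_state(pipeline, GST_STATE_PLAYING);
        GstBus *bus = gst_element_get_bus(pipeline);

        const gint64 trigger = 5 * GST_SECOND;  // placeholder timecode (t = 5 s)
        bool fired = false;

        while (true) {
            // Stop polling once playback has finished.
            GstMessage *msg = gst_bus_pop_filtered(bus, GST_MESSAGE_EOS);
            if (msg) { gst_message_unref(msg); break; }

            gint64 pos = 0;  // current playback position, in nanoseconds
            if (gst_element_query_position(pipeline, GST_FORMAT_TIME, &pos)
                    && !fired && pos >= trigger) {
                // Simple GPIO write via sysfs (pin 17 must be exported first).
                std::ofstream("/sys/class/gpio/gpio17/value") << 1;
                fired = true;
            }
            g_usleep(10 * 1000);  // poll every 10 ms
        }

        gst_object_unref(bus);
        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
    }

    A 10 ms polling interval gives roughly frame-level accuracy at 25 FPS; for tighter timing, a pad probe on the video sink would let you react per frame instead of polling.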