
Media (91)

Other articles (41)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
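
    The conversions described above can be reproduced by hand with ffmpeg; the following is a sketch with placeholder file names and default quality settings (MediaSPIP's actual encoding parameters are not shown in this excerpt):

```shell
# Produce the three web-friendly variants of an uploaded video.
# File names and codec settings are illustrative, not MediaSPIP's own.
ffmpeg -i upload.avi -c:v libx264 -c:a aac upload.mp4          # MP4 (HTML5/Flash)
ffmpeg -i upload.avi -c:v libtheora -c:a libvorbis upload.ogv  # OGV (HTML5)
ffmpeg -i upload.avi -c:v libvpx -c:a libvorbis upload.webm    # WebM (HTML5)
```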

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    - implementation costs to be shared between several different projects / individuals
    - rapid deployment of multiple unique sites
    - creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Adding notes and captions to images

    7 February 2011, by

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area in order to change the rights for creating / editing and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

On other sites (3538)

  • How can I simulate OpenFile in FFmpeg?

    16 July 2019, by Jason117

    Most GIF capture tools grab the screen, save every frame as a separate image file on disk, then read the files back into memory and combine them into a GIF, which makes the whole procedure very slow.

    My idea is to capture the screen with the DirectX API instead (this can also capture DirectX windows faster, since it operates directly on the screen's D3D device), get the bitmaps, store them in memory (e.g. a buffer), and then pass that memory location to ffmpeg to produce a video. That way no disk storage is needed as an intermediate buffer, and it could be many times faster, since the disk is now the slowest part of a PC.

    The DirectX screen-capture part is already done. But I found that ffmpeg uses OpenFile to read the picture files, so maybe we can simulate OpenFile? If the answer is yes, how could we do it?
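
    One common way to avoid intermediate files (a sketch, assuming the capture program can write frames to a pipe; capture_tool, the frame size and the frame rate below are placeholders) is not to simulate OpenFile at all, but to feed the raw bitmaps to ffmpeg through stdin:

```shell
# The capture program writes raw BGRA frames to stdout; ffmpeg reads
# them from stdin ("-i -") as rawvideo, so no image files touch the disk.
./capture_tool | ffmpeg -f rawvideo -pix_fmt bgra -video_size 1920x1080 \
    -framerate 30 -i - -c:v libx264 -pix_fmt yuv420p output.mp4
```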

  • Open an H.264-encoded stream and write the encoded AVPackets into an MP4 file

    29 November 2013, by Olgen2013

    I want to read/open an H.264-encoded video file (.mp4) or live stream (RTP, TCP) and save this video, or a part of it, into an H.264-encoded MP4 file. I don't want to decode the input stream; I want to read the AVPackets from the input stream and save them into a new container (the MP4 output file).

    So far I am able to read a video file or live stream. For some test cases I implemented a method that decodes the input stream and stores each frame as an image. This works fine, too.

    For the output part, which should write the input AVPackets into an MP4 file, I followed this example: http://libav.org/doxygen/release/0.8/libavformat_2output-example_8c-example.html.
    I adjusted the example for my case, leaving out the part that encodes raw data.

    And I think that is where my problem lies: creating the AVCodecContext, which is also required to write the encoding parameters into the header of the output file.

    But my main problem is that I don't know which AVCodecContext parameters are required for H.264/MP4 and which values each of them needs. I tried to reuse the AVCodecContext from the input file/stream, but then errors occur for some of the parameters. And, like I said, I don't know which parameters are really necessary for H.264/MP4.
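
    For reference, the command-line equivalent of this remux-without-decoding approach can be sketched as follows; the libav* code then has to reproduce what -c copy does internally (the file names are placeholders):

```shell
# Copy the H.264 packets into a new MP4 container without re-encoding.
# -c copy performs a pure stream copy, so the codec parameters are
# taken over from the input instead of being set up by hand.
# Add -ss/-t before the output name to keep only a part of the video.
ffmpeg -i input.mp4 -c copy output.mp4
```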

    I work with these versions:

    • libavutil 51. 22. 1 / 51. 22. 1
    • libavcodec 53. 35. 0 / 53. 35. 0
    • libavformat 53. 21. 1 / 53. 21. 1
    • libavdevice 53. 2. 0 / 53. 2. 0
    • libavfilter 2. 15. 0 / 2. 15. 0
    • libswscale 2. 1. 0 / 2. 1. 0
    • libpostproc 52. 0. 0 / 52. 0. 0

    What do you think? Is my approach completely wrong, or what could be the issue?

    If you need further information, just send me a message!

    Thanks for your help!
    Olgen

  • How to mimic Audacity's "truncate silence" with ffmpeg "silenceremove" filter

    16 June 2024, by Cara Duf

    I want to completely remove the silent parts from WAV files with ffmpeg.

    The input WAV can look like this: (waveform screenshot)

    I am using the following ffmpeg command to remove the silence: ffmpeg -i input.wav -af silenceremove=stop_periods=-1:stop_duration=0.2:stop_threshold=-45dB output.wav, because I understand from the documentation that it will remove every silent part longer than 0.2 s (silence being anything below -45dB).

    But this is what I get (waveform screenshot): the silence has only been reduced to around 0.1 s, whereas I want it to be 0 (no remaining silence).

    In Audacity I would use the "Truncate Silence" filter, choose the above parameters to detect silence, and in the action part choose to truncate to 0 (settings screenshot).

    This yields exactly what I want, i.e. audio with no silent parts remaining (waveform screenshot).

    Searching the internet only leads me to what I already do.

    So how can I reproduce the output of Audacity's "Truncate Silence" filter with ffmpeg and remove all the silent parts from the audio?

    Edit: the output of the silencedetect filter is correct: ffmpeg -i input.wav -af silencedetect=0.2:n=-45dB -f null - detects exactly what Audacity detects.

    Thanks in advance for your help
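
    One possible explanation (an assumption, not verified against the files above) is the detection window: silenceremove measures levels over a sliding window (0.02 s by default), which can leave short residues around each cut. A sketch worth trying, with the file names taken from the question:

```shell
# window=0 switches to per-sample silence detection, which avoids the
# windowing residue; start_periods=1 additionally trims leading silence,
# and stop_periods=-1 removes every later silent stretch longer than 0.2 s.
ffmpeg -i input.wav -af "silenceremove=start_periods=1:start_duration=0:\
start_threshold=-45dB:stop_periods=-1:stop_duration=0.2:\
stop_threshold=-45dB:window=0" output.wav
```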