Media (91)

Other articles (101)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Use, discuss, criticize

    13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; and the creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (6976)

  • AVFrame: How to get/replace plane data buffer(s) and size?

    19 July 2018, by user10099431

    I’m working on gstreamer1.0-libav (1.6.3), trying to port custom FPGA-based H264 video acceleration from gstreamer 0.10.

    The data planes (YUV) used to be allocated by a simple malloc back in gstreamer 0.10, so we simply replaced the AVFrame.data[i] pointers with pointers to memory in our video acceleration core. It seems to be MUCH more complicated in gstreamer 1.12.

    For starters, I tried copying the YUV planes from AVFrame.data[i] to a separate buffer, which worked fine! Since I haven’t seen an immediate way to obtain the size of AVFrame.data[i], and I noticed that data[0], data[1] and data[2] seem to lie in a single contiguous buffer, I simply used (data[1] - data[0]) for the size of the Y plane and (data[2] - data[1]) for the sizes of the U/V planes respectively. This works fine, except for one scenario:

    • Input H264 stream with resolution of 800x600 or greater
    • The camera is covered (jacket, hand, ...)

    This causes a SEGFAULT in the memcpy of the V plane (data[2]) using the sizes determined as described above. Before covering the camera, the stream is displayed completely fine... so for some reason the dark screen changes the plane sizes?

    My ultimate goal is replacing the data[i] pointers allocated by gstreamer with my custom memory allocation (for further processing). Where exactly are these buffers assigned, can I change them, and how can I obtain the size of each plane (data[0], data[1], data[2])?
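
    As a rough sketch for the plane-size part of the question (assuming FFmpeg’s libavutil is available and the frame uses a planar YUV format): each plane’s size can be derived from linesize[i] and the plane height instead of subtracting data[] pointers. The helper below is purely illustrative (hypothetical name, no error handling beyond the basics).

    #include <stdio.h>
    #include <libavutil/common.h>
    #include <libavutil/frame.h>
    #include <libavutil/pixdesc.h>

    /* Illustrative helper: report the byte size of each plane of a video AVFrame. */
    static void print_plane_sizes(const AVFrame *frame)
    {
        const AVPixFmtDescriptor *desc =
            av_pix_fmt_desc_get((enum AVPixelFormat)frame->format);
        if (!desc)
            return;

        for (int i = 0; i < AV_NUM_DATA_POINTERS && frame->data[i]; i++) {
            /* Chroma planes (1 and 2) are vertically subsampled, e.g. by 2 for YUV420P. */
            int rows = frame->height;
            if (i == 1 || i == 2)
                rows = AV_CEIL_RSHIFT(frame->height, desc->log2_chroma_h);

            /* linesize[i] includes any per-row padding, so this does not rely on the
             * three planes being laid out back-to-back in one buffer. */
            size_t plane_size = (size_t)FFABS(frame->linesize[i]) * rows;
            printf("plane %d: %zu bytes (%d bytes/row x %d rows)\n",
                   i, plane_size, frame->linesize[i], rows);
        }
    }

    If the libavutil in use is recent enough (FFmpeg 4.4+), av_image_fill_plane_sizes() from libavutil/imgutils.h performs essentially the same computation directly from the pixel format, height and linesizes.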

  • Windows batch with FFmpeg, error handling

    26 May 2021, by Neel

    First, let me apologize, as the question might not be clear! Let me explain: I have a batch file in a folder; every time I want to convert one or more videos, I drop them into the folder and run the batch, which I guess is a common scenario for most people. Additionally, I delete the files one by one once ffmpeg has processed each of them (i.e. within the for loop).

    Now the problem comes when ffmpeg fails to process a file because of an error; since I am deleting the files, I have no way of knowing which ones failed, other than checking them manually.

    I tried introducing errorlevel in the batch, but that doesn’t seem to work with ffmpeg.

    if !errorlevel! neq 0 move %%a ..\ProblemFiles

    Now my questions are:

    1. Is there an alternative way of achieving the above with ffmpeg, i.e. if there is an error converting, move the file?
    2. Can I log the error/warning only in the cases where they occur?

    Following is the pseudo batch file:

    @echo off
    echo -------------------------------------------------------------------------------->> ..\Log.txt
    echo Batch Started at %time% >> ..\Log.txt
    for %%G in (.mp4, .mov) do (
      echo %%G
      for /f "delims=" %%a in ('forfiles /s /m *%%G /c "cmd /c echo @relpath"') do (
        echo Original file MediaInfo : >> ..\Log.txt
        "C:\utils\MediaInfo.exe" --Output=file://"C:\utils\custom.template" "%%a" >> ..\Log.txt

        ffmpeg.exe -y -hide_banner -i "%%a" ...... "..\Output\%%~na_Out.mp4"
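        rem The !errorlevel! check from the question could go right here, while
        rem errorlevel still reflects the ffmpeg call, e.g.:
        rem   if !errorlevel! neq 0 move %%a ..\ProblemFiles
        rem (the !var! syntax only works after "setlocal enabledelayedexpansion")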

        echo Converted file MediaInfo : >> ..\Log.txt
        "C:\utils\MediaInfo.exe" --Output=file://"C:\utils\custom.template" "..\Output\%%~na_Out.mp4" >> ..\Log.txt
        timeout 10
        echo Deleting File : "%%a" >> ..\Log.txt
        del "%%a"
        echo ------------------------------------------->> ..\Log.txt
      )
    )
    echo Batch completed at %time% >> ..\Log.txt
    echo -------------------------------------------------------------------------------->> ..\Log.txt

    Please assist, and thanks in advance!

  • Parallelize encoding of audio-only segments in ffmpeg

    5 June 2013, by NeverFall

    We are looking to decrease the execution time of segmenting and encoding wav to segmented aac for HTTP Live Streaming, using ffmpeg to segment and generate an m3u8 playlist, by utilizing all the cores of our machine.

    In one experiment, I had ffmpeg directly segment a wav file into aac with libfdk_aac; however, it took quite a long time to finish.

    In the second experiment, I had ffmpeg segment a wav file as is (wav), which was quite fast (< 1 second on our machines), then used GNU parallel to run ffmpeg again to encode the wav segments to aac, and manually changed the .m3u8 file without changing the segment durations. This was much faster; however, "silence" gaps could be heard when streaming the output audio.

    I initially tried the second scenario using mp3 and the result was much the same. Though I’ve read that LAME adds padding during encoding (http://scruss.com/blog/2012/02/21/generational-loss-in-mp3-re-encoding/), does this mean that libfdk_aac also adds padding during encoding?

    Maybe this one is related to this question: How can I encode and segment audio files without having gaps (or audio pops) between segments when I reconstruct it?
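
    For what it’s worth, libavcodec exposes an audio encoder’s leading padding (the "priming" samples that are a common cause of such gaps) in AVCodecContext.initial_padding once the encoder has been opened. A minimal sketch for inspecting it, assuming a reasonably recent FFmpeg (5.1+ channel-layout API) built with libfdk_aac; the sample rate and bit rate below are arbitrary example values:

    #include <stdio.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/channel_layout.h>

    int main(void)
    {
        const AVCodec *codec = avcodec_find_encoder_by_name("libfdk_aac");
        if (!codec)
            codec = avcodec_find_encoder(AV_CODEC_ID_AAC);   /* fall back to the native AAC encoder */
        if (!codec)
            return 1;

        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        ctx->sample_rate = 44100;
        ctx->sample_fmt  = codec->sample_fmts[0];
        ctx->bit_rate    = 128000;
        av_channel_layout_default(&ctx->ch_layout, 2);       /* stereo */

        if (avcodec_open2(ctx, codec, NULL) == 0) {
            /* Samples the encoder inserts before the real audio; segments encoded
             * independently each carry their own run of these priming samples. */
            printf("encoder delay: %d samples\n", ctx->initial_padding);
        }

        avcodec_free_context(&ctx);
        return 0;
    }

    This only shows how to read the encoder delay; stitching segments together gaplessly typically still requires trimming that padding or encoding all segments in a single encoder instance.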