Advanced search

Media (91)

Other articles (31)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable release of MediaSPIP.
    Its official release date is 21 June 2013, announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance in the shared-hosting farm on a regular basis. Combined with a system Cron on the farm's central site, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)
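    As an illustration only, the system Cron mentioned above might look like this hypothetical crontab entry on the central site (the URL and the query parameter are assumptions for the sketch, not MediaSPIP's actual API):

```
# Hypothetical crontab entry on the farm's central site: every minute,
# trigger the super Cron, which in turn pings each instance's own Cron.
* * * * * curl -s -o /dev/null 'http://central.example.org/?action=gestion_mutu_super_cron'
```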

  • Making files available

    14 April 2011, by

    By default, when it is initialised, MediaSPIP does not let visitors download files, whether originals or the results of their transformation or encoding. It only lets them be viewed.
    However, it is possible, and easy, to give visitors access to these documents in various forms.
    All of this happens in the skeleton's configuration page. Go to the channel's administration area and choose, in the navigation (...)

On other sites (4510)

  • Making a timelapse by drag and drop - A rebuild of an old script using ImageMagick

    14 August 2019, by cursor_major

    I have previously written an AppleScript to automate a task I do many times in my work.

    I shoot Raw + JPG in camera and copy to a hard drive.

    I then drag a named and dated folder, e.g. "2019_08_14_CAM_A_CARD_01", onto an Automator app and it divides the files into folders "NEF" and "JPG" respectively.

    I then drag the appropriate "JPG" folder onto my timelapse app and it runs the image-sequence process in QT7, then saves the file with the parent folder's name in the grandparent folder. This keeps things super organised for when I want to re-link to the original RAW files.

    [code below]

    It is a two-step process and works well for my needs; however, Apple is retiring QuickTime 7 Pro, so my app has a foreseeable end of life.

    I want to take this opportunity to refine and improve the process using Terminal and ImageMagick.

    I have managed to get some code running well in Terminal, but I have to navigate to the folder first and then run the script. It doesn't do the file renaming and doesn't save in the right place.

    Also, when I try to run the simple script in an Automator app, it throws errors even before I try to add anything clever with the file naming.

    Later, once I have recreated my timelapse maker app, I want to get clever with more of ImageMagick's commands and overlay a small super of the original frame name in the corner, so I can expedite my re-linking workflow.

    I'm sorry, I'm a photographer, not a coder, but I've been bashing my head trying to work this out and I've hit a brick wall.

    File Sorter

    ```
    on open dd
       repeat with d in dd
           do shell script "d=" & d's POSIX path's quoted form & "
    cd \"$d\" || exit

    mkdir -p {MOV,JPG,NEF,CR2}
    find . -type f -depth 1 -iname '*.mov' -print0 | xargs -0 -J % mv % MOV
    find . -type f -depth 1 -iname '*.cr2' -print0 | xargs -0 -J % mv % CR2
    find . -type f -depth 1 -iname '*.jpg' -print0 | xargs -0 -J % mv % JPG
    find . -type f -depth 1 -iname '*.nef' -print0 | xargs -0 -J % mv % NEF

    # remove whichever of the four folders stayed empty
    for folder in MOV JPG NEF CR2; do
        if [ \"$(ls \"$folder\" | wc -l)\" -eq 0 ]; then
            rmdir \"$folder\"
        fi
    done
    "
       end repeat
    end open
    ```



    Timelapse Compiler

    ```
    on run {input, parameters}
       repeat with d in input
           set d to d's contents
           tell application "Finder"
               set seq1 to (d's file 1 as alias)
               set dparent to d's container as alias
               set mov to "" & dparent & (dparent's name) & ".mov"
           end tell
           tell application "QuickTime Player 7"
               activate
               open image sequence seq1 frames per second 25
               tell document 1
                   with timeout of 500 seconds
                       save self contained in file mov
                   end timeout
                   quit
               end tell
           end tell
       end repeat
       return input
    end run
    ```


    Current code that runs from within Terminal after I have navigated to the folder of JPGs:

    ```
    ffmpeg -r 25 -f image2 -pattern_type glob -i '*.JPG' -codec:v prores_ks -profile:v 0 imagemagick_TL_Test_01.mov
    ```
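    One possible next step, sketched below: a small wrapper that takes the JPG folder as an argument, derives the movie name from the parent folder, and saves it in the grandparent folder, mirroring what the QT7 app did. The example path is an assumption; the ffmpeg line only runs when ffmpeg and the folder actually exist.

```shell
#!/bin/sh
# Sketch: timelapse wrapper. Usage: ./timelapse.sh /path/to/SHOOT_FOLDER/JPG
# Writes "SHOOT_FOLDER.mov" next to SHOOT_FOLDER, so the movie keeps the
# parent folder's name in the grandparent folder.
jpg_dir=${1:-/path/to/2019_08_14_CAM_A_CARD_01/JPG}   # example path (assumption)

parent_dir=$(dirname "$jpg_dir")        # .../2019_08_14_CAM_A_CARD_01
parent_name=$(basename "$parent_dir")   # 2019_08_14_CAM_A_CARD_01
grandparent=$(dirname "$parent_dir")    # .../
out="$grandparent/$parent_name.mov"
echo "$out"

# Only run the encode when ffmpeg and the folder are really there.
if command -v ffmpeg >/dev/null 2>&1 && [ -d "$jpg_dir" ]; then
  (cd "$jpg_dir" && ffmpeg -r 25 -f image2 -pattern_type glob -i '*.JPG' \
     -codec:v prores_ks -profile:v 0 "$out")
fi
```

    Saving into an Automator app then only needs to pass each dropped folder as the argument, instead of requiring a `cd` first.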
  • How do I compose three overlapping videos w/ audio in ffmpeg?

    10 April 2021, by Idan Gazit

    I have three videos: let's call them intro, recording and outro. My ultimate goal is to stitch them together like so:

    


    (image: diagram of the intended stitched layout)

    


    Both intro and outro have alpha (ProRes 4444) and a "wipe" transition, so when overlaying, they must be on top of the recording. The recording is h264, and ultimately I'm encoding out for YouTube with these recommended settings.

    


    I've figured out how to make this work correctly for intro + recording:

    


    $ ffmpeg \
  -i intro.mov \
  -i recording.mp4 \
  -filter_complex \
  "[1:v]tpad=start_duration=10:start_mode=add:color=black[rv]; \
   [1:a]adelay=delays=10s:all=1[ra]; \
   [rv][0:v]overlay[v];[0:a][ra]amix[a]" \
  -map "[a]" -map "[v]" \
  -movflags faststart -c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
  out.mp4 -y


    


    However, I can't use the tpad trick for the outro, because it would render black frames over everything.

    


    I've tried various iterations with setpts/asetpts, as well as passing -itsoffset for the input, but haven't come up with a solution that works correctly for both video and audio. The following tries to start the outro 16 seconds into the recording (10s intro pad + 16s of recording is how I got to setpts=PTS+26/TB), but it doesn't work correctly: I get both intro and outro audio from the first frame, and the recording audio cuts out when the outro overlay begins:

    


    $ ffmpeg \
  -i intro.mov \
  -i recording.mp4 \
  -i outro.mov \
  -filter_complex \
  "[1:v]tpad=start_duration=10:start_mode=add:color=black[rv]; \
   [1:a]adelay=delays=10s:all=1[ra]; \
   [2:v]setpts=PTS+26/TB[outv]; \
   [2:a]asetpts=PTS+26/TB[outa]; \
   [rv][0:v]overlay[v4]; \
   [0:a][ra]amix[a4]; \
   [v4][outv]overlay[v]; \
   [a4][outa]amix[a]" \
  -map "[a]" -map "[v]" \
  -movflags faststart -c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
  out.mp4 -y


    


    I think the right solution lies in using setpts correctly, but I haven't been able to wrap my brain fully around it. Or maybe I'm overcomplicating this and there's an easier approach?

    


    In the nice-to-have realm, I'd love to be able to specify the start of the outro relative to the end of the recording. I will be doing this to a bunch of recordings of varying lengths, and it would be nice to have one command to invoke on everything rather than figuring out a specific timestamp for each one.
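    One way to get that "relative to the end" behaviour is to read the recording's duration with ffprobe and compute the offset, instead of hard-coding 26. In this sketch the file name and the 4-second overlap are assumptions for illustration, and a fallback duration is used so it runs without any media files:

```shell
#!/bin/sh
# Sketch: offset = intro pad (10 s, as in the command above)
#                + recording duration - seconds of outro overlap.
rec=${1:-recording.mp4}
overlap=4   # assumed: outro's wipe begins 4 s before the recording ends

if command -v ffprobe >/dev/null 2>&1 && [ -f "$rec" ]; then
  dur=$(ffprobe -v error -show_entries format=duration \
        -of default=noprint_wrappers=1:nokey=1 "$rec")
else
  dur=20    # fallback so the sketch runs without media files
fi

offset=$(awk -v d="$dur" -v o="$overlap" 'BEGIN { printf "%.2f", 10 + d - o }')
echo "$offset"
```

    The computed value could then be substituted into the filtergraph (e.g. setpts=PTS+${offset}/TB and the matching asetpts/adelay), so one command works for recordings of any length.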

    


    Thank you!

    


  • Write x264_encoder_encode output NALs to an h264 file

    17 May 2013, by Zetlb

    I'm trying to build live-streaming software. The plan is to use x264's library to encode to h264 on the server side and ffmpeg to decode on the client side.
    After some failed attempts at doing it directly, I decided to simplify: the first thing I'm trying is simply to encode frames with x264_encoder_encode and write the resulting NALs to a file. Now I want to test whether that file is correct.

    To initialize the encoder I do:

    x264_param_t param;
    x264_param_default_preset(&param, "veryfast", "zerolatency");
    param.i_threads = 1;
    param.i_width = a_iWidth;
    param.i_height = a_iHeight;
    param.i_fps_num = a_iFPS;
    param.i_fps_den = 1;
    param.i_keyint_max = a_iFPS;
    param.b_intra_refresh = 1;
    param.rc.i_rc_method = X264_RC_CRF;
    param.rc.i_vbv_buffer_size = 1000000;
    param.rc.i_vbv_max_bitrate = 500;   // for streaming
    param.b_repeat_headers = 1;
    param.b_annexb = 1;
    x264_param_apply_profile(&param, "baseline");
    encoder = x264_encoder_open(&param);

    Then, when I have an image (RGBA), I convert it to YUV420P and call x264_encoder_encode:

    int frame_size = x264_encoder_encode(encoder, &nals, &num_nals, picture, &pic_out);
    if (frame_size > 0)
    {
       m_vNALs.push_back( (char*)nals[0].p_payload );
       m_vSizes.push_back( frame_size );

       return frame_size;
    }

    Everything appears to be valid: frame_size returns a non-zero positive value and all the other parameters appear to be OK. Every NAL starts with the correct start code.
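    One way to double-check that at the byte level is the sketch below. With b_repeat_headers = 1, x264 writes SPS and PPS in-band; after a 00 00 00 01 start code their first bytes are commonly 0x67 and 0x68 for baseline profile. If they never appear, ffmpeg's "non-existing PPS referenced" errors are expected. A tiny fake stream is written when file.h264 is absent, so the sketch runs anywhere:

```shell
#!/bin/sh
# Sketch: hex-level sanity check of an Annex-B .h264 dump.
f=${1:-file.h264}
[ -f "$f" ] || printf '\000\000\000\001\147\000\000\000\001\150' > "$f"

# Flatten the file to one hex string and look for start code + SPS/PPS byte.
hex=$(od -An -tx1 -v "$f" | tr -d ' \n')
case $hex in *0000000167*) echo "SPS present";; *) echo "SPS missing";; esac
case $hex in *0000000168*) echo "PPS present";; *) echo "PPS missing";; esac
```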

    So I'm writing all the NALs to a file:

    FILE* pFile;
    pFile = fopen("file.h264", "wb");
    for( size_t nIndex = 0; nIndex < m_vNALs.size(); nIndex++ )
    {
       fwrite( m_vNALs[ nIndex ], m_vSizes[ nIndex ], 1, pFile );
    }
    fclose(pFile);   // flush and close, otherwise the file may be truncated

    Now I have the file file.h264.
    To test this file I use ffmpeg.exe (I'm on Windows), and I get this output:

    [h264 @ 00000000021c5a60] non-existing PPS referenced
    [h264 @ 00000000021c5a60] non-existing PPS 0 referenced
    [h264 @ 00000000021c5a60] decode_slice_header error
    [h264 @ 00000000021c5a60] no frame!
    [h264 @ 00000000021c5a60] non-existing PPS referenced
    [h264 @ 00000000021b74a0] max_analyze_duration 5000000 reached at 5000000
    [h264 @ 00000000021b74a0] decoding for stream 0 failed
    [h264 @ 00000000021b74a0] Could not find codec parameters for stream 0 (Video: h264): unspecified size
    Consider increasing the value for the 'analyzeduration' and 'probesize' options
    [h264 @ 00000000021b74a0] Estimating duration from bitrate, this may be inaccurate
    file.h264: could not find codec parameters

    VLC also can't play the file.

    ffplay reports:

    file.h264: Invalid data found when processing input

    What is wrong?

    Thanks in advance,
    Zetlb