Advanced search

Media (91)

Other articles (63)

  • Taking part in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
    At present, MediaSPIP is only available in French and (...)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable release of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources, in the standalone version.
    As with the previous release, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)

  • Making the files available

    14 April 2011

    By default, when it is initialised, MediaSPIP does not let visitors download the files, whether they are the originals or the result of their transformation or encoding. It only lets them view the files.
    However, it is possible, and easy, to give visitors access to these documents in various forms.
    All of this is handled on the template configuration page. You need to go to the channel's administration area and choose, in the navigation (...)

On other sites (6269)

  • Concat video files using ffmpeg in individual subfolders with shell script [closed]

    18 August 2022, by Weatherdark

    I had a Windows script that would do what I want, but I switched my server to Unraid so now I need a new script. Anyway, enough backstory.

    I regularly have media that is named in a style like "Videoname CD(Number).ext". They get stored in subfolders called Videoname with a different folder for each video.

    What I need is a script that goes through each subfolder and creates a concat.txt file with the names of each video file ending in CD(number) in it, in numerical order, that will then call ffmpeg to concat the files using the concat.txt in each subfolder, placing the video file (named after the subfolder) into a folder like /mnt/User/Pool/Finished/Videofile. Then, hopefully it will delete the concat.txt file so I know which folders are finished with a quick look.

    I know the ffmpeg concat options for that part, I just have no idea what to use to make sure the resulting video file is named after the subfolder it came from.

    I could do this in Windows (albeit in a hacky, not very pretty way), but I have no idea how to accomplish this in Linux.

    As an example...

    /mnt/User/Pool/Concat/10.10.2022.0034.Recording/10.10.2022.0034.Recording CD1.mp4
    /mnt/User/Pool/Concat/10.10.2022.0034.Recording/10.10.2022.0034.Recording CD2.mp4
    /mnt/User/Pool/Concat/10.10.2022.0034.Recording/10.10.2022.0034.Recording CD3.mp4
    /mnt/User/Pool/Concat/10.10.2022.0034.Recording/10.10.2022.0034.Recording CD4.mp4
    /mnt/User/Pool/Concat/10.11.2022.0254.Recording/10.11.2022.0254.Recording CD1.mp4
    /mnt/User/Pool/Concat/10.11.2022.0254.Recording/10.11.2022.0254.Recording CD2.mp4
    /mnt/User/Pool/Concat/10.11.2022.0254.Recording/10.11.2022.0254.Recording CD3.mp4
    /mnt/User/Pool/Concat/10.11.2022.0254.Recording/10.11.2022.0254.Recording CD4.mp4

    Put through ffmpeg to concat, this results in the files

    /mnt/User/Pool/Finished/10.10.2022.0034.Recording.mp4

    /mnt/User/Pool/Finished/10.11.2022.0254.Recording.mp4

    Hopefully that makes sense.

    Someone closed this because it wasn't "focused enough" and didn't focus on one thing... but there is ONLY one thing that it needs to do.

    I have no idea what they would want me to change to fix what isn't broken, so I am fundamentally at a loss. The script just needs to search through the subfolders, create a concat.txt file and feed that file to ffmpeg, creating the new file, named after the folder, in a new location. That is literally as focused as I can make it.
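
    One way to do that, as a minimal sketch: it assumes GNU bash and coreutils (for sort -V), the "Videoname CD<number>.mp4" pattern with a space before CD as in the example above, file names without single quotes, and identically encoded parts so that stream copy is safe. The src/dst paths are the ones from the example.

    #!/bin/bash
    # Walk each subfolder of Concat, build a concat.txt in numeric CD order,
    # concatenate with the ffmpeg concat demuxer, and name the output after
    # the subfolder. concat.txt is removed only on success, so a leftover
    # file marks a folder that still needs attention.

    src="/mnt/User/Pool/Concat"
    dst="/mnt/User/Pool/Finished"
    mkdir -p "$dst"

    for dir in "$src"/*/; do
        dir="${dir%/}"                  # strip the trailing slash
        name=$(basename "$dir")         # subfolder name = output file name
        list="$dir/concat.txt"

        # List the parts in version order (so CD2 sorts before CD10) and
        # write them in the "file '...'" form the concat demuxer expects.
        find "$dir" -maxdepth 1 -name '* CD*.mp4' | sort -V \
            | sed "s/^/file '/; s/$/'/" > "$list"

        # Skip folders with no matching parts.
        [ -s "$list" ] || { rm -f "$list"; continue; }

        # Concatenate without re-encoding; -safe 0 is needed because
        # concat.txt uses absolute paths.
        if ffmpeg -f concat -safe 0 -i "$list" -c copy "$dst/$name.mp4"; then
            rm "$list"
        fi
    done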

  • How do I use FFMPEG/libav to access the data in individual audio samples?

    15 October 2022, by Breadsnshreds

    The end result is that I'm trying to visualise the audio waveform for use in DAW-like software. So I want to get each sample's value and draw it. With that in mind, I'm currently stumped trying to gain access to the values stored in each sample. For the time being, I'm just trying to access the value in the first sample - I'll build it into a loop once I have some working code.

    I started off by following the code in this example. However, LibAV/FFMPEG has been updated since then, so a lot of the code is deprecated or straight up doesn't work the same anymore.

    Based on the example above, I believe the logic is as follows:

    1. get the formatting info of the audio file
    2. get audio stream info from the format
    3. check that the codec required for the stream is an audio codec
    4. get the codec context (I think this is info about the codec) - this is where it gets kinda confusing for me
    5. create an empty packet and frame to use - packets are for holding compressed data and frames are for holding uncompressed data
    6. the format reads the first frame from the audio file into our packet
    7. pass that packet into the codec context to be decoded
    8. pass our frame to the codec context to receive the uncompressed audio data of the first frame
    9. create a buffer to hold the values and try allocating samples to it from our frame

    From debugging my code, I can see that step 7 succeeds and the packet that was empty receives some data. In step 8, the frame doesn't receive any data, and this is what I need help with. I gather that, for a stereo audio file, each frame should hold samples for two channels, so really I just need your help getting uncompressed data into the frame.

    I've scoured the documentation for loads of different classes, and I'm pretty sure I'm using the right classes and functions to achieve my goal, but evidently not (I'm also using Qt, so I'm using qDebug throughout, and a QString named path to hold the URL for the audio file). So without further ado, here's my code:

    // Includes needed by the libav calls below (C headers, hence extern "C" in C++)
    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/samplefmt.h>
    }
    #include <QDebug>
    #include <QString>

    // Step 1 - get the formatting info of the audio file
    AVFormatContext* format = avformat_alloc_context();
    if (avformat_open_input(&format, path.toStdString().c_str(), NULL, NULL) != 0) {
        qDebug() << "Could not open file " << path;
        return -1;
    }

    // Step 2 - get audio stream info from the format
    if (avformat_find_stream_info(format, NULL) < 0) {
        qDebug() << "Could not retrieve stream info from file " << path;
        return -1;
    }

    // Step 3 - check that the codec required for the stream is an audio codec
    int stream_index = -1;
    for (unsigned int i = 0; i < format->nb_streams; i++) {
        if (format->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
            stream_index = i;
            break;
        }
    }

    if (stream_index == -1) {
        qDebug() << "Could not retrieve audio stream from file " << path;
        return -1;
    }

    // Step 4 - get the codec context; the stream's parameters (sample rate,
    // channel layout, sample format) must be copied into the context before
    // opening it, otherwise decoding will generally fail to produce frames
    const AVCodec *codec = avcodec_find_decoder(format->streams[stream_index]->codecpar->codec_id);
    AVCodecContext *codecContext = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(codecContext, format->streams[stream_index]->codecpar);
    avcodec_open2(codecContext, codec, NULL);

    // Step 5 - create an empty packet and frame to use
    AVPacket *packet = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();

    // Step 6 - the format reads the first frame from the audio file into our packet
    // (note: the packet may belong to another stream; check packet->stream_index == stream_index)
    av_read_frame(format, packet);
    // Step 7 - pass that packet into the codec context to be decoded
    avcodec_send_packet(codecContext, packet);
    // Step 8 - pass our frame to the codec context to receive the uncompressed audio data of
    // the first frame (returns AVERROR(EAGAIN) if the decoder needs more packets first)
    avcodec_receive_frame(codecContext, frame);

    // Step 9 - create a buffer to hold the values and try allocating samples to it from our frame
    double *buffer;
    av_samples_alloc((uint8_t**) &buffer, NULL, 1, frame->nb_samples, AV_SAMPLE_FMT_DBL, 0);
    qDebug() << "packet: " << &packet;
    qDebug() << "frame: " << frame;
    qDebug() << "buffer: " << buffer;

    For the time being, step 9 is incomplete, as you can probably tell. But for now, I need help with step 8. Am I missing a step, using the wrong function, or the wrong class? Cheers.

  • ffmpeg transparent background with colorkey shows background bleeding

    2 November 2022, by Jan

    I am creating a video with a transparent background using the following command. After creating it, I want to overlay this video on another video.

    ffmpeg -f lavfi -i color=red:s=1920x1080,colorkey=red,format=rgba -loop 1 -t 0.08 -i "CreditWhite.png" -filter_complex "[1:v]scale=1920:-2,setpts=if(eq(N\,0)\,0\,1+1/0.02/TB),fps=60[fg]; [0:v][fg]overlay=y=-'t*h*0.02':eof_action=endall[v]" -map "[v]" -pix_fmt yuva444p10le -vcodec prores_ks credits.mov

    Creating the video works fine, but when I overlay it on another video (using OpenShot) I get a lot of colour bleeding from the background colour around the edges. Any suggestions for improving the ffmpeg command to stop this from happening? I tried very slightly increasing the opacity (0.06), as mentioned in another thread, without success.
    Video uploaded to YouTube for reference.

    UPDATE

    Using different colours had the same effect
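
    For what it's worth, a guess at the cause: colorkey only zeroes the alpha of the red canvas, so its RGB stays red, and overlay then mixes that red into the semi-transparent, anti-aliased edge pixels of the credits, which shows up as bleeding once another program composites the video. A sketch that avoids keying altogether, assuming the text in CreditWhite.png is white: start from a canvas that is already fully transparent and whose colour matches the text, so whatever colour leaks into the edges is white rather than red. Only the first input changes; the rest of the command is unchanged.

    ffmpeg -f lavfi -i "color=white@0.0:s=1920x1080,format=rgba" -loop 1 -t 0.08 -i "CreditWhite.png" -filter_complex "[1:v]scale=1920:-2,setpts=if(eq(N\,0)\,0\,1+1/0.02/TB),fps=60[fg]; [0:v][fg]overlay=y=-'t*h*0.02':eof_action=endall[v]" -map "[v]" -pix_fmt yuva444p10le -vcodec prores_ks credits.mov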