Other articles (83)

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (OpenOffice, Microsoft Office (spreadsheets, presentations), web (HTML, CSS), LaTeX, Google Earth) (...)

  • Automated installation script of MediaSPIP

    25 April 2011

    To work around the difficulties caused mainly by the installation of server-side software dependencies, an "all-in-one" installation script written in bash was created to make this step easier on a server running a compatible Linux distribution.
    To use it you need SSH access to your server and a root account, which is used to install the dependencies. Contact your hosting provider if you do not have these.
    The documentation of the use of this installation script is available here.
    The code of this (...)
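
    For reference, the usual shape of such an installation (a generic sketch of my own; the real download URL and script name are in the documentation linked above, the ones below are hypothetical):

    ssh root@your-server.example.org
    wget https://example.org/mediaspip_install.sh   # hypothetical URL; see the documentation
    bash mediaspip_install.sh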

  • Accepted formats

    28 January 2010

    The following commands give information about the formats and codecs supported by the local installation of ffmpeg:
    ffmpeg -codecs
    ffmpeg -formats
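
    For example, to check whether the local build knows a particular codec, the output of the commands above can be filtered:

    ffmpeg -codecs | grep h264
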
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 Part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    As a first step, we (...)

On other sites (6218)

  • Ffmpeg C++: Seeking to Frame 0

    19 February 2015, by RobotJINI

    I have been successfully using:

    avformat_seek_file(avFormatContext_, streamIndex_, 0, frame, frame, AVSEEK_FLAG_FRAME)

    This, along with the example code included at the bottom, has allowed me to seek to specific I-frames in my videos and read frames from there until I reach the frame I want.

    The problem is that the video files I am using have forward and backward prediction, so the first keyframe is not at frame 0 but at something like frame 8.

    What I am looking for is a way to seek to the frames that exist before the first B-Frame in my video files. Any help would be appreciated.
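
    One approach (a sketch of my own, not from the original post) is to seek by timestamp instead of by frame number: asking for the stream's own start_time with AVSEEK_FLAG_BACKWARD lands on the first packet even when the first keyframe's frame number is not 0. This assumes the same pFormatCtx, videoStream and pCodecCtx members used in the code below.

       // Sketch: seek to the true start of the stream rather than to "frame 0".
       int64_t startTime = pFormatCtx->streams[videoStream]->start_time;
       if(startTime == AV_NOPTS_VALUE)   // some demuxers leave start_time unset
          startTime = 0;
       if(ffmpeg::av_seek_frame(pFormatCtx, videoStream, startTime, AVSEEK_FLAG_BACKWARD) < 0)
          return false;
       ffmpeg::avcodec_flush_buffers(pCodecCtx);   // drop decoder state from before the seek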

    seekFrame:

       bool QVideoDecoder::seekFrame(int64_t frame)
       {

          if(!ok)
             return false;

          //printf("**** seekFrame to %d. LLT: %d. LT: %d. LLF: %d. LF: %d. LastFrameOk: %d\n",(int)frame,LastLastFrameTime,LastFrameTime,LastLastFrameNumber,LastFrameNumber,(int)LastFrameOk);

          // Seek if:
          // - we don't know where we are (Ok=false)
          // - we know where we are but:
          //    - the desired frame is after the last decoded frame (this could be optimized: if the distance is small, calling decodeSeekFrame may be faster than seeking from the last key frame)
          //    - the desired frame is smaller or equal than the previous to the last decoded frame. Equal because if frame==LastLastFrameNumber we don't want the LastFrame, but the one before->we need to seek there
          if( (LastFrameOk==false) || ((LastFrameOk==true) && (frame<=LastLastFrameNumber || frame>LastFrameNumber) ) )
          {
             //printf("\t avformat_seek_file\n");
             if(ffmpeg::avformat_seek_file(pFormatCtx,videoStream,0,frame,frame,AVSEEK_FLAG_FRAME)<0)
                return false;

             avcodec_flush_buffers(pCodecCtx);

             DesiredFrameNumber = frame;
             LastFrameOk=false;
          }
          //printf("\t decodeSeekFrame\n");

          return decodeSeekFrame(frame);
       }

    decodeSeekFrame:

       bool QVideoDecoder::decodeSeekFrame(int after)
       {
          if(!ok)
             return false;

          //printf("decodeSeekFrame. after: %d. LLT: %d. LT: %d. LLF: %d. LF: %d. LastFrameOk: %d.\n",after,LastLastFrameTime,LastFrameTime,LastLastFrameNumber,LastFrameNumber,(int)LastFrameOk);



          // If the last decoded frame satisfies the time condition we return it
          //if( after!=-1 && ( LastDataInvalid==false && after>=LastLastFrameTime && after <= LastFrameTime))
          if( after!=-1 && ( LastFrameOk==true && after>=LastLastFrameNumber && after <= LastFrameNumber))
          {
             // This is the frame we want to return

             // Compute desired frame time
             ffmpeg::AVRational millisecondbase = {1, 1000};
             DesiredFrameTime = ffmpeg::av_rescale_q(after,pFormatCtx->streams[videoStream]->time_base,millisecondbase);

             //printf("Returning already available frame %d @ %d. DesiredFrameTime: %d\n",LastFrameNumber,LastFrameTime,DesiredFrameTime);

             return true;
          }  

          // The last decoded frame wasn't ok; either we need any new frame (after=-1), or a specific new frame with time>after

          bool done=false;
          while(!done)
          {
             // Read a frame
             if(av_read_frame(pFormatCtx, &packet)<0)
                return false;                             // Frame read failed (e.g. end of stream)

             //printf("Packet of stream %d, size %d\n",packet.stream_index,packet.size);

             if(packet.stream_index==videoStream)
             {
                // Is this a packet from the video stream -> decode video frame

                int frameFinished;
                avcodec_decode_video2(pCodecCtx,pFrame,&frameFinished,&packet);

                //printf("used %d out of %d bytes\n",len,packet.size);

                //printf("Frame type: ");
                //if(pFrame->pict_type == FF_B_TYPE)
                //   printf("B\n");
                //else if (pFrame->pict_type == FF_I_TYPE)
                //   printf("I\n");
                //else
                //   printf("P\n");


                /*printf("codecctx time base: num: %d den: %d\n",pCodecCtx->time_base.num,pCodecCtx->time_base.den);
                printf("formatctx time base: num: %d den: %d\n",pFormatCtx->streams[videoStream]->time_base.num,pFormatCtx->streams[videoStream]->time_base.den);
                printf("pts: %ld\n",pts);
                printf("dts: %ld\n",dts);*/




                // Did we get a video frame?
                if(frameFinished)
                {
                   ffmpeg::AVRational millisecondbase = {1, 1000};
                   int f = packet.dts;
                   int t = ffmpeg::av_rescale_q(packet.dts,pFormatCtx->streams[videoStream]->time_base,millisecondbase);
                   if(LastFrameOk==false)
                   {
                      LastFrameOk=true;
                      LastLastFrameTime=LastFrameTime=t;
                      LastLastFrameNumber=LastFrameNumber=f;
                   }
                   else
                   {
                      // If we decoded 2 frames in a row, the last times are okay
                      LastLastFrameTime = LastFrameTime;
                      LastLastFrameNumber = LastFrameNumber;
                      LastFrameTime=t;
                      LastFrameNumber=f;
                   }
                   //printf("Frame %d @ %d. LastLastT: %d. LastLastF: %d. LastFrameOk: %d\n",LastFrameNumber,LastFrameTime,LastLastFrameTime,LastLastFrameNumber,(int)LastFrameOk);

                   // Is this frame the desired frame?
                   if(after==-1 || LastFrameNumber>=after)
                   {
                      // It's the desired frame

                      // Convert the image format (init the context the first time)
                      int w = pCodecCtx->width;
                      int h = pCodecCtx->height;
                      img_convert_ctx = ffmpeg::sws_getCachedContext(img_convert_ctx,w, h, pCodecCtx->pix_fmt, w, h, ffmpeg::PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL);

                      if(img_convert_ctx == NULL)
                      {
                         printf("Cannot initialize the conversion context!\n");
                         return false;
                      }
                      ffmpeg::sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);

                      // Convert the frame to QImage
                      LastFrame=QImage(w,h,QImage::Format_RGB888);

                       for(int y=0;y<h;y++)
                          memcpy(LastFrame.scanLine(y),pFrameRGB->data[0]+y*pFrameRGB->linesize[0],w*3);

                      // Set the time
                      DesiredFrameTime = ffmpeg::av_rescale_q(after,pFormatCtx->streams[videoStream]->time_base,millisecondbase);
                      LastFrameOk=true;


                      done = true;

                   } // frame of interest
                }  // frameFinished
             }  // stream_index==videoStream
             av_free_packet(&packet);      // Free the packet that was allocated by av_read_frame
          }
          //printf("Returning new frame %d @ %d. LastLastT: %d. LastLastF: %d. LastFrameOk: %d\n",LastFrameNumber,LastFrameTime,LastLastFrameTime,LastLastFrameNumber,(int)LastFrameOk);
          //printf("\n");
          return done;   // done indicates whether or not we found a frame
       }
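
    For reference, to see where a stream actually begins (a small sketch of my own using standard libavformat fields, with the same pFormatCtx and videoStream as above), the stream's start_time can be rescaled to milliseconds:

       ffmpeg::AVStream *st = pFormatCtx->streams[videoStream];
       if(st->start_time != AV_NOPTS_VALUE)
       {
          ffmpeg::AVRational millisecondbase = {1, 1000};
          int64_t startMs = ffmpeg::av_rescale_q(st->start_time, st->time_base, millisecondbase);
          printf("Stream starts at %lld ms\n", (long long)startMs);   // not necessarily 0
       }
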
  • Firebase function to convert YouTube to mp3

    9 October 2023, by satchel

    I want to deploy to Firebase Cloud Functions.

    However, I get a vague error, “Cannot analyze code”, after it passes the initial pre-deploy checks successfully.

    But I cannot figure out the problem, given the vagueness of the error.

    It looks right with these requirements:

    • receive a POST with a JSON body containing the YouTube videoID as a string
    • download the video locally using the YouTube download package
    • pipe it to the ffmpeg package and save the mp3 to the local temp directory
    • store it in the default bucket of Firebase Storage
    • apply the makePublic method to make it public

const functions = require('firebase-functions');
const admin = require('firebase-admin');
const ytdl = require('ytdl-core');
const ffmpeg = require('fluent-ffmpeg');
const fs = require('fs');
const path = require('path');
const os = require('os');

admin.initializeApp();

// Set the path to the FFmpeg binary
const ffmpegPath = path.join(__dirname, 'bin', 'ffmpeg');
ffmpeg.setFfmpegPath(ffmpegPath);

exports.audioUrl = functions.https.onRequest(async (req, res) => {
    if (req.method !== 'POST') {
        res.status(405).send('Method Not Allowed');
        return;
    }

    const videoId = req.body.videoID;
    const videoUrl = `https://www.youtube.com/watch?v=${videoId}`;
    const audioPath = path.join(os.tmpdir(), `${videoId}.mp3`);

    try {
        await new Promise((resolve, reject) => {
            // fluent-ffmpeg takes its input as a constructor argument; an
            // FfmpegCommand is not a writable stream, so piping ytdl output
            // into ffmpeg() would fail at runtime.
            ffmpeg(ytdl(videoUrl, { filter: format => format.container === 'mp4' }))
                .audioCodec('libmp3lame')
                .on('end', resolve)
                .on('error', reject)
                .save(audioPath);
        });

        const bucket = admin.storage().bucket();
        const file = bucket.file(`${videoId}.mp3`);
        await bucket.upload(audioPath, {
            destination: file.name,
            metadata: {
                contentType: 'audio/mp3',
            },
        });

        // Make the file publicly accessible
        await file.makePublic();

        const publicUrl = file.publicUrl();
        res.status(200).send({ url: publicUrl });
    } catch (error) {
        console.error('Error processing video:', error);
        res.status(500).send('Internal Server Error');
    }
});
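
    One thing worth checking (my own suggestion, not from the original post): during deployment the Firebase CLI actually loads the module to discover its functions, so anything that throws at the top level, a failed require, a syntax error, and so on, surfaces as this generic “could not be analyzed” message. A minimal sketch of deferring the heavy imports until the handler runs, so deploy-time analysis has less to execute:

const functions = require('firebase-functions');

exports.audioUrl = functions.https.onRequest(async (req, res) => {
    // Load heavy dependencies lazily, at request time, so that loading the
    // module at deploy time only requires firebase-functions itself.
    const ffmpeg = require('fluent-ffmpeg');
    const ytdl = require('ytdl-core');
    // ... handler body as above ...
});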

    The following is the package.json file, which declares the function's dependencies as well as the entry point, which I believe just needs to be the name of the file containing the code:

{
  "name": "firebase-functions",
  "description": "Firebase Cloud Functions",
  "main": "audioUrl.js", 
  "dependencies": {
    "firebase-admin": "^10.0.0",
    "firebase-functions": "^4.0.0",
    "ytdl-core": "^4.9.1",
    "fluent-ffmpeg": "^2.1.2"
  },
  "engines": {
    "node": "18"
  },
  "private": true
}

    (Edit) Here is the error:

     deploying functions
✔  functions: Finished running predeploy script.
i  functions: preparing codebase default for deployment
i  functions: ensuring required API cloudfunctions.googleapis.com is enabled...
i  functions: ensuring required API cloudbuild.googleapis.com is enabled...
i  artifactregistry: ensuring required API artifactregistry.googleapis.com is enabled...
✔  functions: required API cloudbuild.googleapis.com is enabled
✔  artifactregistry: required API artifactregistry.googleapis.com is enabled
✔  functions: required API cloudfunctions.googleapis.com is enabled
i  functions: Loading and analyzing source code for codebase default to determine what to deploy
Serving at port 8171

shutdown requested via /__/quitquitquit


Error: Functions codebase could not be analyzed successfully. It may have a syntax or runtime error

    Failed to load function definition from source: FirebaseError: Functions codebase could not be analyzed successfully. It may have a syntax or runtime error

    I get the same error when running the following:

    firebase deploy --only functions:audioUrl

    And I thought I might get more detailed errors using the emulator:

    firebase emulators:start

    Under the emulator I had this additional error initially:

    Your requested "node" version "18" doesn't match your global version "16". Using node@16 from host.

  • Does PTS have to start at 0?

    5 July 2018, by stevendesu

    I’ve seen a number of questions regarding video PTS values not starting at zero, or asking how to make them start at zero. I’m aware that using ffmpeg I can do something like ffmpeg -i <video> -vf="setpts=PTS-STARTPTS" <output> to fix this kind of thing.

    However, it’s my understanding that PTS values don’t have to start at zero. For instance, if you join a live stream, then odds are it has been going on for an hour and the PTS is already somewhere around 3600000+, but your video player faithfully displays everything just fine. Therefore I would expect there to be no problem if I intentionally created a video with a PTS value starting at, say, the current wall-clock time.

    I want to send a live stream using ffmpeg, but embed the current time into the stream. This can be used both for latency calculation while the stream is live, and later to determine when the stream was originally aired. From my understanding of PTS, something as simple as this should probably work:

    ffmpeg -i video.flv -vf="setpts=RTCTIME" rtmp://<output>

    When I try this, however, ffmpeg outputs the following:

    frame=   93 fps= 20 q=-1.0 Lsize=    9434kB time=535020:39:58.70 bitrate=   0.0kbits/s speed=1.35e+11x

    Note the extremely large value for "time", the bitrate (0.0 kbits/s), and the speed (135000000000x!).

    At first I thought the issue might be my timebase, so I tried the following:

    ffmpeg -i video.flv -vf="settb=1/1K,setpts=RTCTIME/1K" rtmp://<output>

    This puts everything in terms of milliseconds (1 PTS unit = 1 ms), but I had the same issue (massive time, zero bitrate, and massive speed).

    Am I misunderstanding something about PTS? Is it not allowed to start at non-zero values? Or am I just doing something wrong?
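
    For reference (my own working, not part of the original question), a player computes a frame’s presentation time from its PTS and the stream timebase:

    t = PTS × tb

    With settb=1/1K, a PTS of 1 means 1 ms. But RTCTIME is the wall clock in microseconds, so setpts=RTCTIME/1K still produces PTS values around 1.5 × 10^12 ms (roughly 48 years) in 2018, which is why the reported time stayed enormous.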

    Update

    After reviewing @Gyan’s answer, I formatted my command like so:

    ffmpeg -re -i video.flv -vf="settb=1/1K, setpts=(RTCTIME-RTCSTART)/1K" -output_ts_offset $(date +%s.%N) rtmp://<output>

    This way the PTS values would match up to "milliseconds since the stream started" and would be offset by the start time of the stream (theoretically making PTS = the timestamp on the server).
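
    In other words (my own restatement, not from the original post), the muxer adds a constant offset t0, the epoch time captured by date +%s.%N at launch, to every output timestamp:

    t_out = t0 + PTS × tb

    so a frame whose PTS is p milliseconds into the stream should surface at t0 + p/1000 seconds of wall-clock time.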

    This looked like it was encoding better:

    frame=  590 fps=7.2 q=22.0 size=   25330kB time=00:01:21.71 bitrate=2539.5kbits/s dup=0 drop=1350 speed=   1x

    The bitrate was now correct, the time was accurate, and the speed was not outrageous. The frame rate was still a bit off, though (the source video is 24 fps but it’s reporting 7.2 frames per second).

    When I tried watching the stream from the other end, the video was out of sync with the audio and played at about double normal speed for a while; then the video froze and the audio continued without it.

    Furthermore, when I dumped the stream to a file (ffmpeg -i rtmp://<output> dump.mp4) and looked at the PTS timestamps with ffprobe (ffprobe -show_entries packet=codec_type,pts dump.mp4 | grep "video" -B 1 -A 2), the timestamps didn’t seem to show server time at all:

    ...
    --
    [PACKET]
    codec_type=video
    pts=131072
    [/PACKET]
    [PACKET]
    codec_type=video
    pts=130048
    [/PACKET]
    --
    [PACKET]
    codec_type=video
    pts=129536
    [/PACKET]
    [PACKET]
    codec_type=video
    pts=130560
    [/PACKET]
    --
    [PACKET]
    codec_type=video
    pts=131584
    [/PACKET]
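
    For scale (my own back-of-the-envelope; the track’s timescale is an assumption, since this output doesn’t show it): if the mp4 video track uses a typical timescale of 90000 units per second, these values amount to only a second or two of media time, nowhere near an epoch-sized wall-clock value:

    131072 / 90000 ≈ 1.46 s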

    Is the problem just an incompatibility with RTMP?

    Update 2

    I’ve removed the video filter and I’m now encoding like so:

    ffmpeg -re -i video.flv -output_ts_offset $(date +%s.%N) rtmp://<output>

    This is encoding correctly:

    frame=  910 fps= 23 q=25.0 size=   12027kB time=00:00:38.97 bitrate=2528.2kbits/s speed=0.981x

    In order to verify that the PTS values are correct, I’m dumping the output to a file like so:

    ffmpeg -i rtmp://<output> -copyts -write_tmcd 0 dump.mp4

    I tried saving it as dump.flv (since it’s RTMP); however, this threw the error:

    [flv @ 0x5600f24b4620] Audio codec mp3 not compatible with flv

    This is a bit weird since the audio isn’t mp3-encoded (it’s Speex), but whatever.

    While dumping this file the following error pops up repeatedly:

    frame=    1 fps=0.0 q=0.0 size=       0kB time=00:00:09.21 bitrate=   0.0kbits/s dup=0 dr
    43090023 frame duplication too large, skipping
    43090027 frame duplication too large, skipping
       Last message repeated 3 times
    43090031 frame duplication too large, skipping
       Last message repeated 3 times
    43090035 frame duplication too large, skipping

    Playing the resulting video in VLC plays an audio stream but displays no video. I then attempted to probe this video with ffprobe to look at the video PTS values:

    ffprobe -show_entries packet=codec_type,pts dump.mp4 | grep "video" -B 1 -A 2

    This returns only a single video frame, whose PTS is not large as I would expect:

    [PACKET]
    codec_type=video
    pts=1020
    [/PACKET]

    This has been a surprisingly difficult task.