
Media (91)

Other articles (80)

  • MediaSPIP Core: Configuration

    9 November 2010, by

    MediaSPIP Core provides three configuration pages by default (these pages rely on the CFG configuration plugin to work): a page for the general configuration of the templates (squelettes); a page for the configuration of the site's home page; a page for the configuration of sections (secteurs).
    It also provides an additional page, which only appears when certain plugins are enabled, for controlling their display and their specific features (...)

  • Adding notes and captions to images

    7 February 2011, by

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is enabled, you can configure it in the configuration area to change the rights for creating, editing and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • PHP5-specific configuration

    4 February 2011, by

    PHP5 is required; you can install it by following this dedicated tutorial.
    It is recommended to disable safe_mode at first; however, if safe_mode is configured correctly and the required binaries are accessible, MediaSPIP should work correctly with it enabled.
    Specific modules
    Certain specific PHP modules must be installed, via your distribution's package manager or manually: php5-mysql for connectivity with the (...)
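    For example, on a Debian-based system the module named above could be installed with the distribution's package manager (an illustration only, assuming apt; the rest of the module list is truncated in this excerpt):

    sudo apt-get install php5-mysql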

On other sites (4642)

  • How to map frames extracted with ffmpeg to the subtitles of a video? (frame accuracy problem)

    14 November 2019, by Abitbol

    I would like to generate a text file for each frame extracted with ffmpeg, containing the subtitle visible in that frame, if any, from a video into which I have also burned the subtitles using ffmpeg.

    I use a Python script with pysrt to open the SubRip file and generate the text files.
    Each frame is named with its frame number by ffmpeg, and since frames are extracted at a constant rate, I can easily retrieve the time position of a frame using the formula t1 = fnum/fps, where fnum is the frame number taken from the filename and fps is the rate passed to ffmpeg for the extraction.
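    Here is a minimal sketch of that lookup (an illustration, assuming the subtitle file is named sub.srt and the same 4 fps extraction rate as in the commands below; pysrt exposes start and end times in milliseconds via .ordinal):

    import pysrt

    FPS = 4  # extraction rate passed to ffmpeg's fps filter
    subs = pysrt.open('sub.srt')

    def subtitle_for_frame(fnum):
        """Return the subtitle text visible at extracted frame fnum, or None."""
        ms = int(fnum / FPS * 1000)  # t1 = fnum/fps, converted to milliseconds
        for sub in subs:
            if sub.start.ordinal <= ms <= sub.end.ordinal:
                return sub.text
        return None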

    Even though I use the same subtitle file to look up text positions in the timeline as the one burned into the video, I still get accuracy errors: some text files are missing, and others are present that shouldn't be.

    Because time is not really continuous when talking about frames, I have tried recalibrating t using the fps of the video with the hardcoded subtitles; call it vfps, for video fps (I have ensured that the video fps is the same before and after subtitle burning). This gives the formula: t2 = int(t1*vfps)/vfps.
    It is still not 100% accurate.

    For example, my video is at 30 fps (vfps=30) and I extracted frames at 4 fps (fps=4).
    The extracted frame 166 (fnum=166) shows no subtitle. In the SubRip file, the previous subtitle ends at t_prev=41.330 and the next begins at t_next=41.400, so t_sub should satisfy t_prev < t_sub < t_next, but I can't make this happen.

    Formulas I have tried:

    t1 = fnum/fps  # 41.5 > t_next
    t2 = int(fnum*vfps/fps)/vfps  # 41.5 > t_next
    # is it because of an indexing problem? No:
    t3 = (fnum-1)/fps  # 41.25 < t_prev
    t4 = int((fnum-1)*vfps/fps)/vfps  # 41.23333333 < t_prev
    t5 = int(fnum*vfps/fps - 1)/vfps  # 41.466666 > t_next
    t6 = int((fnum-1)*vfps/fps + 1)/vfps  # 41.26666 < t_prev

    Commands used:

    # burning subtitles
    # (previously)
    # ffmpeg -r 25 -i nosub.mp4 -vf subtitles=sub.srt withsub.mp4
    # now:
    ffmpeg -i nosub.mp4 -vf subtitles=sub.srt withsub.mp4
    # frames extraction
    ffmpeg -i withsub.mp4 -vf fps=4 extracted/%05d.bmp -hide_banner

    Why does this happen and how can I solve it?

    One thing I have noticed is that if I extract frames from the original video and from the subtitled one, and take the difference of the frames, the result is not only the subtitles: there are also variations in the background (which shouldn't happen). If I run the same experiment using the same video twice, the difference is null, which means the frame extraction itself is consistent.

    Code for the difference:

    ffmpeg -i withsub.mp4 -vf fps=4 extracted/%05d.bmp -hide_banner
    ffmpeg -i no_sub.mp4 -vf fps=4 extracted_no_sub/%05d.bmp -hide_banner
    for img in extracted_no_sub/*.bmp; do
       convert extracted/${img##*/} $img -compose minus -composite diff/${img##*/}
    done

    Thanks.

  • Encoding raw YUV420P to h264 with AVCodec on iOS

    4 January 2013, by Wade

    I am trying to encode a single YUV420P image gathered from a CMSampleBuffer to an AVPacket so that I can send h264 video over the network with RTMP.

    The posted code example seems to work, as avcodec_encode_video2 returns 0 (success), but got_output is also 0 (the AVPacket is empty).

    Does anyone with experience encoding video on iOS devices know what I am doing wrong?

    - (void) captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection {

     // sampleBuffer now contains an individual frame of raw video data
     CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

     CVPixelBufferLockBaseAddress(pixelBuffer, 0);

     // access the data
     int width = CVPixelBufferGetWidth(pixelBuffer);
     int height = CVPixelBufferGetHeight(pixelBuffer);
     int bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
     unsigned char *rawPixelBase = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);


     // Convert the raw pixel base to h.264 format
     AVCodec *codec = 0;
     AVCodecContext *context = 0;
     AVFrame *frame = 0;
     AVPacket packet;

     //avcodec_init();
     avcodec_register_all();
     codec = avcodec_find_encoder(AV_CODEC_ID_H264);

     if (codec == 0) {
       NSLog(@"Codec not found!!");
       return;
     }

     context = avcodec_alloc_context3(codec);

     if (!context) {
       NSLog(@"Context no bueno.");
       return;
     }

     // Bit rate
     context->bit_rate = 400000; // HARD CODE
     context->bit_rate_tolerance = 10;
     // Resolution
     context->width = width;
     context->height = height;
     // Frames Per Second
     context->time_base = (AVRational) {1,25};
     context->gop_size = 1;
     //context->max_b_frames = 1;
     context->pix_fmt = PIX_FMT_YUV420P;

     // Open the codec
     if (avcodec_open2(context, codec, 0) < 0) {
       NSLog(@"Unable to open codec");
       return;
     }


     // Create the frame
     frame = avcodec_alloc_frame();
     if (!frame) {
       NSLog(@"Unable to alloc frame");
       return;
     }
     frame->format = context->pix_fmt;
     frame->width = context->width;
     frame->height = context->height;


     avpicture_fill((AVPicture *) frame, rawPixelBase, context->pix_fmt, frame->width, frame->height);

     int got_output = 0;
     av_init_packet(&packet);
     avcodec_encode_video2(context, &packet, frame, &got_output);

     // Unlock the pixel data
     CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
     // Send the data over the network
     [self uploadData:[NSData dataWithBytes:packet.data length:packet.size] toRTMP:self.rtmp_OutVideoStream];
    }

    Note: it is known that this code leaks memory, because the dynamically allocated objects are never freed; see the sketch below.
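
    For reference, a minimal sketch of the missing cleanup, using the same-era FFmpeg API as the posted code:

     av_free_packet(&packet);  // release the packet data allocated by the encoder
     avcodec_close(context);   // close the codec opened with avcodec_open2
     av_free(context);         // free the context from avcodec_alloc_context3
     av_free(frame);           // free the frame from avcodec_alloc_frame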

    UPDATE

    I updated my code to use @pogorskiy's method: I only try to upload the frame if got_output returns 1, and I clear the buffer once I am done encoding video frames.
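
    A minimal sketch of that pattern, assuming the same avcodec_encode_video2 API as the posted code (libx264 buffers frames for lookahead, so early calls legitimately set got_output to 0; passing NULL as the frame drains the buffered packets at the end); uploadData:toRTMP: is the method from the code above:

     // upload only when the encoder actually produced a packet
     if (got_output) {
         [self uploadData:[NSData dataWithBytes:packet.data length:packet.size]
                   toRTMP:self.rtmp_OutVideoStream];
         av_free_packet(&packet);
     }

     // once all input frames have been sent, flush the encoder's internal queue
     do {
         av_init_packet(&packet);
         if (avcodec_encode_video2(context, &packet, NULL, &got_output) < 0)
             break;
         if (got_output) {
             [self uploadData:[NSData dataWithBytes:packet.data length:packet.size]
                       toRTMP:self.rtmp_OutVideoStream];
             av_free_packet(&packet);
         }
     } while (got_output);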

  • Issue with output RTSP stream converted using ffmpeg filter-complex

    11 March 2019, by Mehul Panchasara

    I have a camera feed which I receive over RTSP, for example: rtsp://172.16.1.177:8554/test

    Here are the stream details I got using ffmpeg -i rtsp://172.16.1.177:8554/test:

    Input #0, rtsp, from 'rtsp://172.16.1.177:8554/test':
     Metadata:
       title           : Session streamed with GStreamer
       comment         : rtsp-server
     Duration: N/A, start: 0.710544, bitrate: N/A
       Stream #0:0: Video: h264 (Constrained Baseline), yuv420p(progressive), 1920x1080, 15 tbr, 90k tbn, 180k tbc

    Now, I am applying a chromakey to the above stream, which gives me perfect output in mp4:

    ffmpeg -i background.jpg -i rtsp://172.16.1.177:8554/test -filter_complex "[1:v]colorkey=0x26ff0b:0.3:0.2[ckout];[0:v][ckout]overlay[out]" -map "[out]" output.mp4

    After that, I created and successfully started ffserver using the config file below:

    HTTPPort 8091
    RTSPPort 8092
    HTTPBindAddress 0.0.0.0
    <Feed feed1.ffm>
       File /tmp/feed1.ffm
       FileMaxSize 2048M
       ACL allow localhost
    </Feed>
    <Stream>
       Feed feed1.ffm
       Format rtp
       NoAudio
       VideoCodec libx264
       VideoFrameRate 15
       VideoBitRate 1000
       VideoSize 1920x1080
       ACL allow 172.16.1.30 172.16.0.2
    </Stream>
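
    For reference, ffserver reads such a file at startup; assuming it is saved as ffserver.conf (the filename is an assumption), it would be launched with something like:

    ffserver -f ffserver.conf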

    I am trying to export the output stream using the command below:

    ffmpeg -i background.jpg -i rtsp://172.16.1.177:8554/test -filter_complex "[1:v]colorkey=0x26ff0b:0.3:0.2[ckout];[0:v][ckout]overlay[out]" -map "[out]" http://localhost:8091/feed1.ffm

    which gives me the error below:

    Input #0, image2, from 'background.jpg':
     Duration: 00:00:00.04, start: 0.000000, bitrate: 23866 kb/s
       Stream #0:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 854x480 [SAR 72:72 DAR 427:240], 25 tbr, 25 tbn, 25 tbc
    Input #1, rtsp, from 'rtsp://172.16.1.177:8554/test':
     Metadata:
       title           : Session streamed with GStreamer
       comment         : rtsp-server
     Duration: N/A, start: 0.711933, bitrate: N/A
       Stream #1:0: Video: h264 (Constrained Baseline), yuv420p(progressive), 1920x1080, 15 tbr, 90k tbn, 180k tbc
    [tcp @ 0x7fdb88706680] Connection to tcp://localhost:8091 failed (Connection refused), trying next address
    [tcp @ 0x7fdb88402920] Connection to tcp://localhost:8091 failed (Connection refused), trying next address
    Filter overlay has an unconnected output

    I don't have much experience with either ffmpeg or ffserver, so I don't know exactly why the overlay output is unconnected:

    Filter overlay has an unconnected output