Advanced search

Media (0)

Keyword: - Tags -/optimisation

No media matching your criteria is available on this site.

Other articles (34)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your installed MediaSPIP is at version 0.2 or later. If in doubt, contact your MediaSPIP administrator to find out.

  • Submit enhancements and plugins

    13 April 2011

    If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the core MediaSPIP functionality will be considered.
    You can use the development discussion list to ask for help with creating a plugin; since MediaSPIP is based on SPIP, you can also use the SPIP discussion list, SPIP-Zone.

  • Other interesting software

    12 April 2011

    We don’t claim to be the only ones doing what we do... and we certainly don’t claim to be the best either... We just try to do what we do well, and better and better...
    The following list covers software that more or less aims to do what MediaSPIP does, or that MediaSPIP more or less tries to do likewise; either way...
    We don’t know them and haven’t tried them, but you may want to take a look.
    Videopress
    Website: (...)

On other sites (4935)

  • audio issues when merging a video with an audio file using python ffmpeg and moviepy

    29 February 2024, by Stevenb123

    I’m trying to create code that syncs an audio file and a background video in terms of duration.
When creating the merged video, I hear a cut or a looped repeat of the last sentence for about 0.2 seconds.
I have tried to solve it in many different ways, listed below.

    Has anyone solved this issue? I saw many people having a similar problem.
I’m using Ubuntu version 20.04 and ffmpeg version 4.2.7.

    This is my code:

    def merge_videos_with_subs(
        background_path, audio_path, subs, output_directory, output_filename
    ):
    try:
        # Load background video and audio
        background_clip = VideoFileClip(background_path)
        background_clip = background_clip.without_audio()
        audio_clip = AudioFileClip(audio_path)

        # Adjust video duration to match audio duration
        audio_duration = audio_clip.duration

        # If the background video is longer, trim it to match the audio duration
        if background_clip.duration > audio_duration:
            background_clip = background_clip.subclip(0, audio_duration)
        # If the audio is longer, loop the background video
        else:
            background_clip = background_clip.loop(duration=audio_duration)

        # Set audio of the background clip
        background_clip = background_clip.set_audio(audio_clip)

        # Overlay subtitles on the video
        final_clip = CompositeVideoClip(
            [background_clip, subs.set_pos(("center", "bottom"))]
        )

        # Ensure the output directory exists
        os.makedirs(output_directory, exist_ok=True)

        # Define the output path
        output_path = os.path.join(output_directory, output_filename)

        # Write the merged video with subtitles
        final_clip.write_videofile(
            output_path, codec="libx264", audio_codec="aac", threads=4, fps=24
        )

        # Close the clips
        final_clip.close()
        background_clip.close()
        audio_clip.close()

        print(f"Merged video with subtitles saved to: {output_path}")
    except Exception as e:
        print(f"Error merging videos: {e}")

    I’ve tried changing the codec, and tried cutting 0.2 seconds off the audio before and after the merge, or silencing it; nothing seems to help.

    When I ran my code without subclipping the background to match the audio, it worked flawlessly.
If I let the background run to its full duration, or made it loop, there were no audio issues.

    Looks like the issue is in the cutting part.
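
    One possible direction (an untested sketch, not part of the original post): trim both clips to the same whole-frame boundary and apply a very short audio fade-out to mask any click at the cut. The file names and the 24 fps value below are placeholders.

    from moviepy.editor import VideoFileClip, AudioFileClip
    import moviepy.audio.fx.all as afx

    FPS = 24  # placeholder frame rate
    audio_clip = AudioFileClip("voiceover.mp3")  # placeholder inputs
    background_clip = VideoFileClip("background.mp4").without_audio()

    # Round the target duration down to a whole number of video frames so the
    # trimmed video and the trimmed audio end on exactly the same timestamp.
    duration = int(audio_clip.duration * FPS) / FPS
    background_clip = background_clip.subclip(0, duration)

    # A 50 ms fade-out masks any residual click at the cut point.
    audio_clip = audio_clip.subclip(0, duration).fx(afx.audio_fadeout, 0.05)

    final_clip = background_clip.set_audio(audio_clip)
    final_clip.write_videofile("out.mp4", codec="libx264",
                               audio_codec="aac", fps=FPS)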

  • FFMPEG API - Recording video and audio - Performance issues

    24 July 2015, by Solidus

    I’m developing an app which can record video from a webcam and audio from a microphone. I also need to process user input at the same time, since users can, for example, draw on top of the video in real time. I’ve been using Qt, but unfortunately its camera module does not work on Windows, which led me to use ffmpeg to record the video/audio.

    My camera module is now working well apart from a slight syncing problem (which I asked about in another thread). The problem is that when I start recording, performance drops (around 50-60% CPU usage). I’m not sure where, or whether, I can improve on this. I suspect the problem comes from the running threads. I followed the dranger tutorial while developing my code, and I use 3 threads in total: one that captures the video and audio packets and adds them to a queue, one that processes the video packets, and one that processes the audio packets. While processing the video packets I also have to convert each frame (which comes raw from the camera feed) to an RGB format so that Qt can display it. The frame also has to be converted to YUV420P to be saved to the MP4 file. This could also be hurting performance.
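
    As an aside on the two per-frame conversions just described: re-creating the scaler every frame is a common hidden cost. A minimal sketch, assuming the camera delivers frames at a fixed size and pixel format (the function and variable names here are illustrative, not the app’s actual code):

    extern "C" {
    #include <libswscale/swscale.h>
    }

    // Illustrative only: keep one cached scaler per target format instead of
    // a new SwsContext per frame. sws_getCachedContext() reuses the existing
    // context as long as the parameters have not changed.
    static SwsContext *rgbCtx = nullptr; // camera format -> RGB24 for the Qt preview
    static SwsContext *yuvCtx = nullptr; // camera format -> YUV420P for the encoder

    void prepareScalers(int w, int h, AVPixelFormat camFmt) {
        rgbCtx = sws_getCachedContext(rgbCtx, w, h, camFmt,
                                      w, h, AV_PIX_FMT_RGB24,
                                      SWS_FAST_BILINEAR, nullptr, nullptr, nullptr);
        yuvCtx = sws_getCachedContext(yuvCtx, w, h, camFmt,
                                      w, h, AV_PIX_FMT_YUV420P,
                                      SWS_FAST_BILINEAR, nullptr, nullptr, nullptr);
    }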

    Every time I try to get a packet from the queue I check whether the queue is empty and, if it is, I tell the thread to sleep until more data is available, as this helps save CPU usage. The problem is that sometimes the threads don’t wake up in time, and the queue starts filling up with packets, adding a cumulative delay that never stops growing.

    Below is part of the code I am using for the 3 threads.

    One thread captures both audio and video packets and adds them to a queue:

    void FFCapture::grabFrames(int videoStream, int audioStream) {
       grabActive = true;
       videoQueue.clear();
       audioQueue.clear();

       AVPacket pkt;
       while (av_read_frame(iFormatContext, &pkt) >= 0 && active) {
           if(pkt.stream_index == videoStream) {
               enqueueVideoPacket(pkt);
           }
           else if(pkt.stream_index == audioStream) {
               enqueueAudioPacket(pkt);
           }
           else {
               av_free_packet(&pkt);
           }
           //QThread::msleep(20);
       }
       //Wake up threads that might be sleeping
       videoWait.wakeOne();
       audioWait.wakeOne();
       grabActive = false;
       cleanupAll();
    }

    And then I have one thread for each stream (video and audio):

    void FFCapture::captureVideoFrames() {
       videoActive = true;
       outStream.videoPTS = 2;

       int min = 0;
       if(cacheMs > 0) {
           QThread::msleep(cacheMs);
           min = getVideoQueueSize();
       }

       while(active || (!active && (getVideoQueueSize() > min))) {
           qDebug() << "Video:" << videoQueue.size() << min;
           AVPacket pkt;
           if(dequeueVideoPacket(pkt, min) >= 0) {
               if(processVideoPacket(&pkt) < 0) {
                   av_free_packet(&pkt);
                   break;
               }
               av_free_packet(&pkt);
           }
       }
       videoActive = false;
       cleanupAll();
    }

    void FFCapture::captureAudioFrames() {
       audioActive = true;
       outStream.audioPTS = 0;

       int min = 0;
       if(cacheMs > 0) {
           QThread::msleep(cacheMs);
           min = getAudioQueueSize();
       }

       while(active || (!active && (getAudioQueueSize() > min))) {
           qDebug() << "Audio:" << audioQueue.size() << min;
           AVPacket pkt;

           if(dequeueAudioPacket(pkt, min) >= 0) {
               if(recording) {
                   if(processAudioPacket(&pkt) < 0) break;
               }
               else av_free_packet(&pkt);
           }
       }
       audioActive = false;
       cleanupAll();
    }

    When I remove a packet from the queue I check whether the queue is empty and, if it is, I tell the thread to wait for more data. The code is as follows (a timed-wait variant is sketched after this block):

    void FFCapture::enqueueVideoPacket(const AVPacket &pkt) {
       QMutexLocker locker(&videoQueueMutex);
       videoQueue.enqueue(pkt);
       videoWait.wakeOne();
    }

    int FFCapture::dequeueVideoPacket(AVPacket &pkt, int sizeConstraint) {
       QMutexLocker locker(&videoQueueMutex);
       while(1) {
           if(videoQueue.size() > sizeConstraint) {
               pkt = videoQueue.dequeue();
               return 0;
           }
           else if(!active) {
               return -1;
           }
           else {
               videoWait.wait(&videoQueueMutex);
           }
       }
       return -2; //Should never happen. Just to avoid compile error.
    }
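
    A hedged variant of the dequeue above (an assumption, not tested against this codebase): QWaitCondition::wait() also accepts a timeout in milliseconds, so the consumer can re-check the queue periodically even if a wakeOne() is missed, which would bound the cumulative delay described earlier. A minimal sketch, reusing the member names from the code above:

    int FFCapture::dequeueVideoPacket(AVPacket &pkt, int sizeConstraint) {
       QMutexLocker locker(&videoQueueMutex);
       while (videoQueue.size() <= sizeConstraint) {
           if (!active) return -1;
           // Re-check at least every 10 ms even if the producer's wakeOne()
           // was missed, so the queue cannot back up indefinitely.
           videoWait.wait(&videoQueueMutex, 10);
       }
       pkt = videoQueue.dequeue();
       return 0;
    }
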
  • Issues with Video Recording Duration and Smooth Playback when Using v4l2 to MP4 (FFmpeg)

    9 December 2024, by Reena

    I'm trying to record video from a USB device with the v4l2 framework and save it in MP4 format using FFmpeg. My sample code successfully captures and saves the video, but I'm running into some issues:

    The recorded video duration is shorter than expected. For instance:

    When recording a 1-minute video at 1280x720, the output file is only 58 or 59 seconds long.
For 1920x1080, the duration is even more off: only about 28 to 30 seconds instead of the expected 1 minute.
Additionally, the video is not smooth; there are noticeable frame drops and playback inconsistencies.

    My setup:

    Using a USB device with the v4l2 framework
Saving the video in MP4 format
Tested with different resolutions (1280x720, 1920x1080)
I've attached my sample code below. Could someone help me figure out why I'm experiencing these issues with video duration and smooth playback?

    Any advice, fixes, or suggestions would be greatly appreciated!

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/videodev2.h>
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/opt.h>
    #include <libswscale/swscale.h>
    #include <unistd.h>
    #include <time.h>
    #include <sys/time.h>
    #include <errno.h>

    #define WIDTH 1280
    #define HEIGHT 720
    #define FPS 30
    #define DURATION 10 // Recording duration in seconds
    #define BUFFER_COUNT 4 // Number of buffers

    struct buffer {
        void *start;
        size_t length;
    };

    struct buffer *buffers;

    void open_device(int *fd, const char *device) {
        *fd = open(device, O_RDWR | O_NONBLOCK);
        if (*fd < 0) {
            perror("Cannot open video device");
            exit(1);
        }
    }

    void init_mmap(int fd) {
        struct v4l2_requestbuffers req;
        memset(&req, 0, sizeof(req));
        req.count = BUFFER_COUNT;
        req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        req.memory = V4L2_MEMORY_MMAP;

        if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) {
            perror("Requesting buffer");
            exit(1);
        }

        buffers = calloc(req.count, sizeof(*buffers));
        for (size_t i = 0; i < req.count; ++i) {
            struct v4l2_buffer buf;
            memset(&buf, 0, sizeof(buf));
            buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            buf.memory = V4L2_MEMORY_MMAP;
            buf.index = i;

            if (ioctl(fd, VIDIOC_QUERYBUF, &buf) < 0) {
                perror("Querying buffer");
                exit(1);
            }

            buffers[i].length = buf.length;
            buffers[i].start = mmap(NULL, buf.length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, buf.m.offset);

            if (MAP_FAILED == buffers[i].start) {
                perror("mmap");
                exit(1);
            }
        }
    }

    void start_capturing(int fd) {
        for (size_t i = 0; i < BUFFER_COUNT; ++i) {
            struct v4l2_buffer buf;
            memset(&buf, 0, sizeof(buf));
            buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            buf.memory = V4L2_MEMORY_MMAP;
            buf.index = i;

            if (ioctl(fd, VIDIOC_QBUF, &buf) < 0) {
                perror("Queue buffer");
                exit(1);
            }
        }

        enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        if (ioctl(fd, VIDIOC_STREAMON, &type) < 0) {
            perror("Start capture");
            exit(1);
        }
    }

    void stop_capturing(int fd) {
        enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

        if (ioctl(fd, VIDIOC_STREAMOFF, &type) < 0) {
            perror("Stop capture");
            exit(1);
        }

        printf("Video capture stopped.\n");
    }

    void unmap_buffers() {
        for (size_t i = 0; i < BUFFER_COUNT; ++i) {
            if (munmap(buffers[i].start, buffers[i].length) < 0) {
                perror("munmap");
                exit(1);
            }
        }

        free(buffers);
    }

    void initialize_ffmpeg(AVFormatContext **fmt_ctx, AVCodecContext **codec_ctx, AVStream **video_stream, const char *filename) {
        av_register_all();

        AVOutputFormat *fmt = av_guess_format(NULL, filename, NULL);
        if (!fmt) {
            fprintf(stderr, "Could not determine output format\n");
            exit(1);
        }

        if (avformat_alloc_output_context2(fmt_ctx, fmt, NULL, filename) < 0) {
            fprintf(stderr, "Could not allocate format context\n");
            exit(1);
        }

        AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
        if (!codec) {
            fprintf(stderr, "Codec not found\n");
            exit(1);
        }

        *video_stream = avformat_new_stream(*fmt_ctx, NULL);
        if (!*video_stream) {
            fprintf(stderr, "Could not create stream\n");
            exit(1);
        }

        *codec_ctx = avcodec_alloc_context3(codec);
        if (!*codec_ctx) {
            fprintf(stderr, "Could not allocate codec context\n");
            exit(1);
        }

        (*codec_ctx)->codec_type = AVMEDIA_TYPE_VIDEO;
        (*codec_ctx)->width = WIDTH;
        (*codec_ctx)->height = HEIGHT;
        (*codec_ctx)->time_base = (AVRational){1, FPS};
        (*codec_ctx)->framerate = (AVRational){FPS, 1};
        (*codec_ctx)->pix_fmt = AV_PIX_FMT_YUV420P;
        (*codec_ctx)->gop_size = 10;
        (*codec_ctx)->max_b_frames = 1;

        av_opt_set(*codec_ctx, "preset", "fast", 0);
        av_opt_set_int(*codec_ctx, "crf", 23, 0);

        (*video_stream)->time_base = (*codec_ctx)->time_base;
        (*video_stream)->codecpar->codec_id = fmt->video_codec;
        (*video_stream)->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
        (*video_stream)->codecpar->width = (*codec_ctx)->width;
        (*video_stream)->codecpar->height = (*codec_ctx)->height;
        (*video_stream)->codecpar->format = (*codec_ctx)->pix_fmt;

        if ((*fmt_ctx)->oformat->flags & AVFMT_GLOBALHEADER) {
            (*codec_ctx)->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
        }

        if (avcodec_open2(*codec_ctx, codec, NULL) < 0) {
            fprintf(stderr, "Could not open codec\n");
            exit(1);
        }

        if (avcodec_parameters_from_context((*video_stream)->codecpar, *codec_ctx) < 0) {
            fprintf(stderr, "Could not copy codec parameters\n");
            exit(1);
        }

        if (!(fmt->flags & AVFMT_NOFILE)) {
            if (avio_open(&(*fmt_ctx)->pb, filename, AVIO_FLAG_WRITE) < 0) {
                fprintf(stderr, "Could not open output file\n");
                exit(1);
            }
        }

        if (avformat_write_header(*fmt_ctx, NULL) < 0) {
            fprintf(stderr, "Could not write header\n");
            exit(1);
        }
    }

    void capture_and_encode(int fd, AVFormatContext *fmt_ctx, AVCodecContext *codec_ctx, struct SwsContext *sws_ctx, AVStream *video_stream, int duration) {
        struct v4l2_buffer buffer;
        AVFrame *frame = av_frame_alloc();
        AVPacket packet;
        av_init_packet(&packet);

        frame->format = codec_ctx->pix_fmt;
        frame->width = codec_ctx->width;
        frame->height = codec_ctx->height;
        av_image_alloc(frame->data, frame->linesize, codec_ctx->width, codec_ctx->height, codec_ctx->pix_fmt, 32);

        struct timespec start_time;
        clock_gettime(CLOCK_MONOTONIC, &start_time);
        double elapsed_time = 0;
        int64_t pts_counter = 0;
        int frame_count = 0;

        while (elapsed_time < duration) {
            fd_set fds;
            struct timeval tv;
            int r;

            FD_ZERO(&fds);
            FD_SET(fd, &fds);

            tv.tv_sec = 2;
            tv.tv_usec = 0;

            r = select(fd + 1, &fds, NULL, NULL, &tv);
            if (r == -1) {
                perror("select");
                exit(1);
            }

            if (r == 0) {
                fprintf(stderr, "select timeout\n");
                exit(1);
            }

            memset(&buffer, 0, sizeof(buffer));
            buffer.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            buffer.memory = V4L2_MEMORY_MMAP;

            if (ioctl(fd, VIDIOC_DQBUF, &buffer) < 0) {
                if (errno == EAGAIN) continue;
                perror("Could not dequeue buffer");
                exit(1);
            }

            uint8_t *src_slices[1] = {buffers[buffer.index].start};
            int src_stride[1] = {WIDTH * 2}; // UYVY is 2 bytes per pixel

            sws_scale(sws_ctx, src_slices, src_stride, 0, HEIGHT, frame->data, frame->linesize);

            frame->pts = pts_counter;
            pts_counter += av_rescale_q(1, (AVRational){1, FPS}, codec_ctx->time_base);

            if (avcodec_send_frame(codec_ctx, frame) < 0) {
                fprintf(stderr, "Error sending frame\n");
                exit(1);
            }

            while (avcodec_receive_packet(codec_ctx, &packet) == 0) {
                av_packet_rescale_ts(&packet, codec_ctx->time_base, video_stream->time_base);
                packet.stream_index = video_stream->index;

                if (av_interleaved_write_frame(fmt_ctx, &packet) < 0) {
                    fprintf(stderr, "Error writing frame\n");
                    exit(1);
                }

                av_packet_unref(&packet);
            }
            printf("Processed frame %d\n", frame_count);

            if (ioctl(fd, VIDIOC_QBUF, &buffer) < 0) {
                perror("Could not requeue buffer");
                exit(1);
            }
            frame_count++;
            struct timespec current_time;
            clock_gettime(CLOCK_MONOTONIC, &current_time);
            elapsed_time = (current_time.tv_sec - start_time.tv_sec) + (current_time.tv_nsec - start_time.tv_nsec) / 1e9;
            printf("Elapsed time: %f seconds\n", elapsed_time);
        }

        av_freep(&frame->data[0]);
        av_frame_free(&frame);
        printf("Total frames processed: %d\n", frame_count);
    }

    int main(int argc, char *argv[]) {
        if (argc != 2) {
            fprintf(stderr, "Usage: %s <output_file>\n", argv[0]);
            exit(1);
        }

        const char *output_file = argv[1];
        int fd;
        open_device(&fd, "/dev/video2");

        struct v4l2_format fmt;
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width = WIDTH;
        fmt.fmt.pix.height = HEIGHT;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_UYVY;
        fmt.fmt.pix.field = V4L2_FIELD_NONE;

        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) {
            perror("Setting pixel format");
            exit(1);
        }

        if (fmt.fmt.pix.pixelformat != V4L2_PIX_FMT_UYVY) {
            fprintf(stderr, "Device does not support UYVY format\n");
            exit(1);
        }

        init_mmap(fd);
        start_capturing(fd);

        AVFormatContext *fmt_ctx = NULL;
        AVCodecContext *codec_ctx = NULL;
        AVStream *video_stream = NULL;

        initialize_ffmpeg(&fmt_ctx, &codec_ctx, &video_stream, output_file);

        struct SwsContext *sws_ctx = sws_getContext(WIDTH, HEIGHT, AV_PIX_FMT_UYVY422,
                                                    WIDTH, HEIGHT, AV_PIX_FMT_YUV420P,
                                                    SWS_BICUBIC, NULL, NULL, NULL);

        if (!sws_ctx) {
            fprintf(stderr, "Could not initialize SwsContext\n");
            exit(1);
        }

        capture_and_encode(fd, fmt_ctx, codec_ctx, sws_ctx, video_stream, DURATION);

        sws_freeContext(sws_ctx);
        av_write_trailer(fmt_ctx);
        avcodec_free_context(&codec_ctx);
        avformat_free_context(fmt_ctx);
        stop_capturing(fd);
        unmap_buffers();
        close(fd);

        return 0;
    }
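
    One avenue worth checking (a sketch based on an assumption, not a confirmed diagnosis): the code above assigns PTS from a fixed per-frame increment, so if the device delivers fewer than FPS frames per second, the timeline is compressed and the file comes out shorter than the capture time. Deriving PTS from the V4L2 capture timestamp instead keeps the output duration equal to wall-clock capture time. The helper below is hypothetical, not part of the original post:

    #include <linux/videodev2.h>
    #include <libavutil/mathematics.h>
    #include <stdint.h>
    #include <sys/time.h>

    /* Hypothetical helper: convert the driver's capture timestamp into the
     * encoder's time base, relative to the first captured frame. */
    static int64_t pts_from_v4l2(const struct v4l2_buffer *buf,
                                 const struct timeval *first,
                                 AVRational time_base)
    {
        int64_t us = (int64_t)(buf->timestamp.tv_sec - first->tv_sec) * 1000000
                   + (buf->timestamp.tv_usec - first->tv_usec);
        return av_rescale_q(us, (AVRational){1, 1000000}, time_base);
    }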

    Thank you!
