Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • Including FFmpeg.framework Into My iOS App

    28 March, by Alpi

    I'm trying to manually integrate ffmpegkit.framework into my Expo Bare Workflow iOS app (built with React Native + native modules via Xcode), because ffmpeg-kit is being deprecated and its prebuilt binaries will be removed.

    So far

    • I've downloaded the latest LTS release of FFmpegKit from here.
    • I've created 3 files: FFmpegModule.m, FFmpegModule.swift and SoundBud-Bridging-Header.
    • Added the frameworks to projectDir/ios manually; they show up in Xcode under projectDir/Frameworks.
    • Added all the frameworks to "Frameworks, Libraries and Embedded Content" and set them to "Embed and Sign".
    • Set the Framework Search Path in the project settings to "$(PROJECT_DIR)", recursive.
    • In "Build Phases" I've added all the frameworks under "Embed Frameworks", set the destination to "Frameworks", checked "Code Sign on Copy" for all of them and unchecked "Copy Only When Installing".
    • Under "Link Binary With Libraries" I've also added all the frameworks and marked them "Required".

    Here are the errors I'm getting:

    • The framework is not recognized by Swift (No such module 'ffmpegkit')
    • A build cycle error: Cycle inside SoundBud; building could produce unreliable results. Target 'SoundBud' has copy command from '.../Frameworks/ffmpegkit.framework' ...
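
    For what it's worth, my understanding is that for import ffmpegkit to succeed, Xcode has to find a module map inside the framework bundle itself; the layout I would expect (an assumption on my part, not copied from the actual download) is something like:

    ffmpegkit.framework/
        Headers/
            FFmpegKit.h
            ...
        Modules/
            module.modulemap
        Info.plist
        ffmpegkit        (the binary)

    So if the module map shown further below lives outside the framework, the framework search path alone won't make Swift see it.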

    Below you can see my Swift file and the ffmpegkit module map file. Swift:

    import Foundation
    import ffmpegkit
    import React

    @objc(FFmpegModule)
    class FFmpegModule: NSObject, RCTBridgeModule {

        static func moduleName() -> String {
            return "FFmpegModule"
        }

        @objc
        func runCommand(_ command: String,
                        resolver resolve: @escaping RCTPromiseResolveBlock,
                        rejecter reject: @escaping RCTPromiseRejectBlock) {
            FFmpegKit.executeAsync(command) { session in
                let returnCode = session?.getReturnCode()
                resolve(returnCode?.getValue())
            }
        }

        @objc
        static func requiresMainQueueSetup() -> Bool {
            return false
        }
    }
    

    and the module map:

    framework module ffmpegkit {
    
    header "AbstractSession.h"
    header "ArchDetect.h"
    header "AtomicLong.h"
    header "Chapter.h"
    header "FFmpegKit.h"
    header "FFmpegKitConfig.h"
    header "FFmpegSession.h"
    header "FFmpegSessionCompleteCallback.h"
    header "FFprobeKit.h"
    header "FFprobeSession.h"
    header "FFprobeSessionCompleteCallback.h"
    header "Level.h"
    header "Log.h"
    header "LogCallback.h"
    header "LogRedirectionStrategy.h"
    header "MediaInformation.h"
    header "MediaInformationJsonParser.h"
    header "MediaInformationSession.h"
    header "MediaInformationSessionCompleteCallback.h"
    header "Packages.h"
    header "ReturnCode.h"
    header "Session.h"
    header "SessionState.h"
    header "Statistics.h"
    header "StatisticsCallback.h"
    header "StreamInformation.h"
    header "ffmpegkit_exception.h"
    
    export *
    }
    

    I can provide more info if you need it. I've been trying non-stop for 7 days and it's driving me crazy. I would greatly appreciate any help.

  • How to add info to a video and share it with others in React Native

    28 March, by Sanjay Kalal

    I am looking for a solution in which I can pick a video from the gallery, add info or an overlay to it, and then share the video with the added info in React Native. I tried using ffmpeg but it is not working properly.

    I need a proper solution in which I can pick the video, add info to it and share it. I tried ffmpeg but it is not working.
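
    To illustrate what I mean by "adding info": a text overlay of the kind below is what I'm after (just a sketch; the file names and text are placeholders, and drawtext needs an FFmpeg build with freetype enabled, otherwise a fontfile= option has to be added):

    ffmpeg -i input.mp4 -vf "drawtext=text='Some info':x=10:y=10:fontsize=24:fontcolor=white" -c:a copy output.mp4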

  • Incorrect length of video produced with ffmpeg libraries

    28 March, by ivan.ukr

    I'm writing a C program that takes a series of PNG images and converts them into a video. The video consists of an initial black screen and then each of those images, shown for the same constant amount of time, say 200 ms. I'm using libx264 as the codec and mp4 as the output format. I'm compiling the program with GCC 12 on Ubuntu 22.04 LTS and using the ffmpeg version from the Ubuntu repositories. To achieve the behavior above, I've set the time base to 1/5 in both the stream and the codec.

    // assume imageDuration = 200
    AVRational timeBase;
    av_reduce(&timeBase.num, &timeBase.den, imageDuration, 1000, INT_MAX);
    
    const AVCodec *c = avcodec_find_encoder(codecId);
    AVStream *s = avformat_new_stream(fc, c);
    s->time_base = timeBase;
    s->nb_frames = numImages + 1; // initial black screen + images
    s->duration = numImages + 1;
    
    AVCodecContext *cc = avcodec_alloc_context3(c);
    cc->width = width;
    cc->height = height;
    cc->pix_fmt = pixelFormat;
    cc->time_base = timeBase;
    
    // ffmpeg headers suggest: Set to time_base ticks per frame.
    // Default 1, e.g., H.264/MPEG-2 set it to 2.
    cc->ticks_per_frame = 2;
    
    cc->framerate = av_inv_q(timeBase);
    if (fc->oformat->flags & AVFMT_GLOBALHEADER) {
        cc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }
    
    

    Then I'm encoding 11 frames.
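
    For reference, this is how I expect the timestamps to work (a sketch, not my exact code; frame and pkt stand for the AVFrame and AVPacket allocated elsewhere): with time_base = 1/5 one tick lasts 200 ms, so frame i should get pts = i, and every packet should be rescaled into whatever time base the muxer actually assigns to the stream.

    // Sketch: one 200 ms tick per image in the 1/5 time base (final encoder flush omitted).
    for (int i = 0; i < numImages + 1; i++) {
        frame->pts = i;                                   // pts in codec time_base units
        avcodec_send_frame(cc, frame);
        while (avcodec_receive_packet(cc, pkt) == 0) {
            // The mp4 muxer may rewrite s->time_base, so rescale before writing.
            av_packet_rescale_ts(pkt, cc->time_base, s->time_base);
            pkt->stream_index = s->index;
            av_interleaved_write_frame(fc, pkt);
        }
    }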

    Finally, I'm getting video with the following characteristics:

    $ ffprobe v.mp4
    
    ....
    
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'v.mp4':
      Metadata:
        major_brand     : isom
        minor_version   : 512
        compatible_brands: isomiso2avc1mp41
        encoder         : Lavf58.76.100
      Duration: 00:00:00.01, start: 0.000000, bitrate: 68414 kb/s
      Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 640x360, 69073 kb/s, 10333.94 fps, 10240 tbr, 10240 tbn, 10 tbc (default)
        Metadata:
          handler_name    : VideoHandler
          vendor_id       : [0][0][0][0]
    
    

    Please pay attention to:

    Duration: 00:00:00.01
    

    and

    10333.94 fps
    

    That's totally NOT what I expected (a 2.2 s video at 5 fps).

    Note: the content of the video is correct; this can be verified by stepping through the generated file frame by frame in a program like Avidemux. But the video length and frame rate are incorrect.

    Please advise how to fix this.

  • How to write an H264 raw stream into mp4 using ffmpeg directly

    28 March, by Yelsin

    I want to wrap H264 NALUs (x264 encoded) into MP4 using ffmpeg (SDK 2.1), but the output MP4 file won't play. I don't know how to set the pts and dts. Here's my code, based on "Raw H264 frames in mpegts container using libavcodec" and muxing.c from www.ffmpeg.org. My H264 stream has no B-frames, every NALU starts with 00 00 00 01, and the stream begins with SPS/PPS followed by the H264 data.

    #include "stdafx.h"
    #include 
    #include 
    #include "Stream2Mp4.h"
    
    #include opt.h>
    #include mathematics.h>
    #include timestamp.h>
    #include avformat.h>
    #include swresample.h>
    #include swresample.h>
    
    #define STREAM_FRAME_RATE 25
    #define STREAM_PIX_FMT    AV_PIX_FMT_YUV420P /* default pix_fmt */
    
    static int ptsInc = 0;
    static int vi = -1;
    static int waitkey = 1;
    
    // < 0 = error
    // 0 = I-Frame
    // 1 = P-Frame
    // 2 = B-Frame
    // 3 = S-Frame
    int getVopType( const void *p, int len )
    {
        if ( !p || 6 >= len )
            return -1;

        unsigned char *b = (unsigned char*)p;

        // Verify NAL marker
        if ( b[0] || b[1] || 0x01 != b[2] )
        {
            b++;
            if ( b[0] || b[1] || 0x01 != b[2] )
                return -1;
        }

        b += 3;

        // Verify VOP id
        if ( 0xb6 == *b )
        {
            b++;
            return ( *b & 0xc0 ) >> 6;
        }

        switch ( *b )
        {
        case 0x65: return 0;
        case 0x61: return 1;
        case 0x01: return 2;
        }

        return -1;
    }

    int get_nal_type( void *p, int len )
    {
        if ( !p || 5 >= len )
            return -1;

        unsigned char *b = (unsigned char*)p;

        // Verify NAL marker
        if ( b[0] || b[1] || 0x01 != b[2] )
        {
            b++;
            if ( b[0] || b[1] || 0x01 != b[2] )
                return -1;
        }

        b += 3;

        return *b;
    }
    
    
    /* Add an output stream */
    AVStream *add_stream(AVFormatContext *oc, AVCodec **codec, enum AVCodecID codec_id)
    {
    AVCodecContext *c;
    AVStream *st;
    
    /* find the encoder */
    *codec = avcodec_find_encoder(codec_id);
    if (!*codec)
    {
        printf("could not find encoder for '%s' \n", avcodec_get_name(codec_id));
        exit(1);
    }
    
    st = avformat_new_stream(oc, *codec);
    if (!st)
    {
        printf("could not allocate stream \n");
        exit(1);
    }
    st->id = oc->nb_streams-1;
    c = st->codec;
    vi = st->index;
    
    switch ((*codec)->type)
    {
    case AVMEDIA_TYPE_AUDIO:
        c->sample_fmt = (*codec)->sample_fmts ? (*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
        c->bit_rate = 64000;
        c->sample_rate = 44100;
        c->channels = 2;
        break;
    
    case AVMEDIA_TYPE_VIDEO:
        c->codec_id = codec_id;
        c->bit_rate = 90000;
        c->width = 480;
        c->height = 354;
        c->time_base.den = 15;
        c->time_base.num = 1;
        c->gop_size = 12;
        c->pix_fmt = STREAM_PIX_FMT;
        if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO)
        {
            c->max_b_frames = 2;
        }
        if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO)
        {
            c->mb_decision = 2;
        }
        break;
    
    default:
        break;
    }
    
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
    {
        c->flags |= CODEC_FLAG_GLOBAL_HEADER;
    }
    
    return st;
    }
    
    
    
    void open_video(AVFormatContext *oc, AVCodec *codec, AVStream *st)
    {
    int ret;
    AVCodecContext *c = st->codec;
    
    /* open the codec */
    ret = avcodec_open2(c, codec, NULL);
    if (ret < 0)
    {
        printf("could not open video codec");
        //exit(1);
    }
    
    }
    
    int CreateMp4(AVFormatContext *&m_pOc, void *p, int len)
    {
    int ret; 
    const char* pszFileName = "output002.mp4";
    AVOutputFormat *fmt;
    AVCodec *video_codec;
    AVStream *m_pVideoSt;
    
    if (0x67 != get_nal_type(p, len))
    {
        printf("can not detect nal type");
        return -1;
    }
    av_register_all();
    
    avformat_alloc_output_context2(&m_pOc, NULL, NULL, pszFileName);
    if (!m_pOc)
    {
        printf("Could not deduce output format from file extension: using MPEG. \n");
        avformat_alloc_output_context2(&m_pOc, NULL, "mpeg", pszFileName);
    }
    if (!m_pOc)
    {
        return 1;
    }
    
    fmt = m_pOc->oformat;
    
    if (fmt->video_codec != AV_CODEC_ID_NONE)
    {
        m_pVideoSt = add_stream(m_pOc, &video_codec, fmt->video_codec);
    }
    
    if (m_pVideoSt)
    {
        open_video(m_pOc, video_codec, m_pVideoSt); 
    }
    
    av_dump_format(m_pOc, 0, pszFileName, 1);
    
    /* open the output file, if needed */
    if (!(fmt->flags & AVFMT_NOFILE))
    {
        ret = avio_open(&m_pOc->pb, pszFileName, AVIO_FLAG_WRITE);
        if (ret < 0)
        {
            printf("could not open '%s': %s\n", pszFileName);
            return 1;
        }
    }
    
    /* Write the stream header, if any */
    ret = avformat_write_header(m_pOc, NULL);
    if (ret < 0)
    {
        printf("Error occurred when opening output file");
        return 1;
    }
    return 0;
    }
    
    
    /* write h264 data to mp4 file*/
    
    
    void WriteVideo(AVFormatContext *&m_pOc,void* data, int nLen)
    {
    int ret;
    
    if ( 0 > vi )
    {
        printf("vi less than 0");
        //return -1;
    }
    AVStream *pst = m_pOc->streams[ vi ];
    
    // Init packet
    AVPacket pkt;
    
    AVCodecContext *c = pst->codec;
    
    av_init_packet( &pkt );
    pkt.flags |= ( 0 >= getVopType( data, nLen ) ) ? AV_PKT_FLAG_KEY : 0;   
    
    pkt.stream_index = pst->index;
    pkt.data = (uint8_t*)data;
    pkt.size = nLen;
    
    // Wait for key frame
    if ( waitkey )
        if ( 0 == ( pkt.flags & AV_PKT_FLAG_KEY ) )
            return ;
        else
            waitkey = 0;
    
    
    pkt.pts = (ptsInc++) * (90000/STREAM_FRAME_RATE);
    //pkt.dts = (ptsInc++) * (90000/STREAM_FRAME_RATE);
    
    ret = av_interleaved_write_frame( m_pOc, &pkt );
    if (ret < 0)
    {
        printf("cannot write frame");
    }
    
    
    }
    
    void CloseMp4(AVFormatContext *&m_pOc)
    {
    waitkey = -1;
    vi = -1;
    
    if (m_pOc)
        av_write_trailer(m_pOc);
    
    if (m_pOc && !(m_pOc->oformat->flags & AVFMT_NOFILE))
        avio_close(m_pOc->pb);
    
    if (m_pOc)
    {
        avformat_free_context(m_pOc);
        m_pOc = NULL;
    }
    
    }
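
    For the timestamp part, my understanding (a sketch under that assumption, not working code) is that pts/dts have to be expressed in the stream's time base as it is after avformat_write_header (the mp4 muxer may change it), rather than in a fixed 90 kHz clock, and that with no B-frames dts can simply equal pts:

    // Sketch: rescale the frame index (at STREAM_FRAME_RATE fps) into the stream time base.
    AVRational frame_rate = { STREAM_FRAME_RATE, 1 };
    pkt.pts      = av_rescale_q(ptsInc++, av_inv_q(frame_rate), pst->time_base);
    pkt.dts      = pkt.pts;        // no B-frames, so decode order == presentation order
    pkt.duration = av_rescale_q(1, av_inv_q(frame_rate), pst->time_base);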
    

    Could anybody help me? Thank you very much!

  • Empty list of audio backends from torchaudio.list_audio_backends(), with ffmpeg installed as a system library

    28 March, by Alberto Agudo Dominguez

    I installed torchaudio 2.5.1 and a system build of ffmpeg on Windows, and I get:

    PS C:\Users\> ffmpeg -version
    ffmpeg version 2025-01-05-git-19c95ecbff-essentials_build-www.gyan.dev Copyright (c) 2000-2025 the FFmpeg developers
    built with gcc 14.2.0 (Rev1, Built by MSYS2 project)
    configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-zlib --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-sdl2 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-dxva2 --enable-d3d11va --enable-d3d12va --enable-ffnvcodec --enable-libvpl --enable-nvdec --enable-nvenc --enable-vaapi --enable-libgme --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libtheora --enable-libvo-amrwbenc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-librubberband
    libavutil      59. 54.101 / 59. 54.101
    libavcodec     61. 31.100 / 61. 31.100
    libavfilter    10.  6.101 / 10.  6.101
    libswresample   5.  4.100 /  5.  4.100
    libpostproc    58.  4.100 / 58.  4.100
    
    PS C:\Users\> python
    Python 3.12.5 (tags/v3.12.5:ff3bc82, Aug  6 2024, 20:45:27) [MSC v.1940 64 bit (AMD64)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import torchaudio
    >>> torchaudio.list_audio_backends()
    []
    

    So ffmpeg is on the PATH and is recognized by the console, but not by torchaudio.
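
    For reference, one way to check whether torchaudio's own FFmpeg extension can load at all (this assumes the torchaudio.utils.ffmpeg_utils helper is available in this build, which I haven't verified):

    >>> from torchaudio.utils import ffmpeg_utils
    >>> ffmpeg_utils.get_versions()  # should list the libav* versions torchaudio managed to load; the import fails if no compatible FFmpeg DLLs were found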