Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • Convert video to HLS in iOS app without triggering GPL (FFmpegKit alternative?) [closed]

    7 July, by Aziz Bibitov

    I'm building an iOS app in Swift that needs to convert local video files to HLS format (.m3u8). Initially, I used the ffmpeg-kit-ios-full-gpl package from FFmpegKit, which works well. However, since this build includes GPL-licensed components (such as libx264), I'm concerned that using it would require my app to be released under the GPL, which is not compatible with App Store distribution.

    That said, my needs are fairly basic: I only need to convert H.264 .mp4 video files into HLS format.

    My Questions:

    1. Is there a safe way to use FFmpegKit, such as the full-library-lgpl variant, that guarantees no GPL components are used for this task?
    2. Are there any iOS-native or third-party tools that can reliably convert H.264 .mp4 video files to HLS on-device without introducing GPL concerns?
    3. Is using Apple’s AVAssetExportSession a viable alternative for exporting to HLS? I haven't found much official documentation about using it for HLS output.

    Any guidance on how to perform HLS conversion in an App Store–safe (non-GPL) way would be much appreciated.

  • What codec in OpenCV can do stronger compression than the default 'mp4v'? [closed]

    7 July, by ullix

    My project records video from a microscope camera, and one experiment can run for days or even weeks. This takes plenty of disk space, so I want as strong a compression as possible. Quality is not a major concern.

    Currently I use the default 'mp4v' codec with OpenCV in Python. It works.

    I tried every codec Google could find and, surprisingly, only very few worked. Those few that did work compressed even worse.

    Where is the limitation? Is it OpenCV (4.11.0.86)? Is it FFmpeg (7.1.1)? Is it the distribution (Ubuntu MATE 25.04)? Is it the CPU (AMD Ryzen AI 9 HX 370 w/ Radeon 890M × 24)?

    EDIT: I have uploaded a one-minute example clip with some explanations to YouTube (https://www.youtube.com/watch?v=wW8w2Pppnng). Frames are Full HD at a constant 10 fps.

    The stripped-down code is:

    import cv2

    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    # note: the fps argument here (19) should match the camera's actual rate
    out    = cv2.VideoWriter("myvideo.mp4", fourcc, 19, (1920, 1080))
    cam    = cv2.VideoCapture(0)
    while True:
        success, image = cam.read()
        if not success:
            break
        cv2.imshow("recording", image)
        key = cv2.waitKey(1)
        out.write(image)
        ...
    out.release()
    cam.release()
    

    What alternative codec can I use that gives better compression?
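    Whether a given FourCC works depends entirely on the local OpenCV/FFmpeg build, so one practical first step is to probe the candidates. The sketch below is an assumption-laden illustration, not a guaranteed fix: the candidate codes ('avc1' for H.264, 'hev1' for HEVC, 'XVID' for MPEG-4 ASP) may or may not be available on any particular system, and the pure-Python `fourcc_code` helper simply replicates the bit-packing that `cv2.VideoWriter_fourcc` performs.

```python
# Sketch: probe which FourCC codes the local OpenCV/FFmpeg build accepts.
# The candidate list is an assumption -- availability depends on the build.

def fourcc_code(c1, c2, c3, c4):
    """Pack four characters into an int, same layout as cv2.VideoWriter_fourcc."""
    return ord(c1) | (ord(c2) << 8) | (ord(c3) << 16) | (ord(c4) << 24)

candidates = ["avc1", "hev1", "XVID", "mp4v"]

for name in candidates:
    print(f"{name}: 0x{fourcc_code(*name):08x}")

try:
    import cv2
    for name in candidates:
        # An opened writer means the backend accepted the codec for .mp4 output.
        writer = cv2.VideoWriter("probe.mp4", fourcc_code(*name), 10, (1920, 1080))
        print(name, "->", "opened" if writer.isOpened() else "rejected")
        writer.release()
except ImportError:
    pass  # cv2 not installed here; the packed codes above still show what to try
```

    If none of the stronger codecs are accepted, another common workaround is to bypass `cv2.VideoWriter` entirely and pipe raw frames to an external ffmpeg process, which gives access to x264/x265 rate-control options such as CRF.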

  • FFmpeg WASM Compilation with Hardware-Accelerated H.264/HEVC Decoding

    7 July, by gt-devel

    I'm working on a media processing SDK with a substantial C++ codebase that currently uses FFmpeg for video decoding on native platforms (Windows/Linux). I need to port this to browsers while preserving both the existing C++ architecture and performance characteristics. The WASM approach is critical for us because it allows leveraging our existing optimized C++ media processing pipeline without a complete JavaScript rewrite, while maintaining the performance benefits of compiled native code.

    The Challenge: WebAssembly runs in a browser sandbox that typically doesn't allow direct GPU access, which conflicts with our hardware-accelerated video decoding requirements. Pure JavaScript solutions would require abandoning our mature C++ codebase and likely result in significant performance degradation.

    My Questions:

    1. Is it technically feasible to compile FFmpeg with hardware acceleration support (NVENC/VAAPI/VideoToolbox) for WASM targets? Additionally, can the underlying hardware acceleration dependencies (like CUDA runtime, Intel Media SDK, or platform-specific GPU drivers) be compiled as WASM modules, and would this approach serve the purpose of enabling hardware acceleration in the browser environment?

    2. Are there any emerging browser APIs or proposals (like WebGPU, WebCodecs API) that could provide a pathway for hardware-accelerated video decoding in WASM modules while preserving our C++ architecture?

    3. Has anyone successfully implemented hardware-accelerated video decoding in a browser environment using WASM, or are there alternative approaches that would allow us to maintain our existing C++ codebase and performance requirements?

    Context:

    • Extensive C++ media processing pipeline with FFmpeg 7.1.0
    • Target streams: H.264 and HEVC
    • Performance requirements make software-only decoding insufficient
    • Rewriting the entire codebase in JavaScript is not feasible due to complexity and performance constraints

    Any insights, experiences, or alternative architectural suggestions that preserve our C++ investment would be greatly appreciated!

    I attempted to compile FFmpeg for WebAssembly using Emscripten with hardware acceleration enabled. My approach was to call the FFmpeg configure script from a Linux bash environment using:

    emconfigure ./configure \
      --target-os=none \
      --arch=x86_32 \
      --enable-cross-compile \
      --disable-debug \
      --disable-x86asm \
      --disable-inline-asm \
      --disable-stripping \
      --disable-programs \
      --disable-doc \
      --disable-all \
      --enable-avcodec \
      --enable-gray \
      --enable-avformat \
      --enable-avfilter \
      --enable-avdevice \
      --enable-avutil \
      --enable-swresample \
      --enable-swscale \
      --enable-filters \
      --enable-protocol=file \
      --enable-decoder=h264 \
      --enable-vaapi \
      --enable-hwaccel=h264_vaapi \
      --enable-gpl \
      --enable-pthreads \
      --extra-cflags="-O3 -I$ffmpegIncludesDir -I$libvaIncludesDir" \
      --extra-cxxflags="-O3 -I$ffmpegIncludesDir -I$libvaIncludesDir" \
      --extra-ldflags="--initial-memory=33554432 --no-entry --relocatable -L$ffmpegLibrariesDir -L$libvaLibrariesDir" \
      --nm="emnm -g" \
      --ar=emar \
      --as="$EMSDK/upstream/bin/wasm-as" \
      --ranlib=emranlib \
      --cc=emcc \
      --cxx=em++ \
      --objcc=emcc \
      --dep-cc=emcc
    

    Since I specified x86_32 architecture, I provided the i386 version of the libva (VAAPI) library to match the target architecture. However, the configuration failed with the error "unknown file type: /usr/lib/i386-linux-gnu/libva.so" in the resulting config.log file.

    This error suggests that Emscripten's toolchain cannot process native Linux shared libraries (.so files), which makes sense since these are compiled for native execution rather than WebAssembly. The configuration specifically targets VAAPI hardware acceleration for H.264 decoding, but this approach appears fundamentally flawed since VAAPI requires direct hardware access that isn't available in the browser sandbox, and the native libraries cannot be linked into a WASM module.

    This experience has led me to question whether the hardware acceleration dependencies can be meaningfully compiled for WASM, or if alternative approaches are needed.

  • Is there a way to force FFmpeg to decode a video stream with alpha from a WebM video encoded with libvpx-vp9?

    6 July, by David

    I have a WebM file with one video stream that was encoded with VP9 (libvpx-vp9).

    I wrote a C++ program to extract the frames from the video stream and save them as PNGs. This works fine except that the resulting PNGs are missing alpha.

    If I extract the frames from the same WebM file using FFmpeg, the resulting PNGs do contain alpha. Here is the output from FFmpeg:

    $ ffmpeg -c:v libvpx-vp9 -i temp/anim.webm temp/output-%3d.png
    
    [libvpx-vp9 @ 0000024732b106c0] v1.10.0-rc1-11-gcb0d8ce31
        Last message repeated 1 times
    Input #0, matroska,webm, from 'temp/anim.webm':
      Metadata:
        ENCODER         : Lavf58.45.100
      Duration: 00:00:04.04, start: 0.000000, bitrate: 112 kb/s
      Stream #0:0: Video: vp9 (Profile 0), yuva420p(tv), 640x480, SAR 1:1 DAR 4:3, 25 fps, 25 tbr, 1k tbn, 1k tbc (default)
        Metadata:
          alpha_mode      : 1
          ENCODER         : Lavc58.91.100 libvpx-vp9
          DURATION        : 00:00:04.040000000
    

    FFmpeg identifies the stream format as yuva420p.

    Here is the output from my program when av_dump_format is called:

    Input #0, matroska,webm, from 'temp/anim.webm':
      Metadata:
        ENCODER         : Lavf58.45.100
      Duration: 00:00:04.04, start: 0.000000, bitrate: 112 kb/s
      Stream #0:0: Video: vp9 (Profile 0), yuv420p(tv), 640x480, SAR 1:1 DAR 4:3, 25 fps, 25 tbr, 1k tbn, 1k tbc (default)
        Metadata:
          alpha_mode      : 1
          ENCODER         : Lavc58.91.100 libvpx-vp9
          DURATION        : 00:00:04.040000000
    

    Notice that the detected stream format is yuv420p (the alpha is missing).

    Does anybody know how to force the stream format to use alpha?

    My setup code resembles the following (error handling is omitted):

    auto result = avformat_open_input(&formatContext, fileName.c_str(), nullptr, nullptr);
    result = avformat_find_stream_info(formatContext, nullptr);
    streamIndex = av_find_best_stream(formatContext, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
    auto stream = formatContext->streams[streamIndex];
    const auto codecIdentifier{ AV_CODEC_ID_VP9 };
    auto decoder = avcodec_find_decoder(codecIdentifier);
    pCodecContext = avcodec_alloc_context3(decoder);
    result = avcodec_open2(pCodecContext, decoder, nullptr);
    // AV_PIX_FMT_YUV420P - missing alpha
    auto pixelFormat = pCodecContext->pix_fmt;
    

    Gyan pointed out what the problem was. In case anybody else runs into this issue in the future, here is the corrected code (error handling omitted):

    auto formatContext = avformat_alloc_context();
    formatContext->video_codec_id = AV_CODEC_ID_VP9;
    const auto decoder = avcodec_find_decoder_by_name("libvpx-vp9");
    formatContext->video_codec = decoder;
    avformat_open_input(&formatContext, fileName.c_str(), nullptr, nullptr);
    avformat_find_stream_info(formatContext, nullptr);
    for (unsigned int streamIndex = 0; streamIndex < formatContext->nb_streams; ++streamIndex) {
        // Displays the stream format as yuva420p (contains alpha)
        av_dump_format(formatContext, static_cast<int>(streamIndex), fileName.c_str(), 0);
    }
    

    Thanks,

  • Randomly extract video frames from multiple files [closed]

    5 July, by PatraoPedro

    I have a folder with hundreds of video files (*.avi), each roughly an hour long. What I would like to achieve is a piece of code that goes through each of those videos, randomly selects two or three frames from each file, and then stitches them back together, or alternatively saves the frames in a folder as JPEGs.

    Initially I thought I could do this using R, but I quickly realised that I would need something else, possibly working together with R.

    Is it possible to call FFmpeg from R to do the task above?
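    One way to approach this is to drive ffmpeg from a small script and call that from R. The sketch below is a hypothetical illustration, not a tested pipeline: it assumes ffmpeg and ffprobe are on the PATH, and the folder layout, output directory name, and frames-per-file count are all made up for the example. It probes each file's duration with ffprobe, picks random timestamps, and saves one JPEG per timestamp.

```python
# Sketch: save a few random frames from each .avi as JPEGs via ffmpeg.
# Assumes ffmpeg/ffprobe are installed and on PATH when actually run.
import random
import subprocess
from pathlib import Path

def probe_duration(video):
    """Return the clip duration in seconds, as reported by ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", str(video)],
        capture_output=True, text=True, check=True).stdout
    return float(out)

def extract_cmd(video, seconds, dest):
    """Build the ffmpeg command that grabs a single frame at a timestamp."""
    return ["ffmpeg", "-ss", f"{seconds:.3f}", "-i", str(video),
            "-frames:v", "1", "-q:v", "2", str(dest)]

def sample_frames(folder, frames_per_file=3, out_dir="frames"):
    """Extract random frames from every .avi in `folder` into `out_dir`."""
    Path(out_dir).mkdir(exist_ok=True)
    for video in sorted(Path(folder).glob("*.avi")):
        duration = probe_duration(video)
        for i in range(frames_per_file):
            t = random.uniform(0.0, duration)
            dest = Path(out_dir) / f"{video.stem}_{i}.jpg"
            subprocess.run(extract_cmd(video, t, dest), check=True)
```

    From R, a script like this could be launched with system2("python3", "sample_frames.py"), or the same ffmpeg command line could be built and run directly with system2("ffmpeg", ...), so a separate Python layer is optional.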