Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • ffmpeg does not make a video from my images

May 10, by QuantumFool

I've currently got some images that I'd like to display in the form of a movie. So, I poked around a bit and found ffmpeg. This is the tutorial I have been going with:

    http://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/

    Since I don't care about reading, I skipped right to the writing section. As far as I can tell, this is what my program should say:

    import subprocess as sp
    FFMPEG_BIN = "ffmpeg" #I'm on Ubuntu
    command = [ FFMPEG_BIN,
                '-y',
                '-f', 'rawvideo',
                '-vcodec', 'rawvideo',
                '-s', '1000x1000',
                '-pix_fmt', 'rgb24',
                '-r', '24',
                '-i', '-',
                '-an',
                '-vcodec', 'mpeg',
                'my_output_videofile.mp4' ]
    
    
    pipe = sp.Popen( command, stdin = sp.PIPE, stderr = sp.PIPE)
    

However, when I run this in Spyder, I get the error message:

    Traceback (most recent call last):
      File "", line 1, in 
      File "/usr/lib/python2.7/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 540, in runfile
        execfile(filename, namespace)
      File "/home/xander/Downloads/python-meep/makeVideo.py", line 15, in 
        pipe = sp.Popen( command, stdin = sp.PIPE, stderr = sp.PIPE )
      File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
        errread, errwrite)
      File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
        raise child_exception
      OSError: [Errno 2] No such file or directory
    

Why is this happening? I'm suspicious because I never mention the names of my pictures ("Image0.jpeg", "Image1.jpeg", ..., "Image499.jpeg", "Image500.jpeg") anywhere. Any help will be greatly appreciated!

    P.S. The guy in the tutorial also says that some codecs require a bitrate; I tried that and it didn't work either.
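
    For reference, Popen raising OSError: [Errno 2] at this point usually means the ffmpeg executable itself could not be found on the PATH, before any image data is involved; the image names only matter once frames are written to stdin. A minimal sketch of the writing step (assuming numpy is available, ffmpeg is installed, and using mpeg4 as a hypothetical encoder choice, since 'mpeg' is not a recognized encoder name) might look like this:

    import subprocess as sp
    import numpy as np

    FFMPEG_BIN = "ffmpeg"  # must resolve to an installed binary on the PATH

    command = [FFMPEG_BIN,
               '-y',                 # overwrite the output file if it exists
               '-f', 'rawvideo',     # stdin carries raw, headerless frames
               '-s', '1000x1000',    # must match the frames written below
               '-pix_fmt', 'rgb24',  # 3 bytes per pixel
               '-r', '24',
               '-i', '-',            # read input from stdin
               '-an',                # no audio
               '-vcodec', 'mpeg4',   # hypothetical encoder choice
               'my_output_videofile.mp4']

    # stderr is left attached so ffmpeg's diagnostics stay visible.
    pipe = sp.Popen(command, stdin=sp.PIPE)

    # One second of black frames; real code would load each image instead.
    frame = np.zeros((1000, 1000, 3), dtype=np.uint8)
    for _ in range(24):
        pipe.stdin.write(frame.tobytes())

    pipe.stdin.close()
    pipe.wait()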

  • Multiple mdhd? Error while processing video

May 10, by Kaunain

I am facing a weird error in FFmpeg when performing any operation on a particular video. I did a lot of R&D but did not find any solution. Could anyone please look into it? It would be very kind.

Here is the command I am using to mute the video, which ends with the following error.

    fun videoMuteCmd(
        filePath: Uri,
        outputPath: String,
    ): Array<String> {
        return arrayOf(
            "-y", "-i", filePath.toString(),
            "-c", "copy", "-an", outputPath
        )
    }
    

    ERROR

    : LogMessage{executionId=3010, level=AV_LOG_ERROR, text='[mov,mp4,m4a,3gp,3g2,mj2 @ 0xb400007c1243e000] Multiple mdhd? '}

    : LogMessage{executionId=3010, level=AV_LOG_ERROR, text='[mov,mp4,m4a,3gp,3g2,mj2 @ 0xb400007c1243e000] error reading header '}

    LogMessage{executionId=3010, level=AV_LOG_ERROR, text='/storage/emulated/0/Android/media/com.whatsapp/WhatsApp/Media/WhatsApp Video/VID-20240510-WA0037.mp4: Invalid data found when processing input '}

Please look into this error and suggest a possible fix.
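
    For context, mdhd is the per-track media header box in an MP4 container; FFmpeg's mov demuxer rejects any track that carries more than one, which is why it gives up with "error reading header" before the stream-copy mute ever runs. One way to confirm the container itself is malformed is to probe a local copy of the file; a minimal sketch (desktop Python, assuming ffprobe is installed; the file name is taken from the log) follows:

    import subprocess

    # Probe the failing file; a malformed header should reproduce the
    # "Multiple mdhd?" error outside the app. The path is a local copy.
    path = "VID-20240510-WA0037.mp4"
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-show_format", "-show_streams", path],
        capture_output=True, text=True)
    print(result.stdout)
    print(result.stderr)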

  • Live AAC and H264 data into live stream

May 10, by tzuleger

I have a remote camera that captures H264 encoded video data and AAC encoded audio data, and places the data into a custom ring buffer, which is then sent to a Node.js socket server, where each packet is detected as audio or video and handled accordingly. That data should become a live stream; the protocol doesn't matter, but the delay has to be around ~4 seconds, and it must be playable on iOS and Android devices.

    After reading hundreds of pages of documentation, questions, or solutions on the internet, I can't seem to find anything about handling two separate streams of AAC and H264 data to create a live stream.

    Despite attempting many different ways of achieving this goal, even having a working implementation of HLS, I want to revisit ALL options of live streaming, and I am hoping someone out there can give me advice or guidance to specific documentation on how to achieve this goal.

    To be specific, this is our goal:

    • Stream AAC and H264 data from remote cellular camera to a server which will do some work on that data to live stream to one user (possibly more users in the future) on a mobile iOS or Android device
    • Delay of the live stream should be a maximum of ~4 seconds; if the user has a bad signal, then a longer delay is okay, as we obviously cannot do anything about that.
    • We should not have to re-encode our data. We've explored WebRTC, but that requires Opus audio packets and thus requires us to re-encode the data, which would be expensive for our server to run (see the stream-copy sketch after this list).
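
    On the no-re-encode point, FFmpeg can segment already-encoded H.264 and AAC into HLS with stream copy, so only the container changes; a minimal sketch (assuming a single interleaved feed such as MPEG-TS arriving on stdin; all names are illustrative) is:

    import subprocess

    # Segment an already-muxed H.264/AAC feed into HLS without re-encoding.
    # "-c copy" keeps both elementary streams as-is; only the container changes.
    command = [
        "ffmpeg",
        "-i", "pipe:0",                   # hypothetical interleaved MPEG-TS input on stdin
        "-c", "copy",                     # no re-encoding of video or audio
        "-f", "hls",
        "-hls_time", "2",                 # target segment duration, seconds
        "-hls_list_size", "6",            # short playlist window for live playback
        "-hls_flags", "delete_segments",  # drop segments that leave the window
        "stream.m3u8",
    ]
    proc = subprocess.Popen(command, stdin=subprocess.PIPE)
    # The muxed feed would be written to proc.stdin as it arrives.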

    Any and all help, ranging from re-visiting an old approach we took to exploring new ones, is appreciated.

    I can provide code snippets as well for our current implementation of LLHLS if it helps, but I figured this post is already long enough.

I've tried FFmpeg with named pipes; I expected it to just work, but FFmpeg kept blocking on the first named pipe input. I thought of just writing the data out to two files and then using FFmpeg, but it's continuous data and I don't have enough knowledge of FFmpeg to see how that kind of implementation could produce one live stream.
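
    For what it's worth, FFmpeg opens its inputs sequentially, and opening a POSIX FIFO blocks until the other end is opened too, so two named pipes only work when both are fed concurrently; a minimal sketch of that pattern (paths and byte sources are illustrative) is:

    import os
    import subprocess
    import threading

    VIDEO_FIFO, AUDIO_FIFO = "/tmp/video.h264", "/tmp/audio.aac"
    for path in (VIDEO_FIFO, AUDIO_FIFO):
        if not os.path.exists(path):
            os.mkfifo(path)

    def feed(path, chunks):
        # Opening a FIFO for writing blocks until ffmpeg opens it for
        # reading, so each pipe needs its own writer thread.
        with open(path, "wb") as f:
            for chunk in chunks:
                f.write(chunk)

    video_chunks = [b""]  # placeholder for H.264 Annex B data
    audio_chunks = [b""]  # placeholder for ADTS AAC data

    proc = subprocess.Popen([
        "ffmpeg",
        "-f", "h264", "-i", VIDEO_FIFO,  # raw H.264 elementary stream
        "-f", "aac", "-i", AUDIO_FIFO,   # raw ADTS AAC stream
        "-c", "copy", "-f", "mpegts", "out.ts",
    ])

    threading.Thread(target=feed, args=(VIDEO_FIFO, video_chunks)).start()
    threading.Thread(target=feed, args=(AUDIO_FIFO, audio_chunks)).start()
    proc.wait()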

I've tried implementing our own RTSP server on the camera using GStreamer (our camera had its RTSP server stripped out; wasn't my call), but the camera's flash storage cannot handle having GStreamer on it, so that wasn't an option.

My latest attempt was using a derivation of hls-parser to create an HLS manifest and mux.js to create MP4 containers for .m4s fragmented mp4 files and do an HLS live stream. This was my most successful attempt, where we successfully had a live stream going, but the delay was up to 16 seconds, as one would expect with HLS live streaming. We could drop the target duration down to 2 seconds and get about 6-8 seconds of delay, but this could be unreliable, as these cameras could have poor signal, making it relatively expensive to send so many IDR frames on such low bandwidth.

With the delay being the only factor left, I attempted to upgrade the implementation to support Apple's Low Latency HLS. It seems to work, as the right partial segments are getting requested and everything that makes up LLHLS is working as intended, but the delay isn't going down when played on iOS's native AVPlayer; as a matter of fact, it looks like it has worsened.

I would also like to add a disclaimer: my knowledge of media streaming is fairly limited. I've learned most of what I speak of in this post over the past 3 months by reading RFCs, documentation, and Stack Overflow/Reddit questions and answers. If anything appears to be confusing, it might just be my lack of understanding.

  • Edge of text clipped a few pixels in ffmpeg [closed]

May 10, by THEMOUNTAINSANDTHESKIES

I'm trying to make a short video of a background image with text and some background audio. It's functional, but the right side of each line of my text is always clipped by a few pixels. What am I missing?

    Here's my python code (which includes the ffmpeg command):

    import subprocess

    font_size = 100
    text='Example Text Example Text \n Example Text'
    font_path = 'Bangers-Regular.ttf'
    image_path = 'background.jpg'
    audio_path = 'audio.wav'
    duration = 15 #or whatever the audio's length is
    output_path = 'output.mp4'
    
    font_path_escaped = font_path.replace("\\", "\\\\\\\\").replace("C:", "C\\:")
    
    lines = text.split('\n')
    num_lines = len(lines)
    line_height = font_size + 10
    start_y = (1080 - line_height * num_lines) // 2
    
    filter_complex = []
    for i, line in enumerate(lines):
        y_position = start_y + i * line_height
        filter_complex.append(
            f"drawtext=text='{line}':fontfile='{font_path_escaped}':fontsize={font_size}:"
            f"x=((w-text_w)/2):y={y_position}:"
            "fontcolor=white:borderw=6:bordercolor=black"
        )
    
    filter_complex_string = ','.join(filter_complex)
    
    command = [
        'ffmpeg',
        '-loop', '1',
        '-i', image_path,
        '-i', audio_path,
        '-filter_complex', filter_complex_string,
        '-map', '[v]',
        '-map', '1:a',
        '-c:v', 'hevc_nvenc',
        '-c:a', 'aac',
        '-pix_fmt', 'yuv420p',
        '-t', str(duration),
        '-shortest',
        '-loglevel', 'debug',
        '-y',
        output_path
    ]
    
    
    subprocess.run(command, check=True)
    print(f"Video created successfully: {output_path}")
    

    and a frame from the output video: [screenshot not included]
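
    One detail worth noting about the command as posted: '-map', '[v]' refers to a labeled filtergraph output pad, but the drawtext chain above never defines a [v] label, so FFmpeg would normally reject the mapping (the label may simply have been trimmed while posting). A sketch of labeling the pads explicitly, reusing the same chain, is below; for the clipping itself, drawtext also has a fix_bounds option, though whether it applies here is a guess.

    # Same drawtext chain; explicit pad labels are added so that
    # '-map', '[v]' can resolve. "[0:v]" is the looped background image.
    filter_complex_string = "[0:v]" + ",".join(filter_complex) + "[v]"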

  • File conversion to mp3 returning failure every time using Flutter package ffmpeg_kit_flutter

May 10, by Sanath balthar

I am trying to convert a .wav audio file generated by Flutter's text-to-speech package "flutter_tts" to an mp3 file, but it fails every time. I have written the code below for the file conversion and imported the package ffmpeg_kit_flutter. It doesn't even show why the conversion is failing. I have looked on Stack Overflow and other sites but could not find any relevant solutions. I am using VS Code as my editor. I have attached the flutter doctor output below as well. Could anyone please guide me? Let me know if you need more information.

    List<String> command = [
      '-i', '$filePath/998tts.wav',
      '-c:a', 'mp3',
      '$filePath/998.mp3'
    ];

    await FFmpegKitConfig.enableLogs();
    FFmpegKitConfig.enableLogCallback((log) => print('FFmpeg log: $log'));
    FFmpegSession result = await FFmpegKit.executeWithArguments(command);
    dynamic resultcode = await result.getReturnCode();
    dynamic resultlogs = await result.getLogsAsString();
    // FFmpegKitConfig.setLogLevel(logLevel)
    if (ReturnCode.isSuccess(resultcode)) {
      print("file saved after conversion at $filePath/998.mp3 and result : Success and logs : $resultlogs");
    } else {
      print("Result : failure and logs : $resultlogs");
    }
    
    Flutter doctor output:
    [√] Flutter (Channel stable, 3.19.6, on Microsoft Windows [Version 10.0.22631.3296], locale en-IN)
    [√] Windows Version (Installed version of Windows is version 10 or higher)
    [√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
    [√] Chrome - develop for the web
    [!] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.9.5)
    X Visual Studio is missing necessary components. Please re-run the Visual Studio installer for the "Desktop development with C++"
    workload, and include these components:
    MSVC v142 - VS 2019 C++ x64/x86 build tools
    - If there are multiple build tool versions available, install the latest
    C++ CMake tools for Windows
    Windows 10 SDK
    [√] Android Studio (version 2023.2)
    [√] VS Code (version 1.89.0)
    [√] Connected device (3 available)
    [√] Network resources
    
    ! Doctor found issues in 1 category.
    
    

    Edit: Attaching error logs:

    I/flutter (25865): Loading ffmpeg-kit-flutter.
    D/ffmpeg-kit-flutter(25865): FFmpegKitFlutterPlugin com.arthenica.ffmpegkit.flutter.FFmpegKitFlutterPlugin@a5d9788 started listening to events on io.flutter.plugin.common.EventChannel$IncomingStreamRequestHandler$EventSinkImplementation@4cfb5f2.
    I/flutter (25865): Loaded ffmpeg-kit-flutter-android-audio-arm64-v8a-6.0.3.
    I/flutter (25865): Result : failure and logs : ffmpeg version n6.0 Copyright (c) 2000-2023 the FFmpeg developers
    I/flutter (25865):   built with Android (7155654, based on r399163b1) clang version 11.0.5 (https://android.googlesource.com/toolchain/llvm-project 87f1315dfbea7c137aa2e6d362dbb457e388158d)
    
    I/flutter (25865):   configuration: --cross-prefix=aarch64-linux-android- --sysroot=/Users/sue/Library/Android/sdk/ndk/22.1.7171670/toolchains/llvm/prebuilt/darwin-x86_64/sysroot --prefix=/Users/sue/Projects/arthenica/ffmpeg-kit/prebuilt/android-arm64/ffmpeg --pkg-config=/opt/homebrew/bin/pkg-config --enable-version3 --arch=aarch64 --cpu=armv8-a --target-os=android --enable-neon --enable-asm --enable-inline-asm --ar=aarch64-linux-android-ar --cc=aarch64-linux-android24-clang --cxx=aarch64-linux-android24-clang++ --ranlib=aarch64-linux-android-ranlib --strip=aarch64-linux-android-strip --nm=aarch64-linux-android-nm --extra-libs='-L/Users/sue/Projects/arthenica/ffmpeg-kit/prebuilt/android-arm64/cpu-features/lib -lndk_compat' --disable-autodetect --enable-cross-compile
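
    As a general debugging step, the same conversion can be reproduced on a desktop ffmpeg build to rule out the command itself; a minimal sketch (file names mirror the Dart snippet and are illustrative) is:

    import subprocess

    # Mirror the ffmpeg_kit arguments on a desktop build. libmp3lame is the
    # usual MP3 encoder; "-c:a mp3" only resolves when an MP3 encoder was
    # compiled into the build.
    result = subprocess.run(
        ["ffmpeg", "-y", "-i", "998tts.wav", "-c:a", "libmp3lame", "998.mp3"],
        capture_output=True, text=True)
    print(result.returncode)
    print(result.stderr)  # ffmpeg writes its diagnostics to stderr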