Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • How to change metadata with ffmpeg/avconv without creating a new file?

    16 March, by Stephan Kulla

    I am writing a Python script for producing audio and video podcasts. There are a bunch of recorded media files (audio and video) and text files containing the metadata.

    Now I want to write a function that adds the information from the metadata text files to all media files (the originals and the converted ones). Because I have to handle many different file formats (wav, flac, mp3, mp4, ogg, ogv...), it would be great to have a tool that can add metadata to arbitrary formats.

    My Question:

    How can I change the metadata of a file with ffmpeg/avconv without changing its audio or video and without creating a new file? Is there another command-line/Python tool that would do the job for me?

    What I tried so far:

    I thought ffmpeg/avconv could be such a tool, because it can handle nearly all media formats. I hoped that if I set -i input_file and output_file to the same file, ffmpeg/avconv would be smart enough to leave the file unchanged. Then I could set -metadata key=value and only the metadata would be changed.

    But I noticed that if I type avconv -i test.mp3 -metadata title='Test title' test.mp3, the audio in test.mp3 is re-encoded at a different bitrate.

    So I tried -c copy to copy all video and audio streams. Unfortunately, this does not work either:

    :~$ du -h test.wav # test.wav is 303 MB big
    303M    test.wav
    
    :~$ avconv -i test.wav -c copy -metadata title='Test title' test.wav
    avconv version 0.8.3-4:0.8.3-0ubuntu0.12.04.1, Copyright (c) 2000-2012 the Libav developers
    built on Jun 12 2012 16:37:58 with gcc 4.6.3
    [wav @ 0x846b260] max_analyze_duration reached
    Input #0, wav, from 'test.wav':
    Duration: 00:29:58.74, bitrate: 1411 kb/s
        Stream #0.0: Audio: pcm_s16le, 44100 Hz, 2 channels, s16, 1411 kb/s
    File 'test.wav' already exists. Overwrite ? [y/N] y
    Output #0, wav, to 'test.wav':
    Metadata:
        title           : Test title
        encoder         : Lavf53.21.0
        Stream #0.0: Audio: pcm_s16le, 44100 Hz, 2 channels, 1411 kb/s
    Stream mapping:
    Stream #0:0 -> #0:0 (copy)
    Press ctrl-c to stop encoding
    size=     896kB time=5.20 bitrate=1411.3kbits/s    
    video:0kB audio:896kB global headers:0kB muxing overhead 0.005014%
    
    :~$ du -h test.wav # file size of test.wav changed dramatically
    900K    test.wav
    

    You can see that I cannot use -c copy if input_file and output_file are the same. Of course I could produce a temporary file:

    :-$ avconv -i test.mp3 -c copy -metadata title='Test title' test_temp.mp3
    :-$ mv test_temp.mp3 test.mp3
    

    But this solution would temporarily create a new file on the filesystem and is therefore not preferable.
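    That said, the temp-file approach can at least be made safe and scriptable. Below is a minimal Python sketch (the helper names and file names are my own, and it still writes a temporary copy, since ffmpeg/avconv cannot edit a file in place): it builds a stream-copy command, writes to a temp file in the same directory, and then atomically replaces the original with os.replace, so no reader ever sees a half-written file.

```python
import os
import subprocess
import tempfile

def build_retag_cmd(src, dst, metadata, tool="ffmpeg"):
    """Build a stream-copy command that rewrites only the tags."""
    cmd = [tool, "-y", "-i", src, "-c", "copy"]
    for key, value in metadata.items():
        cmd += ["-metadata", "%s=%s" % (key, value)]
    cmd.append(dst)
    return cmd

def retag(path, metadata):
    """Stream-copy `path` with new tags to a temp file in the same
    directory, then atomically replace the original."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".",
                               suffix=os.path.splitext(path)[1])
    os.close(fd)
    try:
        subprocess.run(build_retag_cmd(path, tmp, metadata), check=True)
        os.replace(tmp, path)  # atomic rename over the original
    except BaseException:
        if os.path.exists(tmp):
            os.unlink(tmp)
        raise
```

    For tag-only edits with no rewriting at all, a pure-Python tagging library such as mutagen can modify the tags of many of these formats (mp3, flac, ogg...) in place.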

  • Can m3u8 files have mp4 file URLs?

    15 March, by 89neuron

    I am in a situation where I have my flv video converted to mp4, and I am streaming it over an HTTP URL using my nginx server. For multi-bitrate support on HTML5 I have created an m3u8 file like this:

    #EXTM3U
    #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=200111, RESOLUTION=512x288
    http://streamer.abc.com:8080/videos/arvind1.mp4
    #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=3000444, RESOLUTION=400x300
    http://streamer.abc.com:8080/videos/arvind1.mp4
    #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=400777, RESOLUTION=400x300
    http://streamer.abc.com:8080/videos/arvind1.mp4
    #EXT-X-ENDLIST
    

    But jwplayer is not playing this, saying the playlist is not loaded; specifically, "No playable sources found". Please help.
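    For reference, a sketch of what an HLS master playlist is normally expected to look like: each EXT-X-STREAM-INF entry points to a *different* rendition, and each URI is a media playlist of segments rather than a bare MP4 file (which most players, including JW Player of that era, will not accept as an HLS source); EXT-X-ENDLIST belongs in the media playlists, not in the master. The paths below are illustrative, not taken from the question:

```
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=200000,RESOLUTION=512x288
http://streamer.abc.com:8080/videos/arvind1_288p/index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=400000,RESOLUTION=640x360
http://streamer.abc.com:8080/videos/arvind1_360p/index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=3000000,RESOLUTION=1280x720
http://streamer.abc.com:8080/videos/arvind1_720p/index.m3u8
```

    Each per-rendition index.m3u8 can be produced with FFmpeg's hls muxer, e.g. `ffmpeg -i arvind1.mp4 -c copy -f hls -hls_time 10 -hls_list_size 0 index.m3u8` (one such command per encoded rendition).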

  • Using FFmpeg encode and UDP with a Webcam?

    14 March, by Rendres

    I'm trying to get frames from a Webcam using OpenCV, encode them with FFmpeg and send them using UDP.

    I previously did a similar project that, instead of sending the packets over UDP, saved them in a video file.

    My code is:

    #include 
    #include 
    #include 
    #include 
    
    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/opt.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/mathematics.h>
    #include <libswscale/swscale.h>
    #include <libswresample/swresample.h>
    }
    
    #include <opencv2/opencv.hpp>
    
    using namespace std;
    using namespace cv;
    
    #define WIDTH 640
    #define HEIGHT 480
    #define CODEC_ID AV_CODEC_ID_H264
    #define STREAM_PIX_FMT AV_PIX_FMT_YUV420P
    
    static AVFrame *frame, *pFrameBGR;
    
    int main(int argc, char **argv)
    {
    VideoCapture cap(0);
    const char *url = "udp://127.0.0.1:8080";
    
    AVFormatContext *formatContext;
    AVStream *stream;
    AVCodec *codec;
    AVCodecContext *c;
    AVDictionary *opts = NULL;
    
    int ret, got_packet;
    
    if (!cap.isOpened())
    {
        return -1;
    }
    
    av_log_set_level(AV_LOG_TRACE);
    
    av_register_all();
    avformat_network_init();
    
    avformat_alloc_output_context2(&formatContext, NULL, "h264", url);
    if (!formatContext)
    {
        av_log(NULL, AV_LOG_FATAL, "Could not allocate an output context for '%s'.\n", url);
    }
    
    codec = avcodec_find_encoder(CODEC_ID);
    if (!codec)
    {
        av_log(NULL, AV_LOG_ERROR, "Could not find encoder.\n");
    }
    
    stream = avformat_new_stream(formatContext, codec);
    
    c = avcodec_alloc_context3(codec);
    
    stream->id = formatContext->nb_streams - 1;
    stream->time_base = (AVRational){1, 25};
    
    c->codec_id = CODEC_ID;
    c->bit_rate = 400000;
    c->width = WIDTH;
    c->height = HEIGHT;
    c->time_base = stream->time_base;
    c->gop_size = 12;
    c->pix_fmt = STREAM_PIX_FMT;
    
    if (formatContext->flags & AVFMT_GLOBALHEADER)
        c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    
    av_dict_set(&opts, "preset", "fast", 0);
    
    av_dict_set(&opts, "tune", "zerolatency", 0);
    
    ret = avcodec_open2(c, codec, NULL);
    if (ret < 0)
    {
        av_log(NULL, AV_LOG_ERROR, "Could not open video codec.\n");
    }
    
    pFrameBGR = av_frame_alloc();
    if (!pFrameBGR)
    {
        av_log(NULL, AV_LOG_ERROR, "Could not allocate video frame.\n");
    }
    
    frame = av_frame_alloc();
    if (!frame)
    {
        av_log(NULL, AV_LOG_ERROR, "Could not allocate video frame.\n");
    }
    
    frame->format = c->pix_fmt;
    frame->width = c->width;
    frame->height = c->height;
    
    ret = avcodec_parameters_from_context(stream->codecpar, c);
    if (ret < 0)
    {
        av_log(NULL, AV_LOG_ERROR, "Could not open video codec.\n");
    }
    
    av_dump_format(formatContext, 0, url, 1);
    
    ret = avformat_write_header(formatContext, NULL);
    if (ret != 0)
    {
        av_log(NULL, AV_LOG_ERROR, "Failed to connect to '%s'.\n", url);
    }
    
    Mat image(Size(HEIGHT, WIDTH), CV_8UC3);
    SwsContext *swsctx = sws_getContext(WIDTH, HEIGHT, AV_PIX_FMT_BGR24, WIDTH, HEIGHT, AV_PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL);
    int frame_pts = 0;
    
    while (1)
    {
        cap >> image;
    
        int numBytesYUV = av_image_get_buffer_size(STREAM_PIX_FMT, WIDTH, HEIGHT, 1);
        uint8_t *bufferYUV = (uint8_t *)av_malloc(numBytesYUV * sizeof(uint8_t));
    
        avpicture_fill((AVPicture *)pFrameBGR, image.data, AV_PIX_FMT_BGR24, WIDTH, HEIGHT);
        avpicture_fill((AVPicture *)frame, bufferYUV, STREAM_PIX_FMT, WIDTH, HEIGHT);
    
        sws_scale(swsctx, (uint8_t const *const *)pFrameBGR->data, pFrameBGR->linesize, 0, HEIGHT, frame->data, frame->linesize);
    
        AVPacket pkt = {0};
        av_init_packet(&pkt);
    
        frame->pts = frame_pts;
    
        ret = avcodec_encode_video2(c, &pkt, frame, &got_packet);
        if (ret < 0)
        {
            av_log(NULL, AV_LOG_ERROR, "Error encoding frame\n");
        }
    
        if (got_packet)
        {
            pkt.pts = av_rescale_q_rnd(pkt.pts, c->time_base, stream->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
            pkt.dts = av_rescale_q_rnd(pkt.dts, c->time_base, stream->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
            pkt.duration = av_rescale_q(pkt.duration, c->time_base, stream->time_base);
            pkt.stream_index = stream->index;
    
            return av_interleaved_write_frame(formatContext, &pkt);
    
            cout << "Seguro que si" << endl;
        }
        frame_pts++;
    }
    
    avcodec_free_context(&c);
    av_frame_free(&frame);
    avformat_free_context(formatContext);
    
    return 0;
    }
    

    The code compiles, but it crashes with a segmentation fault in av_interleaved_write_frame(). I've tried several implementations and several codecs (in this case I'm using libopenh264, but mpeg2video gives the same segmentation fault). I also tried av_write_frame(), but it returns the same error.

    As I said before, I only want to grab frames from a webcam connected via USB, encode them to H264 and send the packets through UDP to another PC.

    My console log when I run the executable is:

    [100%] Built target display
    [OpenH264] this = 0x0x244b4f0, Info:CWelsH264SVCEncoder::SetOption():ENCODER_OPTION_TRACE_CALLBACK callback = 0x7f0c302a87c0.
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:CWelsH264SVCEncoder::InitEncoder(), openh264 codec version = 5a5c4f1
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:iUsageType = 0,iPicWidth= 640;iPicHeight= 480;iTargetBitrate= 400000;iMaxBitrate= 400000;iRCMode= 0;iPaddingFlag= 0;iTemporalLayerNum= 1;iSpatialLayerNum= 1;fFrameRate= 25.000000f;uiIntraPeriod= 12;eSpsPpsIdStrategy = 0;bPrefixNalAddingCtrl = 0;bSimulcastAVC=0;bEnableDenoise= 0;bEnableBackgroundDetection= 1;bEnableSceneChangeDetect = 1;bEnableAdaptiveQuant= 1;bEnableFrameSkip= 0;bEnableLongTermReference= 0;iLtrMarkPeriod= 30, bIsLosslessLink=0;iComplexityMode = 0;iNumRefFrame = 1;iEntropyCodingModeFlag = 0;uiMaxNalSize = 0;iLTRRefNum = 0;iMultipleThreadIdc = 1;iLoopFilterDisableIdc = 0 (offset(alpha/beta): 0,0;iComplexityMode = 0,iMaxQp = 51;iMinQp = 0)
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:sSpatialLayers[0]: .iVideoWidth= 640; .iVideoHeight= 480; .fFrameRate= 25.000000f; .iSpatialBitrate= 400000; .iMaxSpatialBitrate= 400000; .sSliceArgument.uiSliceMode= 1; .sSliceArgument.iSliceNum= 0; .sSliceArgument.uiSliceSizeConstraint= 1500;uiProfileIdc = 66;uiLevelIdc = 41
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Warning:SliceArgumentValidationFixedSliceMode(), unsupported setting with Resolution and uiSliceNum combination under RC on! So uiSliceNum is changed to 6!
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:Setting MaxSpatialBitrate (400000) the same at SpatialBitrate (400000) will make the    actual bit rate lower than SpatialBitrate
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Warning:bEnableFrameSkip = 0,bitrate can't be controlled for RC_QUALITY_MODE,RC_BITRATE_MODE and RC_TIMESTAMP_MODE without enabling skip frame.
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Warning:Change QP Range from(0,51) to (12,42)
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:WELS CPU features/capacities (0x4007fe3f) detected:   HTT:      Y, MMX:      Y, MMXEX:    Y, SSE:      Y, SSE2:     Y, SSE3:     Y, SSSE3:    Y, SSE4.1:   Y, SSE4.2:   Y, AVX:      Y, FMA:      Y, X87-FPU:  Y, 3DNOW:    N, 3DNOWEX:  N, ALTIVEC:  N, CMOV:     Y, MOVBE:    Y, AES:      Y, NUMBER OF LOGIC PROCESSORS ON CHIP: 8, CPU CACHE LINE SIZE (BYTES):        64
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:WelsInitEncoderExt() exit, overall memory usage: 4542878 bytes
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:WelsInitEncoderExt(), pCtx= 0x0x245a400.
    Output #0, h264, to 'udp://192.168.100.39:8080':
    Stream #0:0, 0, 1/25: Video: h264 (libopenh264), 1 reference frame, yuv420p, 640x480 (0x0), 0/1, q=2-31, 400 kb/s, 25 tbn
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:RcUpdateIntraComplexity iFrameDqBits = 385808,iQStep= 2016,iIntraCmplx = 777788928
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:[Rc]Layer 0: Frame timestamp = 0, Frame type = 2, encoding_qp = 30, average qp = 30, max qp = 33, min qp = 27, index = 0, iTid = 0, used = 385808, bitsperframe = 16000, target = 64000, remainingbits = -257808, skipbuffersize = 200000
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:WelsEncoderEncodeExt() OutputInfo iLayerNum = 2,iFrameSize = 48252
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:WelsEncoderEncodeExt() OutputInfo iLayerId = 0,iNalType = 0,iNalCount = 2, first Nal Length=18,uiSpatialId = 0,uiTemporalId = 0,iSubSeqId = 0
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:WelsEncoderEncodeExt() OutputInfo iLayerId = 1,iNalType = 1,iNalCount = 6, first Nal Length=6057,uiSpatialId = 0,uiTemporalId = 0,iSubSeqId = 0
    [libopenh264 @ 0x244aa00] 6 slices
    ./scriptBuild.sh: line 20: 10625 Segmentation fault      (core dumped) ./display
    

    As you can see, FFmpeg uses libopenh264 and configures it correctly. However, no matter what I try, it always ends with the same segmentation fault...

    I've used commands like this:

    ffmpeg -s 640x480 -f video4linux2 -i /dev/video0 -r 30 -vcodec libopenh264 -an -f h264 udp://127.0.0.1:8080
    

    And it works perfectly, but I need to process the frames before sending them. That's why I'm trying to use the libraries.

    My FFmpeg version is:

    ffmpeg version 3.3.6 Copyright (c) 2000-2017 the FFmpeg developers
    built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
    configuration: --disable-yasm --enable-shared --enable-libopenh264 --cc='gcc -fPIC'
    libavutil      55. 58.100 / 55. 58.100
    libavcodec     57. 89.100 / 57. 89.100
    libavformat    57. 71.100 / 57. 71.100
    libavdevice    57.  6.100 / 57.  6.100
    libavfilter     6. 82.100 /  6. 82.100
    libswscale      4.  6.100 /  4.  6.100
    libswresample   2.  7.100 /  2.  7.100
    

    I tried to get more information about the error using gdb, but it didn't give me any debugging info.

    How can I solve this problem?
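    A sketch of how one might at least get a usable backtrace (assuming the source file is `display.cpp`, that pkg-config knows the FFmpeg and OpenCV libraries, and that gdb is installed; all names here are illustrative). Building with `-g -O0` is what makes gdb's output meaningful. As a separate observation worth checking: FFmpeg's own muxing example opens the output with `avio_open()` before `avformat_write_header()` whenever the muxer does not set `AVFMT_NOFILE`, and the code above never opens `formatContext->pb`.

```
g++ -g -O0 display.cpp -o display \
    $(pkg-config --cflags --libs libavformat libavcodec libswscale libavutil opencv)
gdb ./display
(gdb) run
(gdb) bt        # backtrace at the point of the SIGSEGV
```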

  • Discord bot: Fix ‘FFMPEG not found’

    14 March, by Travis Sova

    I want to make my Discord bot join voice chat, but every time I run the command, I get an error message in the log (cmd) saying FFMPEG not found.

    Picture of the error:

    Error: FFMPEG not found

    This is the code:

    client.on('message', message => {
      // Voice only works in guilds, if the message does not come from a guild,
      // we ignore it
      if (!message.guild) return;
    
      if (message.content === '/join') {
        // Only try to join the sender's voice channel if they are in one themselves
        if (message.member.voiceChannel) {
          message.member.voiceChannel.join()
            .then(connection => { // Connection is an instance of VoiceConnection
              message.reply('I have successfully connected to the channel!');
            })
            .catch(console.log);
        } else {
          message.reply('You need to join a voice channel first!');
        }
      }
    });
    

    This is my package.json file:

    {
      "name": "x",
      "version": "1.0.0",
      "main": "index.js",
      "scripts": {
        "start": "node index.js",
        "dev": "nodemon index.js"
      },
      "dependencies": {
        "discord.js": "^11.4.2",
        "dotenv": "^6.2.0",
        "ffmpeg": "0.0.4",
        "opusscript": "0.0.6"
      },
      "devDependencies": {
        "nodemon": "^1.18.9"
      }
    }
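    A note on the dependency list above: the `ffmpeg` package on npm (0.0.4) is a Node.js wrapper library, not the FFmpeg program itself, and discord.js v11 voice support looks for a real `ffmpeg` executable on the PATH. Two common ways to provide one, sketched below (the apt example assumes a Debian/Ubuntu host):

```
# Option 1: install FFmpeg system-wide so it is on PATH
sudo apt-get install ffmpeg

# Option 2: swap the npm wrapper for a package that ships the binary
npm uninstall ffmpeg
npm install ffmpeg-binaries
```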