On other sites (1907)

  • Can't synchronize frames from two webcam streams side by side using ffmpeg

    15 June 2019, by Peppe

    I’m trying to stream two webcams side by side (SBS) using ffmpeg over a websocket.

    It works, but there is a delay of a few seconds between the two scenes.

    Here is what I do:

    ffmpeg -f v4l2   -framerate 30 -video_size 1280x720 -input_format mjpeg
    -i /dev/video1 -f v4l2 -framerate 30  -input_format mjpeg
    -video_size 1280x720 -i /dev/video0 -filter_complex "
    [0:v][1:v]hstack [left+right]" -map [left+right] -r 30 -fflags nobuffer
    -f mpegts -codec:v mpeg1video -s 2560x720 -b:v 800k -bf 0 http://localhost:8081/secretsecret

    And this is the output:

    ffmpeg version 4.1.3-0ubuntu1 Copyright (c) 2000-2019 the FFmpeg developers
     built with gcc 8 (Ubuntu 8.3.0-6ubuntu1)
     configuration: --prefix=/usr --extra-version=0ubuntu1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
     libavutil      56. 22.100 / 56. 22.100
     libavcodec     58. 35.100 / 58. 35.100
     libavformat    58. 20.100 / 58. 20.100
     libavdevice    58.  5.100 / 58.  5.100
     libavfilter     7. 40.101 /  7. 40.101
     libavresample   4.  0.  0 /  4.  0.  0
     libswscale      5.  3.100 /  5.  3.100
     libswresample   3.  3.100 /  3.  3.100
     libpostproc    55.  3.100 / 55.  3.100
    Input #0, video4linux2,v4l2, from '/dev/video2':
       Duration: N/A, start: 9539.835119, bitrate: N/A
       Stream #0:0: Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown), 1280x720, 25 fps, 25 tbr, 1000k tbn, 1000k tbc
    Input #1, video4linux2,v4l2, from '/dev/video4':
     Duration: N/A, start: 9541.474622, bitrate: N/A
       Stream #1:0: Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown), 1280x720, 25 fps, 25 tbr, 1000k tbn, 1000k tbc
    Stream mapping:
     Stream #0:0 (mjpeg) -> hstack:input0
     Stream #1:0 (mjpeg) -> hstack:input1
     hstack -> Stream #0:0 (mpeg1video)
    Press [q] to stop, [?] for help
    [swscaler @ 0x55aa2c266f00] deprecated pixel format used, make sure you did set range correctly
    Output #0, mpegts, to 'http://localhost:8081/supersecret':
     Metadata:
    encoder         : Lavf58.20.100
    Stream #0:0: Video: mpeg1video, yuv420p, 2560x720, q=2-31, 800 kb/s, 25 fps, 90k tbn, 25 tbc (default)
    Metadata:
     encoder         : Lavc58.35.100 mpeg1video
    Side data:
     cpb: bitrate max/min/avg: 0/0/800000 buffer size: 0 vbv_delay: -1
    [video4linux2,v4l2 @ 0x55aa2c1d4e40] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)

    It works, but there is a delay of about 1 second between the two screens.

    I also tried with:

    ffmpeg -f v4l2 -framerate 25 -video_size 1280x720 -input_format mjpeg -i \
    /dev/video2 -f v4l2 -framerate 25  -input_format mjpeg -video_size 1280x720 \
    -i /dev/video4 -filter_complex " \
    nullsrc=size=2560x720 [background]; \
    [0:v] setpts=PTS-STARTPTS, scale=1280x720 [left]; \
    [1:v] setpts=PTS-STARTPTS, scale=1280x720 [right]; \
    [background][left]       overlay=shortest=1       [background+left]; \
    [background+left][right] overlay=shortest=1:x=1280 [left+right] \
    " -map [left+right] -f mpegts -codec:v mpeg1video -s 2560x720 -b:v 800k -bf 0     http://localhost:8081/supersecret

    but the same problem persists.
    How can I fix this?
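
    A possible direction, hinted at by the "Thread message queue blocking" warning in the log above, is to raise thread_queue_size on each input and timestamp both captures against the wall clock before resetting their PTS. The command below is only a sketch along those lines (the device paths and URL echo the ones in the log above), not a verified fix:

    # sketch only: raise the per-input queue and use wall-clock timestamps for both captures
    ffmpeg -f v4l2 -thread_queue_size 512 -use_wallclock_as_timestamps 1 \
    -framerate 25 -video_size 1280x720 -input_format mjpeg -i /dev/video2 \
    -f v4l2 -thread_queue_size 512 -use_wallclock_as_timestamps 1 \
    -framerate 25 -video_size 1280x720 -input_format mjpeg -i /dev/video4 \
    -filter_complex "[0:v]setpts=PTS-STARTPTS[left];[1:v]setpts=PTS-STARTPTS[right];[left][right]hstack[out]" \
    -map "[out]" -r 25 -fflags nobuffer -f mpegts -codec:v mpeg1video -b:v 800k -bf 0 \
    http://localhost:8081/supersecret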

  • tools/python: add script to convert TensorFlow model (.pb) to native model (.model)

    13 June 2019, by Guo, Yejun
    tools/python: add script to convert TensorFlow model (.pb) to native model (.model)
    

    For example, given the TensorFlow model file espcn.pb,
    to generate the native model file espcn.model, just run:
    python convert.py espcn.pb

    In the current implementation, the native model file is generated for a
    specific dnn network with hard-coded python scripts maintained outside of ffmpeg.
    For example, the srcnn network used by vf_sr is generated with
    https://github.com/HighVoltageRocknRoll/sr/blob/master/generate_header_and_model.py#L85

    In this patch, the script is designed as a general solution which
    converts a general TensorFlow model .pb file into a .model file. The script
    currently has some tricky parts needed to stay compatible with the current
    implementation; it will be refined step by step.

    The script is also added to the ffmpeg source tree. It is expected that there
    will be many more patches, and the community needs ownership of it.

    Another technical direction would be to do the conversion in C/C++ code within
    the ffmpeg source tree. But since the .pb file is organized with protocol buffers,
    it is not easy to do such work with a small amount of C/C++ code; see more discussion
    at http://ffmpeg.org/pipermail/ffmpeg-devel/2019-May/244496.html. So the
    Python script was chosen.

    Signed-off-by: Guo, Yejun <yejun.guo@intel.com>

    • [DH] .gitignore
    • [DH] tools/python/convert.py
    • [DH] tools/python/convert_from_tensorflow.py
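
    Since a .pb file is a serialized TensorFlow GraphDef protocol buffer, a converter along these lines would typically start by deserializing the graph in Python and walking its nodes. The snippet below is only an illustrative sketch using the TensorFlow 1.x API, not the actual contents of tools/python/convert_from_tensorflow.py:

    import tensorflow as tf

    # Load the frozen GraphDef from the .pb file (TensorFlow 1.x API);
    # espcn.pb is the example model named in the commit message.
    with tf.gfile.GFile('espcn.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Walk the graph nodes; a real converter would serialize the supported
    # layers and their weights into the native .model format at this point.
    for node in graph_def.node:
        print(node.op, node.name)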
  • Trying to use ffmpeg to create a slideshow from ISO-8601-named pictures. Getting output with no playable streams

    19 June 2019, by Robert Ellegate

    I’m trying to create a slideshow of images that are irregular in dimension/orientation but are all named with the same ISO-8601 date format.

    I’ve normalized the filenames so they are all YYYYMMDD.jpg. I have tried using the glob pattern type for ffmpeg and various methods of inputting the files, including piping the concatenation of the files into ffmpeg.

    Here are the images I’m trying to use:

    $ ls *.jpg | xargs -n1 file
    20190411.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=upper-left, width=0], baseline, precision 8, 10128x3984, components 3
    20190417.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=lower-right, width=0], baseline, precision 8, 10176x3952, components 3
    20190424.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=upper-left, width=0], baseline, precision 8, 12128x3840, components 3
    20190429.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=upper-left, width=0], baseline, precision 8, 11104x3888, components 3
    20190430.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=lower-right, width=0], baseline, precision 8, 10992x3920, components 3
    20190501.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=lower-right, width=0], baseline, precision 8, 10528x3936, components 3
    20190502.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=lower-right, width=0], baseline, precision 8, 10992x3792, components 3
    20190508.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=lower-right, width=0], baseline, precision 8, 11008x3808, components 3
    20190515.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=lower-right, width=0], baseline, precision 8, 10416x3760, components 3
    20190516.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=lower-right, width=0], baseline, precision 8, 10928x3760, components 3
    20190517.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=lower-right, width=0], baseline, precision 8, 10720x3840, components 3
    20190522.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 6552x1688, components 3
    20190523.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 6572x1700, components 3
    20190524.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 6468x1659, components 3
    20190528.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 5424x1644, components 3
    20190529.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=7, model=Pixel 2 XL, height=0, manufacturer=Google, orientation=[*0*], datetime=2019:05:29 16:38:01, width=0]
    20190531.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 6584x1693, components 3
    20190603.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 6536x1690, components 3
    20190604.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 5748x1618, components 3
    20190606.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 6196x1690, components 3
    20190607.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 6112x1674, components 3
    20190610.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 6440x1670, components 3
    20190611.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 6312x1694, components 3
    20190612.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 6176x1689, components 3

    And these are the various ffmpeg commands I’ve tried using:

    cat *.jpg | ffmpeg -framerate 1/5 -c:v libx264 -r 30 -pix_fmt yuv420p out.mp4
    cat *.jpg | ffmpeg -f image2pipe -i - output.mkv
    ffmpeg -framerate 1/5 -pattern_type glob -i '*.jpg' out.mp4
    ffmpeg -framerate 1/5 -pattern_type glob -i '*.jpg' -c:v libx264 -vf fps=25 -pix_fmt yuv420p out.mp4

    I’m trying to create a video that shows each image for 5 seconds in order, but I’m getting an mp4 video file with no playable streams.
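
    One detail worth noting from the listing above is that several of the JPEGs have odd heights (e.g. 1659, 1693), which libx264 with yuv420p cannot encode, and the frames all differ in size, so they need to be brought to one resolution before encoding. A sketch along these lines, which scales and pads every frame onto a single even-sized canvas (the 1920x1080 output size is an arbitrary choice, not from the question), might look like:

    # sketch only: normalize every image to one even-sized canvas before encoding
    ffmpeg -framerate 1/5 -pattern_type glob -i '*.jpg' \
    -vf "scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,format=yuv420p" \
    -c:v libx264 -r 25 out.mp4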