Other articles (31)

  • The farm's regular Cron tasks

    1 December 2010

    Managing the farm involves running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance in the shared farm on a regular basis. Combined with a system Cron on the farm's central site, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)
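
    In concrete terms, the "system Cron on the central site" mentioned above is usually nothing more than a crontab entry that requests the central site every minute, so the super Cron fires reliably even without human visitors. A hypothetical example (the URL is a placeholder for your own central site):

    * * * * * curl -s http://central-site.example/ > /dev/null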

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • Videos

    21 April 2011

    As with "audio" documents, MediaSPIP displays videos wherever possible using the HTML5 video tag.
    One drawback of this tag is that it is not recognised correctly by some browsers (Internet Explorer, to name but one), and each browser natively handles only certain video formats.
    Its main advantage is native video support in the browser, which avoids relying on Flash and (...)

On other sites (5415)

  • Using ffmpeg and ffserver to create a 2x1 live stream fails with unconnected output error

    31 January 2021, by weevilknievel

    I want to combine two RTSP streams (CCTV cameras) into a horizontal 2x1 strip and convert the result to WebM for use in an HTML5 web page.

    I am able to convert the streams into an MPEG or AVI file easily, but as soon as I try to post to an ffserver FFM feed, I hit a wall.

    Here is my working ffmpeg command, which writes out to a file:

    ffmpeg -rtsp_transport tcp -thread_queue_size 12 -i "rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=1&stream=1.sdp" -i "rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=2&stream=1.sdp" -filter_complex "[0:v][1:v] hstack=inputs=2 [v]" -map "[v]" -r 30 output.mpg

    It seems the ffmpeg process is outputting two streams, and one of them is MPEG? In the command above I had to set a frame rate with "-r 30", otherwise I got an error saying that MPEG-1/2 does not support a frame rate of 5/1.

    As soon as I try to stream to ffserver, I get an error:

    ffmpeg -rtsp_transport tcp -thread_queue_size 12 -i "rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=1&stream=1.sdp" -i "rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=2&stream=1.sdp" -filter_complex "[0:v][1:v] hstack=inputs=2 [v]" -map "[v]" -r 30 http://localhost:8090/feed5.ffm

    The error I get is "Filter hstack has an unconnected output":
    ffmpeg version N-86111-ga441aa90e8-static http://johnvansickle.com/ffmpeg/  Copyright (c) 2000-2017 the FFmpeg developers
  built with gcc 5.4.1 (Debian 5.4.1-8) 20170304
  configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-5 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gray --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libzimg
  libavutil      55. 63.100 / 55. 63.100
  libavcodec     57. 96.101 / 57. 96.101
  libavformat    57. 72.101 / 57. 72.101
  libavdevice    57.  7.100 / 57.  7.100
  libavfilter     6. 89.101 /  6. 89.101
  libswscale      4.  7.101 /  4.  7.101
  libswresample   2.  8.100 /  2.  8.100
  libpostproc    54.  6.100 / 54.  6.100
Input #0, rtsp, from 'rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=1&stream=1.sdp':
  Duration: N/A, start: 0.166667, bitrate: N/A
    Stream #0:0: Video: h264 (High), yuv420p(progressive), 352x288, 6 fps, 6 tbr, 90k tbn, 180k tbc
Input #1, rtsp, from 'rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=2&stream=1.sdp':
  Duration: N/A, start: 0.166667, bitrate: N/A
    Stream #1:0: Video: h264 (High), yuv420p(progressive), 352x288, 6 fps, 6 tbr, 90k tbn, 180k tbc
Filter hstack has an unconnected output

    My ffserver conf is very basic:

HTTPPort 8090
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 100000
CustomLog -

<feed>
File /mnt/ramdisk/feed5.ffm
Truncate
FileMaxSize 5M
ACL allow 127.0.0.1
</feed>

<stream>
     Format webm
     Feed feed5.ffm
     VideoCodec libvpx
     VideoSize 640x360
     NoAudio
   AVOptionVideo quality realtime
   StartSendOnKey
   VideoBitRate 140
</stream>

<stream>
Format status
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255
</stream>

<redirect>
URL http://www.ffmpeg.org/
</redirect>

    Edit:

    Here is the debug output from the ffmpeg command above:

ffmpeg -v debug -rtsp_transport tcp -thread_queue_size 12 -i "rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=1&stream=1.sdp" -i "rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=2&stream=1.sdp" -filter_complex "[0:v][1:v] hstack=inputs=2 [v]" -map "[v]" -r 30 http://localhost:8090/feed5.ffm
ffmpeg version N-86111-ga441aa90e8-static http://johnvansickle.com/ffmpeg/  Copyright (c) 2000-2017 the FFmpeg developers
  built with gcc 5.4.1 (Debian 5.4.1-8) 20170304
  configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-5 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gray --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libzimg
  libavutil      55. 63.100 / 55. 63.100
  libavcodec     57. 96.101 / 57. 96.101
  libavformat    57. 72.101 / 57. 72.101
  libavdevice    57.  7.100 / 57.  7.100
  libavfilter     6. 89.101 /  6. 89.101
  libswscale      4.  7.101 /  4.  7.101
  libswresample   2.  8.100 /  2.  8.100
  libpostproc    54.  6.100 / 54.  6.100
Splitting the commandline.
Reading option '-v' ... matched as option 'v' (set logging level) with argument 'debug'.
Reading option '-rtsp_transport' ... matched as AVOption 'rtsp_transport' with argument 'tcp'.
Reading option '-thread_queue_size' ... matched as option 'thread_queue_size' (set the maximum number of queued packets from the demuxer) with argument '12'.
Reading option '-i' ... matched as input url with argument 'rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=1&stream=1.sdp'.
Reading option '-i' ... matched as input url with argument 'rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=2&stream=1.sdp'.
Reading option '-filter_complex' ... matched as option 'filter_complex' (create a complex filtergraph) with argument '[0:v][1:v] hstack=inputs=2 [v]'.
Reading option '-map' ... matched as option 'map' (set input stream mapping) with argument '[v]'.
Reading option '-r' ... matched as option 'r' (set frame rate (Hz value, fraction or abbreviation)) with argument '30'.
Reading option 'http://localhost:8090/feed5.ffm' ... matched as output url.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option v (set logging level) with argument debug.
Applying option filter_complex (create a complex filtergraph) with argument [0:v][1:v] hstack=inputs=2 [v].
Successfully parsed a group of options.
Parsing a group of options: input url rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=1&stream=1.sdp.
Applying option thread_queue_size (set the maximum number of queued packets from the demuxer) with argument 12.
Successfully parsed a group of options.
Opening an input file: rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=1&stream=1.sdp.
[tcp @ 0x58fb5e0] No default whitelist set
[rtsp @ 0x58f9700] SDP:
v=0
o=- 38990265062388 38990265062388 IN IP4 192.168.1.132
a=range:npt=0-
m=video 0 RTP/AVP 96
c=IN IP4 0.0.0.0
a=rtpmap:96 H264/90000
a=framerate:0S
a=fmtp:96 profile-level-id=640014; packetization-mode=1; sprop-parameter-sets=Z2QAFK2EAQwgCGEAQwgCGEAQwgCEK1CwSyA=,aO48sA==
a=control:trackID=3

Failed to parse interval end specification ''
[rtsp @ 0x58f9700] video codec set to: h264
[rtsp @ 0x58f9700] RTP Profile IDC: 64 Profile IOP: 0 Level: 14
[rtsp @ 0x58f9700] RTP Packetization Mode: 1
[rtsp @ 0x58f9700] Extradata set to 0x58fb940 (size: 38)
[rtsp @ 0x58f9700] setting jitter buffer size to 0
[rtsp @ 0x58f9700] hello state=0
[h264 @ 0x58fca00] nal_unit_type: 7, nal_ref_idc: 3
[h264 @ 0x58fca00] nal_unit_type: 8, nal_ref_idc: 3
[h264 @ 0x58fca00] nal_unit_type: 7, nal_ref_idc: 3
[h264 @ 0x58fca00] nal_unit_type: 8, nal_ref_idc: 3
[h264 @ 0x58fca00] unknown SEI type 229
[h264 @ 0x58fca00] nal_unit_type: 7, nal_ref_idc: 3
[h264 @ 0x58fca00] nal_unit_type: 8, nal_ref_idc: 3
[h264 @ 0x58fca00] nal_unit_type: 6, nal_ref_idc: 0
[h264 @ 0x58fca00] nal_unit_type: 5, nal_ref_idc: 3
[h264 @ 0x58fca00] unknown SEI type 229
[h264 @ 0x58fca00] Reinit context to 352x288, pix_fmt: yuv420p
[h264 @ 0x58fca00] nal_unit_type: 1, nal_ref_idc: 3
    Last message repeated 5 times
[h264 @ 0x58fca00] unknown SEI type 229
    Last message repeated 1 times
[rtsp @ 0x58f9700] All info found
[rtsp @ 0x58f9700] Setting avg frame rate based on r frame rate
Input #0, rtsp, from 'rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=1&stream=1.sdp':
  Duration: N/A, start: 0.166667, bitrate: N/A
    Stream #0:0, 28, 1/90000: Video: h264 (High), 1 reference frame, yuv420p(progressive, left), 352x288, 0/1, 6 fps, 6 tbr, 90k tbn, 180k tbc
Successfully opened the file.
Parsing a group of options: input url rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=2&stream=1.sdp.
Successfully parsed a group of options.
Opening an input file: rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=2&stream=1.sdp.
[tcp @ 0x59f4280] No default whitelist set
[rtsp @ 0x59f7620] SDP:
v=0
o=- 38990265062388 38990265062388 IN IP4 192.168.1.132
a=range:npt=0-
m=video 0 RTP/AVP 96
c=IN IP4 0.0.0.0
a=rtpmap:96 H264/90000
a=framerate:0S
a=fmtp:96 profile-level-id=640014; packetization-mode=1; sprop-parameter-sets=Z2QAFK2EAQwgCGEAQwgCGEAQwgCEK1CwSyA=,aO48sA==
a=control:trackID=3

Failed to parse interval end specification ''
[rtsp @ 0x59f7620] video codec set to: h264
[rtsp @ 0x59f7620] RTP Profile IDC: 64 Profile IOP: 0 Level: 14
[rtsp @ 0x59f7620] RTP Packetization Mode: 1
[rtsp @ 0x59f7620] Extradata set to 0x59f3670 (size: 38)
[rtp @ 0x59f62a0] No default whitelist set
[udp @ 0x58fb6a0] No default whitelist set
[udp @ 0x58fb6a0] end receive buffer size reported is 131072
[udp @ 0x591b940] No default whitelist set
[udp @ 0x591b940] end receive buffer size reported is 131072
[rtsp @ 0x59f7620] setting jitter buffer size to 500
[rtsp @ 0x59f7620] hello state=0
[h264 @ 0x59e4b20] nal_unit_type: 7, nal_ref_idc: 3
[h264 @ 0x59e4b20] nal_unit_type: 8, nal_ref_idc: 3
[h264 @ 0x59e4b20] nal_unit_type: 7, nal_ref_idc: 3
[h264 @ 0x59e4b20] nal_unit_type: 8, nal_ref_idc: 3
[h264 @ 0x59e4b20] unknown SEI type 229
[h264 @ 0x59e4b20] nal_unit_type: 7, nal_ref_idc: 3
[h264 @ 0x59e4b20] nal_unit_type: 8, nal_ref_idc: 3
[h264 @ 0x59e4b20] nal_unit_type: 6, nal_ref_idc: 0
[h264 @ 0x59e4b20] nal_unit_type: 5, nal_ref_idc: 3
[h264 @ 0x59e4b20] unknown SEI type 229
[h264 @ 0x59e4b20] Reinit context to 352x288, pix_fmt: yuv420p
[h264 @ 0x59e4b20] nal_unit_type: 1, nal_ref_idc: 3
    Last message repeated 5 times
[h264 @ 0x59e4b20] unknown SEI type 229
    Last message repeated 1 times
[rtsp @ 0x59f7620] All info found
[rtsp @ 0x59f7620] Setting avg frame rate based on r frame rate
Input #1, rtsp, from 'rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=2&stream=1.sdp':
  Duration: N/A, start: 0.166667, bitrate: N/A
    Stream #1:0, 28, 1/90000: Video: h264 (High), 1 reference frame, yuv420p(progressive, left), 352x288, 0/1, 6 fps, 6 tbr, 90k tbn, 180k tbc
Successfully opened the file.
detected 2 logical cores
[Parsed_hstack_0 @ 0x59f7cc0] Setting 'inputs' to value '2'
Parsing a group of options: output url http://localhost:8090/feed5.ffm.
Applying option map (set input stream mapping) with argument [v].
Applying option r (set frame rate (Hz value, fraction or abbreviation)) with argument 30.
Successfully parsed a group of options.
Opening an output file: http://localhost:8090/feed5.ffm.
[http @ 0x59aa700] Setting default whitelist 'http,https,tls,rtp,tcp,udp,crypto,httpproxy'
[http @ 0x59aa700] request: GET /feed5.ffm HTTP/1.1
User-Agent: Lavf/57.72.101
Accept: */*
Range: bytes=0-
Connection: close
Host: localhost:8090
Icy-MetaData: 1


[ffm @ 0x5917d00] Format ffm probed with size=2048 and score=101
[NULL @ 0x5bce3e0] Setting entry with key 'video_size' to value '640x360'
[NULL @ 0x5bce3e0] Setting entry with key 'b' to value '140000'
[NULL @ 0x5bce3e0] Setting entry with key 'time_base' to value '1/5'
[NULL @ 0x5bce3e0] Setting entry with key 'bt' to value '35000'
[NULL @ 0x5bce3e0] Setting entry with key 'rc_eq' to value 'tex^qComp'
[NULL @ 0x5bce3e0] Setting entry with key 'maxrate' to value '280000'
[NULL @ 0x5bce3e0] Setting entry with key 'bufsize' to value '280000'
[AVIOContext @ 0x5915b80] Statistics: 4096 bytes read, 0 seeks
[http @ 0x5915f00] Setting default whitelist 'http,https,tls,rtp,tcp,udp,crypto,httpproxy'
[http @ 0x5915f00] request: POST /feed5.ffm HTTP/1.1
Transfer-Encoding: chunked
User-Agent: Lavf/57.72.101
Accept: */*
Connection: close
Host: localhost:8090
Icy-MetaData: 1


Successfully opened the file.
Filter hstack has an unconnected output
[AVIOContext @ 0x59ab080] Statistics: 0 seeks, 0 writeouts

    I am stumped - any clues?

    Thanks in advance.
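
    A sketch of a workaround that may be worth trying (an assumption, not something confirmed in this thread): when the output is an FFM feed, ffserver's config, not the command line, defines the output streams, and a labeled filtergraph output can be left dangling. Leaving the hstack output unlabeled and dropping "-map" lets ffmpeg connect the filter to the feed's video stream by itself:

    ffmpeg -rtsp_transport tcp -thread_queue_size 12 -i "rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=1&stream=1.sdp" -i "rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=2&stream=1.sdp" -filter_complex "[0:v][1:v]hstack=inputs=2" -r 30 http://localhost:8090/feed5.ffm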

  • ADD Image overlay to ffmpeg video stream

    1 July 2017, by Chris

    I am new to ffmpeg and want to add a HUD to the video stream, so a few questions.

    1. What file do I need to edit?
    2. What do I need to do to achieve this?

    Thanks in advance. Also, I am VERY new to all of this; I will need step-by-step instructions.

    I saw other questions saying to add this: ffmpeg -n -i video.mp4 -i logo.png -filter_complex "[0:v]setsar=sar=1[v];[v][1]blend=all_mode='overlay':all_opacity=0.7" -movflags +faststart tmb/video.mp4

    But I don't know where to put it; I entered it in the terminal and got this:

    pi@raspberrypi:~ $ ffmpeg -n -i video.mp4 -i logo.png -filter_complex "[0:v]setsar=sar=1[v];[v][1]blend=all_mode='overlay':all_opacity=0.7" -movflags +faststart tmb/video.mp4
    ffmpeg version N-86215-gb5228e4 Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 4.9.2 (Raspbian 4.9.2-10)
     configuration: --arch=armel --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree --extra-libs=-ldl
     libavutil      55. 63.100 / 55. 63.100
     libavcodec     57. 96.101 / 57. 96.101
     libavformat    57. 72.101 / 57. 72.101
     libavdevice    57.  7.100 / 57.  7.100
     libavfilter     6. 90.100 /  6. 90.100
     libswscale      4.  7.101 /  4.  7.101
     libswresample   2.  8.100 /  2.  8.100
     libpostproc    54.  6.100 / 54.  6.100
    video.mp4: No such file or directory

    I don't understand what I am supposed to do with the video.mp4.

    HERE IS THE SCRIPT THAT SENDS THE VIDEO:

    import subprocess
    import shlex
    import re
    import os
    import time
    import urllib2
    import platform
    import json
    import sys
    import base64
    import random


    import argparse

    parser = argparse.ArgumentParser(description='robot control')
    parser.add_argument('camera_id')
    parser.add_argument('video_device_number', default=0, type=int)
    parser.add_argument('--kbps', default=450, type=int)
    parser.add_argument('--brightness', default=75, type=int, help='camera brightness')
    parser.add_argument('--contrast', default=75, type=int, help='camera contrast')
    parser.add_argument('--saturation', default=15, type=int, help='camera saturation')
    parser.add_argument('--rotate180', default=False, type=bool, help='rotate image 180 degrees')
    parser.add_argument('--env', default="prod")



    args = parser.parse_args()



    server = "runmyrobot.com"
    #server = "52.52.213.92"


    from socketIO_client import SocketIO, LoggingNamespace

    # enable raspicam driver in case a raspicam is being used
    os.system("sudo modprobe bcm2835-v4l2")


    if args.env == "dev":
       print "using dev port 8122"
       port = 8122
    elif args.env == "prod":
       print "using prod port 8022"
       port = 8022
    else:
       print "invalid environment"
       sys.exit(0)


    print "initializing socket io"
    print "server:", server
    print "port:", port
    socketIO = SocketIO(server, port, LoggingNamespace)
    print "finished initializing socket io"

    #ffmpeg -f qtkit -i 0 -f mpeg1video -b 400k -r 30 -s 320x240 http://52.8.81.124:8082/hello/320/240/


    def onHandleCameraCommand(*args):
       #thread.start_new_thread(handle_command, args)
       print args


    socketIO.on('command_to_camera', onHandleCameraCommand)


    def onHandleTakeSnapshotCommand(*args):
       print "taking snapshot"
       inputDeviceID = streamProcessDict['device_answer']
       snapShot(platform.system(), inputDeviceID)
       with open ("snapshot.jpg", 'rb') as f:
           data = f.read()
       print "emit"

       socketIO.emit('snapshot', {'image':base64.b64encode(data)})

    socketIO.on('take_snapshot_command', onHandleTakeSnapshotCommand)


    def randomSleep():
       """A short wait is good for quick recovery, but sometimes a longer delay is needed or it will just keep trying and failing short intervals, like because the system thinks the port is still in use and every retry makes the system think it's still in use. So, this has a high likelihood of picking a short interval, but will pick a long one sometimes."""

       timeToWait = random.choice((0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 5))
       print "sleeping", timeToWait
       time.sleep(timeToWait)



    def getVideoPort():


       url = 'http://%s/get_video_port/%s' % (server, cameraIDAnswer)


       for retryNumber in range(2000):
           try:
               print "GET", url
               response = urllib2.urlopen(url).read()
               break
           except:
               print "could not open url ", url
               time.sleep(2)

       return json.loads(response)['mpeg_stream_port']

    def getAudioPort():


       url = 'http://%s/get_audio_port/%s' % (server, cameraIDAnswer)


       for retryNumber in range(2000):
           try:
               print "GET", url
               response = urllib2.urlopen(url).read()
               break
           except:
               print "could not open url ", url
               time.sleep(2)

       return json.loads(response)['audio_stream_port']



    def runFfmpeg(commandLine):

       print commandLine
       ffmpegProcess = subprocess.Popen(shlex.split(commandLine))
       print "command started"

       return ffmpegProcess



    def handleDarwin(deviceNumber, videoPort, audioPort):


       p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "qtkit", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)

       out, err = p.communicate()

       print err

       deviceAnswer = raw_input("Enter the number of the camera device for your robot from the list above: ")
       commandLine = 'ffmpeg -f qtkit -i %s -f mpeg1video -b 400k -r 30 -s 320x240 http://%s:%s/hello/320/240/' % (deviceAnswer, server, videoPort)

       process = runFfmpeg(commandLine)

       return {'process': process, 'device_answer': deviceAnswer}


    def handleLinux(deviceNumber, videoPort, audioPort):

       print "sleeping to give the camera time to start working"
       randomSleep()
       print "finished sleeping"


       #p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "qtkit", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
       #out, err = p.communicate()
       #print err


       os.system("v4l2-ctl -c brightness={brightness} -c contrast={contrast} -c saturation={saturation}".format(brightness=args.brightness,
                                                                                                                contrast=args.contrast,
                                                                                                                saturation=args.saturation))


       if deviceNumber is None:
           deviceAnswer = raw_input("Enter the number of the camera device for your robot: ")
       else:
           deviceAnswer = str(deviceNumber)


       #commandLine = '/usr/local/bin/ffmpeg -s 320x240 -f video4linux2 -i /dev/video%s -f mpeg1video -b 1k -r 20 http://runmyrobot.com:%s/hello/320/240/' % (deviceAnswer, videoPort)
       #commandLine = '/usr/local/bin/ffmpeg -s 640x480 -f video4linux2 -i /dev/video%s -f mpeg1video -b 150k -r 20 http://%s:%s/hello/640/480/' % (deviceAnswer, server, videoPort)
       # For new JSMpeg
       #commandLine = '/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video%s -f mpegts -codec:v mpeg1video -s 640x480 -b:v 250k -bf 0 http://%s:%s/hello/640/480/' % (deviceAnswer, server, videoPort) # ClawDaddy
       #commandLine = '/usr/local/bin/ffmpeg -s 1280x720 -f video4linux2 -i /dev/video%s -f mpeg1video -b 1k -r 20 http://runmyrobot.com:%s/hello/1280/720/' % (deviceAnswer, videoPort)


       if args.rotate180:
           rotationOption = "-vf transpose=2,transpose=2"
       else:
           rotationOption = ""

       # video with audio
       videoCommandLine = '/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video%s %s -f mpegts -codec:v mpeg1video -s 640x480 -b:v %dk -bf 0 -muxdelay 0.001 http://%s:%s/hello/640/480/' % (deviceAnswer, rotationOption, args.kbps, server, videoPort)
       audioCommandLine = '/usr/local/bin/ffmpeg -f alsa -ar 44100 -ac 1 -i hw:1 -f mpegts -codec:a mp2 -b:a 32k -muxdelay 0.001 http://%s:%s/hello/640/480/' % (server, audioPort)


       print videoCommandLine
       print audioCommandLine

       videoProcess = runFfmpeg(videoCommandLine)
       audioProcess = runFfmpeg(audioCommandLine)

       return {'video_process': videoProcess, 'audio_process': audioProcess, 'device_answer': deviceAnswer}



    def handleWindows(deviceNumber, videoPort):

       p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "dshow", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)


       out, err = p.communicate()
       lines = err.split('\n')

       count = 0

       devices = []

       for line in lines:

           #if "]  \"" in line:
           #    print "line:", line

           m = re.search('.*\\"(.*)\\"', line)
           if m != None:
               #print line
               if m.group(1)[0:1] != '@':
                   print count, m.group(1)
                   devices.append(m.group(1))
                   count += 1


       if deviceNumber is None:
           deviceAnswer = raw_input("Enter the number of the camera device for your robot from the list above: ")
       else:
           deviceAnswer = str(deviceNumber)

       device = devices[int(deviceAnswer)]
       commandLine = 'ffmpeg -s 640x480 -f dshow -i video="%s" -f mpegts -codec:v mpeg1video -b 200k -r 20 http://%s:%s/hello/640/480/' % (device, server, videoPort)


       process = runFfmpeg(commandLine)

       return {'process': process, 'device_answer': device}



    def handleWindowsScreenCapture(deviceNumber, videoPort):

       p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "dshow", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)


       out, err = p.communicate()

       lines = err.split('\n')

       count = 0

       devices = []

       for line in lines:

           #if "]  \"" in line:
           #    print "line:", line

           m = re.search('.*\\"(.*)\\"', line)
           if m != None:
               #print line
               if m.group(1)[0:1] != '@':
                   print count, m.group(1)
                   devices.append(m.group(1))
                   count += 1


       if deviceNumber is None:
           deviceAnswer = raw_input("Enter the number of the camera device for your robot from the list above: ")
       else:
           deviceAnswer = str(deviceNumber)



       device = devices[int(deviceAnswer)]
       commandLine = 'ffmpeg -f dshow -i video="screen-capture-recorder" -vf "scale=640:480" -f mpeg1video -b 50k -r 20 http://%s:%s/hello/640/480/' % (server, videoPort)

       print "command line:", commandLine

       process = runFfmpeg(commandLine)

       return {'process': process, 'device_answer': device}




    def snapShot(operatingSystem, inputDeviceID, filename="snapshot.jpg"):    

       try:
           os.remove('snapshot.jpg')
       except:
           print "did not remove file"

       commandLineDict = {
           'Darwin': 'ffmpeg -y -f qtkit -i %s -vframes 1 %s' % (inputDeviceID, filename),
           'Linux': '/usr/local/bin/ffmpeg -y -f video4linux2 -i /dev/video%s -vframes 1 -q:v 1000 -vf scale=320:240 %s' % (inputDeviceID, filename),
           'Windows': 'ffmpeg -y -s 320x240 -f dshow -i video="%s" -vframes 1 %s' % (inputDeviceID, filename)}

       print commandLineDict[operatingSystem]
       os.system(commandLineDict[operatingSystem])



    def startVideoCapture():

       videoPort = getVideoPort()
       audioPort = getAudioPort()
       print "video port:", videoPort
       print "audio port:", audioPort

       #if len(sys.argv) >= 3:
       #    deviceNumber = sys.argv[2]
       #else:
       #    deviceNumber = None
       deviceNumber = args.video_device_number

       result = None
       if platform.system() == 'Darwin':
           result = handleDarwin(deviceNumber, videoPort, audioPort)
       elif platform.system() == 'Linux':
           result = handleLinux(deviceNumber, videoPort, audioPort)
       elif platform.system() == 'Windows':
           #result = handleWindowsScreenCapture(deviceNumber, videoPort)
           result = handleWindows(deviceNumber, videoPort)
       else:
           print "unknown platform", platform.system()

       return result


    def timeInMilliseconds():
       return int(round(time.time() * 1000))



    def main():

       print "main"

       streamProcessDict = None


       twitterSnapCount = 0

       while True:



           socketIO.emit('send_video_status', {'send_video_process_exists': True,
                                               'camera_id':cameraIDAnswer})


           if streamProcessDict is not None:
               print "stopping previously running ffmpeg (needs to happen if this is not the first iteration)"
               streamProcessDict['process'].kill()

           print "starting process just to get device result" # this should be a separate function so you don't have to do this
           streamProcessDict = startVideoCapture()
           inputDeviceID = streamProcessDict['device_answer']
           print "stopping video capture"
           streamProcessDict['process'].kill()

           #print "sleeping"
           #time.sleep(3)
           #frameCount = int(round(time.time() * 1000))

           videoWithSnapshots = False
           while videoWithSnapshots:

               frameCount = timeInMilliseconds()

               print "taking single frame image"
               snapShot(platform.system(), inputDeviceID, filename="single_frame_image.jpg")

               with open ("single_frame_image.jpg", 'rb') as f:

                   # every so many frames, post a snapshot to twitter
                   #if frameCount % 450 == 0:
                   if frameCount % 6000 == 0:
                           data = f.read()
                           print "emit"
                           socketIO.emit('snapshot', {'frame_count':frameCount, 'image':base64.b64encode(data)})
                   data = f.read()

               print "emit"
               socketIO.emit('single_frame_image', {'frame_count':frameCount, 'image':base64.b64encode(data)})
               time.sleep(0)

               #frameCount += 1


           if False:
            if platform.system() != 'Windows':
               print "taking snapshot"
               snapShot(platform.system(), inputDeviceID)
               with open ("snapshot.jpg", 'rb') as f:
                   data = f.read()
               print "emit"

               # skip sending the first image because it's mostly black, maybe completely black
               #todo: should find out why this black image happens
               if twitterSnapCount > 0:
                   socketIO.emit('snapshot', {'image':base64.b64encode(data)})




           print "starting video capture"
           streamProcessDict = startVideoCapture()


           # This loop counts out a delay that occurs between twitter snapshots.
           # Every 50 seconds, it kills and restarts ffmpeg.
           # Every 40 seconds, it sends a signal to the server indicating status of processes.
           period = 2*60*60 # period in seconds between snaps
           for count in range(period):
               time.sleep(1)

               if count % 20 == 0:
                   socketIO.emit('send_video_status', {'send_video_process_exists': True,
                                                       'camera_id':cameraIDAnswer})

               if count % 40 == 30:
                   print "stopping video capture just in case it has reached a state where it's looping forever, not sending video, and not dying as a process, which can happen"
                   streamProcessDict['video_process'].kill()
                   streamProcessDict['audio_process'].kill()
                   time.sleep(1)

               if count % 80 == 75:
                   print "send status about this process and its child process ffmpeg"
                   ffmpegProcessExists = streamProcessDict['process'].poll() is None
                   socketIO.emit('send_video_status', {'send_video_process_exists': True,
                                                       'ffmpeg_process_exists': ffmpegProcessExists,
                                                       'camera_id':cameraIDAnswer})

               #if count % 190 == 180:
               #    print "reboot system in case the webcam is not working"
               #    os.system("sudo reboot")

               # if the video stream process dies, restart it
               if streamProcessDict['video_process'].poll() is not None or streamProcessDict['audio_process'].poll():
                   # wait before trying to start ffmpeg
                   print "ffmpeg process is dead, waiting before trying to restart"
                   randomSleep()
                   streamProcessDict = startVideoCapture()

           twitterSnapCount += 1

    if __name__ == "__main__":


       #if len(sys.argv) > 1:
       #    cameraIDAnswer = sys.argv[1]
       #else:
       #    cameraIDAnswer = raw_input("Enter the Camera ID for your robot, you can get it by pointing a browser to the runmyrobot server %s: " % server)

       cameraIDAnswer = args.camera_id


       main()

    ERROR:

    ffmpeg -n -f mpegts -i http://54.183.232.63:12221 -i logo.png -filter_complex "[0:v]setsar=sar=1[v];[v][1]blend=all_mode='overlay':all_opacity=0.7" -movflags +faststart tmb/video.mp4
    ffmpeg version N-86215-gb5228e4 Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 4.9.2 (Raspbian 4.9.2-10)
     configuration: --arch=armel --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree --extra-libs=-ldl
     libavutil      55. 63.100 / 55. 63.100
     libavcodec     57. 96.101 / 57. 96.101
     libavformat    57. 72.101 / 57. 72.101
     libavdevice    57.  7.100 / 57.  7.100
     libavfilter     6. 90.100 /  6. 90.100
     libswscale      4.  7.101 /  4.  7.101
     libswresample   2.  8.100 /  2.  8.100
     libpostproc    54.  6.100 / 54.  6.100
    [mpegts @ 0x1a57390] Could not detect TS packet size, defaulting to non-FEC/DVHS
    http://54.183.232.63:12221: could not find codec parameters
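
    A sketch of where the overlay could slot in (an addition, untested): the ffmpeg invocation for the stream lives in the videoCommandLine string inside handleLinux() in the script above, so that is the place to edit; the logo has to ride along with that command rather than run as a separate command on video.mp4. A hypothetical variant, assuming logo.png sits in the script's working directory and is drawn at position 10:10:

    # Hypothetical replacement for videoCommandLine in handleLinux().
    # Note: -vf and -filter_complex cannot be combined, so the rotate180
    # transpose would have to be folded into this filter chain as well.
    videoCommandLine = ('/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480'
                        ' -i /dev/video%s -i logo.png'
                        ' -filter_complex "[0:v][1:v]overlay=10:10"'
                        ' -f mpegts -codec:v mpeg1video -s 640x480 -b:v %dk -bf 0'
                        ' -muxdelay 0.001 http://%s:%s/hello/640/480/'
                        % (deviceAnswer, args.kbps, server, videoPort))
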
  • Video rotated 90 degrees to the left when converted using ffmpeg

    15 July 2017, by Herdesh Verma

    I developed the code below:

    extern "C"
    {
    #include <libavutil></libavutil>imgutils.h>
    #include <libavutil></libavutil>opt.h>
    #include <libavcodec></libavcodec>avcodec.h>
    #include <libavutil></libavutil>mathematics.h>
    #include <libavutil></libavutil>samplefmt.h>
    #include <libavutil></libavutil>timestamp.h>
    #include <libavformat></libavformat>avformat.h>
    #include <libavfilter></libavfilter>avfiltergraph.h>
    #include <libswscale></libswscale>swscale.h>
    }
    #include
    static AVFormatContext *fmt_ctx = NULL;

    static int frame_index = 0;

    static int j = 0, nbytes=0;
    uint8_t *video_outbuf = NULL;
    static AVPacket *pAVPacket=NULL;
    static int value=0;
    static AVFrame *pAVFrame=NULL;
    static AVFrame *outFrame=NULL;
    static AVStream *video_st=NULL;
    static AVFormatContext *outAVFormatContext=NULL;
    static AVCodec *outAVCodec=NULL;
    static AVOutputFormat *output_format=NULL;
    static AVCodecContext *video_dec_ctx = NULL, *audio_dec_ctx;
    static AVCodecContext *outAVCodecContext=NULL;
    static int width, height;
    static enum AVPixelFormat pix_fmt;
    static AVStream *video_stream = NULL, *audio_stream = NULL;
    static const char *src_filename = NULL;
    static const char *video_dst_filename = NULL;
    static const char *audio_dst_filename = NULL;
    static FILE *video_dst_file = NULL;
    static FILE *audio_dst_file = NULL;
    static uint8_t *video_dst_data[4] = {NULL};
    static int      video_dst_linesize[4];
    static int video_dst_bufsize;
    static int video_stream_idx = -1, audio_stream_idx = -1;
    static AVPacket *pkt=NULL;
    static AVPacket *pkt1=NULL;
    static AVFrame *frame = NULL;
    //static AVPacket pkt;
    static int video_frame_count = 0;
    static int audio_frame_count = 0;
    static int refcount = 0;
    AVCodec *codec;
    static struct SwsContext *sws_ctx;
    AVCodecContext *c= NULL;
    int i, out_size, size, x, y, outbuf_size;
    AVFrame *picture;
    uint8_t *outbuf, *picture_buf;
    int video_outbuf_size;
    int w, h;
    AVPixelFormat pixFmt;
    uint8_t *data[4];
    int linesize[4];

    static int open_codec_context(int *stream_idx,
                             AVCodecContext **dec_ctx, AVFormatContext
    *fmt_ctx, enum AVMediaType type)
    {
    int ret, stream_index;
    AVStream *st;
    AVCodec *dec = NULL;
    AVDictionary *opts = NULL;
    ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
    if (ret < 0) {
       printf("Could not find %s stream in input file '%s'\n",
               av_get_media_type_string(type), src_filename);
       return ret;
    } else {
       stream_index = ret;
       st = fmt_ctx->streams[stream_index];
       /* find decoder for the stream */
       dec = avcodec_find_decoder(st->codecpar->codec_id);
       if (!dec) {
           printf("Failed to find %s codec\n",
                   av_get_media_type_string(type));
           return AVERROR(EINVAL);
       }
       /* Allocate a codec context for the decoder */
       *dec_ctx = avcodec_alloc_context3(dec);
       if (!*dec_ctx) {
           printf("Failed to allocate the %s codec context\n",
                   av_get_media_type_string(type));
           return AVERROR(ENOMEM);
       }
       /* Copy codec parameters from input stream to output codec context */
       if ((ret = avcodec_parameters_to_context(*dec_ctx, st->codecpar)) < 0) {
           printf("Failed to copy %s codec parameters to decoder context\n",
                   av_get_media_type_string(type));
           return ret;
       }
       /* Init the decoders, with or without reference counting */
       av_dict_set(&opts, "refcounted_frames", refcount ? "1" : "0", 0);
       if ((ret = avcodec_open2(*dec_ctx, dec, &opts)) < 0) {
           printf("Failed to open %s codec\n",
                   av_get_media_type_string(type));
           return ret;
       }
       *stream_idx = stream_index;
    }
    return 0;
    }



    int main (int argc, char **argv)
    {
    int ret = 0, got_frame;
    src_filename = argv[1];
    video_dst_filename = argv[2];
    audio_dst_filename = argv[3];
    av_register_all();
    avcodec_register_all();
    printf("Registered all\n");

    /* open input file, and allocate format context */
    if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0) {
       printf("Could not open source file %s\n", src_filename);
       exit(1);
    }

    /* retrieve stream information */
    if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
       printf("Could not find stream information\n");
       exit(1);
    }

    if (open_codec_context(&video_stream_idx, &video_dec_ctx, fmt_ctx, AVMEDIA_TYPE_VIDEO) >= 0) {
       video_stream = fmt_ctx->streams[video_stream_idx];
       avformat_alloc_output_context2(&outAVFormatContext, NULL, NULL, video_dst_filename);
       if (!outAVFormatContext)
       {
               printf("\n\nError : avformat_alloc_output_context2()");
               return -1;
       }
    }

    if (open_codec_context(&audio_stream_idx, &audio_dec_ctx, fmt_ctx, AVMEDIA_TYPE_AUDIO) >= 0) {
       audio_stream = fmt_ctx->streams[audio_stream_idx];
       audio_dst_file = fopen(audio_dst_filename, "wb");
       if (!audio_dst_file) {
           printf("Could not open destination file %s\n", audio_dst_filename);
           ret = 1;
           goto end;
       }
    }
    /* dump input information to stderr */
    av_dump_format(fmt_ctx, 0, src_filename, 0);

    if (!audio_stream && !video_stream) {
       printf("Could not find audio or video stream in the input, aborting\n");
       ret = 1;
       goto end;
    }

       output_format = av_guess_format(NULL, video_dst_filename, NULL);
       if( !output_format )
       {
        printf("\n\nError : av_guess_format()");
        return -1;
       }

       video_st = avformat_new_stream(outAVFormatContext ,NULL);
       if( !video_st )
       {
               printf("\n\nError : avformat_new_stream()");
         return -1;
       }

       outAVCodecContext = avcodec_alloc_context3(outAVCodec);
       if( !outAVCodecContext )
       {
         printf("\n\nError : avcodec_alloc_context3()");
         return -1;
       }


       outAVCodecContext = video_st->codec;
       outAVCodecContext->codec_id = AV_CODEC_ID_MPEG4; // or AV_CODEC_ID_H264, AV_CODEC_ID_MPEG1VIDEO
       outAVCodecContext->codec_type = AVMEDIA_TYPE_VIDEO;
       outAVCodecContext->pix_fmt  = AV_PIX_FMT_YUV420P;
       outAVCodecContext->bit_rate = 400000; // 2500000
       outAVCodecContext->width = 1920;
       //outAVCodecContext->width = 500;
       outAVCodecContext->height = 1080;
       //outAVCodecContext->height = 500;
       outAVCodecContext->gop_size = 3;
       outAVCodecContext->max_b_frames = 2;
       outAVCodecContext->time_base.num = 1;
       outAVCodecContext->time_base.den = 30; // 15fps

       if (outAVCodecContext->codec_id == AV_CODEC_ID_H264)
       {
        av_opt_set(outAVCodecContext->priv_data, "preset", "slow", 0);
       }

       outAVCodec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
       if( !outAVCodec )
       {
        printf("\n\nError : avcodec_find_encoder()");
        return -1;
       }

       /* Some container formats (like MP4) require global headers to be present.
          Mark the encoder so that it behaves accordingly. */

       if ( outAVFormatContext->oformat->flags & AVFMT_GLOBALHEADER)
       {
               outAVCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
       }

       value = avcodec_open2(outAVCodecContext, outAVCodec, NULL);
       if( value < 0)
       {
               printf("\n\nError : avcodec_open2()");
               return -1;
       }

    /* create empty video file */
       if ( !(outAVFormatContext->flags & AVFMT_NOFILE) )
       {
        if( avio_open2(&outAVFormatContext->pb, video_dst_filename, AVIO_FLAG_WRITE, NULL, NULL) < 0 )
        {
         printf("\n\nError : avio_open2()");
        }
       }

       if(!outAVFormatContext->nb_streams)
       {
               printf("\n\nError : Output file dose not contain any stream");
         return -1;
       }

       /* imp: mp4 containers and some advanced container files require header information */
       value = avformat_write_header(outAVFormatContext , NULL);
       if(value < 0)
       {
               printf("\n\nError : avformat_write_header()");
               return -1;
       }

       printf("\n\nOutput file information :\n\n");
       av_dump_format(outAVFormatContext , 0 ,video_dst_filename ,1);


       int flag;
       int frameFinished;


       value = 0;

       pAVPacket = (AVPacket *)av_malloc(sizeof(AVPacket));
       av_init_packet(pAVPacket);

       pAVFrame = av_frame_alloc();
       if( !pAVFrame )
       {
        printf("\n\nError : av_frame_alloc()");
        return -1;
       }

       outFrame = av_frame_alloc(); // Allocate an AVFrame and set its fields to default values.
       if( !outFrame )
       {
        printf("\n\nError : av_frame_alloc()");
        return -1;
       }

       nbytes = av_image_get_buffer_size(outAVCodecContext->pix_fmt, outAVCodecContext->width, outAVCodecContext->height, 32);
       video_outbuf = (uint8_t*)av_malloc(nbytes);
       if( video_outbuf == NULL )
       {
       printf("\n\nError : av_malloc()");
       }


       value = av_image_fill_arrays(outFrame->data, outFrame->linesize, video_outbuf, AV_PIX_FMT_YUV420P, outAVCodecContext->width, outAVCodecContext->height, 1); // returns: the size in bytes required for src
       if(value < 0)
       {
       printf("\n\nError : av_image_fill_arrays()");
       }

       SwsContext* swsCtx_ ;

       // Allocate and return swsContext.
       // a pointer to an allocated context, or NULL in case of error
       // Deprecated : Use sws_getCachedContext() instead.
       swsCtx_ = sws_getContext(video_dec_ctx->width,
                               video_dec_ctx->height,
                               video_dec_ctx->pix_fmt,
                               video_dec_ctx->width,
                               video_dec_ctx->height,
                               video_dec_ctx->pix_fmt,
                               SWS_BICUBIC, NULL, NULL, NULL);


       AVPacket outPacket;

       int got_picture;

       while( av_read_frame( fmt_ctx , pAVPacket ) >= 0 )
       {
               if(pAVPacket->stream_index == video_stream_idx)
               {
                       value = avcodec_decode_video2(video_dec_ctx , pAVFrame , &frameFinished , pAVPacket );
                       if( value < 0)
                       {
                               printf("Error : avcodec_decode_video2()");
                       }

                       if(frameFinished) // Frame successfully decoded :)
                       {
                               sws_scale(swsCtx_, pAVFrame->data, pAVFrame->linesize, 0, video_dec_ctx->height, outFrame->data, outFrame->linesize);
                               av_init_packet(&outPacket);
                               outPacket.data = NULL; // packet data will be allocated by the encoder
                               outPacket.size = 0;

                               avcodec_encode_video2(outAVCodecContext , &outPacket , outFrame , &got_picture);

                               if(got_picture)
                               {
                                       if(outPacket.pts != AV_NOPTS_VALUE)
                                               outPacket.pts = av_rescale_q(outPacket.pts, video_st->codec->time_base, video_st->time_base);
                                       if(outPacket.dts != AV_NOPTS_VALUE)
                                               outPacket.dts = av_rescale_q(outPacket.dts, video_st->codec->time_base, video_st->time_base);

                                       printf("Write frame %3d (size= %2d)\n", j++, outPacket.size/1000);
                                       if(av_write_frame(outAVFormatContext , &outPacket) != 0)
                                       {
                                               printf("\n\nError : av_write_frame()");
                                       }

                                       av_packet_unref(&outPacket);
                               } // got_picture

                               av_packet_unref(&outPacket);
                       } // frameFinished
               }
       } // End of while-loop

       value = av_write_trailer(outAVFormatContext);
       if( value < 0)
       {
               printf("\n\nError : av_write_trailer()");
       }


       //THIS WAS ADDED LATER
       av_free(video_outbuf);

    end:
       avcodec_free_context(&video_dec_ctx);
       avcodec_free_context(&audio_dec_ctx);
       avformat_close_input(&fmt_ctx);
       if (video_dst_file)
           fclose(video_dst_file);
       if (audio_dst_file)
           fclose(audio_dst_file);
       //av_frame_free(&frame);
       av_free(video_dst_data[0]);
       return ret < 0;
    }

    The problem with the above code is that it rotates the video 90 degrees to the left.

    [Snapshot of the video given as input to the program above]

    [Snapshot of the output video: it is rotated 90 degrees to the left]

    I compiled the program using the command below:

    g++ -D__STDC_CONSTANT_MACROS -Wall -g ScreenRecorder.cpp -I/home/harry/Documents/compressor/ffmpeg-3.3/ -I/root/android-ndk-r14b/platforms/android-21/arch-x86_64/usr/include/ -c -o ScreenRecorder.o -w

    And linked it using the command below:

    g++ -Wall -g ScreenRecorder.o -I/home/harry/Documents/compressor/ffmpeg-3.3/ -I/root/android-ndk-r14b/platforms/android-21/arch-x86_64/usr/include/ -L/usr/lib64 -L/lib64 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7/ -L/home/harry/Documents/compressor/ffmpeg-3.3/ffmpeg-build -L/root/android-ndk-r14b/platforms/android-21/arch-x86_64/usr/lib64 -o ScreenRecorder.exe -lavformat -lavcodec -lavutil -lavdevice -lavfilter -lswscale -lx264 -lswresample -lm -lpthread -ldl -lstdc++ -lc -lrt

    The program is run using the command below:

    ./ScreenRecorder.exe vertical.MOV videoH.mp4 audioH.mp3

    Note:
    - The source video was taken on an iPhone and is in .mov format.
    - The output video is stored in an .mp4 file.

    Can anyone please tell me why it is rotating the video by 90 degrees?

    One thing I noticed in the dump is shown below:

     Duration: 00:00:06.04, start: 0.000000, bitrate: 17087 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080, 17014 kb/s, 29.98 fps, 29.97 tbr, 600 tbn, 1200 tbc (default)
    Metadata:
     rotate          : 90
     creation_time   : 2017-07-09T10:56:42.000000Z
     handler_name    : Core Media Data Handler
     encoder         : H.264
    Side data:
     displaymatrix: rotation of -90.00 degrees

    It says "displaymatrix: rotation of -90.00 degrees". Is that responsible for rotating the video by 90 degrees?
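
    Almost certainly, yes: iPhone footage is stored unrotated, with a display matrix telling players to rotate it at display time. The ffmpeg command-line tool applies that side data automatically, but the decode/encode loop above copies raw frames and never looks at it, so the output keeps the sensor orientation. A minimal sketch (an addition, using the libavutil display-matrix helpers available in FFmpeg 3.3) of how the rotation could be read from the input stream:

    #include <libavutil/display.h>

    // Sketch: read the rotation stored in a stream's display-matrix side data.
    // Returns degrees (counterclockwise), or 0.0 if no display matrix is present.
    static double get_stream_rotation(AVStream *st)
    {
        uint8_t *sd = av_stream_get_side_data(st, AV_PKT_DATA_DISPLAYMATRIX, NULL);
        if (!sd)
            return 0.0;
        // For the input described above this should report -90.00, matching the dump.
        return av_display_rotation_get((const int32_t *)sd);
    }

    From there the frames would need to be rotated explicitly (for example with libavfilter's transpose filter), or the display-matrix side data copied onto the output stream, for the result to play upright.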