
Media (91)

Other articles (97)

  • Request to create a channel

    12 March 2010, by

    Depending on how the platform is configured, the user may have two different ways of requesting the creation of a channel. The first is at the time of registration; the second, after registration, by filling in a request form.
    Both methods ask for the same information and work in much the same way: the prospective user must fill in a series of form fields that first of all give the administrators information about (...)

  • Accepted formats

    28 January 2010, by

    The following commands give information about the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Video formats accepted as input
    This list is not exhaustive; it highlights the main formats in use:
    h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
    m4v: raw MPEG-4 video format
    flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
    Theora
    wmv:
    Possible output video formats
    As a first step, we (...)

  • Automated installation script of MediaSPIP

    25 April 2011, by

    To overcome the difficulties caused mainly by the installation of server-side software dependencies, an "all-in-one" installation script written in Bash was created to simplify this step on a server running a compatible Linux distribution.
    To use it, you must have SSH access to your server and a root account, which the script uses to install the dependencies. Contact your hosting provider if you do not have these.
    Documentation on how to use this installation script is available here.
    The code of this (...)

On other sites (6791)

  • Thread safety of FFmpeg when using av_lockmgr_register

    12 August 2013, by Stocastico

    My application uses FFmpeg to read video streams. So far, I have ensured thread safety by defining my own global lock and looking for all the methods inside the FFmpeg libraries that are not thread-safe.
    This makes the code a bit messy, so while looking for better ideas I found this answer, but apparently I couldn't make use of the suggestions.
    I tried testing it in my own environment, but I always get a critical heap error. Here's the test code:

    class TestReader
    {
    public:
    TestReader( std::string sVid )
    {
      m_sVid = sVid;
      m_cVidPtr.reset( new VideoReader() );
    }

    ~TestReader()
    {}

    void operator() ()
    {
       readVideoThread();
    }

    private:
    int readVideoThread()
    {
      m_cVidPtr->init( m_sVid.c_str() );
      MPEGFrame::pointer cFramePtr;

      for ( int i=0; i< 500; i++ )
      {
        cFramePtr = m_cVidPtr->getNextFrame();
      }

      return 0;
    }
    boost::shared_ptr<VideoReader> m_cVidPtr;
    std::string m_sVid;
    };

    /*****************************************************************************/
    int lockMgrCallback(void** cMutex, enum AVLockOp op)
    {
    if (nullptr == cMutex)
      return -1;

    switch(op)
    {
    case AV_LOCK_CREATE:
      {
        *cMutex = nullptr;
        boost::mutex* m = new boost::mutex();
        *cMutex = static_cast<void*>(m);
        break;
      }
    case AV_LOCK_OBTAIN:
      {
        boost::mutex* m = static_cast<boost::mutex*>(*cMutex);
        m->lock();
        break;
      }
    case AV_LOCK_RELEASE:
      {
        boost::mutex* m = static_cast<boost::mutex*>(*cMutex);
        m->unlock();
        break;
      }
    case AV_LOCK_DESTROY:
      {
        boost::mutex* m = static_cast<boost::mutex*>(*cMutex);
        delete m;
        break;
      }
    default:
      break;
    }
    return 0;
    }

    int testFFmpegMultiThread( std::string sVideo )
    {
    if ( ::av_lockmgr_register( &lockMgrCallback ) )
    {
      std::cout << "Could not initialize lock manager!" << std::endl;
      return -1;
    }
    TestReader c1(sVideo);
    TestReader c2(sVideo);
    boost::thread t1( c1 );
    boost::thread t2( c2 );

    t1.join();
    t2.join();

    return 0;
    }

    The classes VideoReader and MPEGFrame are just wrappers and have always worked perfectly in single-threaded scenarios, or in multi-threaded scenarios managed using my own global lock.
    Am I missing something obvious? Can anybody point me to some working code? Thanks in advance.
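    For comparison, the same lock-manager callback can be sketched without Boost using std::mutex. This is a minimal standalone sketch, not the asker's code: the AVLockOp enum is reproduced locally as a stand-in so the snippet compiles on its own (in real code it comes from libavcodec/avcodec.h), and note that FFmpeg 4.0 and later manage locking internally, deprecating av_lockmgr_register.

    ```cpp
    #include <mutex>

    // Stand-in for FFmpeg's AVLockOp so the sketch compiles standalone;
    // in real code include libavcodec/avcodec.h instead of defining this.
    enum AVLockOp { AV_LOCK_CREATE, AV_LOCK_OBTAIN, AV_LOCK_RELEASE, AV_LOCK_DESTROY };

    // Lock-manager callback with the int(void**, enum AVLockOp) signature that
    // av_lockmgr_register() expects. FFmpeg treats *cMutex as an opaque pointer,
    // so a heap-allocated std::mutex is stored in it.
    int lockMgrCallback(void** cMutex, enum AVLockOp op)
    {
        if (cMutex == nullptr)
            return -1;

        switch (op)
        {
        case AV_LOCK_CREATE:
            *cMutex = static_cast<void*>(new std::mutex());
            break;
        case AV_LOCK_OBTAIN:
            static_cast<std::mutex*>(*cMutex)->lock();
            break;
        case AV_LOCK_RELEASE:
            static_cast<std::mutex*>(*cMutex)->unlock();
            break;
        case AV_LOCK_DESTROY:
            delete static_cast<std::mutex*>(*cMutex);
            *cMutex = nullptr;   // avoid leaving a dangling pointer behind
            break;
        }
        return 0;
    }
    ```

    The key detail is that the static_cast must name the stored type explicitly in both directions (void* on create, the mutex type on obtain/release/destroy); a cast with missing template arguments, as in the pasted code above, will not compile.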

  • Rails 5 - Video streaming using Carrierwave uploaded video size constraint on the server

    21 March 2020, by Milind

    I have a working Rails 5 app using Reactjs for the frontend and React dropzone uploader to upload video files using carrierwave.

    So far, what is working great is listed below -

    1. Users can upload videos, and the videos are encoded based on the selection made by the user: HLS or MPEG-DASH for online streaming.
    2. Once a video is uploaded to the server, it starts streaming it by:
      • First, copying the video to the /tmp folder.
      • Running a bash script that uses ffmpeg to transcode the uploaded video with predefined commands, producing new video fragments inside the /tmp folder.
      • Once the background job is done, all the videos are uploaded to AWS S3, which is how the default carrierwave flow works.
    3. So, when multiple videos are uploaded, they are all copied to the /tmp folder, then transcoded, and eventually uploaded to S3.

    My questions, where I am looking for some help, are listed below -

    1- The above process is fine for small videos, BUT what if many concurrent users upload 2 GB videos? I know this will kill my server, as my /tmp folder will keep growing and consume all the memory, eventually bringing the server down. How can I allow concurrent video uploads without affecting my server's memory consumption?

    2- Is there a way to upload the videos directly to AWS S3 first, and then use a separate proxy server/child application to encode the videos from S3: download each one to the child server, convert it, and upload it again to the destination? But this is almost the same process, just done in the cloud, where memory consumption can be on-demand but will not be cost-effective.

    3- Is there some easy and cost-effective way to upload large videos, transcode them, and upload them to AWS S3 without affecting my server's memory? Am I missing some piece of technical architecture here?

    4- How do Youtube/Netflix work? I know they do the same thing in a smart way, but can someone help me improve this?

    Thanks in advance.

  • RaspberryPi HLS streaming with nginx and ffmpeg; v4l2 error: ioctl(VIDIOC_STREAMON): Protocol error

    22 January 2021, by Mirco Weber

    I'm trying to set up a baby monitor with a Raspberry Pi (Model 4B, 4 GB RAM) and an ordinary webcam (with an integrated mic).
    I followed this tutorial: https://github.com/DeTeam/webcam-stream/blob/master/Tutorial.md

    In short:

    1. I installed and configured an nginx server with the rtmp module enabled.
    2. I installed ffmpeg with this configuration: --enable-gpl --enable-nonfree --enable-mmal --enable-omx-rpi
    3. I tried to stream ;)

    The configuration of nginx seems to be working (sometimes streaming works, the server starts without any complications, and when the server is up and running, the webpage is displayed).
    The configuration of ffmpeg seems to be fine as well, since streaming sometimes works...

    I tried a couple of different ffmpeg commands; all of them sometimes work and sometimes result in an error.
    The command looks like the following:

    ffmpeg -re
    -f v4l2
    -i /dev/video0
    -f alsa
    -ac 1
    -thread_queue_size 4096
    -i hw:CARD=Camera,DEV=0
    -profile:v high
    -level:v 4.1
    -vcodec h264_omx
    -r 10
    -b:v 512k
    -s 640x360
    -acodec aac
    -strict
    -2
    -ac 2
    -ab 32k
    -ar 44100
    -f flv
    rtmp://localhost/show/stream

    Note: I rearranged the command to make it easier to read. In the terminal, it is all on one line.
    Note: There is no difference when using -f video4linux2 instead of -f v4l2.

    The camera is recognized by the system:

    pi@raspberrypi:~ $ v4l2-ctl --list-devices
    bcm2835-codec-decode (platform:bcm2835-codec):
        /dev/video10
        /dev/video11
        /dev/video12

    bcm2835-isp (platform:bcm2835-isp):
        /dev/video13
        /dev/video14
        /dev/video15
        /dev/video16

    HD Web Camera: HD Web Camera (usb-0000:01:00.0-1.2):
        /dev/video0
        /dev/video1

    When only using -i /dev/video0, audio transmission never worked.
    The output of arecord -L was:

    pi@raspberrypi:~ $ arecord -L
    default
        Playback/recording through the PulseAudio sound server
    null
        Discard all samples (playback) or generate zero samples (capture)
    jack
        JACK Audio Connection Kit
    pulse
        PulseAudio Sound Server
    usbstream:CARD=Headphones
        bcm2835 Headphones
        USB Stream Output
    sysdefault:CARD=Camera
        HD Web Camera, USB Audio
        Default Audio Device
    front:CARD=Camera,DEV=0
        HD Web Camera, USB Audio
        Front speakers
    surround21:CARD=Camera,DEV=0
        HD Web Camera, USB Audio
        2.1 Surround output to Front and Subwoofer speakers
    surround40:CARD=Camera,DEV=0
        HD Web Camera, USB Audio
        4.0 Surround output to Front and Rear speakers
    surround41:CARD=Camera,DEV=0
        HD Web Camera, USB Audio
        4.1 Surround output to Front, Rear and Subwoofer speakers
    surround50:CARD=Camera,DEV=0
        HD Web Camera, USB Audio
        5.0 Surround output to Front, Center and Rear speakers
    surround51:CARD=Camera,DEV=0
        HD Web Camera, USB Audio
        5.1 Surround output to Front, Center, Rear and Subwoofer speakers
    surround71:CARD=Camera,DEV=0
        HD Web Camera, USB Audio
        7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
    iec958:CARD=Camera,DEV=0
        HD Web Camera, USB Audio
        IEC958 (S/PDIF) Digital Audio Output
    dmix:CARD=Camera,DEV=0
        HD Web Camera, USB Audio
        Direct sample mixing device
    dsnoop:CARD=Camera,DEV=0
        HD Web Camera, USB Audio
        Direct sample snooping device
    hw:CARD=Camera,DEV=0
        HD Web Camera, USB Audio
        Direct hardware device without any conversions
    plughw:CARD=Camera,DEV=0
        HD Web Camera, USB Audio
        Hardware device with all software conversions
    usbstream:CARD=Camera
        HD Web Camera
        USB Stream Output

    That's why I added -i hw:CARD=Camera,DEV=0.

    As mentioned above, it worked very well a couple of times with this configuration and these commands.
    But very often, I get the following error message when starting to stream:

    pi@raspberrypi:~ $ ffmpeg -re -f video4linux2 -i /dev/video0 -f alsa -ac 1 -thread_queue_size 4096 -i hw:CARD=Camera,DEV=0 -profile:v high -level:v 4.1 -vcodec h264_omx -r 10 -b:v 512k -s 640x360 -acodec aac -strict -2 -ac 2 -ab 32k -ar 44100 -f flv rtmp://localhost/show/stream
    ffmpeg version N-100673-g553eb07737 Copyright (c) 2000-2021 the FFmpeg developers
      built with gcc 8 (Raspbian 8.3.0-6+rpi1)
      configuration: --enable-gpl --enable-nonfree --enable-mmal --enable-omx-rpi --extra-ldflags=-latomic
      libavutil      56. 63.101 / 56. 63.101
      libavcodec     58.117.101 / 58.117.101
      libavformat    58. 65.101 / 58. 65.101
      libavdevice    58. 11.103 / 58. 11.103
      libavfilter     7. 96.100 /  7. 96.100
      libswscale      5.  8.100 /  5.  8.100
      libswresample   3.  8.100 /  3.  8.100
      libpostproc    55.  8.100 / 55.  8.100
    [video4linux2,v4l2 @ 0x2ea4600] ioctl(VIDIOC_STREAMON): Protocol error
    /dev/video0: Protocol error

    And when I switch to /dev/video1 (since this was also listed by v4l2-ctl --list-devices), I get the following error message:

    pi@raspberrypi:~ $ ffmpeg -re -f v4l2 -i /dev/video1 -f alsa -ac 1 -thread_queue_size 4096 -i hw:CARD=Camera,DEV=0 -profile:v high -level:v 4.1 -vcodec h264_omx -r 10 -b:v 512k -s 640x360 -acodec aac -strict -2 -ac 2 -ab 32k -ar 44100 -f flv rtmp://localhost/show/stream
    ffmpeg version N-100673-g553eb07737 Copyright (c) 2000-2021 the FFmpeg developers
      built with gcc 8 (Raspbian 8.3.0-6+rpi1)
      configuration: --enable-gpl --enable-nonfree --enable-mmal --enable-omx-rpi --extra-ldflags=-latomic
      libavutil      56. 63.101 / 56. 63.101
      libavcodec     58.117.101 / 58.117.101
      libavformat    58. 65.101 / 58. 65.101
      libavdevice    58. 11.103 / 58. 11.103
      libavfilter     7. 96.100 /  7. 96.100
      libswscale      5.  8.100 /  5.  8.100
      libswresample   3.  8.100 /  3.  8.100
      libpostproc    55.  8.100 / 55.  8.100
    [video4linux2,v4l2 @ 0x1aa4610] ioctl(VIDIOC_G_INPUT): Inappropriate ioctl for device
    /dev/video1: Inappropriate ioctl for device

    When using the video0 input, the webcam's LED that indicates access is constantly on; when using video1, it is not.

    After hours and days of googling and tears and whiskey, for the sake of my liver, my marriage and my physical and mental health, I'm very sincerely asking for your help...
    What the f**k is happening and what can I do to make it work???

    Thanks everybody :)

    UPDATE 1:

    1. Using the full path to ffmpeg does not change anything...
    2. /dev/video0 and /dev/video1 have access rights for everybody.
    3. sudo ffmpeg ... does not change anything either.
    4. The problem seems to occur at an "early stage": stripping the command down to ffmpeg -i /dev/video0 results in the same problem.

    UPDATE 2:
    It seems that everything works when I first start another application that needs access to the webcam, and then ffmpeg...
    It might be some driver issue, but when I look at the loaded modules with lsmod, there is absolutely no change before and after I start the application...
    Any help is still appreciated...

    UPDATE 3:
    I checked the output of dmesg.
    When I started the first application, I received this message:
    uvcvideo: Failed to query (GET_DEF) UVC control 12 on unit 2: -32 (exp. 4).
    And when I started ffmpeg, nothing happened, but everything worked...