Other articles (35)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance can be fully customised to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • Encoding and conversion into formats playable on the Internet

    10 April 2011

    MediaSPIP converts and re-encodes uploaded documents to make them playable on the Internet and automatically usable without any intervention from the content creator.
    Videos are automatically encoded into the formats supported by HTML5: MP4, Ogv and WebM. The "MP4" version is also used by the fallback Flash player required for older browsers.
    Audio documents are likewise re-encoded into the two formats usable with HTML5: MP3 and Ogg. The "MP3" version (...)

  • MediaSPIP themes

    4 June 2013

    Three themes are provided with MediaSPIP out of the box. A MediaSPIP user can add further themes as needed.
    MediaSPIP themes
    Three themes were initially developed for MediaSPIP: * SPIPeo: the default MediaSPIP theme. It highlights the site's presentation and the most recent media documents (the sort order can be changed: title, popularity, date). * Arscenic: the theme used on the project's official site, featuring a red banner at the top of the page. The structure (...)

On other sites (4027)

  • How to extract elementary video from mp4 using ffmpeg programmatically?

    24 October 2019, by epipav

    I started learning ffmpeg a few weeks ago. At the moment I am able to transcode any video to mp4 using the h264/AVC codec. The main scheme is something like this:

    -open input
    -demux
    -decode
    -encode
    -mux

    The actual code is below:

    #include <iostream>

    #include <string>

    extern "C" {

     #ifndef __STDC_CONSTANT_MACROS
     #undef main /* Prevents SDL from overriding main() */
     #define __STDC_CONSTANT_MACROS
     #endif

     #pragma comment(lib, "avcodec.lib")
     #pragma comment(lib, "avformat.lib")
     #pragma comment(lib, "swscale.lib")
     #pragma comment(lib, "avutil.lib")

     #include <libavcodec/avcodec.h>
     #include <libavformat/avformat.h>
     #include <libswscale/swscale.h>
     #include <libavutil/opt.h>
     #include <libavutil/imgutils.h>
     #include <libavutil/mathematics.h>
    }

    using namespace std;

    void open_video(AVFormatContext * oc, AVCodec * codec, AVStream * st) {
     int ret;
     AVCodecContext * c;
     c = st->codec;

     /* open codec */

     cout << "probably starts here" << endl;
     ret = avcodec_open2(c, codec, NULL);
     cout << "and ends here" << endl;

     if (ret < 0) {
       cout << ("Could not open video codec") << endl;
     }

    }

    /*This function will add a new stream to our file.
    @param
    oc -> Format context that the new stream will be added to.
    codec -> codec of the stream, this will be passed.
    codec_id ->
    chWidth->
    chHeight->
    */

    AVStream * addStream(AVFormatContext * oc, AVCodec ** codec, enum AVCodecID codec_id, int chWidth, int chHeight, int fps) {
     AVCodecContext * c;
     AVStream * st;

     //find encoder of the stream, it passes this information to @codec, later on
     //it will be used in encoding the video @ avcodec_encode_video2 in loop.
     * codec = avcodec_find_encoder(AV_CODEC_ID_H264);

     if ((*codec) == NULL)
       cout << "ERROR CAN NOT FIND ENCODER! ERROR! ERROR! AVCODEC_FIND_ENCODER FAILED !!!1 "
     "" << endl;

     if (!(*codec))
       printf("Could not find encoder for ' %s ' ", avcodec_get_name(codec_id));

     //create a new stream with the found codec inside oc(AVFormatContext).
     st = avformat_new_stream(oc, *codec);

     if (!st)
       cout << " Cannot allocate stream " << endl;

     //Setting the stream id.
     //Since, there can be other streams in this AVFormatContext,
     //we should find the first non used index. And this is oc->nb_streams(number of streams) - 1
     st->id = oc->nb_streams - 1;

     c = st->codec;

     //setting the stream's codec's properties.
     c->codec_id = codec_id;
     c->bit_rate = 4000000;
     c->width = chWidth;
     c->height = chHeight;
     c->time_base.den = fps;
     //fps;
     c->time_base.num = 1;
     c->gop_size = 12;
     c->pix_fmt = AV_PIX_FMT_YUV420P;

     if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
       /* just for testing, we also add B frames */
       c->max_b_frames = 2;
     }

     if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
       /* Needed to avoid using macroblocks in which some coeffs overflow.
        * This does not happen with normal video, it just happens here as
        * the motion of the chroma plane does not match the luma plane. */
       c->mb_decision = 2;
     }

     /* Some formats want stream headers to be separate. */
     if (oc->oformat->flags & AVFMT_GLOBALHEADER)
       c->flags |= CODEC_FLAG_GLOBAL_HEADER;

     //returning our lovely new brand stream.
     return st;

    }

    int changeResolution(string source, int format) {
     //Data members
     struct SwsContext * sws_ctx = NULL;
     AVFrame * pFrame = NULL;
     AVFrame * outFrame = NULL;
     AVPacket packet;
     uint8_t * buffer = NULL;
     uint8_t endcode[] = {
       0,
       0,
       1,
       0xb7
     };
     AVDictionary * optionsDict = NULL;
     AVFormatContext * pFormatCtx = NULL;
     AVFormatContext * outputContext = NULL;
     AVCodecContext * pCodecCtx;
     AVCodec * pCodec;
     AVCodec * codec;
     AVCodec * videoCodec;
     AVOutputFormat * fmt;
     AVStream * video_stream;
     int changeWidth;
     int changeHeight;
     int frameFinished;
     int numBytes;
     int fps;

     int lock = 0;

     //Register all codecs & other important stuff. Vital!..
     av_register_all();

     //Selects the desired resolution.
     if (format == 0) {
       changeWidth = 320;
       changeHeight = 180;
     } else if (format == 1) {
       changeWidth = 640;
       changeHeight = 480;

     } else if (format == 2) {
       changeWidth = 960;
       changeHeight = 540;

     } else if (format == 3) {
       changeWidth = 1024;
       changeHeight = 768;

     } else {
       changeWidth = 1280;
       changeHeight = 720;
     }

     // Open video file
     int aaa;
     aaa = avformat_open_input(&pFormatCtx, source.c_str(), NULL, NULL);
     if (aaa != 0) {
       cout << " cannot open input file \n" << endl;
       cout << "aaa = " << aaa << endl;
       return -1; // Couldn't open file
     }

     // Retrieve stream information
     if (av_find_stream_info(pFormatCtx) < 0)
       return -1; // Couldn't find stream information

     //just checking duration casually for no reason
     /*int64_t duration = pFormatCtx->duration;

     cout << "the duration is " << duration << " " << endl;*/

     //this writes the info about the file
     av_dump_format(pFormatCtx, 0, 0, 0);
     cin >> lock;

     // Find the first video stream
     int videoStream = -1;
     int i;

     for (i = 0; i < 3; i++)
       if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
         videoStream = i;
         cout << " lel \n ";
         break;

       }

     if (videoStream == -1)
       return -1; // Didn't find a video stream

     // Get a pointer to the codec context for the video stream
     pCodecCtx = pFormatCtx->streams[videoStream]->codec;
     fps = pCodecCtx->time_base.den;

     //Find the decoder of the input file, for the video stream
     pCodec = avcodec_find_decoder(pCodecCtx->codec_id);

     if (pCodec == NULL) {
       fprintf(stderr, "Unsupported codec!\n");
       return -1; // Codec not found
     }

     // Open codec, you must open it first, in order to use it.
     if (avcodec_open2(pCodecCtx, pCodec, &optionsDict) < 0)
       return -1; // Could not open codec

     // Allocate video frame ( pFrame for taking the packets into, outFrame for processed frames to packet.)
     pFrame = avcodec_alloc_frame();
     outFrame = avcodec_alloc_frame();

     i = 0;

     int ret;
     int video_frame_count = 0;

     //Initiate the outFrame set the buffer &amp; fill the properties
     numBytes = avpicture_get_size(PIX_FMT_YUV420P, changeWidth, changeHeight);
     buffer = (uint8_t * ) av_malloc(numBytes * sizeof(uint8_t));
     avpicture_fill((AVPicture * ) outFrame, buffer, PIX_FMT_YUV420P, changeWidth, changeHeight);

     int pp;
     int frameNo = 0;

     //allocate the outputContext, it will be the AVFormatContext of our output file.
     //It will try to find the format by giving the file name.
     avformat_alloc_output_context2(&outputContext, NULL, NULL, "myoutput.mp4");

     //Cant find the file extension, using MPEG as default.
     if (!outputContext) {
       printf("Could not deduce output format from file extension: using MPEG.\n");
       avformat_alloc_output_context2(&outputContext, NULL, "mpeg", "myoutput.mp4");
     }

     //Still cant set file extension, exit.
     if (!outputContext) {
       return 1;
     }

     //set AVOutputFormat fmt to our outputContext's format.
     fmt = outputContext->oformat;
     video_stream = NULL;

     //If fmt has a valid codec_id, create a new video stream.
     //This function will set the streams codec & codecs desired properties.
     //Stream's codec will be passed to videoCodec for later usage.
     if (fmt->video_codec != AV_CODEC_ID_NONE)
       video_stream = addStream(outputContext, &videoCodec, fmt->video_codec, changeWidth, changeHeight, fps);

     //open the video using videoCodec. by avcodec_open2() i.e open the codec.
     if (video_stream)
       open_video(outputContext, videoCodec, video_stream);

     //Creating our new output file.
     if (!(fmt->flags & AVFMT_NOFILE)) {
       ret = avio_open(&outputContext->pb, "toBeStreamed.264", AVIO_FLAG_WRITE);
       if (ret < 0) {
         cout << " cant open file " << endl;
         return 1;
       }
     }

     //Writing the header of format context.
     //ret = avformat_write_header(outputContext, NULL);

     if (ret >= 0) {
       cout &lt;&lt; "writing header success !!!" &lt;&lt; endl;
     }

     //Start reading packages from input file.
     while (av_read_frame(pFormatCtx, &packet) >= 0) {

       // Is this a packet from the video stream?
       if (packet.stream_index == videoStream) {

         // Decode video package into frames
         ret = avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

         if (ret < 0) {
           printf(" Error decoding frame !!..");
           return ret;
         }

         if (frameFinished) {
           printf("video_frame n:%d    coded_n:%d\n", video_frame_count++, pFrame->coded_picture_number);
         }

         av_free_packet(&packet);

         //do stuff with frame, in this case we are changing the resolution.
         static struct SwsContext * img_convert_ctx_in = NULL;
         if (img_convert_ctx_in == NULL) {
           img_convert_ctx_in = sws_getContext(pCodecCtx->width,
             pCodecCtx->height,
             pCodecCtx->pix_fmt,
             changeWidth,
             changeHeight,
             PIX_FMT_YUV420P,
             SWS_BICUBIC,
             NULL,
             NULL,
             NULL);

         }
         //scale the frames
         sws_scale(img_convert_ctx_in,
           pFrame->data,
           pFrame->linesize,
           0,
           pCodecCtx->height,
           outFrame->data,
           outFrame->linesize);

         //initiate the pts value
         if (frameNo == 0)
           outFrame->pts = 0;

         //calculate the pts value & set it.
         outFrame->pts += av_rescale_q(1, video_stream->codec->time_base, video_stream->time_base);

         //encode frames into packages. Package passed in @packet.
         if (avcodec_encode_video2(outputContext->streams[0]->codec, &packet, outFrame, &pp) < 0)
           cout << "Encoding frames into packages, failed. " << endl;

         frameNo++;

         //write the packages into file, resulting in creating a video file.
         av_interleaved_write_frame(outputContext, &packet);

       }

     }

     av_free_packet(&packet);
     //av_write_trailer(outputContext);

     avio_close(outputContext->pb);

     // Free the RGB image
     av_free(buffer);
     av_free(outFrame);

     // Free the YUV frame
     av_free(pFrame);

     // Close the codec
     avcodec_close(video_stream->codec);
     avcodec_close(pCodecCtx);

     // Close the video file
     avformat_close_input(&pFormatCtx);

     return 0;
    }

    At the end of the process I get my desired file with the desired codec, container and resolution.

    My problem is that in one part of our project I need to write the elementary video stream to a file, such as example.264. However, I cannot add a stream without creating an AVFormatContext, and I cannot create an AVFormatContext because .264 files do not have a container; they are just raw video, as far as I know.

    I have tried the approach in decoding_encoding.c, which uses fwrite. However, that example is for the MPEG-2 codec, and when I tried to adapt it to the H264/AVC codec I got a "floating point division by zero" error from mediainfo, and some of the video's properties were not shown (such as FPS, playtime and quality factor). I think it has to do with the "endcode" that the example appends at the end of the stream, which is specific to MPEG-2 (uint8_t endcode[] = { 0, 0, 1, 0xb7 };).

    Anyway, I would love a starting point for this task. I have managed to come this far using internet resources (which are quite few and outdated for ffmpeg), but now I am a little stuck.
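    One possible starting point, sketched below purely as an illustration: FFmpeg also provides a raw "h264" muxer, so an elementary .264 file can still be written through an AVFormatContext by forcing that muxer explicitly instead of letting avformat_alloc_output_context2 deduce a container from the file name. The fragment is meant to sit inside changeResolution() next to the existing output setup and reuses the addStream() and open_video() helpers from the code above; the names rawContext, rawCodec, rawStream and the file name "example.264" are made up for this sketch. (If the goal is only to extract the existing H.264 track from the MP4 without re-encoding, the usual route is the h264_mp4toannexb bitstream filter applied to the demuxed packets instead.)

    // Hedged sketch, not part of the question's code: create a second output
    // context that forces the raw "h264" muxer, so the written file is an
    // elementary stream (example.264) rather than an MP4.
    AVFormatContext * rawContext = NULL;
    avformat_alloc_output_context2(&rawContext, NULL, "h264", "example.264");
    if (!rawContext)
      return 1; // this FFmpeg build has no raw h264 muxer

    // Reuse the question's own helpers: the raw format reports AV_CODEC_ID_H264
    // as its video codec, so addStream() and open_video() work unchanged.
    AVCodec * rawCodec = NULL;
    AVStream * rawStream = addStream(rawContext, &rawCodec, rawContext->oformat->video_codec,
      changeWidth, changeHeight, fps);
    open_video(rawContext, rawCodec, rawStream);

    // A raw stream is still written through an AVIOContext like any other file.
    if (!(rawContext->oformat->flags & AVFMT_NOFILE))
      avio_open(&rawContext->pb, "example.264", AVIO_FLAG_WRITE);

    // The raw muxer writes no container header, but the call is still required
    // to initialise muxing.
    avformat_write_header(rawContext, NULL);

    // Inside the existing read/scale/encode loop, send the encoded packets to
    // rawContext instead of outputContext:
    //   av_interleaved_write_frame(rawContext, &packet);
    // and once the loop is done:
    //   av_write_trailer(rawContext);
    //   avio_close(rawContext->pb);

    Because the raw format does not set AVFMT_GLOBALHEADER, the existing check in addStream() leaves CODEC_FLAG_GLOBAL_HEADER unset for this context, so the encoder should emit SPS/PPS in-band, which is what a standalone .264 stream needs in order to remain playable and analysable by tools such as mediainfo.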

  • Stream audio to multiple web browsers

    7 November 2019, by Robert Bain

    I am trying to play some audio on my Linux server and stream it to multiple internet browsers. I specify a loopback device as the input to ffmpeg, and its output is then streamed via RTP to a WebRTC server (Janus). It works, but the sound that comes out is horrible.

    Here is the command I am using to stream from ffmpeg to Janus over RTP:

    nice --20 sudo ffmpeg -re -f alsa -i hw:Loopback,1,0 -c:a libopus -ac
    1 -b:a 64K -ar 8000 -vn -rtbufsize 250M -f rtp rtp://127.0.0.1:17666

    The WebRTC server (Janus) requires that the audio codec be Opus. If I try to use 2-channel audio or increase the sampling rate, the stream slows down or sounds worse. The "nice" command gives the process higher priority.

  • Video Conferencing in HTML5: WebRTC via Socket.io

    http://mirror.linux.org.au/linux.conf.au/2013/mp4/Code_up_your_own_video_conference_in_HTML5.mp4
    1 January 2014, by silvia

    Six months ago I experimented with Web sockets for WebRTC and the early implementations of PeerConnection in Chrome. Last week I gave a presentation about WebRTC at Linux.conf.au, so it was time to update that codebase.

    I decided to use socket.io for the signalling, following Luc's idea, which made the server code even smaller and reduced it to a mere reflector:

     var app = require('http').createServer().listen(1337);
     var io = require('socket.io').listen(app);

     io.sockets.on('connection', function(socket) {
       socket.on('message', function(message) {
         socket.broadcast.emit('message', message);
       });
     });

    Then I turned to the client code. I was surprised to see the massive changes that PeerConnection has gone through. Check out my slide deck to see the different components that are now necessary to create a PeerConnection.

    I was particularly surprised to see the SDP object now fully exposed to JavaScript and thus the ability to manipulate it directly rather than through some API. This allows Web developers to manipulate the type of session that they are asking the browsers to set up. I can imagine, for example, that if they have support for a video codec in JavaScript that the browser does not provide built-in, they can add that codec to the set of choices to be offered to the peer. While it is flexible, I am concerned that this might create more problems than it solves. I guess we'll have to wait and see.

    I was also surprised by the need to use ICE, even though in my experiment I got away with an empty list of ICE servers – the ICE messages just got exchanged through the socket.io server. I am not sure whether this is a bug, but I was very happy about it because it meant I could run the whole demo on a completely separate network from the Internet.

    The most exciting news since my talk is that Mozilla and Google have managed to get a PeerConnection working between Firefox and Chrome – this is the first cross-browser video conference call without a plugin! The code differences are minor.

    Since the specifications of the WebRTC API and of the MediaStream API are now official Working Drafts at the W3C, I expect other browsers will follow. I am also looking forward to the possibilities of:

    The best places to learn about the latest possibilities of WebRTC are webrtc.org and the W3C WebRTC WG. code.google.com has open source code that continues to be updated to the latest released and interoperable features in browsers.

    The video of my talk is in the process of being published. There is an MP4 version on the Linux Australia mirror server, but I expect it will be published properly soon. I will update the blog post when that happens.