
Media (91)

Other articles (41)

  • Videos

    21 April 2011

    As with "audio" documents, MediaSPIP displays videos wherever possible using the HTML5 <video> tag.
    One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name one) and that each browser natively supports only certain video formats.
    Its main advantage is that video playback is handled natively by the browser, which removes the need for Flash and (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or later. If needed, contact the administrator of your MediaSPIP to find out.

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats:
    images: png, gif, jpg, bmp and more
    audio: MP3, Ogg, Wav and more
    video: AVI, MP4, OGV, mpg, mov, wmv and more
    text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (3802)

  • saving frames from webcam stream

    30 December 2018, by Alex Witsil

    I would like a routine that systematically extracts and saves the frames from webcam footage to a local directory on my personal computer.

    Specifically, I am trying to save frames from the webcam at Old Faithful geyser in Yellowstone Natl. Park. (https://www.nps.gov/yell/customcf/geyser_webcam_updated.htm)

    Ideally, I would like to:

    1. control the rate at which frames are downloaded (e.g. take 1 frame every minute)
    2. use FFMPEG or R
    3. save the actual frame and not a snapshot of the webpage

    Despite point 3 above, I’ve tried simply taking a screenshot in R using the webshot package:

    library(webshot)

    # Capture the page twice; delay = 60 waits 60 seconds after the page
    # loads before each screenshot is taken.
    i <- 1
    while (i <= 2) {
      webshot('https://www.nps.gov/yell/customcf/geyser_webcam_updated.htm',
              file = paste(i, '.png', sep = ""), delay = 60)
      i <- i + 1
    }

    However, the above code gives me these two images:

    [two identical screenshots of the webcam page]
    Despite the 60-second delay in the webshot() call, the two images are identical, not to mention the obvious play button in the middle. This method also feels like a bit of a hack, since it saves a snapshot of the web page rather than the frames themselves.

    I am certainly open to using more appropriate command-line tools (I am just unsure what they are). Any help is greatly appreciated!
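    For what it's worth, ffmpeg alone can cover points 1-3 once the address of the underlying video stream is known. That stream URL is not given in the question, so the address below is only a placeholder; the fps filter keeps one frame per minute and writes each one as a numbered JPEG:

    ffmpeg -i https://example.com/oldfaithful/playlist.m3u8 -vf fps=1/60 frame_%04d.jpg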

  • Play UDP live video stream in UWP

    19 April 2018, by Nicolas Séveno

    I need to display a live video stream in a UWP application.

    The video stream comes from a GoPro. It is transported in UDP packets; I think it is an MPEG-2 TS stream.

    I can play it successfully with ffplay using the following command line:

    ffplay -fflags nobuffer -f:v mpegts udp://:8554

    I would like to play it with MediaPlayerElement without using a third party library.

    According to the following page:
    https://docs.microsoft.com/en-us/windows/uwp/audio-video-camera/supported-codecs
    UWP should be able to play it. (I installed the "MPEG 2 video extension" in the Windows Store).

    I tried using DatagramSocket and its MessageReceived event to receive the UDP packets; that works without problems:

    // Listen for UDP datagrams on port 8554.
    _datagramSocket = new DatagramSocket();
    _datagramSocket.MessageReceived += (s, args) =>
    {
       Debug.WriteLine("message received");
    };
    await _datagramSocket.BindServiceNameAsync("8554");

    Then I create an MseStreamSource:

    _mseStreamSource = new MseStreamSource();
    _mseStreamSource.Opened += (_, __) =>
    {
       // Declare an MPEG-2 TS source buffer once the stream source is opened.
       _mseSourceBuffer = _mseStreamSource.AddSourceBuffer("video/mp2t");
    };
    this.MediaSource = MediaSource.CreateFromMseStreamSource(_mseStreamSource);

    And in the DatagramSocket.MessageReceived event handler I forward the messages to the MseStreamSource:

    using (IInputStream stream = args.GetDataStream())
    {
       // Append the raw datagram payload to the MSE source buffer.
       _mseSourceBuffer.AppendStream(stream);
    }

    The AppendStream method fails with HRESULT 0x8070000B for some packets.
    If I catch the error, the MediaPlayerElement displays a message along the lines of "video not supported or incorrect file name" (I am not sure of the exact wording; my Windows is in French).
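    One possible cause, and this is only a guess, is that the source buffer expects well-formed MPEG-2 TS data, while individual datagrams may split the 188-byte TS packets. Below is a minimal sketch of buffering the datagrams and appending only whole packets; the handler name and the _pending field are hypothetical, not taken from the code above:

    // Requires: using System.Collections.Generic;
    //           using System.Runtime.InteropServices.WindowsRuntime;
    private readonly List<byte> _pending = new List<byte>();

    private void OnMessageReceived(DatagramSocket sender,
                                   DatagramSocketMessageReceivedEventArgs args)
    {
       // Copy the datagram payload into a byte array.
       DataReader reader = args.GetDataReader();
       byte[] datagram = new byte[reader.UnconsumedBufferLength];
       reader.ReadBytes(datagram);
       _pending.AddRange(datagram);

       // Forward only complete 188-byte TS packets, and only while the
       // source buffer is not busy with a previous append.
       int whole = (_pending.Count / 188) * 188;
       if (whole == 0 || _mseSourceBuffer == null || _mseSourceBuffer.IsUpdating)
          return;

       byte[] chunk = _pending.GetRange(0, whole).ToArray();
       _pending.RemoveRange(0, whole);

       // AsBuffer() converts the byte[] to the IBuffer that AppendBuffer expects.
       _mseSourceBuffer.AppendBuffer(chunk.AsBuffer());
    }

    (with the subscription changed to _datagramSocket.MessageReceived += OnMessageReceived;)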

    Is MseStreamSource the correct way to display the stream? Is there a better solution?

  • How to parallelize this for loop for rapidly converting YUV422 to RGB888?

    16 April 2015, by vineet

    I am using the v4l2 API to grab images from a Microsoft LifeCam and then transferring these images over TCP to a remote computer. I am also encoding the video frames as MPEG2VIDEO using the ffmpeg API. The recorded videos play back too fast, probably because too few frames are captured and the FPS settings are incorrect.

    The following code converts a YUV422 source to an RGB888 image. This fragment is the bottleneck in my code: it takes nearly 100-150 ms to execute, which means I can’t log more than 6-10 FPS at 1280 x 720 resolution. CPU usage is at 100% as well.

    for (int line = 0; line < image_height; line++) {
        for (int column = 0; column < image_width; column++) {
            *dst++ = CLAMP((double)*py + 1.402*((double)*pv - 128.0));                               // R - first byte
            *dst++ = CLAMP((double)*py - 0.344*((double)*pu - 128.0) - 0.714*((double)*pv - 128.0)); // G - next byte
            *dst++ = CLAMP((double)*py + 1.772*((double)*pu - 128.0));                               // B - next byte

            // copy the Y sample into the frame handed to the encoder
            vid_frame->data[0][line * frame->linesize[0] + column] = *py;

            // increment py, pu, pv here
        }
    }

    ’dst’ is then compressed as JPEG and sent over TCP, and ’vid_frame’ is saved to disk.

    How can I make this code fragment faster so that I can get at least 30 FPS at 1280x720 resolution, compared to the present 5-6 FPS?

    I’ve tried parallelizing the for loop across three threads using pthreads, processing one third of the rows in each thread:

    for (int line = 0; line < image_height/3; line++) // thread 1
    for (int line = image_height/3; line < 2*image_height/3; line++) // thread 2
    for (int line = 2*image_height/3; line < image_height; line++) // thread 3

    This gave me only a minor improvement of 20-30 milliseconds per frame.
    What would be the best way to parallelize such loops? Could I use GPU computing or something like OpenMP, say spawning some 100 threads to do the calculations?
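    As a rough sketch of the OpenMP route (not tested on the setup above; the function name, the packed YUYV layout and the fixed-point factors are assumptions, not taken from the question), each row is independent, so a single pragma can spread the rows across the cores, and integer arithmetic replaces the per-pixel double math:

    #include <omp.h>
    #include <stdint.h>

    #define CLAMP(x) ((x) < 0 ? 0 : ((x) > 255 ? 255 : (x)))

    /* Convert packed YUYV (YUV422) to RGB888, one output row per input row. */
    void yuv422_to_rgb888(const uint8_t *src, uint8_t *dst, int width, int height)
    {
        #pragma omp parallel for
        for (int line = 0; line < height; line++) {
            const uint8_t *row = src + line * width * 2;   /* 2 bytes per input pixel  */
            uint8_t *out = dst + line * width * 3;         /* 3 bytes per output pixel */
            for (int column = 0; column < width; column += 2) {
                int y0 = row[0], u = row[1], y1 = row[2], v = row[3];
                int d = u - 128, e = v - 128;
                /* integer approximations of the 1.402, 0.344, 0.714 and 1.772 factors */
                out[0] = CLAMP(y0 + ((359 * e) >> 8));              /* R */
                out[1] = CLAMP(y0 - ((88 * d + 183 * e) >> 8));     /* G */
                out[2] = CLAMP(y0 + ((454 * d) >> 8));              /* B */
                out[3] = CLAMP(y1 + ((359 * e) >> 8));
                out[4] = CLAMP(y1 - ((88 * d + 183 * e) >> 8));
                out[5] = CLAMP(y1 + ((454 * d) >> 8));
                row += 4;
                out += 6;
            }
        }
    }

    Compiling with -fopenmp (gcc) spreads the outer loop over all four cores; the same row-parallel structure also maps directly onto a GPU kernel if that route is preferred.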

    I also noticed higher frame rates with my laptop webcam as compared to the Microsoft USB Lifecam.

    Here are some other details:

    • Ubuntu 12.04, ffmpeg 2.6
    • AMD A8 quad-core processor with 6 GB RAM
    • Encoder settings (see the sketch below):
      • codec: AV_CODEC_ID_MPEG2VIDEO
      • bitrate: 4000000
      • time_base: (AVRational){1, 20}
      • pix_fmt: AV_PIX_FMT_YUV420P
      • gop: 10
      • max_b_frames: 1
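    For reference, a minimal sketch of how the settings above map onto an AVCodecContext with the ffmpeg C API (the function name is illustrative, not from the question); note that a time_base of 1/20 tells the encoder to expect 20 frames per second, so feeding it only 5-6 captured frames per second with consecutive timestamps is one way the playback ends up too fast:

    #include <libavcodec/avcodec.h>

    AVCodecContext *open_mpeg2_encoder(void)
    {
        const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MPEG2VIDEO);
        AVCodecContext *ctx  = avcodec_alloc_context3(codec);

        ctx->bit_rate     = 4000000;
        ctx->width        = 1280;
        ctx->height       = 720;
        ctx->time_base    = (AVRational){1, 20};   /* encoder assumes 20 fps */
        ctx->gop_size     = 10;
        ctx->max_b_frames = 1;
        ctx->pix_fmt      = AV_PIX_FMT_YUV420P;

        if (avcodec_open2(ctx, codec, NULL) < 0)
            return NULL;
        return ctx;
    }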