
Other articles (100)

  • Customize by adding your logo, banner, or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MédiaSPIP, or news about your projects, on your MédiaSPIP using the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the form used to create a news item.
    News item creation form: for a document of the news type, the default fields are: publication date (customize the publication date) (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

On other sites (6235)

  • How do I configure ffmpeg & openh264 so that the video file can be opened in Windows Media Player 12

    10 March 2017, by Sacha Guyer

    I have successfully created h264/mp4 movie files with ffmpeg and the x264 library.

    Now I would like to switch the h264 library from x264 to openH264. I was able to replace the x264 library with openH264, recompile ffmpeg, and produce movie files without changing the sources that create them. The resulting movie opens fine in QuickTime on Mac, but Windows Media Player 12 on Windows cannot play it.

    The documentation about Windows Media Player support for h264 is unclear. The page File types supported by Windows Media Player states in a table that Windows Media Player 12 supports mp4, but the text below it says:

    Windows Media Player does not support the playback of the .mp4 file format.

    From what I have observed, Windows Media Player 12 IS capable of playing h264/mp4 files, but only when created with x264.

    Does anyone know how I need to adjust the configuration of the codec/context so that the movie plays in Windows Media Player? Does Windows Media Player only support certain h264 profiles?

    I noticed the warning:

    [libopenh264 @ 0x...] [OpenH264] this = 0x..., Warning:bEnableFrameSkip = 0,bitrate can’t be controlled for RC_QUALITY_MODE,RC_BITRATE_MODE and RC_TIMESTAMP_MODE without enabling skip frame

    With the configuration:

    av_dict_set(&options, "allow_skip_frames", "1", 0);

    I could get rid of this warning, but the movie still does not play. Are there other options that need to be set so that the movie plays in Windows Media Player? (A sketch of how such options are passed is included after the two probe outputs below.)

    Thank you for your help.

    ffprobe output of the file that does play fine in Windows Media Player:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test_x264.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       title           : retina
       encoder         : Lavf57.56.100
       comment         : Creation Date: 2017-03-10 07:47:39.601
     Duration: 00:00:04.17, start: 0.000000, bitrate: 17497 kb/s
       Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661),
         yuv420p, 852x754, 17495 kb/s, 24 fps, 24 tbr, 24k tbn, 48 tbc (default)
       Metadata:
         handler_name    : VideoHandler

    ffprobe output of the file that does not play in Windows Media Player:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test_openh264.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       title           : retina
       encoder         : Lavf57.56.100
       comment         : Creation Date: 2017-03-10 07:49:27.024
     Duration: 00:00:04.17, start: 0.000000, bitrate: 17781 kb/s
       Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661),
         yuv420p, 852x754, 17779 kb/s, 24 fps, 24 tbr, 24k tbn, 48k tbc (default)
       Metadata:
         handler_name    : VideoHandler
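
    Comparing the two probe outputs: the x264 file that plays is High profile, while the openh264 file that fails is Constrained Baseline, so a profile restriction alone would not obviously explain the difference. For reference, a minimal sketch (against the ffmpeg 3.x C API) of how private options such as allow_skip_frames reach the libopenh264 encoder through an AVDictionary; apart from allow_skip_frames, which appears in the warning above, any other option name would be an assumption that depends on the ffmpeg and openh264 versions in use:

     #include <libavcodec/avcodec.h>
     #include <libavutil/dict.h>

     /* Minimal sketch: open the libopenh264 encoder with private options. */
     static AVCodecContext *open_openh264_encoder(int width, int height, int fps)
     {
         AVCodec *codec = avcodec_find_encoder_by_name("libopenh264");
         if (!codec)
             return NULL;

         AVCodecContext *ctx = avcodec_alloc_context3(codec);
         ctx->width     = width;
         ctx->height    = height;
         ctx->pix_fmt   = AV_PIX_FMT_YUV420P;
         ctx->time_base = (AVRational){ 1, fps };
         ctx->bit_rate  = 17000000;   /* in the same range as the probes above */

         AVDictionary *options = NULL;
         /* the setting from the question, which silences the rate-control warning */
         av_dict_set(&options, "allow_skip_frames", "1", 0);

         if (avcodec_open2(ctx, codec, &options) < 0) {
             avcodec_free_context(&ctx);
             ctx = NULL;
         }
         av_dict_free(&options);   /* entries the encoder did not consume remain here */
         return ctx;
     }
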
  • Strange artifacts in image – where does a frame start?

    26 May 2017, by user3387542

    We are broadcasting a live webcam stream. No audio, video only. The current command works great:

    # Direct replay works well:
    ffmpeg -f video4linux2 -r 10 -s 1280x720 -i /dev/video0 -c:v rawvideo -f rawvideo -pix_fmt yuv420p - | \
    ffplay -f rawvideo -pixel_format yuv420p -video_size 1280x720 -framerate 10 -

    But as soon as we try to send this data over the network (UDP broadcast / gigabit LAN), we get strange artifacts in the image.

    # server command:
    ffmpeg -f video4linux2 -r 10 -s 1280x720 -i /dev/video0 -c:v rawvideo -f rawvideo -pix_fmt yuv420p - | \
    socat - UDP-DATAGRAM:10.0.0.255:12345,broadcast

    # client command:
    socat -u udp-recv:12345,reuseaddr - | \
    ffplay -f rawvideo -pixel_format yuv420p -video_size 1280x720 -framerate 10 -

    Where do these artifacts come from, and how do we get rid of them? Does this have something to do with the client not knowing where a video frame starts?

    We chose to stream raw video to reduce latency. The final goal is to apply OpenCV tools to the video and react live depending on the situation. This works great as long as the camera is plugged directly into this computer, but we need to place the camera elsewhere and serve multiple clients.

    (screenshot showing the artifacts in the received image)

    The camera used is a Microsoft® LifeCam Studio(TM).

    $ v4l2-ctl -d 0 --list-formats
    ioctl: VIDIOC_ENUM_FMT
    Index       : 0
    Type        : Video Capture
    Pixel Format: 'YUYV'
    Name        : YUYV 4:2:2

    Index       : 1
    Type        : Video Capture
    Pixel Format: 'MJPG' (compressed)
    Name        : Motion-JPEG

    Index       : 2
    Type        : Video Capture
    Pixel Format: 'M420'
    Name        : YUV 4:2:0 (M420)

    Update

    To narrow down the issue, I tried to split it up into different tasks:

    1.0. Writing the stream to a file:

    ffmpeg -f video4linux2 -r 10 -s 1280x720 -i /dev/video0 -c:v rawvideo -f rawvideo -pix_fmt yuv420p - > ~/deltemp/rawout

    1.1. Reading the file back: the result looks great, no artifacts:

    cat ~/deltemp/rawout | ffplay -f rawvideo -pixel_format yuv420p -video_size 1280x720 -framerate 10 -

    2.0. Starting the stream and broadcasting it with the server command above.

    2.1. Writing the UDP stream to a file, then watching the file (the artifacts are back again):

    socat -u udp-recv:12345,reuseaddr - > ~/deltemp/rawout
    cat ~/deltemp/rawout | ffplay -f rawvideo -pixel_format yuv420p -video_size 1280x720 -framerate 10 -

    Since test 1 showed no artifacts and test 2 did, it must be related to UDP packet loss.

    Test 3: reducing the resolution to 640x480 did not help either. A possible fix is sketched below.
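
    Since raw yuv420p carries no frame markers, a single lost or reordered datagram shifts every following byte, which matches the artifacts described above. One possible fix, sketched here under the assumption that ffmpeg's udp: protocol and NUT muxer are available on both machines, is to let ffmpeg handle the transport itself and wrap the raw frames in a thin container so the receiver can resynchronise on frame boundaries after a loss:

     # sketch: server side - same capture, but ffmpeg sends the datagrams itself,
     # wrapped in NUT so each frame is delimited (broadcast address as above)
     ffmpeg -f video4linux2 -r 10 -s 1280x720 -i /dev/video0 \
            -c:v rawvideo -pix_fmt yuv420p -f nut 'udp://10.0.0.255:12345?broadcast=1'

     # sketch: client side - ffplay reads the framed stream directly from the port
     ffplay -f nut 'udp://0.0.0.0:12345'

    This does not prevent packet loss; it only confines the damage of a lost datagram to the frame it belonged to.
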

  • Use deck.js as a remote presentation tool

    8 January 2014, by silvia

    deck.js is one of the new HTML5-based presentation tools. It’s simple to use, in particular for your basic, every-day presentation needs. You can also create more complex slides with animations etc. if you know your HTML and CSS.

    Yesterday at linux.conf.au (LCA), I gave a presentation using deck.js. But I didn’t give it from the lectern in the room in Perth where LCA is being held – instead I gave it from the comfort of my home office at the other end of the country.

    I used my laptop with in-built webcam and my Chrome browser to give this presentation. Beforehand, I had uploaded the presentation to a Web server and shared the link with the organiser of my speaker track, who was on site in Perth and had set up his laptop in the same fashion as myself. His screen was projecting the Chrome tab in which my slides were loaded and he had hooked up the audio output of his laptop to the room speaker system. His camera was pointed at the audience so I could see their reaction.

    I loaded the slide master URL:
    http://html5videoguide.net/presentations/lca_2014_webrtc/?master
    and the room loaded the URL without the query string:
    http://html5videoguide.net/presentations/lca_2014_webrtc/.

    Then I gave my talk exactly as I would if I was in the same room. Yes, it felt exactly as though I was there, including nervousness and audience feedback.

    How did we do that? WebRTC (Web Real-time Communication) to the rescue, of course!

    We used one of the modules of the rtc.io project, called rtc-glue, to add the video conferencing functionality and the slide navigation to deck.js. It was actually really, really simple!

    Here are the few things we added to deck.js to make it work:

    • Code added to index.html to make the video connection work:
      <meta name="rtc-signalhost" content="http://rtc.io/switchboard/">
      <meta name="rtc-room" content="lca2014">
      ...
      <video id="localV" rtc-capture="camera" muted></video>
      <video id="peerV" rtc-peer rtc-stream="localV"></video>
      ...
      <script src="glue.js"></script>
      <script>
      glue.config.iceServers = [{ url: 'stun:stun.l.google.com:19302' }];
      </script>

      The iceServers config is required to punch through firewalls – you may also need a TURN server. Note that you need a signalling server – in our case we used http://rtc.io/switchboard/, which runs the code from rtc-switchboard.
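
      If a TURN server does turn out to be needed, the config above presumably extends in the usual WebRTC fashion; a hedged sketch, where the turn: URL and the credentials are placeholders rather than a real server, and where it is an assumption that rtc-glue passes these entries through unchanged:

      <script>
      // sketch: the STUN entry from above, plus a hypothetical TURN entry
      glue.config.iceServers = [
        { url: 'stun:stun.l.google.com:19302' },
        { url: 'turn:turn.example.com:3478', username: 'user', credential: 'secret' }
      ];
      </script>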

    • Added the glue.js library to deck.js:

      Downloaded from https://raw.github.com/rtc-io/rtc-glue/master/dist/glue.js into the source directory of deck.js.

    • Code added to index.html to synchronize slide navigation:
      glue.events.once('connected', function(signaller) {
       if (location.search.slice(1) !== '') {
         $(document).bind('deck.change', function(evt, from, to) {
           signaller.send('/slide', {
             idx: to,
             sender: signaller.id
           });
         });
       }
       signaller.on('slide', function(data) {
         console.log('received notification to change to slide: ', data.idx);
         $.deck('go', data.idx);
       });
      });

      This simply registers a callback on the slide master end to send a slide position message to the room end, and a callback on the room end that initiates the slide navigation.

    And that’s it!

    You can find my slide deck on GitHub.

    Feel free to write your own slides in this manner – I would love to have more users of this approach. It should also be fairly simple to extend this to share pointer positions (a rough sketch follows below), so you can actually use the mouse pointer to point to things on your slides remotely. Would love to hear your experiences!
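
    For illustration, a sketch of what that pointer sharing might look like, mirroring the slide-navigation code above; the '/pointer' message name and the #pointer element are invented for this example and are not part of rtc-glue:

     glue.events.once('connected', function(signaller) {
       // master end (hypothetical): broadcast normalised mouse coordinates
       if (location.search.slice(1) !== '') {
         $(document).on('mousemove', function(evt) {
           signaller.send('/pointer', {
             x: evt.pageX / $(document).width(),
             y: evt.pageY / $(document).height()
           });
         });
       }
       // room end (hypothetical): move an absolutely positioned #pointer element
       signaller.on('pointer', function(data) {
         $('#pointer').css({
           left: (100 * data.x) + '%',
           top:  (100 * data.y) + '%'
         });
       });
     });
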

    Note that the slides are actually a talk about the rtc.io project, so if you want to find out more about these modules and what other things you can do, read the slide deck or watch the talk when it has been published by LCA.

    Many thanks to Damon Oehlman for his help in getting this working.

    BTW: somebody should really fix that print style sheet for deck.js – I’m only ever getting the one slide that is currently showing.