
Other articles (81)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, as announced here.
    The zip file provided here contains only the MediaSPIP sources, in standalone form.
    As with the previous version, the full set of software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • User profiles

    12 April 2011

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
    Users can edit their profile from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP deemed "usable".
    The zip file provided here contains only the MediaSPIP sources, in standalone form.
    For a working installation, the full set of software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)

On other sites (6194)

  • ffmpeg: VLC won’t open .sdp files generated by ffmpeg

    8 May 2020, by tiredamage42

    TL;DR: VLC and QuickTime won’t open the .sdp video stream file generated by ffmpeg, even though ffplay does.

    I’m a web development and ffmpeg noob, so apologies if I’m using the wrong terminology:

    I’m trying to stream my desktop capture (on OSX) using ffmpeg, sending it out via the RTP protocol. Right now I’m just testing it by streaming it to a port on my localhost (4000) and trying to play it locally.

    The problem is that when I try to open the .sdp file generated by the ffmpeg command, VLC opens it and immediately stops, with no errors or anything, and shows a duration of 0:00. QuickTime won’t even open the file in the first place.

    ffplay does play the stream, and I can see my desktop in the player window (though with a significant loss in quality). Even so, a ton of warnings and errors show up intermittently (outlined below).

    I’m not sure if it’s a problem with the way I start the ffmpeg stream; the command is the result of a ton of iterations of trying to just make it work, so my options might be way off.

    Command to 'serve' the desktop capture:

    ./ffmpeg -f avfoundation -s 1920x1080 -r 60 -i "1" -an \
-vcodec libx264 -preset ultrafast -tune zerolatency -pix_fmt yuv420p \
-sdp_file video.sdp -rtsp_transport tcp -f rtp rtp://127.0.0.1:4000

    SDP file generated by the ffmpeg command:

    SDP:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 127.0.0.1
t=0 0
a=tool:libavformat 58.29.100
m=video 4000 RTP/AVP 96
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1

    ffplay command used to play the stream:

    ./ffplay -probesize 32 -analyzeduration 0 -sync ext \
-fflags nobuffer -fflags discardcorrupt -flags low_delay -framedrop \
-strict experimental -avioflags direct \
-protocol_whitelist file,rtp,udp -i video.sdp

    For a while before ffplay starts, I see a bunch of these errors repeating (in red):

    [h264 @ 0x7ff6b788de00] non-existing PPS 0 referenced
[h264 @ 0x7ff6b788de00] decode_slice_header error
[h264 @ 0x7ff6b788de00] no frame!

    Then the window seems to 'catch up' to the stream and actually shows the desktop capture, and I get these errors and warnings at regular intervals:

    1. In a yellow warning color:

    [sdp @ 0x7fc85b830600] RTP: missed 4 packets
[sdp @ 0x7fc85b830600] max delay reached. need to consume packet

    2. In a red error color:

    [h264 @ 0x7fc85b02aa00] out of range intra chroma pred mode
[h264 @ 0x7fc85b02aa00] error while decoding MB 132 32

    (I have a feeling the above errors have to do with previewing the desktop capture on the very desktop I’m capturing, causing pixels in the display to overflow.)

    Edit:
    So, I solved the issue soon after posting, but I’ll leave this up in case anyone runs into the same problem.

    The solution was to remove the top line in the .sdp file that said SDP:
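
    A quick sketch of that fix as a script (the file name video.sdp comes from the post; the helper function name is mine):

```python
# Strip a leading "SDP:" label line from ffmpeg's -sdp_file output,
# leaving a file that starts with "v=0" as players expect.
def strip_sdp_label(text: str) -> str:
    lines = text.splitlines()
    if lines and lines[0].strip() == "SDP:":
        lines = lines[1:]
    return "\n".join(lines) + "\n"

# Demonstrated on the SDP content shown in the post:
raw = "SDP:\nv=0\no=- 0 0 IN IP4 127.0.0.1\ns=No Name\n"
print(strip_sdp_label(raw))
```

    Applied in place, it would be something like `pathlib.Path("video.sdp").write_text(strip_sdp_label(pathlib.Path("video.sdp").read_text()))`.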

  • FFmpeg - How to get the timestamp of the frame from which a thumbnail was generated?

    11 April 2020, by user2851148

    I am using FFmpeg to extract a screenshot at a given timestamp, but I currently find that timestamp manually by watching the video in VLC and looking for the exact moment the thumbnail was generated. This process is very time-consuming, and I need to do it for 220 videos.

    All this is in order to get a high-resolution image of the thumbnail. I should also mention that the thumbnail file does not have the timestamp in its metadata or in its title.

    Would there be any way for FFmpeg to give me the exact timestamp at which the thumbnail was taken?

    UPDATED

    After a couple of hours of testing FFmpeg commands, I found the solution. It is not completely automatic, but it works. The command is:

    ffmpeg -ss 00:02:30 -i video.mp4 -t 00:00:40 -loop 1 -i thumbnail.jpg \
   -filter_complex "scale=480:270,hue=s=0,blend=difference:shortest=1, \
    blackframe=95:30,fps=fps=23" -f null -

    Options to modify:

    1. "video.mp4": replace with your video file (obviously).
    2. "thumbnail.jpg": replace with your thumbnail file.
    3. "-ss" and "-t" give the time range in which the thumbnail is likely to be.

      • "-ss" is the start time: 00:02:30 (2 min 30 s).
      • "-t" is the duration from that start: 00:00:40 (so the search runs from 2 min 30 s to 3 min 10 s).
      • If you have no idea where the thumbnail probably is, you can delete this part; it will just take longer to find it.

    4. "480:270": replace with the size of the thumbnail.
    5. "fps=23": change 23 to the exact fps of the "video.mp4" file.

    And we get the answer:

    [Parsed_blackframe_1] frame:3849 pblack:100 pts:160535 t:160.535000

    In this example, we can see that the command has given us the exact timestamp at which the thumbnail was generated, "160.535000", in seconds with microseconds.

    Now, to extract the thumbnail in high resolution we could use the timestamp we found, but it is more exact and precise to use the frame number, which in this case is "frame:3849".

    Using this command, we obtain the exact image:

    ffmpeg -i video.mp4 -vf "select=gte(n\, 3849)" -vframes 1 high_resolution.png

    Well, I hope this is helpful for anyone looking for the original image of a thumbnail, or in general anyone who needs to know exactly the moment when it was taken.

    If someone in the future would like to make a script that fully automates this process, I would be grateful :)
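
    Since the answer invites automation, here is a rough sketch of scripting the two steps above in Python. It is untested against real files: the filter graph and log format are copied from the post, the regex is made tolerant of the `@ 0x...` prefix real ffmpeg logs carry, and the function names are my own:

```python
import re
import subprocess

# Matches lines like:
# [Parsed_blackframe_1] frame:3849 pblack:100 pts:160535 t:160.535000
BLACKFRAME_RE = re.compile(
    r"\[Parsed_blackframe_\d+[^\]]*\] frame:(\d+) pblack:(\d+) pts:\d+ t:([\d.]+)"
)

def parse_blackframe(log: str):
    """Extract (frame, pblack, t_seconds) tuples from ffmpeg's blackframe output."""
    return [(int(f), int(p), float(t)) for f, p, t in BLACKFRAME_RE.findall(log)]

def find_thumbnail_frame(video: str, thumb: str, size="480:270", fps=23):
    """Run the blend/blackframe comparison and return the best-matching frame number."""
    cmd = ["ffmpeg", "-i", video, "-loop", "1", "-i", thumb,
           "-filter_complex",
           f"scale={size},hue=s=0,blend=difference:shortest=1,"
           f"blackframe=95:30,fps=fps={fps}",
           "-f", "null", "-"]
    log = subprocess.run(cmd, capture_output=True, text=True).stderr
    matches = parse_blackframe(log)
    if not matches:
        raise RuntimeError("thumbnail not found in video")
    return max(matches, key=lambda m: m[1])[0]  # frame with the highest pblack

def extract_frame(video: str, frame: int, out: str):
    """Re-run ffmpeg to save that exact frame at full resolution."""
    subprocess.run(["ffmpeg", "-i", video, "-vf", f"select=gte(n\\,{frame})",
                    "-vframes", "1", out], check=True)

# Parsing demonstrated on the log line quoted above:
print(parse_blackframe("[Parsed_blackframe_1] frame:3849 pblack:100 pts:160535 t:160.535000"))
```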

  • Can I use ffmpeg to encode a procedurally generated video for livestream?

    19 May 2020, by Mei Zhang

    I have a python script that continuously generates images (frames of a video).

    I would normally save these images to files and then convert them to a video with ffmpeg using the command line. What additional steps are required to adapt this workflow for livestreaming?

    Instead of saving the frames to files I would like to stream them to, say, YouTube, and my script could potentially run indefinitely generating frames of the video.

    I'm looking for general guidelines so I can Google more details myself. I assume video platforms like YouTube have some API where I can send data for livestreaming. I have no idea what type of data format such API would expect, but I assume that just sending every single frame to the API is not how this is done.

    Would I need to encode my frames in memory using some library? Can I use ffmpeg for that?
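
    For what it’s worth, the usual approach is roughly the following sketch: launch ffmpeg as a subprocess, write raw RGB frames to its stdin, and let ffmpeg encode and push FLV over RTMP to the platform’s ingest URL. This is not a tested YouTube integration; the resolution, the URL, and the function names are illustrative assumptions, and the real ingest URL and stream key come from the platform’s live dashboard:

```python
import subprocess

WIDTH, HEIGHT, FPS = 1280, 720, 30

def ffmpeg_stream_cmd(rtmp_url: str):
    """Build an ffmpeg command that reads raw RGB frames from stdin and streams FLV/RTMP."""
    return ["ffmpeg",
            "-f", "rawvideo", "-pix_fmt", "rgb24",
            "-s", f"{WIDTH}x{HEIGHT}", "-r", str(FPS),
            "-i", "-",                   # frames arrive on stdin
            "-c:v", "libx264", "-preset", "veryfast",
            "-pix_fmt", "yuv420p",
            "-g", str(FPS * 2),          # a keyframe every 2 seconds
            "-f", "flv", rtmp_url]

def stream_frames(frames, rtmp_url: str):
    """Pipe an iterable of WIDTH*HEIGHT*3-byte RGB frames into the encoder."""
    proc = subprocess.Popen(ffmpeg_stream_cmd(rtmp_url), stdin=subprocess.PIPE)
    try:
        for frame in frames:             # each frame: raw bytes from your generator
            proc.stdin.write(frame)
    finally:
        proc.stdin.close()
        proc.wait()

print(ffmpeg_stream_cmd("rtmp://a.rtmp.youtube.com/live2/YOUR-STREAM-KEY")[:4])
```

    So the answer to the last question is yes on both counts: ffmpeg itself does the encoding, and the script only has to feed it raw pixels. Note that YouTube’s ingest generally expects an audio track as well; a silent one can be synthesized with ffmpeg’s `anullsrc` lavfi source.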