Other articles (33)

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether of type: image (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (open office, microsoft office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • Adding user-specific information and other author-related behaviour changes

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also allows certain user-related behaviours to be modified (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used as a fallback.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

On other sites (4645)

  • How to get frames from HDR video in scRGB color space?

    5 March 2018, by Виталий Синявский

    I want to create a simple video player that will show HDR video on an HDR TV. For example, this "LG Chess HDR" video. It is encoded with HEVC, its bit depth is 10 bit, the pixel format is YUV420P10LE and it has metadata about the BT2020 color space and the PQ transfer function.

    In this NVIDIA article I found the following:

    The display driver takes the scRGB back buffer, and converts it to the
    standard expected by the display presently connected. In general, this
    means converting the color space from sRGB primaries to BT. 2020
    primaries, scaling to an appropriate level, and encoding with a
    mechanism like PQ. Also, possibly performing conversions like RGB to
    YCC if that display connection requires it.

    It means that my player should render pixels in the scRGB color space (linear encoding, sRGB primaries, full range from -0.5 up to just less than +7.5). So I need to get frames from the source video in this color space somehow, preferably in an FP16 pixel format (half float, 16 bits per color channel). I came up with the following simple pipeline for rendering HDR video:

    source HDR video in BT2020 color space with applied PQ -> [some video library] ->
    -> video frames with colors in scRGB color space -> [my program] ->
    -> rendered video on HDR TV with applied conversions by display driver

    I'm trying to use FFmpeg as this library, but I do not understand how to get frames from the source HDR video in the scRGB color space.

    I currently use FFmpeg's sws_scale method to get frames and I know about the filters API. But I have not found any information or help on how to transparently get frames in scRGB using this functionality, without parsing the metadata of every source video and creating custom video filters for them.

    Please tell me what I can do to get frames in the scRGB color space using FFmpeg. Can someone suggest other libraries with which I could do it?
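
    For what it's worth, FFmpeg's zscale filter (available in builds linked against libzimg) can undo the PQ transfer function and convert the primaries, which gets close to the linear-light, 709-primaries representation that scRGB uses. A minimal command-line sketch, assuming a zimg-enabled build; the input file name and the npl (nominal peak luminance) value are placeholders:

    ffmpeg -i lg_chess_hdr.mp4 -vf "zscale=transfer=linear:npl=100,format=gbrpf32le,zscale=primaries=bt709" -f rawvideo linear_rgb_frames.raw

    The output here is planar 32-bit float RGB (gbrpf32le) rather than FP16, so packing to half floats would still happen in the player. The same filter string can be used through the libavfilter API instead of the command line, and zscale picks up the input transfer and primaries from the stream metadata when they are tagged; untagged sources need the corresponding *in options (e.g. transferin, primariesin) set explicitly.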

  • Trouble generating an rtsp stream as output with ffmpeg from static images as input

    10 August 2013, by Ilya Yevlampiev

    I'm trying to start an rtsp stream by feeding ffmpeg with static images and feeding ffserver with the ffmpeg output.

    The first problem comes from the ffserver config:

    Port 12345
    RTSPPort 8544
    BindAddress 0.0.0.0
    MaxHTTPConnections 2000
    MaxClients 1000
    MaxBandwidth 1000
    CustomLog /var/log/ffserver-access.log
     <feed>
    File /tmp/videofeed.ffm
    FileMaxSize 3M
    #Launch ffmpeg -s 640x480 -f video4linux2 -i /dev/video0
    #Launch ffmpeg http://localhost:8090/videofeed.ffm
    Launch ffmpeg -loop 1 -f image2 -r 20 -b 9600 -i Janalif.jpg -t 30 http://127.0.0.1:8090/videofeed.ffm -report
    ACL allow 127.0.0.1
     </feed>
     <stream>
    Format rtsp
    #rtsp://localhost:5454/test1-rtsp.mpg
    Feed videofeed.ffm
    #webcam.ffm
    Format flv
    VideoCodec flv
    VideoFrameRate 30
    VideoBufferSize 80000
    VideoBitRate 200
    VideoQMin 1
    VideoQMax 5
    VideoSize 640x480
    PreRoll 1
    NoAudio
     </stream>
     <stream>
    Format status
     </stream>

    Please ignore the codecs etc. in the stream part. The problem concerns RTSPPort: after starting the server, nmap shows no binding to 8544; only port 12345 is used.

    8090/tcp  open  unknown
    12345/tcp open  netbus

    I can download the mpeg stream over HTTP from http://localhost:12345/test1-rtsp.mpg. How can I get port 8544 working?

    And another question is about the Launch part of the stream. Am I right that ffserver executes the content of the Launch line? If so, how can I configure ffserver to wait for the stream on some particular port, but start streaming at the moment I desire?

    P.S. The solution looks like Säkkijärven polkka; however, the idea behind this construct is to provide a controlled rtsp stream to emulate a camera output. In future I plan to substitute the ffmpeg command line with some Java bindings for it, to produce program-controlled images as the camera input for testing computer vision, which is why I need a way to launch ffmpeg independently of ffserver.
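
    One note on the configuration quoted above: in ffserver.conf the <Feed> and <Stream> section headers normally carry a name, and those names appear to have been stripped when the post was rendered. Purely as a sketch of the usual shape (the names are guesses taken from the URLs in the question, and RTSP streams are typically declared with Format rtp), the skeleton would look roughly like this; it is not a verified fix for the port-binding problem:

    <Feed videofeed.ffm>
    File /tmp/videofeed.ffm
    FileMaxSize 3M
    ACL allow 127.0.0.1
    </Feed>

    <Stream test1-rtsp.mpg>
    Format rtp
    Feed videofeed.ffm
    VideoFrameRate 20
    VideoSize 640x480
    NoAudio
    </Stream>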

  • FFmpeg producing a flickering video from images

    21 June 2018, by jjohnn91

    So I'm trying to make a video of a fractal rotating through some values, much like the one seen here.

    I generate the frames (1000 of them) using a different program written in Java that works just fine, so for the purposes of this scenario assume that all the images are in the target folder, in the numerical order in which they need to appear in the video.

    I found the following code on the web to stitch images into a video, and I haven't the faintest idea how it works. When I run it, all of the images are indeed stitched into a video and placed on the desktop, but the video appears to have one specific frame jump in at random positions. I'm not totally sure which one, but it's one of the earlier frames, somewhere between 1 and 200 of the 1000.

    I've also tested making two half-videos, one using the first 500 frames and the other using the second 500 frames. The first video (1 -> 500) has flickering, while the second video (501 -> 1000) does not appear to flicker as far as I can tell.

    I am seeking help in fixing the flickering behavior, and I will upload the video file to Google Drive later if asked. The images are all 1920x1080 and in proper numerical order.

    Thanks in advance!

    import static org.bytedeco.javacpp.opencv_imgcodecs.*;
    import java.io.File;
    import org.bytedeco.javacpp.avcodec;
    import org.bytedeco.javacv.FFmpegFrameRecorder;
    import org.bytedeco.javacv.OpenCVFrameConverter;
    public class ImageToMovie{
       public static void main(String []args){
           String imgPath="C:\\Users\\John\\Images";
           String vidPath="C:\\Users\\John\\Desktop\\video.mp4";
           String[] links=new String[new File(imgPath).listFiles().length];
           File f=new File(imgPath);
           File[] f2=f.listFiles();
           for(int i=0;i<f2.length;i++){
               links[i]=f2[i].getAbsolutePath();
           }
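
    The rest of the snippet was cut off in the original post. For context, a JavaCV image-to-video loop of this kind usually continues roughly as sketched below, relying on the imports already present above; this is the common pattern, not the poster's exact code, and the codec, format and frame-rate values are illustrative:

           // Continuation sketch (illustrative values, not the poster's exact code)
           OpenCVFrameConverter.ToMat converter = new OpenCVFrameConverter.ToMat();
           FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(vidPath, 1920, 1080);
           try {
               recorder.setVideoCodec(avcodec.AV_CODEC_ID_MPEG4);
               recorder.setFormat("mp4");
               recorder.setFrameRate(30);
               recorder.start();
               // Frames are written in the order of the links array, i.e. the order
               // returned by File.listFiles(), which is not guaranteed to be sorted;
               // an out-of-order early frame is a common cause of this kind of flicker.
               for(int i=0;i<links.length;i++){
                   recorder.record(converter.convert(imread(links[i])));
               }
               recorder.stop();
               recorder.release();
           } catch (Exception e) {
               e.printStackTrace();
           }
       }
    }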