
Other articles (8)

  • Adding notes and captions to images

    7 February 2011, by

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights to create, modify, and delete notes. By default, only site administrators can add notes to images.
    Editing when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Accepted formats

    28 January 2010, by

    The following commands give information about the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    As a first step, we (...)

  • Videos

    21 April 2011, by

    Like documents of type "audio", Mediaspip displays videos wherever possible using the HTML5 video tag.
    One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name one) and that each browser natively handles only certain video formats.
    Its main advantage, on the other hand, is that video playback is supported natively by the browser, which removes the need for Flash and (...)

On other sites (2861)

  • Unity : Converting Texture2D to YUV420P and sending with UDP using FFmpeg

    22 June 2018, by potu1304

    In my Unity game each frame is rendered into a texture and then put together into a video using FFmpeg. My question now is whether I am doing this right, because avcodec_send_frame throws an exception every time.
    I am pretty sure that I am doing something wrong, doing it in the wrong order, or simply missing something.

    Here is the code for capturing the texture:

    void Update()
    {
        //StartCoroutine(CaptureFrame());

        if (rt == null)
        {
            rect = new Rect(0, 0, captureWidth, captureHeight);
            rt = new RenderTexture(captureWidth, captureHeight, 24);
            frame = new Texture2D(captureWidth, captureHeight, TextureFormat.RGB24, false);
        }

        Camera camera = this.GetComponent<Camera>(); // NOTE: added because there was no reference to camera in original script; must add this script to Camera
        camera.targetTexture = rt;
        camera.Render();

        RenderTexture.active = rt;
        frame.ReadPixels(rect, 0, 0);
        frame.Apply();

        camera.targetTexture = null;
        RenderTexture.active = null;

        byte[] fileData = frame.GetRawTextureData();
        encoding(fileData, fileData.Length);
    }
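
    For reference, the commented-out StartCoroutine(CaptureFrame()) line suggests the usual Unity pattern of deferring the read-back until the frame has finished rendering. A minimal sketch of such a coroutine, reusing the fields above (an illustration, not part of the original script):

    // Sketch: wait for the end of the frame before reading pixels back
    // from the active RenderTexture, then hand the bytes to the encoder.
    private IEnumerator CaptureFrame()
    {
        yield return new WaitForEndOfFrame();

        RenderTexture.active = rt;
        frame.ReadPixels(rect, 0, 0);
        frame.Apply();
        RenderTexture.active = null;

        byte[] fileData = frame.GetRawTextureData();
        encoding(fileData, fileData.Length);
    }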

    And here is the code for encoding and sending the byte data:

    private unsafe void encoding(byte[] bytes, int size)
    {
        Debug.Log("Encoding...");
        AVCodec* codec;
        codec = ffmpeg.avcodec_find_encoder(AVCodecID.AV_CODEC_ID_H264);
        int ret, got_output = 0;

        AVCodecContext* codecContext = null;
        codecContext = ffmpeg.avcodec_alloc_context3(codec);
        codecContext->bit_rate = 400000;
        codecContext->width = captureWidth;
        codecContext->height = captureHeight;
        //codecContext->time_base.den = 25;
        //codecContext->time_base.num = 1;

        AVRational timeBase = new AVRational();
        timeBase.num = 1;
        timeBase.den = 25;
        codecContext->time_base = timeBase;
        //AVStream* videoAVStream = null;
        //videoAVStream->time_base = timeBase;

        AVRational frameRate = new AVRational();
        frameRate.num = 25;
        frameRate.den = 1;
        codecContext->framerate = frameRate;

        codecContext->gop_size = 10;
        codecContext->max_b_frames = 1;
        codecContext->pix_fmt = AVPixelFormat.AV_PIX_FMT_YUV420P;

        AVFrame* inputFrame;
        inputFrame = ffmpeg.av_frame_alloc();
        inputFrame->format = (int)codecContext->pix_fmt;
        inputFrame->width = captureWidth;
        inputFrame->height = captureHeight;
        inputFrame->linesize[0] = inputFrame->width;

        AVPixelFormat dst_pix_fmt = AVPixelFormat.AV_PIX_FMT_YUV420P, src_pix_fmt = AVPixelFormat.AV_PIX_FMT_RGBA;
        int src_w = 1920, src_h = 1080, dst_w = 1920, dst_h = 1080;
        SwsContext* sws_ctx;

        GCHandle pinned = GCHandle.Alloc(bytes, GCHandleType.Pinned);
        IntPtr address = pinned.AddrOfPinnedObject();

        sbyte** inputData = (sbyte**)address;
        sws_ctx = ffmpeg.sws_getContext(src_w, src_h, src_pix_fmt,
                             dst_w, dst_h, dst_pix_fmt,
                             0, null, null, null);

        fixed (int* lineSize = new int[1])
        {
            lineSize[0] = 4 * captureHeight;
            // Convert RGBA to YUV420P
            ffmpeg.sws_scale(sws_ctx, inputData, lineSize, 0, codecContext->width, inputFrame->extended_data, inputFrame->linesize);
        }

        inputFrame->pts = counter++;

        if (ffmpeg.avcodec_send_frame(codecContext, inputFrame) < 0)
            throw new ApplicationException("Error sending a frame for encoding!");

        AVPacket pkt;
        pkt = new AVPacket();
        //pkt.data = inData;
        AVPacket* packet = &pkt;
        ffmpeg.av_init_packet(packet);

        Debug.Log("pkt.size " + pkt.size);
        pinned.Free();
        AVDictionary* options = null;
        ffmpeg.av_dict_set(&options, "pkt_size", "1300", 0);
        ffmpeg.av_dict_set(&options, "buffer_size", "65535", 0);
        AVIOContext* server = null;
        ffmpeg.avio_open2(&server, "udp://192.168.0.1:1111", ffmpeg.AVIO_FLAG_WRITE, null, &options);
        Debug.Log("encoded");
        ret = ffmpeg.avcodec_encode_video2(codecContext, &pkt, inputFrame, &got_output);
        ffmpeg.avio_write(server, pkt.data, pkt.size);
        ffmpeg.av_free_packet(&pkt);
        pkt.data = null;
        pkt.size = 0;
    }

    And every time I start the game

     if (ffmpeg.avcodec_send_frame(codecContext, inputFrame) < 0)
         throw new ApplicationException("Error sending a frame for encoding!");

    throws the exception.
    Any help in fixing the issue would be greatly appreciated :)
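
    For reference, one thing stands out when comparing the code above with FFmpeg's own encoding examples: the codec context is configured but never opened with avcodec_open2, and the frame's data planes are never allocated, and avcodec_send_frame fails on a context that has not been opened. A hedged sketch of the missing setup, in the same FFmpeg.AutoGen style and reusing the names from the question (an illustration, not a verified fix):

    // Sketch: open the codec once the codecContext fields are assigned.
    if (ffmpeg.avcodec_open2(codecContext, codec, null) < 0)
        throw new ApplicationException("Could not open codec!");

    // Sketch: allocate the YUV420P data planes once format/width/height are
    // set on the frame; this also fills inputFrame->linesize correctly,
    // making the manual linesize[0] assignment above unnecessary.
    if (ffmpeg.av_frame_get_buffer(inputFrame, 32) < 0)
        throw new ApplicationException("Could not allocate frame buffers!");

    Two further mismatches are visible in the code above: the texture is created as TextureFormat.RGB24 while the scaler source format is AV_PIX_FMT_RGBA, and the codec context and SwsContext are reallocated on every call to encoding(), so the encoder is restarted for each frame.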

  • Java/OpenCV - How to do lossless h264 video writing in openCV?

    15 August 2018, by JohnDoeAnon

    Lately I have had some struggle with the VideoWriter in openCV under Java. I want to write a video file into a *.mp4 container with the h.264 codec, but I see no option to toggle bitrate or quality in the openCV VideoWriter. I did build openCV with ffmpeg as backend. I just want to write the video file with exactly the quality values of the original input video.
    I also have some code to do the job:

    import org.opencv.core.Mat;
    import org.opencv.core.Size;
    import org.opencv.videoio.VideoWriter;
    import org.opencv.videoio.Videoio;

    public class VideoOutput
    {
        private final int H264_CODEC = 33;

        private VideoWriter writer;

        private String filename;

        public VideoOutput(String filename)
        {
            writer = null;

            this.filename = filename;
        }

        public void initialize(double framesPerSecond, int height, int width) throws Exception
        {
            this.writer = new VideoWriter();

            this.writer.open(filename, H264_CODEC, framesPerSecond, new Size(width, height));

            if (!writer.isOpened())
            {
                Logging.LOGGER.severe("Could not create video output file " + filename + "\n");

                throw new Exception("Could not create video output file " + filename + "\n");
            }
        }

        public void setFrame(VideoFrame videoFrame) throws Exception
        {
            if (writer.isOpened())
            {
                Mat frame = ImageUtil.imageToMat(videoFrame.getFrame());

                writer.write(frame);

                frame.release();
            }
        }
    }

    I hoped the VideoWriter would offer some options to do the job, but it seems that is not the case.

    So is there an option or flag that I am missing for lossless h264 video writing under openCV and Java, or maybe there is another way to do this?
    Please help me; if you have done this already, I would really appreciate some example code to get things done.

    UPDATE

    I now have a solution that fits my application, so here it is:

    String fps = Double.toString(this.config.getInputConfig().getFramesPerSecond());

    Runtime.getRuntime().exec(
        new String[] {
            "C:\\ffmpeg-3.4.2-win64-static\\bin\\ffmpeg.exe",
            "-framerate",
            fps,
            "-i",
            imageOutputPath + File.separator + "%01d.jpg",
            "-c:v",
            "libx265",
            "-crf",
            "1",
            imageOutputPath + File.separator + "ffmpeg.mp4"
        }
    );

    Credits to @Gyan who gave me the correct ffmpeg call in this post:

    Win/ffmpeg - How to generate a video from images under ffmpeg?

    Greets

  • Unity : Converting Texture2D to YUV420P using FFmpeg

    23 July 2021, by strong_kobayashi

    I'm trying to create a game in Unity where each frame is rendered into a texture and then put together into a video using FFmpeg. The output created by FFmpeg should eventually be sent over the network to a client UI. However, I'm struggling mainly with the part where a frame is captured and passed as a byte array to an unsafe method, where it should be processed further by FFmpeg. The wrapper I'm using is FFmpeg.AutoGen.


    The render-to-texture method:


    private IEnumerator CaptureFrame()
    {
        yield return new WaitForEndOfFrame();

        RenderTexture.active = rt;
        frame.ReadPixels(rect, 0, 0);
        frame.Apply();

        bytes = frame.GetRawTextureData();

        EncodeAndWrite(bytes, bytes.Length);
    }


    The unsafe encoding method so far:


    private unsafe void EncodeAndWrite(byte[] bytes, int size)
    {
        GCHandle pinned = GCHandle.Alloc(bytes, GCHandleType.Pinned);
        IntPtr address = pinned.AddrOfPinnedObject();

        sbyte** inData = (sbyte**)address;
        fixed (int* lineSize = new int[1])
        {
            lineSize[0] = 4 * textureWidth;
            // Convert RGBA to YUV420P
            ffmpeg.sws_scale(sws, inData, lineSize, 0, codecContext->width, inputFrame->extended_data, inputFrame->linesize);
        }

        inputFrame->pts = frameCounter++;

        if (ffmpeg.avcodec_send_frame(codecContext, inputFrame) < 0)
            throw new ApplicationException("Error sending a frame for encoding!");

        pkt = new AVPacket();
        fixed (AVPacket* packet = &pkt)
            ffmpeg.av_init_packet(packet);
        pkt.data = null;
        pkt.size = 0;

        pinned.Free();
        ...
    }


    sws_scale takes a sbyte** as its second parameter, so I'm trying to convert the input byte array to sbyte** by first pinning it with GCHandle and doing an explicit type conversion afterwards. I don't know if that's the correct way, though (see the sketch after this question).


    Moreover, the condition if (ffmpeg.avcodec_send_frame(codecContext, inputFrame) < 0) always throws an ApplicationException, and I really don't know why this happens. codecContext and inputFrame are my AVCodecContext and AVFrame objects, respectively, and the fields are defined as follows:


    codecContext


    codecContext = ffmpeg.avcodec_alloc_context3(codec);
    codecContext->bit_rate = 400000;
    codecContext->width = textureWidth;
    codecContext->height = textureHeight;

    AVRational timeBase = new AVRational();
    timeBase.num = 1;
    timeBase.den = (int)fps;
    codecContext->time_base = timeBase;
    videoAVStream->time_base = timeBase;

    AVRational frameRate = new AVRational();
    frameRate.num = (int)fps;
    frameRate.den = 1;
    codecContext->framerate = frameRate;

    codecContext->gop_size = 10;
    codecContext->max_b_frames = 1;
    codecContext->pix_fmt = AVPixelFormat.AV_PIX_FMT_YUV420P;


    inputFrame


    inputFrame = ffmpeg.av_frame_alloc();
    inputFrame->format = (int)codecContext->pix_fmt;
    inputFrame->width = textureWidth;
    inputFrame->height = textureHeight;
    inputFrame->linesize[0] = inputFrame->width;


    Any help in fixing the issue would be greatly appreciated :)
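
    For reference, regarding the sbyte** conversion asked about above: casting the pinned buffer address directly to sbyte** makes sws_scale dereference the first pixel bytes as if they were a plane pointer. The usual pattern is to store the pinned address in an array of per-plane pointers. A hedged sketch in the same FFmpeg.AutoGen binding style as the code above, assuming the sbyte**/int* overload shown there and that codecContext has been opened and inputFrame's buffers allocated (see the sketch after the first question):

    // Sketch: wrap the single packed RGBA plane in per-plane arrays, since
    // sws_scale expects an array of plane pointers, not the raw data pointer.
    GCHandle pinned = GCHandle.Alloc(bytes, GCHandleType.Pinned);
    sbyte* plane = (sbyte*)pinned.AddrOfPinnedObject();

    sbyte** srcData = stackalloc sbyte*[4];
    int* srcStride = stackalloc int[4];
    srcData[0] = plane;              // RGBA is one packed plane
    srcStride[0] = 4 * textureWidth; // 4 bytes per pixel per row
    for (int i = 1; i < 4; i++) { srcData[i] = null; srcStride[i] = 0; }

    // The slice height passed to sws_scale is the source height,
    // not codecContext->width as in the code above.
    ffmpeg.sws_scale(sws, srcData, srcStride, 0, textureHeight,
                     inputFrame->extended_data, inputFrame->linesize);

    pinned.Free();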
