Other articles (48)

  • Contribute to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
    At the moment MediaSPIP is only available in French and (...)

  • The SPIPmotion processing queue

    28 November 2010, by

    A queue stored in the database
    During installation, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document should be attached automatically; objet, the type of object to which (...)

  • No talk of markets, clouds, etc.

    10 April 2011

    The vocabulary used on this site tries to avoid any reference to the buzzwords that flourish so freely
    on the web 2.0 and in the companies that live off it.
    You are therefore invited to avoid using terms such as "Brand", "Cloud", "Market", etc.
    Our motivation is above all to create a simple tool, accessible to everyone, that encourages
    the sharing of creative work on the Internet and lets authors keep as much autonomy as possible.
    No "Gold or Premium contract" is therefore planned, no (...)

On other sites (5898)

  • How to convert ffmpeg video frame to YUV444?

    21 October 2019, by Edward Severinsen

    I have been following a tutorial on how to use ffmpeg and SDL to make a simple video player with no audio (yet). While working through the tutorial I realized it was out of date and many of the functions it used, for both ffmpeg and SDL, were deprecated. So I searched for an up-to-date solution and found a Stack Overflow answer that filled in what the tutorial was missing.

    However, it uses YUV420, which is lower quality. I want to implement YUV444, and after studying chroma subsampling for a bit and looking at the different YUV formats, I am confused as to how to implement it. From what I understand, YUV420 has a quarter of the chroma resolution of YUV444: YUV444 means every pixel has its own chroma sample and is therefore more detailed, while YUV420 means pixels are grouped together and share the same chroma sample, and is therefore less detailed.

    And from what I understand, the different YUV formats (420, 422, 444) also differ in the way they order Y, U, and V. All of this is a bit overwhelming because I haven't done much with codecs, conversions, etc. Any help would be much appreciated, and if additional info is needed please let me know before downvoting.
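
    To make that subsampling arithmetic concrete, here is a minimal sketch (illustrative only, not from the tutorial or the answer; the resolution is just an example) of the plane sizes for 8-bit planar 4:2:0 versus 4:4:4:

    #include <stdio.h>
    #include <stddef.h>

    /* Illustrative only: byte counts for 8-bit planar YUV frames.
     * 4:2:0 subsamples chroma by 2 both horizontally and vertically,
     * so each chroma plane is width*height/4 bytes;
     * 4:4:4 keeps full-size chroma planes of width*height bytes. */
    int main(void) {
        const int w = 1920, h = 1080;              /* example resolution */

        const size_t y_sz     = (size_t)w * h;     /* luma plane, same for both */
        const size_t uv420_sz = (size_t)w * h / 4; /* per chroma plane, 4:2:0 */
        const size_t uv444_sz = (size_t)w * h;     /* per chroma plane, 4:4:4 */

        printf("YUV420P frame: %zu bytes (1.5 bytes/pixel)\n", y_sz + 2 * uv420_sz);
        printf("YUV444P frame: %zu bytes (3 bytes/pixel)\n",   y_sz + 2 * uv444_sz);
        return 0;
    }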

    Here is the code from the answer I mentioned concerning the conversion to YUV420:

    texture = SDL_CreateTexture(
           renderer,
           SDL_PIXELFORMAT_YV12,
           SDL_TEXTUREACCESS_STREAMING,
           pCodecCtx->width,
           pCodecCtx->height
           );
       if (!texture) {
           fprintf(stderr, "SDL: could not create texture - exiting\n");
           exit(1);
       }

       // initialize SWS context for software scaling
       sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
           pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
           AV_PIX_FMT_YUV420P,
           SWS_BILINEAR,
           NULL,
           NULL,
           NULL);

       // set up YV12 pixel array (12 bits per pixel)
       yPlaneSz = pCodecCtx->width * pCodecCtx->height;
       uvPlaneSz = pCodecCtx->width * pCodecCtx->height / 4;
       yPlane = (Uint8*)malloc(yPlaneSz);
       uPlane = (Uint8*)malloc(uvPlaneSz);
       vPlane = (Uint8*)malloc(uvPlaneSz);
       if (!yPlane || !uPlane || !vPlane) {
           fprintf(stderr, "Could not allocate pixel buffers - exiting\n");
           exit(1);
       }

       uvPitch = pCodecCtx->width / 2;
       while (av_read_frame(pFormatCtx, &packet) >= 0) {
           // Is this a packet from the video stream?
           if (packet.stream_index == videoStream) {
               // Decode video frame
               avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

               // Did we get a video frame?
               if (frameFinished) {
                   AVPicture pict;
                   pict.data[0] = yPlane;
                   pict.data[1] = uPlane;
                   pict.data[2] = vPlane;
                   pict.linesize[0] = pCodecCtx->width;
                   pict.linesize[1] = uvPitch;
                   pict.linesize[2] = uvPitch;

                   // Convert the image into YUV format that SDL uses
                   sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                       pFrame->linesize, 0, pCodecCtx->height, pict.data,
                       pict.linesize);

                   SDL_UpdateYUVTexture(
                       texture,
                       NULL,
                       yPlane,
                       pCodecCtx->width,
                       uPlane,
                       uvPitch,
                       vPlane,
                       uvPitch
                       );

                   SDL_RenderClear(renderer);
                   SDL_RenderCopy(renderer, texture, NULL, NULL);
                   SDL_RenderPresent(renderer);

               }
           }

           // Free the packet that was allocated by av_read_frame
           av_free_packet(&packet);
           SDL_PollEvent(&event);
           switch (event.type) {
               case SDL_QUIT:
                   SDL_DestroyTexture(texture);
                   SDL_DestroyRenderer(renderer);
                   SDL_DestroyWindow(screen);
                   SDL_Quit();
                   exit(0);
                   break;
               default:
                   break;
           }

       }

       // Free the YUV frame
       av_frame_free(&pFrame);
       free(yPlane);
       free(uPlane);
       free(vPlane);

       // Close the codec
       avcodec_close(pCodecCtx);
       avcodec_close(pCodecCtxOrig);

       // Close the video file
       avformat_close_input(&pFormatCtx);

    EDIT:

    After more research I learned that YUV420 is stored with all the Y's first and then a combination of U and V bytes one after another, as illustrated by this image:

    [image: YUV420 byte layout (source: wikimedia.org)]

    However, I also learned that YUV444 is stored in the repeating order U, Y, V, as another picture showed.
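
    To keep the planar/packed distinction straight, here is a small illustrative sketch (the type names are hypothetical, not from any API): planar 4:4:4 keeps three separate full-resolution planes, while a packed layout such as the U, Y, V ordering described above interleaves the components pixel by pixel in a single buffer.

    #include <stdint.h>

    /* Planar 4:4:4 (e.g. AV_PIX_FMT_YUV444P): three separate full-size planes. */
    typedef struct {
        uint8_t *y;   /* width * height bytes */
        uint8_t *u;   /* width * height bytes */
        uint8_t *v;   /* width * height bytes */
    } Planar444;      /* hypothetical name, for illustration only */

    /* Packed 4:4:4: one buffer where each pixel's components sit together,
     * e.g. U0 Y0 V0, U1 Y1 V1, ... */
    typedef struct {
        uint8_t u, y, v;
    } PackedUYV;      /* hypothetical name; one element per pixel */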

    I tried changing some things around in the code:

       // I changed SDL_PIXELFORMAT_YV12 to SDL_PIXELFORMAT_UYVY
       // as to reflect the order of YUV444
       texture = SDL_CreateTexture(
           renderer,
           SDL_PIXELFORMAT_UYVY,
           SDL_TEXTUREACCESS_STREAMING,
           pCodecCtx->width,
           pCodecCtx->height
           );
       if (!texture) {
           fprintf(stderr, "SDL: could not create texture - exiting\n");
           exit(1);
       }

       // Changed AV_PIX_FMT_YUV420P to AV_PIX_FMT_YUV444P
       // for rather obvious reasons
       sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
           pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
           AV_PIX_FMT_YUV444P,
           SWS_BILINEAR,
           NULL,
           NULL,
           NULL);

       // There are as many Y, U and V bytes as pixels I just
       // made yPlaneSz and uvPlaneSz equal to the number of pixels
       yPlaneSz = pCodecCtx->width * pCodecCtx->height;
       uvPlaneSz = pCodecCtx->width * pCodecCtx->height;
       yPlane = (Uint8*)malloc(yPlaneSz);
       uPlane = (Uint8*)malloc(uvPlaneSz);
       vPlane = (Uint8*)malloc(uvPlaneSz);
       if (!yPlane || !uPlane || !vPlane) {
           fprintf(stderr, "Could not allocate pixel buffers - exiting\n");
           exit(1);
       }

       uvPitch = pCodecCtx->width * 2;
       while (av_read_frame(pFormatCtx, &packet) >= 0) {
           // Is this a packet from the video stream?
           if (packet.stream_index == videoStream) {
               // Decode video frame
               avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

               // Rearranged the order of the planes to reflect UYV order
               // then set linesize to the number of Y, U and V bytes
               // per row
               if (frameFinished) {
                   AVPicture pict;
                   pict.data[0] = uPlane;
                   pict.data[1] = yPlane;
                   pict.data[2] = vPlane;
                   pict.linesize[0] = pCodecCtx->width;
                   pict.linesize[1] = pCodecCtx->width;
                   pict.linesize[2] = pCodecCtx->width;

                   // Convert the image into YUV format that SDL uses
                   sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                       pFrame->linesize, 0, pCodecCtx->height, pict.data,
                       pict.linesize);

                   SDL_UpdateYUVTexture(
                       texture,
                       NULL,
                       yPlane,
                       1,
                       uPlane,
                       uvPitch,
                       vPlane,
                       uvPitch
                       );
    //.................................................

    But now I get an access violation at the call to SDL_UpdateYUVTexture... I'm honestly not sure what's wrong. I think it may have to do with setting AVPicture pict's data and linesize members improperly, but I'm not positive.
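
    For reference, a minimal sketch of one possible workaround, assuming SDL2's YUV texture formats (YV12, IYUV, YUY2, UYVY, NV12, NV21) are all subsampled and none is 4:4:4: let libswscale convert the decoded frame to packed RGB instead and upload it with SDL_UpdateTexture(). Variable names follow the question's code; the separate yPlane/uPlane/vPlane buffers would no longer be needed. This is a sketch under those assumptions, not a definitive fix.

    // Sketch only: keep full-resolution colour by converting to packed RGB24
    // rather than uploading planar 4:4:4 data as a YUV texture.
    texture = SDL_CreateTexture(renderer,
        SDL_PIXELFORMAT_RGB24,
        SDL_TEXTUREACCESS_STREAMING,
        pCodecCtx->width,
        pCodecCtx->height);

    sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
        pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
        AV_PIX_FMT_RGB24,
        SWS_BILINEAR,
        NULL, NULL, NULL);

    // One packed buffer, 3 bytes per pixel, replaces yPlane/uPlane/vPlane.
    uint8_t *rgb = malloc((size_t)pCodecCtx->width * pCodecCtx->height * 3);
    uint8_t *dst_data[4]     = { rgb, NULL, NULL, NULL };
    int      dst_linesize[4] = { pCodecCtx->width * 3, 0, 0, 0 };

    // Inside the decode loop, once frameFinished is set:
    sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data, pFrame->linesize,
        0, pCodecCtx->height, dst_data, dst_linesize);
    SDL_UpdateTexture(texture, NULL, rgb, dst_linesize[0]);
    SDL_RenderClear(renderer);
    SDL_RenderCopy(renderer, texture, NULL, NULL);
    SDL_RenderPresent(renderer);

    // ... and free(rgb) during cleanup.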

  • Using Accord.Video.FFMPEG, I get a "parameter is not valid" exception. How can I solve it?

    31 May 2023, by Sheron Blumental

    I want to extract all the frames from an MP4 video file and display them on a PictureBox.

    


    The original code comes from this Q&A: How can I time the presentation and extraction of frames from a video file?

    


    The exception happens after clicking the start button, on the line:

    


    var frame = videoReader.ReadVideoFrame();


    


    The message

    


    System.ArgumentException
      HResult=0x80070057
      Message=Parameter is not valid.
      Source=System.Drawing
      StackTrace:
       at System.Drawing.Bitmap..ctor(Int32 width, Int32 height, PixelFormat format)
       at Accord.Video.FFMPEG.VideoFileReader.DecodeVideoFrame(BitmapData bitmapData)
       at Accord.Video.FFMPEG.VideoFileReader.readVideoFrame(Int32 frameIndex, BitmapData output)
       at Accord.Video.FFMPEG.VideoFileReader.ReadVideoFrame()
       at Extract_Frames.Form1.<GetVideoFramesAsync>d__15.MoveNext() in D:\Csharp Projects\Extract Frames\Form1.cs:line 114
       at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
       at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
       at System.Runtime.CompilerServices.TaskAwaiter.GetResult()
       at Extract_Frames.Form1.d__17.MoveNext() in D:\Csharp Projects\Extract Frames\Form1.cs:line 151


    The full code


    using Accord.IO;
    using Accord.Video;
    using Accord.Video.FFMPEG;
    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Data;
    using System.Drawing;
    using System.IO;
    using System.Linq;
    using System.Reflection;
    using System.Reflection.Emit;
    using System.Text;
    using System.Threading;
    using System.Threading.Tasks;
    using System.Windows.Forms;

    namespace Extract_Frames
    {
        public partial class Form1 : Form
        {
            Bitmap frame = null;
            Graphics frameGraphics = null;
            bool isVideoRunning = false;
            IProgress<Bitmap> videoProgress = null;
            private CancellationTokenSource cts = null;
            private readonly object syncRoot = new object();
            private static long pause = 0;
            private int frameRate = 0;
            private List<Bitmap> frames = new List<Bitmap>();
            string fileName;

            public Form1()
            {
                InitializeComponent();
            }

            private void Form1_Load(object sender, EventArgs e)
            {
            }

            private void StopPlayback(bool cancel)
            {
                lock (syncRoot)
                {
                    if (cancel) cts?.Cancel();
                    cts?.Dispose();
                    cts = null;
                }
            }

            int counter = 1;

            private void Updater(Bitmap videoFrame)
            {
                frames.Add(videoFrame);

                label1.Text = "Current Frame Number : " + counter;
                trackBar1.Value = counter;
                counter++;

                //Size size = new Size(videoFrame.Width, videoFrame.Height);
                //pictureBox1.ClientSize = size;
                using (videoFrame) frameGraphics.DrawImage(videoFrame, Point.Empty);

                pictureBox1.Invalidate();
            }

            private async Task GetVideoFramesAsync(IProgress<Bitmap> updater, string fileName, int intervalMs, CancellationToken token = default)
            {
                using (var videoReader = new VideoFileReader())
                {
                    if (token.IsCancellationRequested) return;
                    videoReader.Open(fileName);

                    videoReader.ReadVideoFrame(1);
                    trackBar1.Value = 1;

                    label1.Text = "Current Frame Number : " + counter.ToString();

                    while (true)
                    {
                        if (Interlocked.Read(ref pause) == 0)
                        {
                            var frame = videoReader.ReadVideoFrame();

                            if (token.IsCancellationRequested || frame is null) break;
                            updater.Report(frame);
                        }
                        await Task.Delay(frameRate).ConfigureAwait(false);
                    }
                }
            }

            private void trackBar2_Scroll(object sender, EventArgs e)
            {
                frameRate = trackBar2.Value / 25;
            }

            private async void buttonStart_Click(object sender, EventArgs e)
            {
                string fileName = textBox1.Text;

                if (isVideoRunning) return;
                isVideoRunning = true;

                using (var videoReader = new VideoFileReader())
                {
                    videoReader.Open(fileName);
                    frame = new Bitmap(videoReader.Width + 2, videoReader.Height + 2);
                    trackBar1.Maximum = (int)videoReader.FrameCount;
                }

                videoProgress = new Progress<Bitmap>((bitmap) => Updater(bitmap));
                cts = new CancellationTokenSource();
                pictureBox1.Image = frame;
                try
                {
                    frameGraphics = Graphics.FromImage(frame);
                    // Set the fame rate to 25 frames per second
                    //int frameRate = 1000 / 25;
                    await GetVideoFramesAsync(videoProgress, fileName, frameRate, cts.Token);
                }
                finally
                {
                    frameGraphics?.Dispose();
                    StopPlayback(false);
                    isVideoRunning = false;
                }
            }

            private void buttonPause_Click(object sender, EventArgs e)
            {
                if (pause == 0)
                {
                    buttonPause.Text = "Resume";
                    Interlocked.Increment(ref pause);
                }
                else
                {
                    Interlocked.Decrement(ref pause);
                    buttonPause.Text = "Pause";
                }
            }

            private void buttonStop_Click(object sender, EventArgs e)
            {
                StopPlayback(true);
            }

            protected override void OnFormClosing(FormClosingEventArgs e)
            {
                if (isVideoRunning) StopPlayback(true);
                pictureBox1.Image?.Dispose();
                base.OnFormClosing(e);
            }

            private void pictureBox1_Paint(object sender, PaintEventArgs e)
            {
                ControlPaint.DrawBorder(e.Graphics, pictureBox1.ClientRectangle, Color.Red, ButtonBorderStyle.Solid);
            }

            private void trackBar1_Scroll(object sender, EventArgs e)
            {
                pictureBox1.Image = frames[trackBar1.Value];
            }

            private void button1_Click(object sender, EventArgs e)
            {
                using (OpenFileDialog openFileDialog = new OpenFileDialog())
                {
                    openFileDialog.InitialDirectory = "c:\\";
                    openFileDialog.Filter = "video files (*.mp4)|*.mp4|All files (*.*)|*.*";
                    openFileDialog.FilterIndex = 2;
                    openFileDialog.RestoreDirectory = true;

                    if (openFileDialog.ShowDialog() == DialogResult.OK)
                    {
                        // Get the path of specified file
                        textBox1.Text = openFileDialog.FileName;
                    }
                }
            }
        }
    }


  • Today we celebrate Data Privacy Day 2019

    28 January 2019, by Jake Thornton — Privacy

    Today we celebrate Data Privacy Day 2019!!!

    What is Data Privacy Day?

    Wikipedia tells us that "the purpose of Data Privacy Day is to raise awareness and promote privacy and data protection best practices."

    Our personal data is our online identity. Think about what personal data means – our phone records, credit card transactions, GPS positions, IP addresses, browsing history and so much more – all of it valuable and personal to us as human beings.

    That’s why we cannot take our personal data online for granted. We have a right to know which websites collect our data and how it’s then used, something that’s often not visible or easily recognisable when browsing.

    What Data Privacy Day means to Matomo

    Every year the team at Matomo uses this day as a chance to reflect on how far the Matomo (formerly Piwik) project has come, but also on how far we still have to go in spreading the message that our data and personal information online matter.

    2018 saw the introduction of the EU General Data Protection Regulation (GDPR) to protect people's data online. Matomo was at the forefront of this development in the analytics space and has since built a GDPR Manager to ensure our users can be fully compliant with the GDPR.

    With every new release of Matomo, we ensure that security remains at the highest standard, and we remain committed to our bug bounty program. Our most recent release, Matomo 3.8.0, alone added a Two-Factor Authentication (2FA) feature and password brute-force prevention.

    What's next for Matomo and data privacy?

    As always, security is a top priority for every new release of Matomo and continues to only get better and better. We have a duty to spread our message further that the protection of personal data matters and today is a vital reminder of that. We are, and forever will be, the #1 open-source (and free to use) web analytics platform in the world that fully respects user privacy and gives our users 100% data ownership.

    In 2018 we changed our name, we updated our logo and website, and advanced our platform to compete with the most powerful web analytics tools in the world, all so we can spread our message further and continue our mission.

    Come with us on this exciting journey. Now is the time to take back control of your data and let’s continue creating a safer web for everyone.

    Please help us spread this message.