Advanced search

Media (0)

Keyword: - Tags -/optimisation

No media matching your criteria is available on this site.

Other articles (20)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • MediaSPIP Player: potential problems

    22 February 2011, by

    The player does not work on Internet Explorer
    On Internet Explorer (versions 7 and 8 at least), the plugin uses the Flowplayer Flash player to play video and audio. If the player does not seem to work, the cause may lie in the configuration of Apache's mod_deflate.
    If the configuration of that Apache module contains a line resembling the following, try removing it or commenting it out to see whether the player then works correctly: (...)

  • MediaSPIP initialisation (preconfiguration)

    20 February 2010, by

    When MediaSPIP is installed, it comes preconfigured for the most common uses.
    This preconfiguration is performed by a plugin called MediaSPIP Init, which is enabled by default and cannot be disabled.
    This plugin properly preconfigures each MediaSPIP instance. It must therefore be placed in the plugins-dist/ directory of the site or farm, so that it is installed by default, before the site can be used.
    First of all, it enables or disables SPIP options that do not (...)

On other sites (6563)

  • fate/iamf: add a demux test

    24 April 2024, by James Almer
    fate/iamf: add a demux test
    

    Using the same input sample as iamf-5_1-copy, in order to compare both tests' output

    Signed-off-by: James Almer <jamrial@gmail.com>

    • [DH] tests/fate/iamf.mak
    • [DH] tests/ref/fate/iamf-5_1-demux
  • How to get video pixel location from screen pixel location?

    22 February 2024, by AmLearning

    This is a wall of text, so I have tried to break it into sections to make it easier to read; sorry in advance.

    The problem


    I have some video files that I am reading with ffmpeg to get the colors at specific pixels, and all seems well, but I just ran into a problem with finding the right pixel to input. I realized (or mistakenly believe) that the pixel location (x,y) on the screen will be different from the local pixel location, so to speak, of the video (i.e. if I want to get pixel (50,0) of the video, that will be different from my screen's pixel (50,0) because the resolutions don't match). I was trying to think of a way to convert my screen's pixel location into the "local" pixel location, and I have two ideas, but I am not sure if either of them is any good. Note I am currently using cmd+shift+4 on macOS to get the screen coordinates, and the video is playing fullscreen.

    Ideas

      1. If I manually measure and account for the vertical offset (the black bar above the video), would it effectively convert the screen coordinate into the "local" one? (A sketch of this mapping follows the list.)

      2. If I instead adjust my SwsContext to set the destination height and width to those of my screen, will it effectively replace the need to convert screen coordinates to video coordinates?
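
    To make idea 1 concrete, here is a minimal sketch of the mapping, assuming the player scales the video uniformly to fit the screen and centers it between black bars (the function name and the example numbers are hypothetical):

        def screen_to_video(sx, sy, screen_w, screen_h, video_w, video_h):
            # Aspect-fit: one uniform scale factor, black bars on the leftover axis.
            scale = min(screen_w / video_w, screen_h / video_h)
            off_x = (screen_w - video_w * scale) / 2  # width of a side bar, if any
            off_y = (screen_h - video_h * scale) / 2  # height of a top/bottom bar, if any
            vx = (sx - off_x) / scale
            vy = (sy - off_y) / scale
            if not (0 <= vx < video_w and 0 <= vy < video_h):
                return None  # the screen point lands on a black bar
            return int(vx), int(vy)

        # Hypothetical example: a 1280x720 video fullscreen on a 1440x900 display.
        print(screen_to_video(700, 450, 1440, 900, 1280, 720))  # -> (622, 360)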

    Problems with the Ideas


    The problems I see with the first solution are that I am assuming there is no hidden horizontal offset (or, conversely, that all of the width of the video is actually renderable on the screen). Additionally, this solution would only give an approximate result, as I would need to manually measure the offsets, screen width, and screen height using the method I am currently using to get the screen coordinates.

    With the second solution, aside from the question of whether it will even work, the problem becomes that I can no longer measure which screen coordinates I want, because I can't seem to get rid of those black bars in VLC.

    Some Testing I did


    Given that my entire problem would (maybe?) be fixed if the black bars were part of the video itself, I tried checking whether they were, and when I looked at the frame data's first pixel, it was black. The problem then is that if the black bars are entirely part of the video, why are the colors I get for some pixels slightly off (I am checking with ColorSync Utility)? These colors aren't just slightly wrong; it seems more that they belong to a slightly offset region of the video.

    However, this might be partly explained if ffmpeg reads right to left. When I put the top left corner of the video into the program and looked again at the pixel data in the frame for that location (again calculated by assuming the video location would be the same as the screen location), instead of getting white I got a bluish color, much like the glove in the top right corner.

    The Watered Down Code


        struct SwsContext *rescaler = NULL;
        rescaler = sws_getContext(codec_context->width, codec_context->height, codec_context->pix_fmt,
                                  codec_context->width, codec_context->height, AV_PIX_FMT_RGB0,
                                  SWS_FAST_BILINEAR, NULL, NULL, 0);

        // Get packets (containers for frames, but not guaranteed to hold a full frame) and frames
        while (av_read_frame(avformatcontext, packet) >= 0)
        {
            // determine if packet is a video packet
            if (packet->stream_index != video_index)
            {
                continue;
            }

            // send packet to the decoder
            if (avcodec_send_packet(codec_context, packet) < 0)
            {
                perror("Failed to decode packet");
            }

            // get frame from the decoder
            int response = avcodec_receive_frame(codec_context, frame);
            if (response == AVERROR(EAGAIN))
            {
                continue;
            }
            else if (response < 0)
            {
                perror("Failed to get frame");
            }

            // convert frame to RGB0 colorspace: 4 bytes per pixel, 1 per channel
            response = sws_scale_frame(rescaler, scaled_frame, frame);
            if (response < 0)
            {
                perror("Failed to change colorspace");
            }

            // get data and write it
            int pixel_number = y * (scaled_frame->linesize[0] / 4) + x; // dividing by four converts the byte linesize to pixels (4 bytes per pixel)
            int byte_number = 4 * (pixel_number - 1); // position of the pixel in the array
            // start of debugging things
            int temp = scaled_frame->data[0][byte_number];          // R
            int one_after = scaled_frame->data[0][byte_number + 1]; // G
            int two_after = scaled_frame->data[0][byte_number + 2]; // B
            int als; // where I put the breakpoint
            // end of debugging things
        }
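
    For reference on the indexing above: packed RGB0 stores 4 bytes per pixel, and each row starts linesize bytes after the previous one (linesize can exceed 4 * width because of alignment padding), so one common way to address pixel (x, y) is y * linesize + 4 * x, counting from byte 0. A minimal sketch (Python just to keep the arithmetic short; the buffer values are hypothetical):

        def rgb_at(data, linesize, x, y):
            # RGB0: 4 bytes per pixel (R, G, B, padding); rows are linesize bytes apart.
            base = y * linesize + 4 * x
            return data[base], data[base + 1], data[base + 2]

        # Hypothetical 2x2 RGB0 buffer with linesize 8 (no extra row padding):
        # row 0 holds red then green, row 1 holds blue then white.
        buf = bytes([255, 0, 0, 0,   0, 255, 0, 0,
                     0, 0, 255, 0,   255, 255, 255, 0])
        print(rgb_at(buf, 8, 1, 1))  # -> (255, 255, 255)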


    In Summary


    I have no idea what is happening.


    I take the data for a pixel and compare it to what ColorSync Utility says should be there, but it is always slightly off, as though the pixel I was actually reading were offset from the one I thought I was reading. Therefore, I want to find a way to get the pixel location in a video given a screen coordinate while the video is in fullscreen, but I have no idea how (aside from a few ideas that are probably bad at best).

    Also, does FFmpeg store the frame data right to left?

    A Video Better Showing My Problem


    https://www.youtube.com/watch?v=NSEErs2lC3A


  • Python OpenCV VideoCapture Color Differs from ffmpeg and Other Media Players

    17 April 2024, by cliffsu

    I’m working on video processing in Python and have noticed a slight color difference when reading videos with cv2.VideoCapture, compared to other media players.


    I then attempted to read the video frames directly with ffmpeg, and even though OpenCV uses the ffmpeg backend, there are still differences between OpenCV’s and ffmpeg’s output. The frames read by ffmpeg match those from other media players.


    Below are the videos I’m using for testing:

    test3.webm


    test.avi


    Here is my code:

        import cv2
        import numpy as np
        import subprocess

        def read_frames(path, res):
            """Read numpy arrays of video frames. Path is the file path
               and res is the resolution as a tuple."""
            args = [
                "ffmpeg",
                "-i",
                path,
                "-f",
                "image2pipe",
                "-pix_fmt",
                "rgb24",
                "-vcodec",
                "rawvideo",
                "-",
            ]

            pipe = subprocess.Popen(
                args,
                stdout=subprocess.PIPE,
                stderr=subprocess.DEVNULL,
                bufsize=res[0] * res[1] * 3,
            )

            while pipe.poll() is None:
                frame = pipe.stdout.read(res[0] * res[1] * 3)
                if len(frame) > 0:
                    array = np.frombuffer(frame, dtype="uint8")
                    break

            pipe.stdout.close()
            pipe.wait()
            array = array.reshape((res[1], res[0], 3))
            array = cv2.cvtColor(array, cv2.COLOR_RGB2BGR)
            return array

        ORIGINAL_VIDEO = 'test3.webm'

        array = read_frames(ORIGINAL_VIDEO, (1280, 720))

        cap = cv2.VideoCapture(ORIGINAL_VIDEO, cv2.CAP_FFMPEG)
        while cap.isOpened():
            ret, frame = cap.read()
            if not ret:
                break
            print(frame.shape)
            cv2.imshow("Opencv Read", frame)
            cv2.imshow("FFmpeg Direct Read", array)
            cv2.waitKeyEx()
            cv2.waitKeyEx()
            break
        cap.release()

    I’ve tried different media players to compare cv2.VideoCapture with ffmpeg’s frame reading, to confirm that the issue lies with OpenCV. I’m looking to determine whether this is a bug in OpenCV or an issue in my code.

    EDIT:

    Just use the following code to check the difference between the OpenCV read and the ffmpeg read.

        cv2.imshow('test', cv2.absdiff(array, frame) * 10)
        cv2.waitKey(0)

    Here is the result:
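
    To put a number on the difference rather than eyeballing the amplified image, one option is to print the per-channel mean absolute difference (a sketch reusing array and frame from the code above):

        import cv2
        import numpy as np

        # array and frame come from the snippet above
        diff = cv2.absdiff(array, frame).astype(np.float64)
        print("mean abs diff per channel (B, G, R):", diff.reshape(-1, 3).mean(axis=0))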