Other articles (70)

  • Change the publication date

    21 June 2013, by

    How do you change the publication date of a media item?
    You first need to add a "Publication date" field to the appropriate form mask:
    Administer > Form mask configuration > Select "A media item"
    In the "Fields to add" section, check "Publication date"
    Click Save at the bottom of the page

  • Videos

    21 April 2011, by

    As with "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 video tag.
    One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name one) and that each browser natively supports only certain video formats.
    Its main advantage is that video playback is handled natively by the browser, which removes the need for Flash and (...)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

On other sites (5373)

  • How to install fluent-ffmpeg in AWS Lambda?

    9 October 2020, by Jhony code

    Hi, can you explain how you set up the library (node-fluent-ffmpeg) correctly in a Lambda function?

    Because I have already set up the following:

    • The serverless file with its layer parameters
    • Uploaded the binaries to the Lambda function as a layer
    • Set the FFPROBE_PATH and FFMPEG_PATH environment variables for the Lambda function

    The weird thing is that the Lambda function just finishes, as if it were sending a normal response.

    


    https://user-images.githubusercontent.com/35440957/95387637-ff99a580-08be-11eb-9fc9-1498aea2e2c1.png

    


    Could you also show me, step by step, how you made it work in a Lambda function?

    


    Below you can see how I have everything set up, and it still does not work:

    


    1- This is from the Lambda console

    


    https://user-images.githubusercontent.com/35440957/95386867-ee03ce00-08bd-11eb-91ae-29b45bd90471.png

    


    https://user-images.githubusercontent.com/35440957/95386217-f60f3e00-08bc-11eb-9fd8-b51d1b04a81e.png

    


    2- My JSON dependencies

    


     "dependencies": {
    "aws-sdk": "^2.764.0",
    "aws-serverless-express": "^3.3.8",
    "fluent-ffmpeg": "^2.1.2",
    "lambduh-execute": "^1.3.0"
  }


    


    3- My serverless file

    


service: functionName

provider:
  name: aws
  runtime: nodejs12.x
  memorySize: 3008
  timeout: 300
  stage: live
  region: us-east-1
  environment:
    FFMPEG_PATH: /opt/ffmpeg/ffmpeg
    FFPROBE_PATH: /opt/ffmpeg/ffprobe

functions:
  api:
    handler: lambda.handler
    events:
      - s3: ${self:custom.bucket}
    layers:
      - { Ref: FfmpegLambdaLayer }

layers:
  ffmpeg:
    path: layer

custom:
  bucket: buckename



    


    4- The directory of the layer

    


    https://user-images.githubusercontent.com/35440957/95386502-6918b480-08bd-11eb-95e6-1b0b78f3a230.png
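
    (Since the screenshot is not reproduced here, a note on the layout it presumably shows: with the serverless config above, the contents of the `path: layer` directory are zipped and, per standard Lambda behavior, extracted under /opt in the function. For the FFMPEG_PATH and FFPROBE_PATH values above, the layer directory would therefore need to look roughly like the sketch below — an assumption about the expected layout, not a reproduction of the screenshot.)

    layer/
    └── ffmpeg/
        ├── ffmpeg
        └── ffprobe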

    


    5- JavaScript file

    


    
const fs = require("fs");
const AWS = require("aws-sdk");
const ffmpeg = require("fluent-ffmpeg");
const s3 = new AWS.S3();

exports.handler = async (event, context, callback) => {

  ffmpeg({
    source: `**Object file which i already verified it exists**`
  })
    .on("filenames", async (filenames) => {
      console.log("Uploading please wait");
    })
    .on("error", function (err) {
      console.log("Error in filenames section: " + JSON.stringify(err));
    })
    .on("end", function () {
      console.log("Screenshots taken");
    })
    .screenshots({
      count: 10,
      folder: "tmp/",
      filename: "thumbnail-at-%i.png",
      size: "1600x900",
    })
    .on("end", function (stdout, stderr) {

    })
    .on("error", function (err) {
      console.log("Error writing video to disk: " + JSON.stringify(err));
      throw "An error: " + err.message;
    });

};
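
    A plausible reason the function "just finishes" without any log output is that the async handler above returns immediately: nothing awaits the fluent-ffmpeg call, so Lambda can end the invocation before any 'end' or 'error' event fires. The following is only an illustrative sketch of one way to structure it, not the original code; the input path and response are placeholders, and it assumes the ffmpeg/ffprobe binaries come from the layer via FFMPEG_PATH/FFPROBE_PATH as configured above.

const ffmpeg = require("fluent-ffmpeg");

// Wrap the event-based screenshots() call in a Promise so the async handler can await it.
function takeScreenshots(inputPath) {
  return new Promise((resolve, reject) => {
    ffmpeg(inputPath)
      .on("filenames", (filenames) => console.log("Generating:", filenames))
      .on("end", () => resolve())
      .on("error", (err) => reject(err))
      .screenshots({
        count: 10,
        folder: "/tmp", // /tmp is the only writable path inside the Lambda environment
        filename: "thumbnail-at-%i.png",
        size: "1600x900",
      });
  });
}

exports.handler = async (event) => {
  // Placeholder input: e.g. a file previously downloaded from S3 to /tmp.
  await takeScreenshots("/tmp/input.mp4");
  // ...upload the generated thumbnails from /tmp back to S3 here...
  return { statusCode: 200 };
};

    Note also that the original folder: "tmp/" is a relative path (resolving under the read-only /var/task working directory), whereas /tmp is the only writable location in Lambda; that error would normally surface in the 'error' handler, but only if the call is awaited.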



    


    Expected results

    


    The dependency should work as expected in an AWS Lambda function.

    


    Observed results

    


    When using environment variables

    


    https://user-images.githubusercontent.com/35440957/95548905-154cbf00-09d4-11eb-8f46-f06cd012b05b.png

    


    When not using environment variables

    


    The Lambda function just finishes its job without showing any (error) log messages.
    In the CloudWatch console, it only shows the log that Lambda finished the function successfully (not what is expected from the dependency).

    


    I have already used this dependency locally and it works, but getting it to run in Lambda has been very hard.

    


    Checklist

    


      

    • [x] I have read the FAQ
    • [x] I have included full stderr/stdout output from ffmpeg
    • [x] I have included the binaries from the static build and deployed them to the Lambda function
    • [x] I have set the environment variables
          FFMPEG_PATH: /opt/ffmpeg/ffmpeg
          FFPROBE_PATH: /opt/ffmpeg/ffprobe


    


    Version information

    


      

    • fluent-ffmpeg version: ^2.1.2
    • ffmpeg version or build: ffmpeg-git-20190925-amd64-static.tar.xz
    • OS: Lambda (Linux environment) / Node 12.x


    


  • AR.Drone 2, ffmpeg avcodec_decode_video2() segmentation fault

    21 April 2014, by mechanicalmanb

    I have been trying to decode the video stream from an AR.Drone 2.0 (http://ardrone2.parrot.com/) for a while now with no success. Despite several examples that I have been following closely (I’d paste links, but I am not allowed) I cannot escape a segmentation fault inside of the ffmpeg libavcodec library. I thought that perhaps I was making some kind of mistake in the multi-threaded structure I was building, so I cut out everything except the bare minimum you need to connect to the drone, collect a frame from the drone, and send it to ffmpeg’s avcodec_decode_video2() function.

    I compiled the ffmpeg source (I've actually tried three different releases!) and can get the ffplay utility to display the drone's video TCP stream. The video lags significantly, but at least I know the drone isn't sending me complete gibberish.

    Has anyone encountered a problem like this before? What could be causing this segmentation fault, and what can I do about it? Is there a way to isolate a test on ffmpeg so that I can be sure it is the library and not something I've been doing this entire time?

    Thanks for your time.

    A pastebin with my code:
    http://pastebin.com/NYTf0NeT

    Some details on my ffmpeg and compiler setup:

    ffmpeg version 2.2.git Copyright (c) 2000-2014 the FFmpeg developers
     built on Mar  3 2014 18:05:42 with gcc 4.8 (Ubuntu 4.8.1-2ubuntu1~12.04)
     configuration:
     libavutil      52. 66.100 / 52. 66.100
     libavcodec     55. 52.102 / 55. 52.102
     libavformat    55. 33.100 / 55. 33.100
     libavdevice    55. 10.100 / 55. 10.100
     libavfilter     4.  2.100 /  4.  2.100
     libswscale      2.  5.101 /  2.  5.101
     libswresample   0. 18.100 /  0. 18.100

    The output of my code and a backtrace at the segmentation fault:

    *********************** START ***********************



    booting...

    [h264 @ 0x604040] err{or,}_recognition separate: 1; 1

    [h264 @ 0x604040] err{or,}_recognition combined: 1; 1

    [h264 @ 0x604040] Unsupported bit depth: 0

    asked for 40000 bytes, received packet of 1448 bytes

    PaVE synchronized. YIPEEEEEEEEEEEEEEEEEEEEEEEE



    ---------------------------

    Codec : H264

    StreamID : 1

    Timestamp : 1031517 ms

    Encoded dims : 640 x 368

    Display dims : 640 x 360

    Header size : 76

    Payload size : 17583

    Size of SPS inside payload : 14

    Size of PPS inside payload : 10

    Slices in the frame : 1

    Frame Type / Number : IDR-Frame : 31467 : slide 1/1

    ---------------------------




    gathering payload...

    asked for 16211 bytes, received packet of 1448 bytes

    gathering payload...

    asked for 14763 bytes, received packet of 1448 bytes

    gathering payload...

    asked for 13315 bytes, received packet of 1448 bytes

    gathering payload...

    asked for 11867 bytes, received packet of 1448 bytes

    gathering payload...

    asked for 10419 bytes, received packet of 1448 bytes

    gathering payload...

    asked for 8971 bytes, received packet of 1448 bytes

    gathering payload...

    asked for 7523 bytes, received packet of 1448 bytes

    gathering payload...

    asked for 6075 bytes, received packet of 1448 bytes

    gathering payload...

    asked for 4627 bytes, received packet of 1448 bytes

    gathering payload...

    asked for 3179 bytes, received packet of 1448 bytes

    gathering payload...

    asked for 1731 bytes, received packet of 1448 bytes

    gathering payload...

    asked for 283 bytes, received packet of 283 bytes

    payload complete, attempting to decode frame




    Program received signal SIGSEGV, Segmentation fault.

    0x00007ffff73fccba in ?? () from /usr/lib/x86_64-linux-gnu/libavcodec.so.53

    (gdb) bt

    #0  0x00007ffff73fccba in ?? () from /usr/lib/x86_64-linux-gnu/libavcodec.so.53

    #1  0x00007ffff73fd8f5 in avcodec_decode_video2 () from /usr/lib/x86_64-linux-gnu/libavcodec.so.53

    #2  0x000000000040159f in fetch_and_decode(int, parrot_video_encapsulation_t, AVCodecContext*, AVFrame*)

       ()

    #3  0x00000000004019c6 in main ()

    EDIT: I used Valgrind to try to get a better picture of the segfault, and received the following:

    ==4730== Invalid read of size 1
    ==4730==    at 0x5265CBA: ??? (in /usr/lib/x86_64-linux-gnu/libavcodec.so.53.35.0)
    ==4730==    by 0x52668F4: avcodec_decode_video2 (in /usr/lib/x86_64-linux-gnu/libavcodec.so.53.35.0)
    ==4730==    by 0x40140E: fetch_and_decode(int, AVCodecContext*, AVFrame*) (main.cpp:176)
    ==4730==    by 0x401757: main (main.cpp:273)
    ==4730==  Address 0x280056c46f9 is not stack'd, malloc'd or (recently) free'd
    ==4730==
    ==4730==
    ==4730== Process terminating with default action of signal 11 (SIGSEGV)
    ==4730==  Access not within mapped region at address 0x280056C46F9
    ==4730==    at 0x5265CBA: ??? (in /usr/lib/x86_64-linux-gnu/libavcodec.so.53.35.0)
    ==4730==    by 0x52668F4: avcodec_decode_video2 (in /usr/lib/x86_64-linux-gnu/libavcodec.so.53.35.0)
    ==4730==    by 0x40140E: fetch_and_decode(int, AVCodecContext*, AVFrame*) (main.cpp:176)
    ==4730==    by 0x401757: main (main.cpp:273)

    "Invalid read size of 1" refers to trying to access a byte outside the bounds of an array. Does this mean that the library is trying to access something outside the bounds of an array I’m giving it ? I’ve checked the AVPkt, and that seems fine. I’m still stumped !

  • How to get video pixel location from screen pixel location?

    22 February 2024, by AmLearning

    Wall of text, so I tried breaking it up into sections to make it better; sorry in advance.

    


    The problem

    


    I have some video files that I am reading with ffmpeg to get the colors at specific pixels, and all seems well, but I just ran into a problem with finding the right pixel to input. I realized (or mistakenly believe) that the pixel location (x,y) on the screen will be different from the "local" pixel location in the video, so to speak (i.e. if I want pixel 50,0 of the video, that will be different from my screen's pixel 50,0 because the resolutions don't match). I was trying to think of a way to convert my screen's pixel location into the local pixel location, and I have two ideas, but I am not sure whether either of them is any good. Note that I am currently using cmd+shift+4 on macOS to get the screen coordinates, and the video is playing fullscreen as in the screenshot below.

    


    Ideas

    


      

    1. (screenshot of the fullscreen video) If I manually measure and account for this vertical offset, would it effectively convert the screen coordinate into the "local" one?

    2. If I instead adjust my SwsContext to use my screen's width and height as the destination dimensions, will that remove the need to convert screen coordinates to video coordinates?


    


    Problems with the Ideas

    


    The problems I see with the first solution are that I am assuming there is no hidden horizontal offset (or conversely that all of the width of the video is actually renderable on the screen). Additionally, this solution would only get an approximate result as I would need to manually measure the offsets, screen width, and screen height using the method I currently am using to get the screen coordinates.

    


    With the second solution, aside from the question of whether it will even work, the problem becomes that I can no longer measure the screen coordinates I want, because I can't seem to get rid of those black bars in VLC.

    


    Some Testing I did

    


    Given that my entire problem would be fixed (maybe?) if the black bars are part of the video itself, I tried checking whether the black bars were part of the video, and when I looked at the frame data's first pixel, it was black. The problem then is that if the black bars are entirely part of the video, why are the colors I get for some pixels slightly off (I am checking with ColorSync Utility)? These colors aren't just slightly wrong; it seems more that they belong to a slightly offset region of the video.

    


    However, this may be somewhat explained if ffmpeg reads right to left. When I put the top left corner of the video into the program and looked again at the pixel data in the frame for that location (location again was calculated by assuming the video location would be the same as the screen location) instead of getting white, I got a bluish color much like the glove in the top right corner.

    


    The Watered Down Code

    


        struct SwsContext *rescaler = NULL;
    rescaler = sws_getContext(codec_context->width, codec_context->height, codec_context->pix_fmt, codec_context->width, codec_context->height, AV_PIX_FMT_RGB0, SWS_FAST_BILINEAR, NULL, NULL, 0);

// Get Packets (containers for frames but not guaranteed to have a full frame) and Frames
    while (av_read_frame(avformatcontext, packet) >= 0)
    {
        
        // determine if packet is video packet
        if (packet->stream_index != video_index)
        {
            continue;
        }
        
        // send packet to decoder
        if (avcodec_send_packet(codec_context, packet) < 0)
        {
            perror("Failed to decode packet");
        }
        
        // get frame from decoder
        int response = avcodec_receive_frame(codec_context, frame);
        if (response == AVERROR(EAGAIN))
        {
            continue;
        }
        else if (response < 0)
        {
            perror("Failed to get frame");
        }
        
        // convert frame to RGB0 colorspace 4 bytes per pixel 1 per channel
        response = sws_scale_frame(rescaler, scaled_frame, frame);
        if(response < 0){
            perror("Failed to change colorspace");
        }
        // get data and write it
        int pixel_number = y*(scaled_frame->linesize[0]/4)+x; // divide by four gets pixel linesize (4 byte per pixel)
        int byte_number = 4*(pixel_number-1); // position of pixel in array
        // start of debugging things
        int temp = scaled_frame->data[0][byte_number]; // R
        int one_after = scaled_frame->data[0][byte_number+1]; // G
        int two_after = scaled_frame->data[0][byte_number+2]; // B
        int als; // where i put the breakpoint
        // end of debugging things
    }


    


    In Summary

    


    I have no idea what is happening.

    


    I take the data for a pixel and compare it to what ColorSync Utility says should be there, but it is always slightly off, as though the pixel I was actually reading was offset from the one I thought I was reading. Therefore, I want to find a way to get the pixel location in a video given a screen coordinate when the video is in fullscreen, but I have no idea how to do that (aside from a few ideas that are probably bad at best).
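
    For what it is worth, if the player scales the video uniformly to fit the screen and centers it (which is what the black bars suggest), the usual screen-to-video mapping is just the inverse of that scale plus the bar offsets. The sketch below is illustrative only and not taken from the question's code; screen_w/screen_h and video_w/video_h are placeholders for the actual screen resolution and the video's display dimensions.

    #include <stdbool.h>

    /* Map a fullscreen screen coordinate (sx, sy) to a video pixel coordinate
     * (*vx, *vy), assuming the player scales the video uniformly to fit the
     * screen and centers it (letterbox/pillarbox). */
    static bool screen_to_video(int sx, int sy,
                                int screen_w, int screen_h,
                                int video_w, int video_h,
                                int *vx, int *vy)
    {
        /* Uniform scale factor chosen by the player. */
        double scale_x = (double)screen_w / video_w;
        double scale_y = (double)screen_h / video_h;
        double scale   = scale_x < scale_y ? scale_x : scale_y;

        /* Size of the rendered video and the black-bar offsets around it. */
        double rendered_w = video_w * scale;
        double rendered_h = video_h * scale;
        double offset_x   = (screen_w - rendered_w) / 2.0;  /* pillarbox */
        double offset_y   = (screen_h - rendered_h) / 2.0;  /* letterbox */

        /* Undo the offset and the scale to land back in video pixel space. */
        double fx = (sx - offset_x) / scale;
        double fy = (sy - offset_y) / scale;

        if (fx < 0 || fy < 0 || fx >= video_w || fy >= video_h)
            return false;  /* the point falls inside a black bar */

        *vx = (int)fx;
        *vy = (int)fy;
        return true;
    }

    Given such a video-space (vx, vy), the red byte of an RGB0 frame sits at data[0][vy * linesize[0] + 4 * vx], with green and blue at the next two offsets; frame data is laid out left to right, top to bottom.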

    


    Also, does FFmpeg put the frame data right to left?

    


    A Video Better Showing My Problem

    


    https://www.youtube.com/watch?v=NSEErs2lC3A