Advanced search

Media (91)

Other articles (36)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.

  • Submitting improvements and additional plugins

    10 April 2011

    If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know, and its integration into the official distribution will be considered.
    You can use the development mailing list to tell us about it, or to ask for help with writing the plugin. Since MediaSPIP is based on SPIP, it is also possible to use SPIP's SPIP-zone mailing list to (...)

  • The statuses of mutualisation instances

    13 March 2010

    For general compatibility of the mutualisation management plugin with SPIP's original functions, instance statuses are the same as for any other object (articles, etc.); only their names in the interface differ slightly.
    The possible statuses are: prepa (requested), which corresponds to an instance requested by a user. If the site was already created in the past, it is switched to disabled mode. publie (validated), which corresponds to an instance validated by a (...)

On other sites (6923)

  • Affectiva drops every second frame

19 June 2019, by machinery

I am running Affectiva SDK 4.0 on a GoPro video recording, using a C++ program on Ubuntu 16.04. The GoPro video was recorded at 60 fps, but Affectiva only provides results for about half of the frames (i.e. 30 fps). If I look at the timestamps provided by Affectiva, the last timestamp matches the video duration, which means Affectiva is skipping roughly every second frame.

    Before running Affectiva I ran ffmpeg with the following command to make sure the video has a constant frame rate of 60 fps:

    ffmpeg -i in.MP4 -vf -y -vcodec libx264 -preset medium -r 60 -map_metadata 0:g -strict -2 out.MP4 null 2>&1

    When I inspect the presentation timestamps using ffprobe -show_entries frame=pict_type,pkt_pts_time -of csv -select_streams v in.MP4, I get the following values for the raw video:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/media/GoPro_concat/GoPro_concat.MP4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf58.20.100
     Duration: 01:14:46.75, start: 0.000000, bitrate: 15123 kb/s
       Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuvj420p(pc, bt709), 1280x720 [SAR 1:1 DAR 16:9], 14983 kb/s, 59.94 fps, 59.94 tbr, 60k tbn, 119.88 tbc (default)
       Metadata:
         handler_name    :  GoPro AVC
         timecode        : 13:17:26:44
       Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 127 kb/s (default)
       Metadata:
         handler_name    :  GoPro AAC
       Stream #0:2(eng): Data: none (tmcd / 0x64636D74)
       Metadata:
         handler_name    :  GoPro AVC
         timecode        : 13:17:26:44
    Unsupported codec with id 0 for input stream 2
    frame,0.000000,I
    frame,0.016683,P
    frame,0.033367,P
    frame,0.050050,P
    frame,0.066733,P
    frame,0.083417,P
    frame,0.100100,P
    frame,0.116783,P
    frame,0.133467,I
    frame,0.150150,P
    frame,0.166833,P
    frame,0.183517,P
    frame,0.200200,P
    frame,0.216883,P
    frame,0.233567,P
    frame,0.250250,P
    frame,0.266933,I
    frame,0.283617,P
    frame,0.300300,P
    frame,0.316983,P
    frame,0.333667,P
    frame,0.350350,P
    frame,0.367033,P
    frame,0.383717,P
    frame,0.400400,I
    frame,0.417083,P
    frame,0.433767,P
    frame,0.450450,P
    frame,0.467133,P
    frame,0.483817,P
    frame,0.500500,P
    frame,0.517183,P
    frame,0.533867,I
    frame,0.550550,P
    frame,0.567233,P
    frame,0.583917,P
    frame,0.600600,P
    frame,0.617283,P
    frame,0.633967,P
    frame,0.650650,P
    frame,0.667333,I
    frame,0.684017,P
    frame,0.700700,P
    frame,0.717383,P
    frame,0.734067,P
    frame,0.750750,P
    frame,0.767433,P
    frame,0.784117,P
    frame,0.800800,I
    frame,0.817483,P
    frame,0.834167,P
    frame,0.850850,P
    frame,0.867533,P
    frame,0.884217,P
    frame,0.900900,P
    frame,0.917583,P
    frame,0.934267,I
    frame,0.950950,P
    frame,0.967633,P
    frame,0.984317,P
    frame,1.001000,P
    frame,1.017683,P
    frame,1.034367,P
    frame,1.051050,P
    frame,1.067733,I
    ...

    I have uploaded the full output on OneDrive.

    If I run Affectiva on the raw video (not processed by ffmpeg) I face the same problem of dropped frames. I was using Affectiva with affdex::VideoDetector detector(60);

    Is there a problem with the ffmpeg command or with Affectiva?

    Edit: I think I have found where the problem might lie. It seems that Affectiva does not process the whole video but simply stops after a certain number of processed frames, without any error message. Below I have posted the C++ code I'm using. In the onProcessingFinished() method I print something to the console when processing is finished, but this message is never printed, so Affectiva never reaches the end.

    Is there something wrong with my code, or should I encode the videos in another format than MP4?

    #include "VideoDetector.h"
    #include "FrameDetector.h"

    #include <iostream>
    #include <fstream>
    #include <mutex>
    #include <condition_variable>

    std::mutex m;
    std::condition_variable conditional_variable;
    bool processed = false;

    class Listener : public affdex::ImageListener {
    public:
       Listener(std::ofstream * fout) {
           this->fout = fout;
     }
     virtual void onImageCapture(affdex::Frame image){
         //std::cout << "called";
     }
     virtual void onImageResults(std::map<affdex::FaceId, affdex::Face> faces, affdex::Frame image){
         //std::cout << faces.size() << " faces detected:" << std::endl;

         for(auto& kv : faces){
           (*this->fout) << image.getTimestamp() << ",";
           (*this->fout) << kv.first << ",";
           (*this->fout) << kv.second.emotions.joy << ",";
           (*this->fout) << kv.second.emotions.fear << ",";
           (*this->fout) << kv.second.emotions.disgust << ",";
           (*this->fout) << kv.second.emotions.sadness << ",";
           (*this->fout) << kv.second.emotions.anger << ",";
           (*this->fout) << kv.second.emotions.surprise << ",";
           (*this->fout) << kv.second.emotions.contempt << ",";
           (*this->fout) << kv.second.emotions.valence << ",";
           (*this->fout) << kv.second.emotions.engagement << ",";
           (*this->fout) << kv.second.measurements.orientation.pitch << ",";
           (*this->fout) << kv.second.measurements.orientation.yaw << ",";
           (*this->fout) << kv.second.measurements.orientation.roll << ",";
           (*this->fout) << kv.second.faceQuality.brightness << std::endl;


           //std::cout << kv.second.emotions.fear << std::endl;
           //std::cout << kv.second.emotions.surprise << std::endl;
           //std::cout << (int) kv.second.emojis.dominantEmoji;
         }
     }
    private:
       std::ofstream * fout;
    };

    class ProcessListener : public affdex::ProcessStatusListener{
    public:
       virtual void onProcessingException (affdex::AffdexException ex){
           std::cerr << "[Error] " << ex.getExceptionMessage();
       }
       virtual void onProcessingFinished (){
           {
               std::lock_guard<std::mutex> lk(m);
               processed = true;
               std::cout << "[Affectiva] Video processing finished." << std::endl;
           }
           conditional_variable.notify_one();
       }
    };

    int main(int argc, char ** argsv)
    {
       affdex::VideoDetector detector(60, 1, affdex::FaceDetectorMode::SMALL_FACES);
       //affdex::VideoDetector detector(60, 1, affdex::FaceDetectorMode::LARGE_FACES);
       std::string classifierPath="/home/wrafael/affdex-sdk/data";
       detector.setClassifierPath(classifierPath);
       detector.setDetectAllEmotions(true);

       // Output
       std::ofstream fout(argsv[2]);
       fout << "timestamp" << ",";
       fout << "faceId" << ",";
       fout << "joy" << ",";
       fout << "fear" << ",";
       fout << "disgust" << ",";
       fout << "sadness" << ",";
       fout << "anger" << ",";
       fout << "surprise" << ",";
       fout << "contempt" << ",";
       fout << "valence" << ",";
       fout << "engagement" << ",";
       fout << "pitch" << ",";
       fout << "yaw" << ",";
       fout << "roll" << ",";
       fout << "brightness" << std::endl;

       Listener l(&fout);
       ProcessListener pl;
       detector.setImageListener(&l);
       detector.setProcessStatusListener(&pl);

       detector.start();
       detector.process(argsv[1]);

       // wait for the worker
       {
       std::unique_lock<std::mutex> lk(m);
       conditional_variable.wait(lk, []{return processed;});
       }
       fout.flush();
       fout.close();
    }

    Edit 2: I have now dug further into the problem and looked at just one GoPro file with a duration of 19 min 53 s (GoPro splits the recordings). When I run Affectiva with affdex::VideoDetector detector(60, 1, affdex::FaceDetectorMode::SMALL_FACES); on that raw video, the following file is produced. Affectiva stops after 906 s without any error message and without printing "[Affectiva] Video processing finished".

    When I now transform the video using ffmpeg -i raw.MP4 -y -vcodec libx264 -preset medium -r 60 -map_metadata 0:g -strict -2 out.MP4 and then run Affectiva with affdex::VideoDetector detector(60, 1, affdex::FaceDetectorMode::SMALL_FACES);, Affectiva runs to the end and prints
    "[Affectiva] Video processing finished", but the frame rate is only around 23 fps. Here is the file.

    When I instead run Affectiva with affdex::VideoDetector detector(62, 1, affdex::FaceDetectorMode::SMALL_FACES); on this transformed file, Affectiva stops after 509 s and "[Affectiva] Video processing finished" is not printed. Here is the file.

  • AppRTC: Google’s WebRTC test app and its parameters

    23 July 2014, by silvia

    If you’ve been interested in WebRTC and haven’t lived under a rock, you will know about Google’s open source testing application for WebRTC: AppRTC.

    When you go to the site, a new video conferencing room is automatically created for you and you can share the provided URL with somebody else and thus connect (make sure you’re using Google Chrome, Opera or Mozilla Firefox).

    We’ve been using this application forever to check whether any issues with our own WebRTC applications are due to network connectivity issues, firewall issues, or browser bugs, in which case AppRTC breaks down, too. Otherwise we’re pretty sure to have to dig deeper into our own code.

    Now, AppRTC creates a pretty poor quality video conference, because the browsers use a 640×480 resolution by default. However, there are many query parameters that can be added to the AppRTC URL through which the connection can be manipulated.

    Here are my favourite parameters:

    • hd=true: turns on high definition, i.e. minWidth=1280,minHeight=720
    • stereo=true: turns on stereo audio
    • debug=loopback: connect to yourself (great for checking your own firewalls)
    • tt=60: by default, the channel is closed after 30 min – this gives you 60 minutes (max. 1440)

    For example, here’s what a stereo HD loopback test looks like: https://apprtc.appspot.com/?r=82313387&hd=true&stereo=true&debug=loopback

    This is not the limit of the available parameters, though. Here are some others you may find interesting for more in-depth geekery:

    • ss=[stunserver]: in case you want to test a STUN server other than the default Google ones
    • ts=[turnserver]: in case you want to test a TURN server other than the default Google ones
    • tp=[password]: password for the TURN server
    • audio=true&video=false: audio-only call
    • audio=false: video-only call
    • audio=googEchoCancellation=false,googAutoGainControl=true: disable echo cancellation and enable gain control
    • audio=googNoiseReduction=true: enable noise reduction (more Google-specific parameters)
    • asc=ISAC/16000: preferred audio send codec is ISAC at 16 kHz (use on Android)
    • arc=opus/48000: preferred audio receive codec is Opus at 48 kHz
    • dtls=false: disable datagram transport layer security
    • dscp=true: enable DSCP
    • ipv6=true: enable IPv6

    AppRTC’s source code is available here. And here is the file with the parameters (in case you want to check if they have changed).

    Have fun playing with the main and always up-to-date WebRTC application : AppRTC.

    UPDATE 12 May 2014

    AppRTC now also supports the following bitrate controls:

    • arbr=[bitrate]: set audio receive bitrate
    • asbr=[bitrate]: set audio send bitrate
    • vsbr=[bitrate]: set video send bitrate
    • vrbr=[bitrate]: set video receive bitrate

    Example usage: https://apprtc.appspot.com/?r=&asbr=128&vsbr=4096&hd=true

    The post AppRTC: Google’s WebRTC test app and its parameters first appeared on ginger’s thoughts.

  • FFmpeg API on iOS: "Resource temporarily unavailable"

    8 October 2017, by Julius Naeumann

    I’ve spent hours trying to fix this:

    I’m trying to use the FFmpeg API on iOS. My Xcode project builds and I can call FFmpeg API functions. I am trying to write code that decodes a video (without outputting anything for now), and I keep getting error -35: "Resource temporarily unavailable".

    The input file is from the camera roll (.mov) and I’m using MPEG-4 for decoding. All I’m currently doing is reading data from the file, parsing it and sending the parsed packets to the decoder. When I try to receive frames, all I get is this error. Does anyone know what I’m doing wrong?

    +(void)test: (NSString*)filename outfile:(NSString*)outfilename {

    /* register all the codecs */
    avcodec_register_all();

    AVCodec *codec;
    AVCodecParserContext *parser;
    AVCodecContext *c= NULL;
    int frame_count;
    FILE* f;
    AVFrame* frame;
    AVPacket* avpkt;
    avpkt = av_packet_alloc();
    //av_init_packet(avpkt);
    char buf[1024];

    uint8_t inbuf[INBUF_SIZE + AV_INPUT_BUFFER_PADDING_SIZE];
    uint8_t *data;
    size_t   data_size;

    /* set end of buffer to 0 (this ensures that no overreading happens for damaged mpeg streams) */
    memset(inbuf + INBUF_SIZE, 0, AV_INPUT_BUFFER_PADDING_SIZE);

    printf("Decode video file %s to %s\n", [filename cStringUsingEncoding:NSUTF8StringEncoding], [outfilename cStringUsingEncoding:NSUTF8StringEncoding]);
    /* find the MPEG-4 video decoder */
    codec = avcodec_find_decoder(AV_CODEC_ID_MPEG4);
    if (!codec) {
       fprintf(stderr, "Codec not found\n");
       exit(1);
    }
    c = avcodec_alloc_context3(codec);
    if (!c) {
       fprintf(stderr, "Could not allocate video codec context\n");
       exit(1);
    }
    if (codec->capabilities & AV_CODEC_CAP_TRUNCATED)
       c->flags |= AV_CODEC_FLAG_TRUNCATED; // we do not send complete frames
    /* For some codecs, such as msmpeg4 and mpeg4, width and height
    MUST be initialized there because this information is not
    available in the bitstream. */
    /* open it */
    if (avcodec_open2(c, codec, NULL) < 0) {
       fprintf(stderr, "Could not open codec\n");
       exit(1);
    }
    f = fopen([filename cStringUsingEncoding:NSUTF8StringEncoding], "rb");
    if (!f) {
       fprintf(stderr, "Could not open %s\n", [filename cStringUsingEncoding:NSUTF8StringEncoding]);
       exit(1);
    }
    frame = av_frame_alloc();
    if (!frame) {
       fprintf(stderr, "Could not allocate video frame\n");
       exit(1);
    }
    frame_count = 0;

    parser = av_parser_init(codec->id);
    if (!parser) {
       fprintf(stderr, "parser not found\n");
       exit(1);
    }

    while (!feof(f)) {
       /* read raw data from the input file */
       data_size = fread(inbuf, 1, INBUF_SIZE, f);
       if (!data_size)
           break;
       /* use the parser to split the data into frames */
       data = inbuf;
       while (data_size > 0) {
           int ret = av_parser_parse2(parser, c, &avpkt->data, &avpkt->size, data, data_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
           if (ret < 0) {
               fprintf(stderr, "Error while parsing\n");
               exit(1);
           }
           data      += ret;
           data_size -= ret;
           if (avpkt->size){
               char buf[1024];

               ret = avcodec_send_packet(c, avpkt);
               if (ret &lt; 0) {

                   fprintf(stderr, "Error sending a packet for decoding\n");
                   continue;
                   exit(1);
               }

               while (ret >= 0) {
                   ret = avcodec_receive_frame(c, frame);
                   if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF){
                       char e [1024];
                       av_strerror(ret, e, 1024);
                       fprintf(stderr, "Fail: %s !\n", e);
    // ~~~~~~~~ This is where my program exits ~~~~~~~~~~~~~~~~~~~~~~~~~~~
                       return;
                   }
                   else if (ret < 0) {
                       fprintf(stderr, "Error during decoding\n");
                       exit(1);
                   }
               }


           }
       }
    }
    /* some codecs, such as MPEG, transmit the I and P frame with a
    latency of one frame. You must do the following to have a
    chance to get the last frame of the video */

    fclose(f);
    avcodec_close(c);
    av_free(c);
    av_frame_free(&frame);
    printf("\n");

    }