
Other articles (43)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable release of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • Making files available

    14 April 2011

    By default, when it is initialised, MediaSPIP does not allow visitors to download files, whether they are originals or the result of transformation or encoding; it only allows them to be viewed.
    However, it is possible and easy to give visitors access to these documents, in several different forms.
    All of this is handled in the skeleton's configuration page. You need to go to the channel's administration area and choose, in the navigation, (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    To get a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)

On other sites (5691)

  • Make video frames from a livestream identifiable across multiple clients

    23 September 2016, by mschwaig

    I need to distribute a video stream from a live source to several clients with the additional requirement that each frame is identifiable across all clients.

    I have already done some research into the topic and have arrived at a possible solution that I can share. My solution seems suboptimal, and this is my first experience working with video streams, so I want to see if somebody knows a better way.

    The reason why I need to be able to identify specific frames within the video stream is that the streaming clients need to be able to talk about the time differences between events each of them identifies in their video stream.

    A little clarifying example

    I want to enable the following interaction:

    • Two client applications, Dewey and Stevie, connect to the streaming server
    • Dewey displays the stream and Stevie saves it to disk
    • Dewey identifies a specific video frame that is of interest to Stevie, so he wants to tell Stevie about it
    • Dewey extracts some identifying information from the video frame and sends it to Stevie
    • Stevie uses the identifying information to extract the same frame from the copy of the livestream he is currently saving

    Dewey cannot send the frame to Stevie directly, because Malcolm and Reese also want to tell him about specific video frames and Stevie is interested in the time difference between their findings.

    Suggested solution

    The solution I found was to use ffserver to broadcast an RTP stream and to use the timestamps from the RTCP packets to identify frames. These timestamps are normally used to synchronize audio and video, not to provide a shared timeline across several clients, which is why I am skeptical that this is the best way to solve my problem.

    It also seems beneficial to have frame numbers, i.e. an increasing counter of frames, instead of arbitrary timestamps that increase by some possibly varying offset, because for my application I also have to reference neighboring frames, and it seems easier to compute time differences from frame numbers than the other way around.
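
    A sketch of that frame-numbering idea, not part of the original question: RTP video streams normally use a 90 kHz timestamp clock, so with a constant, known frame rate each client could turn the RTP timestamp of a frame into a frame index relative to a reference timestamp that all clients agree on (for example, the timestamp of one shared, well-known frame). The frame rate and the reference value below are assumptions for illustration only.

         /* Hypothetical sketch: derive a shared frame index from RTP timestamps.
          * Assumes the standard 90 kHz RTP video clock and a constant frame rate. */
         #include <stdint.h>
         #include <stdio.h>

         #define RTP_VIDEO_CLOCK 90000u   /* 90 kHz clock, the usual choice for RTP video */
         #define ASSUMED_FPS     25u      /* assumption: constant, known frame rate */

         /* RTP clock ticks between two consecutive frames. */
         #define TICKS_PER_FRAME (RTP_VIDEO_CLOCK / ASSUMED_FPS)

         /* Frame index of rtp_ts relative to reference_ts. Unsigned subtraction
          * stays correct across the 32-bit wrap-around of RTP timestamps. */
         static uint32_t frame_index(uint32_t rtp_ts, uint32_t reference_ts)
         {
             return (rtp_ts - reference_ts) / TICKS_PER_FRAME;
         }

         int main(void)
         {
             uint32_t reference = 123000u;                          /* hypothetical agreed reference frame */
             uint32_t current   = reference + 7u * TICKS_PER_FRAME; /* 7 frames later */
             printf("frame index: %u\n", frame_index(current, reference));
             return 0;
         }

    The time difference between two events is then the difference of their frame indices divided by the frame rate, which is exactly the "frame numbers to time" direction described above.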

  • Raw H264 + ADTS AAC audio streams muxing results in wrong duration and bitrate using ffmpeg libraries

    17 April 2015, by Muhammad Ali

    I have imaging hardware that outputs two streams:

    Video : H264 bitstream
    Audio : ADTS AAC

    I am aware of the input sources' parameters, e.g. bitrate and FPS for video, sampling rate for audio, etc., and I have set the ffmpeg parameters accordingly.

    My desired output is an FLV container with these two streams.

    At stage 1, I was able to mux the H264 bitstream into an FLV container, which would play just fine in ffplay. No errors were reported on the console, and the duration and bitrate calculations were fine too.

    At stage 2, I tried to mux the audio (ADTS AAC) stream together with the video stream into the FLV container. The audio stream required the adtstoasc bitstream filter, though. But now the duration of the file was wrong, and so was the bitrate.

    I should mention that my PTS values are provided by the hardware, which claims that the audio and video streams use the same counter for PTS, so audio and video frames should always be in order.

    Playback of the resulting file in ffplay gets stuck on the first video frame, while the audio keeps playing fine, and the console complains a lot about "AVC NAL Size (very large number)".

    Any ideas why the duration/bitrate is wrong when I mux the audio in as well?

    Here is the muxing code:

    if ((packet.header.dataType == AM_TRANSFER_PACKET_TYPE_H264) || (packet.header.dataType == AM_TRANSFER_PACKET_TYPE_AAC))
         {


           AVCodecContext *cn;

           raw_fd_index = packet.header.streamId << 1;
           //printf("GOT VIDEO FRAME : DATA LEN : %d \n",packet.header.dataLen);
           if(packet.header.dataType == AM_TRANSFER_PACKET_TYPE_H264)
           {  
             //printf("VIDEO STREAM ID %d | PTS : %d | V.SEQ: %d \n\n",packet.header.streamId,packet.header.dataPTS,packet.header.seqNum);
             if(!firstVideoRecvd)
             {
               //still waiting for I Frame
               if(packet.header.frameType == AM_TRANSFER_FRAME_IDR_VIDEO)
               {
                 firstVideoRecvd = 1;
                 audioEnabled = 1;
                 lastvPts = packet.header.dataPTS;
                 printf("\n\n IDR received : AudioEnabled : true  |  MuxingEnabled : true \n");
               }
               else
               {
                 printf("... waiting for IDR frame \n\n ");
                 continue;
               }
             }


           }
           else
           {
             //printf("AUDIO STREAM ID %d | PTS : %d | A.SEQ: %d \n\n",packet.header.streamId + 1,packet.header.dataPTS,packet.header.seqNum);
             if(!firstVideoRecvd)
             {
               printf("\n\n Waiting for a video Frame before we start packing audio... ignoring packet\n");
               continue;
             }
             if(!audioEnabled)
             {  printf("\n\n First Video received but audio still not enabled \n\n");
               continue;
             }

             if(recvFirstAudio)
             {
               recvFirstAudio = 0;
               lastaPts = packet.header.dataPTS;
             }

           }


           //******** FFMPEG SPECIFICS

           //printf("FRAME SIZE : %d \t FRAME TYPE : %d \n",packet.header.dataLen, packet.header.frameType);

           av_init_packet(&pkt);

           //pkt.flags        |= AV_PKT_FLAG_KEY;

           if(packet.header.dataType == AM_TRANSFER_PACKET_TYPE_H264)
             {
               pkt.stream_index  = packet.header.streamId;//0;//ost->st->index;     //stream index 0 for vid : 1 for aud
               outStreamIndex = outputVideoStreamIndex;
               vDuration += (packet.header.dataPTS - lastvPts);
               lastvPts = packet.header.dataPTS;

             }
           else if(packet.header.dataType == AM_TRANSFER_PACKET_TYPE_AAC)
             {
               pkt.stream_index  = packet.header.streamId + 1;//1;//ost->st->index;     //stream index 0 for vid : 1 for aud
               outStreamIndex = outputAudioStreamIndex;
               aDuration += (packet.header.dataPTS - lastaPts);
               lastaPts = packet.header.dataPTS;
             }


           //packet.header.streamId
           pkt.data          = (uint8_t *)packet.data;//raw_data;
           pkt.size          = packet.header.dataLen;

           pkt.pts = pkt.dts= packet.header.dataPTS;
           //pkt.duration = 24000;      //24000 assumed basd on observation

           //duration calculation
           if(packet.header.dataType == AM_TRANSFER_PACKET_TYPE_H264)
             {
               pkt.duration = vDuration;

             }
           else if(packet.header.dataType == AM_TRANSFER_PACKET_TYPE_AAC)
             {
               pkt.duration = aDuration;
             }

           in_stream  = ifmt_ctx->streams[pkt.stream_index];
           out_stream = ofmt_ctx->streams[outStreamIndex];

           cn = out_stream->codec;

           if(packet.header.dataType == AM_TRANSFER_PACKET_TYPE_AAC)
             av_bitstream_filter_filter(aacbsfc, in_stream->codec, NULL, &pkt.data, &pkt.size, pkt.data, pkt.size, 0);

           //if(packet.header.dataType == AM_TRANSFER_PACKET_TYPE_H264)
           // av_bitstream_filter_filter(h264bsfc, in_stream->codec, NULL, &pkt.data, &pkt.size, pkt.data, pkt.size, 0);


            if(packet.header.dataType == AM_TRANSFER_PACKET_TYPE_H264)
            {
              //commented on Tuesday
             av_packet_rescale_ts(&pkt, cn->time_base, out_stream->time_base);

             pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);

             pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);

             //printf("Pkt Duration before scaling: %d \n ",pkt.duration);
             pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
             //printf("Pkt Duration after scaling: %d \n ",pkt.duration);
           }

           //enabled on Tuesday
           pkt.pos = -1;

           pkt.stream_index = outStreamIndex;

           //doxygen suggests i use av_write_frame if i am taking care of interleaving
           ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
           //ret = av_write_frame(ofmt_ctx, &pkt);
           if (ret < 0)
            {
                fprintf(stderr, "Error muxing packet\n");
                continue;
            }

          av_free_packet(&pkt);
         }

    Notice that I am not setting pkt.flags. I am not sure what I should set it to, and would it matter? I do not set it when muxing just the video into FLV, nor when muxing both audio and video into FLV.
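
    For reference (this is not from the original post): libavformat muxers use AV_PKT_FLAG_KEY to mark keyframe packets, and for FLV this controls the keyframe flag on video tags, so it matters mainly for seeking rather than for the duration. A minimal sketch of how the flag could be set right after av_init_packet(&pkt), assuming the hardware's frameType field reliably identifies IDR frames:

         /* Sketch only: mark seekable packets for the muxer.
          * Assumes AM_TRANSFER_FRAME_IDR_VIDEO reliably flags IDR frames. */
         if (packet.header.dataType == AM_TRANSFER_PACKET_TYPE_H264) {
             if (packet.header.frameType == AM_TRANSFER_FRAME_IDR_VIDEO)
                 pkt.flags |= AV_PKT_FLAG_KEY;   /* only IDR frames are clean seek points */
         } else if (packet.header.dataType == AM_TRANSFER_PACKET_TYPE_AAC) {
             pkt.flags |= AV_PKT_FLAG_KEY;       /* every AAC frame is independently decodable */
         }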

  • ffmpeg crop detect python using subprocess

    12 May 2015, by JRM

    I want to use Python's subprocess module to call ffmpeg and use cropdetect to find the black areas in a video. I want to put the cropdetect result into a string variable and store it in a database. At the moment I can get the process running in the terminal, but I am unsure how to grab the specific part of the terminal (stdout) output:

    The script:

    def cropDetect():
       p = subprocess.Popen(["ffmpeg", "-i", "/Desktop/ffprobe_instance/Crop_detect/video_file.mpg", "-vf", "cropdetect=24:16:0", "-vframes", "10", "dummy.mp4"], stdout=subprocess.PIPE)
       result = p.communicate()[0]
       print result


    # SCRIPT
    cropDetect()

    Result in the terminal:
    [Parsed_cropdetect_0 @ 0x7fa1d840cb80] x1:719 x2:0 y1:575 y2:0 w:-704 h:-560 x:714 y:570 pos:432142 pts:44102 t:0.490022 crop=-704:-560:714:570

    How do I take "crop=-704:-560:714:570" and put it into a variable that I can store in a database?

    As per the update:

    def cropDetect1():
       p = subprocess.check_output(["ffmpeg", "-i", "/Desktop/ffprobe_instance/Crop_detect/video_file.mpg", "-vf", "cropdetect=24:16:0", "-vframes", "10", "dummy.mp4"])
       match = re.search("crop\S+", p)
       crop_result = None
       if match is not None:
           crop_result = match.group()
           print "hello %s" % crop_result

    I can't seem to print out crop_result; I am presuming that means the variable is empty?

    UPDATE: Found it:

    def detectCropFile(localPath):
       fpath = "/xxx/xx/Desktop/Crop_detect/videos/USUV.mp4"
       print "File to detect crop: %s " % fpath
       p = subprocess.Popen(["ffmpeg", "-i", fpath, "-vf", "cropdetect=24:16:0", "-vframes", "500", "-f", "rawvideo", "-y", "/dev/null"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
       infos = p.stderr.read()
       print infos
       allCrops = re.findall(CROP_DETECT_LINE + ".*", infos)
       print allCrops
       mostCommonCrop = Counter(allCrops).most_common(1)
       print "most common crop: %s" % mostCommonCrop
       print mostCommonCrop[0][0]
       global crop
       crop = mostCommonCrop[0][0]
       video_rename()

    Use p = subprocess.Popen(["ffmpeg", "-i", fpath, "-vf", "cropdetect=24:16:0", "-vframes", "500", "-f", "rawvideo", "-y", "/dev/null"], stdout=subprocess.PIPE, stderr=subprocess.PIPE) to pipe it out; ffmpeg writes the cropdetect log lines to stderr rather than stdout, which is why this version reads p.stderr.