
Other articles (41)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MediaSPIP installation is at version 0.2 or later. If in doubt, contact the administrator of your MediaSPIP to find out.

  • MediaSPIP Player: potential problems

    22 February 2011

    The player does not work in Internet Explorer
    On Internet Explorer (8 and 7 at least), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the cause may lie in the configuration of Apache’s mod_deflate module.
    If the configuration of that Apache module contains a line resembling the following, try removing it or commenting it out to see whether the player then works correctly: (...)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution name   Version name           Version number
    Debian              Squeeze                6.x.x
    Debian              Wheezy                 7.x.x
    Debian              Jessie                 8.x.x
    Ubuntu              The Precise Pangolin   12.04 LTS
    Ubuntu              The Trusty Tahr        14.04

    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

On other sites (2279)

  • Android: merging two videos with different sizes, codecs, frame rates and aspect ratios using FFmpeg

    6 September 2017, by Alok Kumar Verma

    I’m making an app which merges two or more video files that I get from another activity. After choosing the files, we pass them to another activity where the merging happens. I’ve followed this link to do the same: AndroidWarZone FFMPEG

    There I found how to merge just two files with different qualities. The command is given below:

    String[] complexCommand = {"ffmpeg","-y","-i","/storage/emulated/0/videokit/sample.mp4",
    "-i","/storage/emulated/0/videokit/in.mp4","-strict","experimental",
    "-filter_complex",
    "[0:v]scale=640x480,setsar=1:1[v0];[1:v]scale=640x480,setsar=1:1[v1];[v0][0:a][v1][1:a] concat=n=2:v=1:a=1",
    "-ab","48000","-ac","2","-ar","22050","-s","640x480","-r","30","-vcodec","mpeg4","-b","2097k","/storage/emulated/0/vk2_out/out.mp4"}

    Since I have a list of selected videos in my array, which I’m passing to the next page, I’ve made some changes to my command, like this:

    private void mergeVideos() {
        String savingPath = Environment.getExternalStorageDirectory().getAbsolutePath() + "/video.mp4";

        ArrayList<File> fileList = mList;

        List<String> filenames = new ArrayList<String>();

        for (int i = 0; i < fileList.size(); i++) {
            filenames.add("-i");
            filenames.add(fileList.get(i).toString());
        }

        Log.e("Log===", filenames.toString());

        String joined = TextUtils.join(", ", filenames);

        Log.e("Joined====", joined);

        String complexCommand[] = {"-y", joined,
                "-filter_complex",
                "[0:v]scale=640x480,setsar=1:1[v0];[1:v]scale=640x480,setsar=1:1[v1];[v0][0:a][v1][1:a] concat=n=2:v=1:a=1",
                "-ab", "48000", "-ac", "2", "-ar", "22050", "-s", "640x480", "-r", "30",
                "-vcodec", "mpeg4", "-b", "2097k", savingPath};
        Log.e("RESULT====", Arrays.toString(complexCommand));

        execFFmpegBinary(complexCommand);
    }

    This is the output I get in the log.

    The first line shows the received data that I added to mList:

    E/RECEIVED DATA=====: [/mnt/m_external_sd/DCIM/Camera/VID_31610130_011933_454.mp4, /mnt/m_external_sd/DCIM/Camera/VID_23120824_054526_878.mp4]
    E/RESULT====: [-y, -i, /mnt/m_external_sd/DCIM/Camera/VID_31610130_011933_454.mp4, -i, /mnt/m_external_sd/DCIM/Camera/VID_23120824_054526_878.mp4, -filter_complex, [0:v]scale=640x480,setsar=1:1[v0];[1:v]scale=640x480,setsar=1:1[v1];[v0][0:a][v1][1:a] concat=n=2:v=1:a=1, -ab, 48000, -ac, 2, -ar, 22050, -s, 640x480, -r, 30, -vcodec, mpeg4, -b, 2097k, /storage/emulated/0/video.mp4]

    Here RESULT is the complexCommand array that is passed to execFFmpegBinary(), but it does not work (one possible reason is sketched at the end of this question).

    This is my execFFmpegBinary():

    private void execFFmpegBinary(final String[] combine) {
        try {
            fFmpeg.execute(combine, new ExecuteBinaryResponseHandler() {
                @Override
                public void onFailure(String s) {
                    Log.d("", "FAILED with output : " + s);
                }

                @Override
                public void onSuccess(String s) {
                    Log.d("", "SUCCESS with output : " + s);
                    Toast.makeText(getApplicationContext(), "Success!", Toast.LENGTH_SHORT)
                            .show();
                }

                @Override
                public void onProgress(String s) {
                    Log.d("", "progress : " + s);
                }

                @Override
                public void onStart() {
                    progressDialog.setMessage("Processing...");
                    progressDialog.show();
                }

                @Override
                public void onFinish() {
                    progressDialog.dismiss();
                }
            });
        } catch (FFmpegCommandAlreadyRunningException e) {
            // do nothing for now
        }
    }

    I’ve done all this and run my project. The problem now is that it does not merge/concatenate anything; a progress dialog comes up for a fraction of a second, and all I get in my log is this:

    E/FFMPEG====: ffmpef : coorect loaded

    This means that ffmpeg is loaded but nothing gets executed. I don’t get any log output from onFailure(), onSuccess() or onStart().

    Any suggestions would help me achieve my goal. Thanks.

    Note: I have already done this merging with Mp4Parser, but it has a catch: it requires files with the same specifications, so it does not meet my requirement.

    EDITS

    I did some more research and found another way to concatenate, but it is not working either; here is the link: Concatenating two files

    I’ve also found this from another link: FFMPEG Merging/Concatenating
    and found that his piece of code works fine. But mine does not.

    I’ve used that command as well, but it is not working and gives me no log output except the FFmpeg loading message.

    Here is the command:

    complexCommand = new String[]{"-y", "-i", file1.toString(), "-i", file2.toString(), "-strict", "experimental", "-filter_complex",
               "[0:v]scale=1920x1080,setsar=1:1[v0];[1:v] scale=iw*min(1920/iw\\,1080/ih):ih*min(1920/iw\\,1080/ih), pad=1920:1080:(1920-iw*min(1920/iw\\,1080/ih))/2:(1080-ih*min(1920/iw\\,1080/ih))/2,setsar=1:1[v1];[v0][0:a][v1][1:a] concat=n=2:v=1:a=1","-ab", "48000", "-ac", "2", "-ar", "22050", "-s", "1920x1080", "-vcodec", "libx264","-crf","27","-q","4","-preset", "ultrafast", rootPath + "/output.mp4"};
  • Syncing 3 RTSP video streams in ffmpeg

    26 September 2017, by Damon Maria

    I’m using an AXIS Q3708 camera. It internally has 3 sensors to create a 180º view. Each sensor puts out its own RTSP stream. To put together the full 180º view I need to pull an image from each stream and place the images side by side. Obviously it’s important that the 3 streams be synchronized so that the 3 images are taken at the same ’wall clock’ time. For this reason I want to use ffmpeg, because it should be a champ at this.

    I intended to use the hstack filter to combine the 3 images. However, it’s causing me a lot of grief and errors.

    What I’ve tried:

    1. Hstack the RTSP streams:

    ffmpeg -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1" -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=2" -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=3" -filter_complex "[0:v][1:v][2:v]hstack=inputs=3[v]" -map "[v]" out.mp4

    I get lots of RTSP dropped-packet and decoding errors, which is strange given this is an i7-7700K 4.2 GHz with an NVIDIA GTX 1080 Ti and 32 GB of RAM, and the camera is on a local gigabit network:

    [rtsp @ 0xd4eca42a00] max delay reached. need to consume packetA dup=0 drop=136 speed=1.16x    
    [rtsp @ 0xd4eca42a00] RTP: missed 5 packets
    [rtsp @ 0xd4eca42a00] max delay reached. need to consume packetA dup=0 drop=137 speed=1.15x    
    [rtsp @ 0xd4eca42a00] RTP: missed 4 packets
    [h264 @ 0xd4ecbb3520] error while decoding MB 14 15, bytestream -21
    [h264 @ 0xd4ecbb3520] concealing 1185 DC, 1185 AC, 1185 MV errors in P frame

    2. Using ffmpeg -i %s -c:v copy -map 0:0 out.mp4 to save each stream to a file, and then running the above hstack command with the 3 files rather than the 3 RTSP streams. First off, there are no dropped packets saving the files, and the hstack runs at speed=25x, so I don’t know why the operation in 1 had so many errors. But in the resulting video, some parts ’pause’ between frames, as though the same image were used across 2 frames for some of the hstack inputs but not the others. Also, the ’scene’ at a given distance into the video lags behind the input videos, which is what would happen if frames are being duplicated.

    3. If I use the RTSP streams as the input and for the output specify -f null - (the null muxer), then the muxer reports a lot of these errors:

    [null @ 0x76afb1a060] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 1 >= 1
       Last message repeated 1 times
    [null @ 0x76afb1a060] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 2 >= 2
    [null @ 0x76afb1a060] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 3 >= 3
    [null @ 0x76afb1a060] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 4 >= 4
       Last message repeated 1 times
    [null @ 0x76afb1a060] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 5 >= 5
       Last message repeated 1 times

    Which sounds again like frames are being duplicated.

    4. If I add -vsync cfr then the null muxer no longer reports non-monotonically increasing dts, and the dropped-packet / decoding errors are reduced (but still there). Does this show that timing info from the RTSP streams is ’tripping up’ ffmpeg? I presume it’s not a solution though, because it essentially wipes out and replaces the timing information ffmpeg would need to use to sync?

    5. Saving a single RTSP stream (re-encoding, not using the copy codec or the null muxer) logs a lot of warnings like:

    Past duration 0.999992 too large
       Last message repeated 7 times
    Past duration 0.999947 too large
    Past duration 0.999992 too large

    6. I first tried performing this in code (using PyAV), but I struck problems: pushing a frame from each container into the 3 hstack inputs would cause hstack to output multiple frames when it should output only one. Again, this points to hstack duplicating frames.

    7. I have used Wireshark to sniff the RTCP/RTP traffic, and the RTCP Sender Reports have correct NTP timestamps in them, matched to the timestamps in the RTP streams.

    8. Using ffprobe to show the frames of an RTSP stream (example below), I would have expected to see real (NTP-based) timestamps, given they exist in the RTCP packets. I’m not sure what the correct behaviour is for ffprobe here. But it does show that most frame timestamps are not exactly 0.25 s apart (the camera is running at 4 FPS), which might explain -vsync cfr ’fixing’ some issues and the Past duration 0.999992 style warnings:

    pkt_pts=1012502
    pkt_pts_time=0:00:11.250022
    pkt_dts=1012502
    pkt_dts_time=0:00:11.250022
    best_effort_timestamp=1012502
    best_effort_timestamp_time=0:00:11.250022
    pkt_duration=N/A
    pkt_duration_time=N/A
    pkt_pos=N/A

    I posted this as a possible hstack bug on the ffmpeg bug tracker, but that discussion fizzled out.

    So, the question: how do I sync 3 RTSP video streams through hstack in ffmpeg?
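
    A side note on the dropped packets in item 1 (an assumption; it is not among the experiments above): ffmpeg pulls RTSP over UDP by default, so missed-packet and error-concealment messages are often transport loss rather than machine load. Forcing interleaved TCP with the RTSP demuxer’s -rtsp_transport option, placed before each -i, is a cheap test:

    ffmpeg -rtsp_transport tcp -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1" -rtsp_transport tcp -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=2" -rtsp_transport tcp -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=3" -filter_complex "[0:v][1:v][2:v]hstack=inputs=3[v]" -map "[v]" out.mp4

    For reference, the per-frame dump in item 8 is the shape of output produced by an invocation like the following (an assumed form; the exact command isn’t given above):

    ffprobe -show_frames -select_streams v "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1"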

  • Assigning dts values to encoded packets

    24 March, by Alex

    I have a dump of H264-encoded data which I need to put in an mp4 container. I verified the validity of the encoded data by running the mp4box utility against it. The mp4 file created by mp4box contained a proper 17-second video. Interestingly, if I try ffmpeg to achieve the same, the resulting video is 34 seconds long and rather crappy (probably ffmpeg tries to decode the video and then re-encode it, which results in a loss of video quality?). Anyway, for my project I can’t use the command-line approach and need to come up with a programmatic way to embed the data in the mp4 container.

    &#xA;&#xA;

    Below is the code I use (I removed error checking for brevity; during execution all the calls succeed):

    &#xA;&#xA;

    AVFormatContext* pInputFormatContext = avformat_alloc_context();
    avformat_open_input(&pInputFormatContext, "Data.264", NULL, NULL);
    avformat_find_stream_info(pInputFormatContext, NULL);
    AVRational* pTime_base = &pInputFormatContext->streams[0]->time_base;

    int nFrameRate = pInputFormatContext->streams[0]->r_frame_rate.num / pInputFormatContext->streams[0]->r_frame_rate.den;
    int nWidth = pInputFormatContext->streams[0]->codecpar->width;
    int nHeight = pInputFormatContext->streams[0]->codecpar->height;
    // nWidth = 1920, nHeight = 1080, nFrameRate = 25

    // Create output objects
    AVFormatContext* pOutputFormatContext = NULL;
    avformat_alloc_output_context2(&pOutputFormatContext, NULL, NULL, "Destination.mp4");

    AVCodec* pVideoCodec = avcodec_find_encoder(pOutputFormatContext->oformat->video_codec /*AV_CODEC_ID_H264*/);
    AVStream* pOutputStream = avformat_new_stream(pOutputFormatContext, NULL);
    pOutputStream->id = pOutputFormatContext->nb_streams - 1;
    AVCodecContext* pCodecContext = avcodec_alloc_context3(pVideoCodec);

    switch (pVideoCodec->type) {
    case AVMEDIA_TYPE_VIDEO:
        pCodecContext->codec_id = pOutputFormatContext->oformat->video_codec;
        pCodecContext->bit_rate = 400000;
        /* Resolution must be a multiple of two. */
        pCodecContext->width = nWidth;
        pCodecContext->height = nHeight;
        /* timebase: This is the fundamental unit of time (in seconds) in terms
         * of which frame timestamps are represented. For fixed-fps content,
         * timebase should be 1/framerate and timestamp increments should be
         * identical to 1. */
        pOutputStream->time_base.num = 1;
        pOutputStream->time_base.den = nFrameRate;
        pCodecContext->time_base = pOutputStream->time_base;
        pCodecContext->gop_size = 12; /* emit one intra frame every twelve frames at most */
        pCodecContext->pix_fmt = STREAM_PIX_FMT;
        break;
    default:
        break;
    }

    /* copy the stream parameters to the muxer */
    avcodec_parameters_from_context(pOutputStream->codecpar, pCodecContext);

    avio_open(&pOutputFormatContext->pb, "Destination.mp4", AVIO_FLAG_WRITE);

    // Start writing
    AVDictionary* pDict = NULL;
    avformat_write_header(pOutputFormatContext, &pDict);

    // Process packets
    AVPacket packet;
    int64_t nCurrentDts = 0;
    int64_t nDuration = 0;
    int nReadResult = 0;

    while (nReadResult == 0)
    {
        nReadResult = av_read_frame(pInputFormatContext, &packet);
        // At this point, packet.dts == AV_NOPTS_VALUE.
        // The duration field of the packet contains valid data.

        packet.flags |= AV_PKT_FLAG_KEY;
        nDuration = packet.duration;
        packet.dts = nCurrentDts;
        packet.dts = av_rescale_q(nCurrentDts, pOutputFormatContext->streams[0]->codec->time_base, pOutputFormatContext->streams[0]->time_base);
        av_interleaved_write_frame(pOutputFormatContext, &packet);
        nCurrentDts += nDuration;
        nDuration += packet.duration;
        av_free_packet(&packet);
    }

    av_write_trailer(pOutputFormatContext);

    &#xA;&#xA;

    The properties of the Destination.mp4 file I get indicate it is about 1 hour long with a frame rate of 0. I am sure the culprit is in the way I calculate the dts values for each packet and use av_rescale_q(), but I do not have sufficient understanding of the avformat library to figure out the proper way to do it. Any help will be appreciated!
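
    A plausible direction for a fix (an assumption based on the code above, not a verified answer): packet.duration coming out of av_read_frame() is expressed in the input stream’s time_base, so the running dts should be rescaled from that time base rather than from the output codec’s, and pts must be set as well, since the mp4 muxer needs it. A minimal sketch of the loop body:

    /* Sketch: inside the read loop, after av_read_frame() succeeds.
     * Assumes the dump has no B-frames, so pts == dts is valid. */
    AVRational inTb  = pInputFormatContext->streams[0]->time_base;   /* units of packet.duration */
    AVRational outTb = pOutputFormatContext->streams[0]->time_base;

    int64_t nDur = packet.duration;          /* save before rescaling the packet */
    packet.dts = av_rescale_q(nCurrentDts, inTb, outTb);
    packet.pts = packet.dts;                 /* mp4 requires pts too */
    packet.duration = av_rescale_q(nDur, inTb, outTb);
    packet.stream_index = 0;

    av_interleaved_write_frame(pOutputFormatContext, &packet);
    nCurrentDts += nDur;                     /* advance in input time-base units */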

    &#xA;