Advanced search

Media (1)

Keyword: - Tags -/biomaping

Other articles (65)

  • User profiles

    12 April 2011, by

    Every user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialised, visible only if the visitor is logged in on the site.
    The user can access profile editing from their author page; an "Edit your profile" link in the navigation is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administer" section of the site.
    From there, in the navigation menu, you can access a "Language management" section that lets you enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language. Once one exists, the language becomes greyed out in the configuration and (...)

  • Automatic backup of SPIP channels

    1 April 2010, by

    When setting up an open platform, it is important for hosts to have fairly regular backups in order to deal with any potential problem.
    This task relies on two SPIP plugins: Saveauto, which makes regular backups of the database as a mysql dump (usable in phpmyadmin), and mes_fichiers_2, which creates a zip archive of the site's important data (the documents, the elements (...)

On other sites (6803)

  • Syncing 3 RTSP video streams in ffmpeg

    26 September 2017, by Damon Maria

    I’m using an AXIS Q3708 camera. It internally has 3 sensors to create a 180º view. Each sensor puts out its own RTSP stream. To put together the full 180º view I need to pull an image from each stream and place the images side by side. Obviously it’s important that the 3 streams be synchronized so that the 3 images were taken at the same ’wall clock’ time. For this reason I want to use ffmpeg, because it should be a champ at this.

    I intended to use the hstack filter to combine the 3 images. However, it’s causing me a lot of grief and errors.

    What I’ve tried:

    1. Hstack the RTSP streams:

    ffmpeg -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1" -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=2" -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=3" -filter_complex "[0:v][1:v][2:v]hstack=inputs=3[v]" -map "[v]" out.mp4

    I get lots of RTSP dropped-packet and decoding errors, which is strange given this is an i7-7700K @ 4.2 GHz with an NVIDIA GTX 1080 Ti and 32 GB of RAM, and the camera is on a local gigabit network:

    [rtsp @ 0xd4eca42a00] max delay reached. need to consume packetA dup=0 drop=136 speed=1.16x    
    [rtsp @ 0xd4eca42a00] RTP: missed 5 packets
    [rtsp @ 0xd4eca42a00] max delay reached. need to consume packetA dup=0 drop=137 speed=1.15x    
    [rtsp @ 0xd4eca42a00] RTP: missed 4 packets
    [h264 @ 0xd4ecbb3520] error while decoding MB 14 15, bytestream -21
    [h264 @ 0xd4ecbb3520] concealing 1185 DC, 1185 AC, 1185 MV errors in P frame

    2. Using ffmpeg -i %s -c:v copy -map 0:0 out.mp4 to save each stream to a file and then run the above hstack command with the 3 files rather than the 3 RTSP streams (see the sketch below). First off, there are no dropped packets saving the files, and the hstack runs at speed=25x, so I don’t know why the operation in 1 had so many errors. But in the resulting video, some parts ’pause’ between frames, as though the same image was used across 2 frames for some of the hstack inputs but not the others. Also, the ’scene’ at a set distance into the video lags behind the input videos – which would happen if frames are being duplicated.
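
    Concretely, step 2 amounts to something along these lines (a sketch with assumed output file names, reusing the camera URLs from step 1; not the poster’s exact commands):

    ffmpeg -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1" -c:v copy -map 0:0 cam1.mp4
    ffmpeg -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=2" -c:v copy -map 0:0 cam2.mp4
    ffmpeg -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=3" -c:v copy -map 0:0 cam3.mp4
    ffmpeg -i cam1.mp4 -i cam2.mp4 -i cam3.mp4 -filter_complex "[0:v][1:v][2:v]hstack=inputs=3[v]" -map "[v]" out.mp4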

    3. If I use the RTSP streams as the input, and for the output specify -f null - (the null muxer), then the muxer reports a lot of these errors:

    [null @ 0x76afb1a060] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 1 >= 1
       Last message repeated 1 times
    [null @ 0x76afb1a060] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 2 >= 2
    [null @ 0x76afb1a060] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 3 >= 3
    [null @ 0x76afb1a060] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 4 >= 4
       Last message repeated 1 times
    [null @ 0x76afb1a060] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 5 >= 5
       Last message repeated 1 times

    Which sounds again like frames are being duplicated.

    4. If I add -vsync cfr then the null muxer no longer reports non-monotonically-increasing dts, and the dropped packet / decoding errors are reduced (but still there). Does this show that timing info from the RTSP streams is ’tripping up’ ffmpeg? I presume it’s not a solution, though, because it essentially wipes out and replaces the timing information ffmpeg would need to use to sync. (A sketch of this variant follows below.)
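
    For reference, points 3 and 4 correspond roughly to the following (a sketch using a single camera URL from above; the poster’s exact commands were not given):

    ffmpeg -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1" -f null -
    ffmpeg -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1" -vsync cfr -f null -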

    5. Saving a single RTSP stream (re-encoding, not using the copy codec or the null muxer) logs a lot of warnings like:

    Past duration 0.999992 too large
       Last message repeated 7 times
    Past duration 0.999947 too large
    Past duration 0.999992 too large
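
    The re-encode in point 5 would look something like this (a sketch; the encoder and output name are assumptions, not taken from the post):

    ffmpeg -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1" -c:v libx264 single.mp4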

    6. I first tried performing this in code (using PyAV), but struck problems: pushing a frame from each container into the 3 hstack inputs would cause hstack to output multiple frames when it should output only one. Again, this points to hstack duplicating frames.

    7. I have used Wireshark to sniff the RTCP/RTP traffic, and the RTCP Sender Reports have correct NTP timestamps in them, matched to the timestamps in the RTP streams.

    8. Using ffprobe to show the frames of an RTSP stream (example below), I would have expected to see real (NTP-based) timestamps, given they exist in the RTCP packets. I’m not sure what the correct behaviour for ffprobe is. But it does show that most frame timestamps are not exactly 0.25 s apart (the camera is running at 4 FPS), which might explain -vsync cfr ’fixing’ some issues and the Past duration 0.999992 style errors:

    pkt_pts=1012502
    pkt_pts_time=0:00:11.250022
    pkt_dts=1012502
    pkt_dts_time=0:00:11.250022
    best_effort_timestamp=1012502
    best_effort_timestamp_time=0:00:11.250022
    pkt_duration=N/A
    pkt_duration_time=N/A
    pkt_pos=N/A
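
    The per-frame fields above come from ffprobe’s -show_frames output; a command along these lines would produce them (the exact invocation is an assumption, not shown in the post):

    ffprobe -show_frames -select_streams v:0 "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1"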

    I posted this as a possible hstack bug on the ffmpeg bug tracker but that discussion fizzled out.

    So, the question: how do I sync 3 RTSP video streams through hstack in ffmpeg?

  • Android: merging two videos with different (sizes, codecs, frames, aspect ratios) using FFMPEG

    6 September 2017, by Alok Kumar Verma

    I’m making an app which merges two or more video files that I get from another activity. After choosing the files, we pass them to another activity where the merging happens. I’ve followed this link to do the same: AndroidWarZone FFMPEG

    There I found how to merge just two files of different qualities. The command is given below:

    String[] complexCommand = {"ffmpeg","-y","-i","/storage/emulated/0/videokit/sample.mp4",
    "-i","/storage/emulated/0/videokit/in.mp4","-strict","experimental",
    "-filter_complex",
    "[0:v]scale=640x480,setsar=1:1[v0];[1:v]scale=640x480,setsar=1:1[v1];[v0][0:a][v1][1:a] concat=n=2:v=1:a=1",
    "-ab","48000","-ac","2","-ar","22050","-s","640x480","-r","30","-vcodec","mpeg4","-b","2097k","/storage/emulated/0/vk2_out/out.mp4"}

    Since I have a list of selected videos in my array, which I’m passing to the next page, I’ve made some changes in my command, like this:

    private void mergeVideos() {
       String savingPath = Environment.getExternalStorageDirectory().getAbsolutePath() + "/video.mp4";

       ArrayList<File> fileList = mList;

       List<String> filenames = new ArrayList<String>();

       for (int i = 0; i < fileList.size(); i++) {
           filenames.add("-i");
           filenames.add(fileList.get(i).toString());
       }

       Log.e("Log===",filenames.toString());

       String joined = TextUtils.join(", ",filenames);

       Log.e("Joined====",joined);

       String complexCommand[] = {"-y", joined,
               "-filter_complex",
               "[0:v]scale=640x480,setsar=1:1[v0];[1:v]scale=640x480,setsar=1:1[v1];[v0][0:a][v1][1:a] concat=n=2:v=1:a=1",
               "-ab","48000","-ac","2","-ar","22050","-s","640x480","-r","30","-vcodec","mpeg4","-b","2097k", savingPath};
       Log.e("RESULT====",Arrays.toString(complexCommand));

      execFFmpegBinary(complexCommand);  }

    In the log this is the output I’m getting:

    This one is for the received data which I have added to mList:

    E/RECEIVED DATA=====: [/mnt/m_external_sd/DCIM/Camera/VID_31610130_011933_454.mp4, /mnt/m_external_sd/DCIM/Camera/VID_23120824_054526_878.mp4]
    E/RESULT====: [-y, -i, /mnt/m_external_sd/DCIM/Camera/VID_31610130_011933_454.mp4, -i, /mnt/m_external_sd/DCIM/Camera/VID_23120824_054526_878.mp4, -filter_complex, [0:v]scale=640x480,setsar=1:1[v0];[1:v]scale=640x480,setsar=1:1[v1];[v0][0:a][v1][1:a] concat=n=2:v=1:a=1, -ab, 48000, -ac, 2, -ar, 22050, -s, 640x480, -r, 30, -vcodec, mpeg4, -b, 2097k, /storage/emulated/0/video.mp4]

    Here RESULT is the complexCommand that is passed to execFFmpegBinary(), but it is not working.

    This is my execFFmpegBinary():

    private void execFFmpegBinary(final String[] combine) {
       try{
       fFmpeg.execute(combine, new ExecuteBinaryResponseHandler() {
           @Override
           public void onFailure(String s) {
               Log.d("", "FAILED with output : " + s);
           }

           @Override
           public void onSuccess(String s) {
               Log.d("", "SUCCESS with output : " + s);
               Toast.makeText(getApplicationContext(),"Success!",Toast.LENGTH_SHORT)
                       .show();
           }

           @Override
           public void onProgress(String s) {
               Log.d("", "progress : " + s);
           }

           @Override
           public void onStart() {
               progressDialog.setMessage("Processing...");
               progressDialog.show();
           }

           @Override
           public void onFinish() {
               progressDialog.dismiss();
           }
       });
    } catch (FFmpegCommandAlreadyRunningException e) {
       // do nothing for now
    }
    }

    I’ve done this and run my project; now the problem is that it is not merging/concatenating anything. A progressDialog just comes up for a fraction of a second, and all I’m getting in my log is this:

    E/FFMPEG====: ffmpef : coorect loaded

    This means that ffmpeg is loading but nothing else is being executed. I don’t get any log output from onFailure(), onSuccess(), or onStart().

    Any suggestions would help me achieve my goal. Thanks.

    Note: I have done this merging using Mp4Parser, but there is a glitch in it: it requires the files to have the same specifications. So it does not meet my requirement.

    EDITS

    I did some more research and found this for concatenation, but it is not working either; here is the link: Concatenating two files

    I’ve also found this from a link: FFMPEG Merging/Concatenating
    and found that his piece of code works fine. But not mine.

    I’ve used that command as well, but it is not working and not giving me any log results, except the FFMPEG loading message.

    Here is the command:

    complexCommand = new String[]{"-y", "-i", file1.toString(), "-i", file2.toString(), "-strict", "experimental", "-filter_complex",
               "[0:v]scale=1920x1080,setsar=1:1[v0];[1:v] scale=iw*min(1920/iw\\,1080/ih):ih*min(1920/iw\\,1080/ih), pad=1920:1080:(1920-iw*min(1920/iw\\,1080/ih))/2:(1080-ih*min(1920/iw\\,1080/ih))/2,setsar=1:1[v1];[v0][0:a][v1][1:a] concat=n=2:v=1:a=1","-ab", "48000", "-ac", "2", "-ar", "22050", "-s", "1920x1080", "-vcodec", "libx264","-crf","27","-q","4","-preset", "ultrafast", rootPath + "/output.mp4"};

  • How to capture UDP packets from ffmpeg with Wireshark?

    15 September 2017, by Davis8988

    I have 2 laptops connected via a LAN cable and static IPs (v4).
    I use ffmpeg to capture the desktop of laptop1 and stream it to laptop2:

    ffmpeg -y -rtbufsize 100M -f gdigrab -framerate 30 -probesize 10M -draw_mouse 1  
    -i desktop -c:v libx264 -r 30 -preset Medium -tune zerolatency -crf 25 -f mpegts  
    udp://150.50.1.2:1080  

    And I use ffplay to receive the stream on laptop2 and play it, and it works - I can see laptop1’s desktop:

    ffplay -fflags nobuffer -sync ext udp://127.0.0.1:1080  

    Now I also want to capture the UDP packets sent by ffmpeg to laptop2 with Wireshark (I start Wireshark on laptop2).
    But when I start Wireshark and press "capture", I don’t see any packets being sent from the IP of laptop1 (as source). I see a few lines being added in Wireshark every minute or so, but from different IPs.
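
    For what it’s worth, the capture described above would normally be done by selecting the LAN interface on laptop2 and filtering on the stream’s port, for example with tshark (the interface name here is an assumption):

    tshark -i eth0 -f "udp port 1080"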

    Why can’t I see the stream sent by ffmpeg from laptop1 to laptop2?