
Media (91)

Other articles (71)

  • Using and configuring the script

    19 January 2011

    Information specific to the Debian distribution
    If you use this distribution, you will need to enable the "debian-multimedia" repositories, as explained here:
    Since version 0.3.1 of the script, the repository can be enabled automatically after a prompt.
    Retrieving the script
    The installation script can be retrieved in two different ways.
    Via svn, using the following command to fetch the up-to-date source code:
    svn co (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own via the form at the bottom of the page.

On other sites (6837)

  • ffmpeg h.264 invalid cutting

    1 May 2012, by E.Ar

    I have an S3 bucket with several hundred video files.
    Those files were created by cutting parts out of larger video files with ffmpeg.
    I wrote a script for this, which downloads the original video file from another bucket, runs ffmpeg to cut it, and uploads the new file to its bucket.
    For downloading and uploading from/to S3 I used this PHP library.
    The ffmpeg syntax I used:

    ffmpeg -y -vsync 2 -async 1 -ss [time-in] -t [duration] -i [large-input-video.mp4] -vcodec copy -acodec copy [short-output-video.mp4]

    This should just cut the original file between the specified times, without any changes to the a/v codecs.
    All the original video files are encoded in h.264, and this is also the required encoding for the new files (which will be streamed through a CDN to the clients' Flash players).

    My problem is that only a small fraction of the new files come out encoded in h.264; most of them don't (h.264 is a must, otherwise the files won't play on the clients' side).
    I can't trace the problem to the original videos: when I run the same ffmpeg command manually, with the same parameters and on the same files, the output files come out just fine. It seems arbitrary.

    I use ffprobe to get information about the files' codecs.
    For example, ffprobe output for one of the large (original) video files:

    ...
    Stream #0.0(und): Video: h264, yuv420p, 640x352, 499 kb/s, 25 fps, 25 tbr, 90k tbn, 50 tbc
    ...

    ffprobe output for the corresponding new cut file:

    ...
    Stream #0.0(und): Video: mpeg4, yuv420p, 640x352 [PAR 1:1 DAR 20:11], 227 kb/s, 25 fps, 25 tbr, 25 tbn, 25 tbc
    ...

    As can be seen, the difference is 'mpeg4' vs. 'h264'.
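    A batch check like the one described can be sketched in Python by parsing ffprobe's stream summary. The regex and the loop at the bottom are my own assumptions, not part of the asker's script:

```python
import re
import subprocess

def codec_from_ffprobe_output(text):
    # ffprobe prints stream lines such as:
    #   "Stream #0.0(und): Video: h264, yuv420p, 640x352, ..."
    # Pull out the codec name that follows "Video:".
    m = re.search(r"Video\s*:\s*([A-Za-z0-9_]+)", text)
    return m.group(1) if m else None

def video_codec(path):
    # ffprobe writes the stream summary to stderr.
    result = subprocess.run(["ffprobe", path], capture_output=True, text=True)
    return codec_from_ffprobe_output(result.stderr)

# Hypothetical driver: flag every cut file that did not come out as h264.
# for path in cut_files:
#     if video_codec(path) != "h264":
#         print("bad codec:", path)
```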

    Any insights into what could cause the new files to come out with the wrong encoding would be greatly appreciated.

    Thanks!

    Edit: Problem resolved
    After analyzing all the files, I noticed that about two thirds of them came out with the wrong codec.
    Since I used three machines for the cutting process (three separate EC2 servers), it occurred to me that two of them had a bad ffmpeg installation (as @LordNeckbeard suggested in his answer).
    I ran the process again, only on the invalid files, on the third machine alone, which produced the desired result.

  • iOS Recorded Video Playback on Android

    1 March 2015, by Nirav

    I am trying to record video on an iPhone using UIImagePickerController, and I am able to store it in MP4 format. The same video is uploaded to the Amazon S3 cloud.

    When I try to play that video on Android devices, it fails with a "cannot play" error.

    I searched forums/Google and found that ffmpeg should be used to compress the video before uploading. I want to do the compression on the phone itself rather than on the server. What is the best way to achieve this?

    Regards,

    Nirav

  • Finding Optimal Code Coverage

    7 March 2012, by Multimedia Mike — Programming

    A few months ago, I published a procedure for analyzing code coverage of the test suites exercised in FFmpeg and Libav. I used it to add some more tests and I have it on good authority that it has helped other developers fill in some gaps as well (beginning with students helping out with the projects as part of the Google Code-In program). Now I’m wondering about ways to do better.

    Current Process
    When adding a test that depends on a sample (like a demuxer or decoder test), it’s ideal to add a sample that’s A) small, and B) exercises as much of the codebase as possible. When I was studying code coverage statistics for the WC4-Xan video decoder, I noticed that the sample didn’t exercise one of the 2 possible frame types. So I scouted samples until I found one that covered both types, trimmed the sample down, and updated the coverage suite.

    I started wondering about a method for finding the optimal test sample for a given piece of code, one that exercises every code path in a module. Okay, so that's foolhardy in the vast majority of cases (although I was able to add one test spec that pushed a module's code coverage from 0% all the way to 100% — but the module in question only had 2 exercisable lines). Still, given a large enough corpus of samples, how can I find the smallest set of samples that exercise the complete codebase?

    This almost sounds like an NP-complete problem. But why should that stop me from trying to find a solution?

    Science Project
    Here's the pitch:

    • Instrument FFmpeg with code coverage support
    • Download lots of media to exercise a particular module
    • Run FFmpeg against each sample and log code coverage statistics
    • Distill the resulting data in some meaningful way in order to obtain more optimal code coverage

    That first step sounds harsh: downloading lots and lots of media. Fortunately, there is at least one multimedia format in the projects that tends to be extremely small: ANSI. These are files designed to display elaborate scrolling graphics using text mode. Further, the FATE sample currently deployed for this test (TRE_IOM5.ANS) only exercises a little less than 50% of the code in libavcodec/ansi.c. I believe this makes the ANSI video decoder a good candidate for this experiment.

    Procedure
    First, find a site that hosts a lot of ANSI files. Hi, sixteencolors.net. This site has lots (on the order of 4000) of artpacks, which are ZIP archives containing multiple ANSI files (and sometimes some other files). I scraped a list of all the artpack names.

    In an effort to be responsible, I randomized the list of artpacks and downloaded periodically and with limited bandwidth ('wget --limit-rate=20k').

    Run ‘gcov’ on ansi.c in order to gather the full set of line numbers to be covered.

    For each artpack, unpack the contents, run the instrumented FFmpeg on each file inside, run ‘gcov’ on ansi.c, and log statistics including the file’s size, the file’s location (artpack.zip:filename), and a comma-separated list of line numbers touched.
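    The gcov-parsing half of that loop can be sketched as below; the .gcov text format ("count:lineno:source", with "#####" marking never-executed lines and "-" marking non-executable ones) is standard gcov output, while the commented driver, file names, and log format are placeholders of my own:

```python
import subprocess

def covered_lines(gcov_text):
    # A .gcov file has lines of the form "count:lineno:source", where
    # count is a hit count, "#####" for never-executed lines, or "-"
    # for non-executable lines. Collect the executed line numbers.
    covered = set()
    for line in gcov_text.splitlines():
        parts = line.split(":", 2)
        if len(parts) < 3:
            continue
        count, lineno = parts[0].strip(), parts[1].strip()
        if count not in ("-", "#####") and lineno.isdigit():
            covered.add(int(lineno))
    return covered

# Hypothetical per-sample driver:
# subprocess.run(["./ffmpeg", "-f", "tty", "-i", sample, "-f", "null", "-"])
# subprocess.run(["gcov", "ansi.c"])
# lines = covered_lines(open("ansi.c.gcov").read())
# log.write(f"{size} {artpack}:{name} {','.join(map(str, sorted(lines)))}\n")
```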

    Definition of ‘Optimal’
    The foregoing procedure worked and yielded useful, raw data. Now I have to figure out how to analyze it.

    I think it's most desirable to have the smallest files (in terms of bytes) that exercise the most lines of code. To that end, I sorted the results by file size, ascending. A Python script initializes a set of all exercisable line numbers in ansi.c, then iterates through each file's stats line, adding the file to the list of candidate samples if its set of exercised lines can remove any line numbers from the overall set of lines. Ideally, that set of lines should devolve to an empty set.

    I think a second possible approach is to find the single sample that exercises the most code and then proceed with the previously described method.
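    Both selection strategies amount to a greedy pass over the sorted samples; a minimal sketch, with a data layout of my own choosing rather than the original script's:

```python
def greedy_sample_set(samples, all_lines):
    # samples: list of (name, size_bytes, exercised_line_numbers).
    # Walk the samples smallest-first, keeping a sample only if it
    # exercises at least one line not covered by any smaller sample.
    remaining = set(all_lines)
    chosen = []
    for name, size, lines in sorted(samples, key=lambda s: s[1]):
        newly_covered = remaining & set(lines)
        if newly_covered:
            chosen.append(name)
            remaining -= newly_covered
    return chosen, remaining  # remaining is ideally empty

def best_single_sample(samples):
    # Approach 2: the single sample that exercises the most lines.
    return max(samples, key=lambda s: len(s[2]))[0]
```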

    Initial Results
    So far, I have analyzed 13324 samples from 357 different artpacks provided by sixteencolors.net.

    Using the first method, I can find a set of samples that covers nearly 80% of ansi.c:

    0 bytes: bad-0494.zip:5
    1 bytes: grip1293.zip:-ANSI---.---
    1 bytes: pur-0794.zip:.
    2 bytes: awe9706.zip:-ANSI───.───
    61 bytes: echo0197.zip:-(ART)-
    62 bytes: hx03.zip:HX005.DAT
    76 bytes: imp-0494.zip:IMPVIEW.CFG
    82 bytes: ice0010b.zip:_cont'd_.___
    101 bytes: bdp-0696.zip:BDP2.WAD
    112 bytes: plain12.zip:--------.---
    181 bytes: ins1295v.zip:-°VGA°-.  н
    219 bytes: purg-22.zip:NEM-SHIT.ASC
    289 bytes: srg1196.zip:HOWTOREQ.JNK
    315 bytes: karma-04.zip:FASHION.COM
    318 bytes: buzina9.zip:ox-rmzzy.ans
    411 bytes: solo1195.zip:FU-BLAH1.RIP
    621 bytes: ciapak14.zip:NA-APOC1.ASC
    951 bytes: lght9404.zip:AM-TDHO1.LIT
    1214 bytes: atb-1297.zip:TX-ROKL.ASC
    2332 bytes: imp-0494.zip:STATUS.ANS
    3218 bytes: acepak03.zip:TR-STAT5.ANS
    6068 bytes: lgc-0193.zip:LGC-0193.MEM
    16778 bytes: purg-20.zip:EZ-HIR~1.JPG
    20582 bytes: utd0495.zip:LT-CROW3.ANS
    26237 bytes: quad0597.zip:MR-QPWP.GIF
    29208 bytes: mx-pack17.zip:mx-mobile-source-logo.jpg
    ----
    109440 bytes total
    A few notes about that list: some of those filenames consist primarily of control characters. 133t, and all that. The first file is 0 bytes. I wondered if I should discard 0-length files but decided to keep them in, especially if they exercise lines that wouldn't normally be activated. Also, there are a few JPEG and GIF files in the set. I should point out that I forced the tty demuxer using -f tty, and there isn't much in the way of signatures for this format. So, again, whatever exercises more lines is better.

    Using this same corpus, I tried approach 2: which single sample exercises the most lines of the decoder? Answer: blde9502.zip:REQUEST.EXE. Huh. I checked it out, and 'file' identifies it as an MS-DOS executable. So that approach wasn't fruitful, at least not for this corpus, since I'm forcing everything through this narrow code path.

    Think About The Future
    Where can I take this next? The cloud! I have people inside the search engine industry who have furnished me with extensive lists of specific types of multimedia files from around the internet. I also see that Amazon Web Services Elastic Compute Cloud (AWS EC2) instances don't charge for incoming bandwidth.

    I think you can see where I’m going with this.
