Advanced search

Media (1)

Keyword: - Tags -/lev manovitch

Other articles (40)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If necessary, contact the administrator of your MediaSPIP to find out.

  • Accepted formats

    28 January 2010, by

    The following commands provide information about the formats and codecs handled by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    To begin with, we (...)

  • Helping to translate it

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do so, we use the SPIP translation interface, where all of MediaSPIP's language modules are available. You simply need to sign up to the translators' mailing list to request more information.
    At the moment MediaSPIP is only available in French and (...)

On other sites (6017)

  • Bug #3539: Preview must be clicked twice to see the preview block

    30 October 2015, by chan kalan

    On my side I can reproduce the problem.
    Forum plugin 1.9.29
    A bug in the Ajax loading?

  • Merge an audio and an image and create a video using ffmpeg on Android

    5 June 2013, by Kamal Sharma

    I want to merge IMAGE + AUDIO and convert them into a video using the FFmpeg library. I compiled the library successfully and got libffmpeg.so, but I am having trouble executing the ffmpeg command from Java code. This is the command I am using: "ffmpeg -i image8.jpg -i file.m4a -acodec copy test.mp4". If I execute this ffmpeg command from the command line, my video.mp4 file is created successfully, but if I execute the same command from my Activity, it does not create any file.

    I used the code:

    import java.io.File;

    import android.app.Activity;
    import android.os.Bundle;
    import android.os.Environment;
    import android.util.Log;

    public class Mpeg extends Activity {

        static {
            // Load the compiled ffmpeg shared library (libffmpeg.so)
            System.loadLibrary("ffmpeg");
        }

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_mpeg);

            File mf = Environment.getExternalStorageDirectory();

            String livestream = mf.getAbsoluteFile() + "smile.png";
            String folderpth = mf.getAbsoluteFile() + "RABBA.MP3";

            //String output="/home/saicomputer/game.mp4";
            String output = new File(Environment.getExternalStorageDirectory(), "video.mp4").getAbsolutePath();
            Log.i("Test", "Let's set output to " + output);

            String cmd = "ffmpeg -i " + livestream + " -i " + folderpth + " -acodec copy " + output;
            Log.e("chck plzzzzz", "after " + cmd);

            //String jaiho="ffmpeg -i image8.jpg -i file.m4a -acodec copy test.mp4";

            try {
                // Run the ffmpeg command as an external process
                Process p = Runtime.getRuntime().exec(cmd);
            } catch (Exception e) {
                System.out.println("exception" + e);
            }
        }
    }

    and the logcat output is:

    06-05 17:58:10.686: D/dalvikvm(1189): Trying to load lib  /data/data/com.example.myfmpeg/lib/libffmpeg.so 0x412a5cf0
    06-05 17:58:10.756: I/dalvikvm(1189): threadid=3: reacting to signal 3
    06-05 17:58:10.955: D/dalvikvm(1189): Added shared lib /data/data/com.example.myfmpeg/lib/libffmpeg.so 0x412a5cf0
    06-05 17:58:10.955: D/dalvikvm(1189): No JNI_OnLoad found in /data/data/com.example.myfmpeg/lib/libffmpeg.so 0x412a5cf0, skipping init
    06-05 17:58:11.024: I/dalvikvm(1189): Wrote stack traces to '/data/anr/traces.txt'
    06-05 17:58:11.215: I/dalvikvm(1189): threadid=3: reacting to signal 3
    06-05 17:58:11.326: I/dalvikvm(1189): Wrote stack traces to '/data/anr/traces.txt'
    06-05 17:58:11.466: E/image(1189): imageeeeeeeee /mnt/sdcard/smile.png
    06-05 17:58:11.466: E/Test(1189): songggggggggg /mnt/sdcard/RABBA.MP3
    06-05 17:58:11.476: E/Test(1189): outputttttt /mnt/sdcard/video.mp4
    06-05 17:58:11.476: E/chck plzzzzz(1189): after ffmpeg -i /mnt/sdcard/smile.png -i /mnt/sdcard/RABBA.MP3 -acodec copy /mnt/sdcard/video.mp4
    06-05 17:58:11.896: E/AndroidRuntime(1189): FATAL EXCEPTION: main
    06-05 17:58:11.896: E/AndroidRuntime(1189): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.myfmpeg/com.example.myfmpeg.Mpeg}: java.lang.RuntimeException: java.io.IOException: Error running exec(). Command: [ffmpeg, -i, /mnt/sdcard/smile.png, -i, /mnt/sdcard/RABBA.MP3, -acodec, copy, /mnt/sdcard/video.mp4]    Working Directory: null Environment: null
    06-05 17:58:11.896: E/AndroidRuntime(1189):     at     android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1956)
    06-05 17:58:11.896: E/AndroidRuntime(1189):     at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1981)
    06-05 17:58:11.896: E/AndroidRuntime(1189):     at android.app.ActivityThread.access$600(ActivityThread.java:123)
    06-05 17:58:11.896: E/AndroidRuntime(1189):     at    android.app.ActivityThread$H.handleMessage(ActivityThread.java:1147)
    06-05 17:58:11.896: E/AndroidRuntime(1189):     at android.os.Handler.dispatchMessage(Handler.java:99)
    06-05 17:58:11.896: E/AndroidRuntime(1189):     at android.os.Looper.loop(Looper.java:137)
    06-05 17:58:11.896: E/AndroidRuntime(1189):     at android.app.ActivityThread.main(ActivityThread.java:4424)
    06-05 17:58:11.896: E/AndroidRuntime(1189):     at java.lang.reflect.Method.invokeNative(Native Method)
    06-05 17:58:11.896: E/AndroidRuntime(1189):     at java.lang.reflect.Method.invoke(Method.java:511)
    06-05 17:58:11.896: E/AndroidRuntime(1189):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:784)
    06-05 17:58:11.896: E/AndroidRuntime(1189):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:551)
    06-05 17:58:11.896: E/AndroidRuntime(1189):     at dalvik.system.NativeStart.main(Native Method)
    06-05 17:58:11.896: E/AndroidRuntime(1189): Caused by: java.lang.RuntimeException: java.io.IOException: Error running exec(). Command: [ffmpeg, -i, /mnt/sdcard/smile.png, -i, /mnt/sdcard/RABBA.MP3, -acodec, copy, /mnt/sdcard/video.mp4] Working Directory: null Environment: null

    I don't know what the error is when it is run from Java. Any help?

  • Subtitling Sierra RBT Files

    2 June 2016, by Multimedia Mike — Game Hacking

    This is part 2 of the adventure started in my Subtitling Sierra VMD Files post. After I completed the VMD subtitling, The Translator discovered a wealth of animation files in a format called RBT (this apparently stands for “Robot” but I think “Ribbit” format could be more fun). What are we going to do? We had come so far by solving the VMD subtitling problem for Phantasmagoria. It would be a shame if the effort ground to a halt due to this.

    Fortunately, the folks behind the ScummVM project already figured out enough of the format to be able to decode the RBT files in Phantasmagoria.

    In the end, I was successful in creating a completely standalone tool that can take a Robot file and a subtitle file and create a new Robot file with subtitles. The source code is here (subtitle-rbt.c). Here’s what the final result looks like:


    Spanish refrigerator
    “What’s in the refrigerator?” I should note at this juncture that I am not sure if this particular Robot file even has sound or dialogue since I was conducting these experiments on a computer with non-working audio.

    The RBT Format
    I have created a new MultimediaWiki page describing the Robot Animation format based on the ScummVM source code. I have not worked with a format quite like this before. These are paletted animations which consist of a sequence of independent frames designed to be overlaid on top of a static background. Because of these characteristics, each frame encodes its own unique dimensions and its origin coordinate within the playing field. While the Phantasmagoria VMD files are usually 288×144 (usually double-sized for the benefit of a 640×400 Super VGA canvas), these frames are meant to be plotted on a game field that was roughly 576×288 (288×144 double-sized).

    For example, 2 minimalist animation frames from a desk investigation Robot file:


    Robot Animation Frame #1
    100×147

    Robot Animation Frame #2
    101×149
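
    To keep that geometry straight, a decoder only needs to carry something like the following record per frame. This is an illustrative in-memory structure inferred from the description above, not the on-disk Robot layout:

    /* Illustrative per-frame record: each Robot frame carries its own
       dimensions and an origin within the roughly 576x288 game field it
       is overlaid onto, plus the per-frame scale factor discussed below. */
    struct robot_frame {
        int width;              /* e.g. 100 or 101 in the frames above */
        int height;             /* e.g. 147 or 149 */
        int origin_x;           /* placement on the 576x288 playing field */
        int origin_y;
        int scale;              /* usually 100 (percent); see "Scaling" below */
        unsigned char *pixels;  /* paletted pixels, width * height bytes */
    };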

    As for compression, my first impression was that the algorithm was the same as VMD. This is wrong. It evidently uses an unmodified version of a standard algorithm called Lempel-Ziv-Stac (LZS). It shows up in several RFCs and was apparently used in MS-DOS’s transparent disk compression scheme.

    Approach
    Thankfully, many of the lessons I learned from the previous project are applicable to this project, including: subtitle library interfacing, subtitling in the paletted colorspace, and replacing encoded frames from the original file instead of trying to create a new file.

    Here is the pitch for this project:

    • Create a C program that can traverse through an input file, piece by piece, and generate an output file. The result of this should be a bitwise identical file.
    • Adapt the LZS decompression algorithm from ScummVM into the new tool. Make the tool dump raw Portable Anymap (PNM) files of varying dimensions and ensure that they look correct.
    • Compress using LZS.
    • Stretch the frames and draw subtitles.
    • More compression. Find the minimum window for each frame.

    Compression
    Normally, my first goal is to decompress the video and store the data in a raw form. However, this turned out to be mathematically intractable. While the format does support both compressed and uncompressed frames (even though ScummVM indicates that the uncompressed path is yet unexercised), the goal of this project requires making the frames so large that they overflow certain parameters of the file.

    A Robot file has a sequence of frames and 2 tables describing the size of each frame. One table describes the entire frame size (audio + video) while the second table describes just the video frame size. Since these tables only use 16 bits to specify a size, the maximum frame size is 65536 bytes. Leaving space for the audio portion of the frame, this only leaves a per-frame byte budget of about 63000 bytes for the video. Expanding the frame to 576×288 (165,888 pixels) would overflow this limit.
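
    As a quick sanity check of that budget, here is a tiny C program working through the same arithmetic; the 63,000-byte figure is the rough estimate quoted above, not an exact constant from the format:

    #include <stdio.h>

    int main(void)
    {
        /* 16-bit size fields cap a frame, and after reserving room for
           audio roughly 63,000 bytes remain for video (figures above). */
        const long video_budget = 63000;
        const long width = 576, height = 288;

        long pixels = width * height;  /* one byte per paletted pixel */
        printf("%ld pixels vs. %ld-byte budget: %s\n",
               pixels, video_budget,
               pixels > video_budget ? "overflows if stored raw" : "fits");
        return 0;
    }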

    Anyway, the upshot is that I needed to compress the data up front.

    Fortunately, the LZS compressor is pretty straightforward, at least if you have experience writing VLC-oriented codecs. While the algorithm revolves around back references, my approach was to essentially write an RLE encoder. My compressor would search for runs of data (plentiful when I started to stretch the frame for subtitling purposes). When a run length of n=3 or more of the same pixel is found, encode the pixel by itself, and then store a back reference of offset -1 and length (n-1). It took a little while to iron out a few problems, but I eventually got it to work perfectly.
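
    Here is a rough sketch of that strategy in C. It is not the actual subtitle-rbt.c code; emit_literal() and emit_backref() are hypothetical stand-ins for the real bitstream writers:

    /* Encode a buffer of paletted pixels as LZS output using only
       literals and offset -1 back references, i.e. a glorified RLE pass. */
    void rle_as_lzs(const unsigned char *pixels, int count,
                    void (*emit_literal)(unsigned char),
                    void (*emit_backref)(int offset, int length))
    {
        int i = 0;
        while (i < count) {
            int run = 1;
            while (i + run < count && pixels[i + run] == pixels[i])
                run++;
            if (run >= 3) {
                emit_literal(pixels[i]);    /* the pixel by itself */
                emit_backref(-1, run - 1);  /* copy it (run - 1) more times */
            } else {
                for (int j = 0; j < run; j++)
                    emit_literal(pixels[i + j]);
            }
            i += run;
        }
    }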

    I have to say, however, that the format is a little bit weird in how it codes very large numbers. The length encoding is somewhat Golomb-like, i.e., smaller values are encoded with fewer bits. However, when it gets to large numbers, it starts encoding counts of 15 as blocks of 1111. For example, 24 is bigger than 7. Thus, emit 1111 into the bitstream and subtract 8 from 24 -> 16. Still too big for a terminating nibble, so stuff another 1111 into the bitstream and subtract 15. Now we’re at 1, so stuff 0001. So 24 is 1111 1111 0001. 12 bits is not too horrible. But the number of bytes needed grows linearly with the value, at roughly (value / 30) bytes, so a value of 300 takes around 10 bytes (80 bits) to encode.
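
    To make that concrete, here is a minimal C sketch of the long-length path described above; emit_nibble() is a hypothetical helper that just prints bits, and the shorter codes for lengths 2 through 7 are omitted:

    #include <stdio.h>

    /* Print one 4-bit nibble; a real encoder would pack bits into a buffer. */
    static void emit_nibble(unsigned nibble)
    {
        printf("%u%u%u%u ", (nibble >> 3) & 1, (nibble >> 2) & 1,
                            (nibble >> 1) & 1, nibble & 1);
    }

    /* Encode a length of 8 or more: emit 1111 and subtract 8, then keep
       emitting 1111 (worth 15 each) until the remainder fits in a final,
       terminating nibble of 0 through 14. */
    static void encode_long_length(unsigned len)
    {
        emit_nibble(0xF);
        len -= 8;
        while (len >= 15) {
            emit_nibble(0xF);
            len -= 15;
        }
        emit_nibble(len);
    }

    int main(void)
    {
        encode_long_length(24);  /* prints 1111 1111 0001, as worked out above */
        putchar('\n');
        return 0;
    }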

    Palette Slices
    As in the VMD subtitling project, I took the subtitle color offered in the subtitle spec file as a suggestion and used Euclidean distance to match to the closest available color in the palette. One problem, however, is that the palette is a lot smaller in these animations. According to my notes, for the set of animations I scanned, only about 80 colors were specified, starting at palette index 55. I hypothesize that different slices of the palette are reserved for different uses. E.g., animation, background, and user interface. Thus, there is a smaller number of colors to draw upon for subtitling purposes.
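
    For reference, a minimal sketch of that nearest-color search, assuming a 256-entry RGB palette of which only indices 55 through roughly 134 are populated; the range and the types are illustrative, not taken from the actual tool:

    #include <limits.h>

    struct rgb { unsigned char r, g, b; };

    /* Return the index of the palette entry closest to the requested
       subtitle color, by squared Euclidean distance in RGB space. Only
       the slice of the palette used by the animation is searched. */
    int closest_palette_index(const struct rgb *palette, struct rgb target)
    {
        int best_index = 55;
        long best_dist = LONG_MAX;

        for (int i = 55; i < 135; i++) {
            long dr = (long)palette[i].r - target.r;
            long dg = (long)palette[i].g - target.g;
            long db = (long)palette[i].b - target.b;
            long dist = dr * dr + dg * dg + db * db;
            if (dist < best_dist) {
                best_dist = dist;
                best_index = i;
            }
        }
        return best_index;
    }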

    Scaling
    One bit of residual weirdness in this format is the presence of a per-frame scale factor. While most frames set this to 100 (100% scale), I have observed 70%, 80%, and 90%. ScummVM is a bit unsure about how to handle these, so I am as well. However, I eventually realized I didn’t really need to care, at least not when decoding and re-encoding the frame. Just preserve the scale factor. I intend to modify the tool further to take scale factor into account when creating the subtitle.

    The Final Resolution
    Right around the time that I was composing this post, The Translator emailed me and notified me that he had found a better way to subtitle the Robot files by modifying the scripts, rendering my entire approach moot. The result is much cleaner:


    Proper RBT Subtitles
    Turns out that the engine supported subtitles all along

    It’s a good thing that I enjoyed the challenge or I might be annoyed at this point.

    See Also