Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (45)

  • Accepted formats

    28 January 2010, by

    The following commands provide information on the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    To begin with, we (...)

  • Adding notes and captions to images

    7 February 2011, by

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (6132)

  • From jpg to animated gif

    12 May 2016, by lleoun

    I’m desperate; I need to convert some jpg images into an animated gif.

    I’ve tried ffmpeg, but the result has terrible quality.

    I’ve also tried imagemagick, and the result looks great, but it weighs 511 KB!!

    Can anyone please tell me what to use, or how to use the above applications, to get a final animated gif with normal quality and a normal file size for a gif?

    As I said, I’m desperate; I need to finish this ASAP :(

    Thanks a lot
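
    One commonly suggested approach, for what it’s worth, is ffmpeg’s two-pass palette method (palettegen, then paletteuse); the frame-name pattern and frame rate below are placeholders, not taken from the question:

    ffmpeg -framerate 10 -i frame%03d.jpg -vf palettegen palette.png
    ffmpeg -framerate 10 -i frame%03d.jpg -i palette.png -lavfi paletteuse out.gif

    Generating a palette tailored to the actual frames usually improves GIF quality considerably, and a modest frame rate helps keep the file size in check.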

  • Seam carving

    9 June 2010, by Mikko Koppanen — Imagick, PHP stuff

    Today I was reading through the ImageMagick ChangeLog and noticed an interesting entry: “Add support for liquid rescaling”. I rushed to check the MagickWand API docs and there it was: MagickLiquidRescaleImage! After about ten minutes of hacking the Imagick support was done. Needless to say, I was excited :)

    For those who don’t know what seam carving is, check the demo here. More detailed information about the algorithm can be found here: “Seam Carving for Content-Aware Image Resizing” by Shai Avidan and Ariel Shamir.

    To use this functionality you need to install at least ImageMagick 6.3.8-2 and liblqr. Remember to pass --with-lqr on the ImageMagick configure line. You can get liblqr here: http://liblqr.wikidot.com/. The Imagick side of the functionality should appear in CVS today if everything goes as planned.

    Here is a really simple example just to illustrate the results of the operation. The parameters might be far from optimal (didn’t do much testing yet). The original dimensions of the image are 500×375 and the resulting size is 500×200.

    Update: the functionality is pending until license issues are solved.

    <?php

    /* Create a new Imagick object from the source image */
    $im = new Imagick('test.jpg');

    /* Liquid rescale (seam carve) down to 500x200 */
    $im->liquidRescaleImage(500, 200, 3, 25);

    /* Send the result to the browser */
    header('Content-Type: image/jpeg');
    echo $im;

    ?>

    The original image (by flickr/jennconspiracy) and the result:

    Update: on kenrick’s request, here is an image which is scaled down to 300×300.

  • Adjusting The Timetable and SQL Shame

    16 August 2012, by Multimedia Mike — General, Python, sql

    My Game Music Appreciation website has a big problem that many visitors quickly notice and comment upon. The problem looks like this:



    The problem is that all of these songs are listed as 2m30s in length. During the initial import process, unless a chiptune file already had curated length metadata attached, my metadata utility emitted a default play length of 150 seconds. This is not good if you want to listen to all the songs in a soundtrack without interacting with the player page, but the soundtrack has various short songs (think “game over” or other quick jingles) that are over in a few seconds. Such songs still pad out the full 150 seconds with silence.

    So I needed to correct this. Possible solutions:

    1. Manually: At first, I figured I could ask the database which songs needed fixing and listen to them to determine the proper lengths. Then I realized that there were well over 1400 games affected by this problem. This just screams “automated solution”.
    2. Automatically: Ask the database which songs need fixing and then somehow ask the computer to listen to the songs and decide their proper lengths. This sounds like a winner, provided that I can figure out how to programmatically determine if a song has “finished”.

    SQL Shame
    This play adjustment task has been on my plate for a long time. A key factor that has blocked me is that I couldn’t figure out a single SQL query to feed to the SQLite database underlying the site which would give me all the songs I needed. To be clear, it was very simple and obvious to me how to write a program that would query the database in phases to get all the information. However, I felt that it would be impure to proceed with the task unless I could figure out one giant query to get all the information.

    This always seems to come up whenever I start interacting with a database in any serious way. I call it SQL shame. This task got some traction when I got over this nagging doubt and told myself that there’s nothing wrong with the multi-step query program if it solves the problem at hand.

    Suddenly, I had a flash of inspiration about why the so-called NoSQL movement exists. Maybe there are a lot more people who don’t like trying to derive such long queries and are happy to allow other languages to pick up the slack.

    Estimating Lengths
    Anyway, my solution involved writing a Python script to iterate through all the games whose metadata was output by a certain engine (the one that makes the default play length 150 seconds). For each of those games, the script queries the songs table and determines if each song is exactly 150 seconds. If it is, the script goes to work trying to estimate the true length.

    The foregoing paragraph describes what I figured was possible with only a single (possibly large) SQL query.

    For each song represented in the chiptune file, I ran it through a custom length estimator program. My brilliant (err, naïve) solution to the length estimation problem was to synthesize the audio one second at a time, up to a maximum of 120 seconds (tightening up the default length just a bit), and count how many of those seconds contained nothing but 0 samples. If the count reached 5 consecutive seconds of silence, the estimator rewound the running length by 5 seconds, declared that to be the proper length, and updated the database.
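
    As a rough sketch of that estimation loop (the synthesize_second() helper here is hypothetical and stands in for whatever renders one second of a chiptune song as PCM samples):

    MAX_SECONDS = 120      # cap tightened down from the 150-second default
    SILENCE_RUN = 5        # consecutive silent seconds that end a song

    def estimate_length(song):
        """Return the estimated play length of one song, in seconds."""
        silent_seconds = 0
        for second in range(1, MAX_SECONDS + 1):
            samples = synthesize_second(song, second)  # hypothetical renderer
            if all(sample == 0 for sample in samples):
                silent_seconds += 1
                if silent_seconds == SILENCE_RUN:
                    # Rewind past the run of silence and call that the length.
                    return second - SILENCE_RUN
            else:
                silent_seconds = 0
        return MAX_SECONDS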

    There were about 1430 chiptune files whose songs needed updates. Some files had a single song. Some files had over 100. When I let the script run, it took nearly 65 minutes to process all the files. That was a single-threaded solution, of course. Even though I already had the data I needed, I wanted to try my hand at parallelizing the script. So I went to work with Python’s multiprocessing module and quickly refactored it to use all 4 CPU threads on the machine where the files live. Results:

    • Single-threaded solution: 64m42s to process corpus (22 games/minute)
    • Multi-threaded solution: 18m48s with 4 CPU threads (75 games/minute)

    More than a 3x speedup across 4 CPU threads, which is decent for a primarily CPU-bound operation.
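
    A minimal sketch of how that refactor might look, assuming a hypothetical fix_game() worker that runs the estimator for one game and writes the new lengths back (each worker opening its own SQLite connection):

    from multiprocessing import Pool

    def fix_game(game_id):
        # Hypothetical worker: open a fresh SQLite connection, pull this game's
        # songs, run estimate_length() on each, and update play_length.
        pass

    if __name__ == "__main__":
        game_ids = []  # filled from a query for games still at the default length
        pool = Pool(processes=4)  # one worker per CPU thread on this machine
        pool.map(fix_game, game_ids)
        pool.close()
        pool.join()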

    Epilogue
    I suspect that this task will require some refinement or manual intervention. Maybe there are songs which actually have more than 5 legitimate seconds of silence. Also, I entertained the possibility that some songs would generate very low amplitude noise rather than being perfectly silent. In that case, I could refine the script to stipulate that amplitudes below a certain threshold count as 0. Fortunately, I marked which games were modified by this method, so I can run a new script as necessary.

    SQL Schema
    Here is the schema of my SQLite3 database, for those who want to try their hand at a proper query. I am confident that it’s possible; I just didn’t have the patience to work it out. The task is to retrieve all the rows from the games table for which all of the corresponding songs in the songs table are 150000 milliseconds long.

    CREATE TABLE games
      (
       id INTEGER PRIMARY KEY AUTOINCREMENT,
       uncompressed_sha1 TEXT,
       uncompressed_size INTEGER,
       compressed_sha1 TEXT,
       compressed_size INTEGER,
       system TEXT,
       game TEXT,
       gme_system TEXT default NULL,
       canonical_url TEXT default NULL,
       extension TEXT default "gamemusicxz",
       enabled INTEGER default 1,
       redirect_to_id INT DEFAULT -1,
       play_lengths_modified INT DEFAULT NULL);
    CREATE TABLE songs
      (
       game_id INTEGER,
       song_number INTEGER NOT NULL,
       song TEXT,
       author TEXT,
       copyright TEXT,
       dumper TEXT,
       length INTEGER,
       intro_length INTEGER,
       loop_length INTEGER,
       play_length INTEGER,
       play_order INTEGER default -1);
    CREATE TABLE tags
      (
       game_id INTEGER,
       tag TEXT NOT NULL,
       tag_type TEXT default "filename");
    CREATE INDEX gameid_index_songs ON songs(game_id);
    CREATE INDEX gameid_index_tag ON tags(game_id);
    CREATE UNIQUE INDEX sha1_index ON games(uncompressed_sha1);
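
    For what it’s worth, one possible shape for that query (assuming the 150-second default lives in songs.play_length, stored in milliseconds) groups songs by game and checks that every play_length matches:

    SELECT g.id, g.game
      FROM games g
      JOIN songs s ON s.game_id = g.id
     GROUP BY g.id
    HAVING MIN(s.play_length) = 150000
       AND MAX(s.play_length) = 150000;

    The MIN/MAX pair is just one way of saying “every song is exactly 150000 ms”; a COUNT-based comparison would do the job as well.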