
Other articles (15)

  • Farm-mode installation

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP-type sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP's usual private area is no longer used.
    To begin with, you must have installed the same files as the installation (...)

  • Automatic backup of SPIP channels

    1 April 2010, by

    When setting up an open platform, it is important for hosts to have fairly regular backups available in order to deal with any problem that might arise.
    This task relies on two SPIP plugins: Saveauto, which performs regular backups of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which creates a zip archive of the site's important data (the documents, the elements (...)

  • Accepted formats

    28 January 2010, by

    The following commands provide information about the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    To begin with, we (...)
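
    As an aside on the ffmpeg commands quoted in the excerpt above, here is a minimal Python sketch of how that codec listing could be checked programmatically; it assumes an ffmpeg binary on the PATH, and the codec names probed are just examples:

    import subprocess

    def codec_supported(codec_name, binary="ffmpeg"):
        """Return True if `<binary> -codecs` lists the given codec name."""
        listing = subprocess.run(
            [binary, "-codecs"],
            capture_output=True, text=True, check=True,
        ).stdout
        return any(codec_name in line for line in listing.splitlines())

    if __name__ == "__main__":
        for name in ("libtheora", "libvorbis", "h264"):
            print(name, "listed" if codec_supported(name) else "not listed")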

On other sites (4123)

  • Multi core theora encoding

    26 October 2012, by Jamie Taylor

    We convert uploaded video to MP4 and OGV, but while trying to speed up the process we've hit a wall. We found the bottleneck is the OGV encoding: while it might take 5 minutes to convert a 350 MB AVI to MP4, it takes roughly 25-30 minutes to convert the same file to OGV.

    avconv supports multithreading/multiple cores, but it seems that libtheora doesn't. Does anyone have any way of encoding over multiple cores? I found an old mailing-list thread which discussed a patch, but I can't find much else about it, or whether it even still works 5 years on.

    So: is multi-core Theora encoding possible, and what should I use to do it?

    For reference:

    avconv -y -i big_buck_bunny_720p_surround.avi -vcodec libtheora -qscale 10 -bufsize 20M -same_quant -acodec libvorbis -ac 2 -ar 44100 -ab 128k buck.ogv
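
    No answer is reproduced here, but one workaround often suggested for this kind of bottleneck (not taken from the original thread) is to split the source into time slices, encode each slice on its own core, and join the results afterwards. Below is a rough Python sketch built around the avconv command above; the slice length, slice count, and output names are assumptions, keyframe-accurate splitting is glossed over, and joining the resulting Ogg chunks (e.g. with a tool such as oggCat from oggvideotools) is left out:

    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    SOURCE = "big_buck_bunny_720p_surround.avi"  # input file from the question
    CHUNK_SECONDS = 60                           # assumed slice length
    NUM_CHUNKS = 10                              # assumed; derive from the real duration

    def encode_chunk(index):
        """Encode one time slice of the source to Theora/Vorbis with avconv."""
        out = "chunk_%03d.ogv" % index
        cmd = [
            "avconv", "-y",
            "-ss", str(index * CHUNK_SECONDS),   # seek to the start of this slice
            "-t", str(CHUNK_SECONDS),            # encode one slice worth of input
            "-i", SOURCE,
            "-vcodec", "libtheora", "-qscale", "10",
            "-acodec", "libvorbis", "-ac", "2", "-ar", "44100", "-ab", "128k",
            out,
        ]
        subprocess.run(cmd, check=True)
        return out

    if __name__ == "__main__":
        # One avconv process per slice, spread across the available cores.
        with ProcessPoolExecutor() as pool:
            chunks = list(pool.map(encode_chunk, range(NUM_CHUNKS)))
        print("encoded:", chunks)
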
  • How to build ffmbc with static libraries on Mac

    27 January 2013, by Brainware

    I can build ffmbc (similar to ffmpeg) and it runs fine from the terminal. But when I try to run it from MAMP, it loads different dynamic libraries and crashes. What I would really like is for ffmbc to load the same libraries every time, no matter how it's run. Static libraries should do the trick. Another option is to create a Mac app (ffmbc.app) and use the package contents. But I don't know how to do either of these things. An Xcode project would probably be best, but I can't find one for ffmbc.

    I see people spending days to get things to build on their machines. I don't understand why the Linux community makes this so difficult and so fragile. I've been writing apps for over 25 years, so I guess I'm not as interested in jumping through hoops like I used to be. ;-)

    Suggestions or advice most appreciated.

  • Sequencing MIDI From A Chiptune

    28 April 2013, by Multimedia Mike — Outlandish Brainstorms

    The feature requests for my game music appreciation website project continue to pour in. Many of them take the form of “please add player support for system XYZ and the chiptune library to go with it.” Most of these requests are A) plausible, and B) in process. I have also received recommendations for UI improvements which I take under consideration. Then there are the numerous requests to port everything from Native Client to JavaScript so that it will work everywhere, even on mobile, a notion which might take a separate post to debunk entirely.

    But here’s an interesting request about which I would like to speculate: automatically convert a chiptune into a MIDI file. I immediately wanted to dismiss it as impossible or highly implausible. But, as is my habit, I started pondering the concept a little more carefully and decided that there’s an outside chance of getting some part of the idea to work.

    Intro to MIDI
    MIDI stands for Musical Instrument Digital Interface. It’s a standard musical interchange format and allows music instruments and computers to exchange musical information. The file interchange format bears the extension .mid and contains a sequence of numbers that translate into commands separated by time deltas. E.g.: turn key on (this note, this velocity); wait x ticks; turn key off; wait y ticks; etc. I’m vastly oversimplifying, as usual.
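
    To make the "commands separated by time deltas" idea concrete, here is a minimal sketch that writes a two-note .mid file; the use of the third-party mido library and the file name are my own illustration, not something from the post:

    from mido import Message, MidiFile, MidiTrack

    mid = MidiFile()                 # standard MIDI file container
    track = MidiTrack()
    mid.tracks.append(track)

    # "turn key on (this note, this velocity); wait x ticks; turn key off; ..."
    track.append(Message("note_on", note=60, velocity=64, time=0))     # middle C on
    track.append(Message("note_off", note=60, velocity=64, time=480))  # off after 480 ticks
    track.append(Message("note_on", note=64, velocity=64, time=0))     # E4 on
    track.append(Message("note_off", note=64, velocity=64, time=480))

    mid.save("two_notes.mid")        # hypothetical output file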

    MIDI fascinated me back in the days of dialup internet and discrete sound cards (see also my write-up on the Gravis Ultrasound). Typical song-length MIDI files often ranged from a few kilobytes to a few 10s of kilobytes. They were significantly smaller than the MOD et al. family of tracker music formats mostly by virtue of the fact that MIDI files aren’t burdened by transporting digital audio samples.

    I know I’m missing a lot of details. I haven’t dealt much with MIDI in the past… 15 years or so (ever since computer audio became a blur of MP3 and AAC audio). But I’m led to believe it’s still relevant. The individual who requested this feature expressed an interest in being able to import the sequenced data into any of the many music programs that can interpret .mid files.

    The Pitch
    To limit the scope, let’s focus on music that comes from the 8-bit Nintendo Entertainment System or the original Game Boy. The former features 2 square wave channels, a triangle wave, a noise channel, and a limited digital channel. The latter creates music via 2 square waves, a wave channel, and a noise channel. The roles these channels typically play break down as follows: the square waves carry the primary melody, the triangle wave is used to simulate a bass line, the noise channel approximates a variety of percussive sounds, and the DPCM/wave channels are fairly free-form. They can carry random game sound effects or, if they are to assist in the music, are often used for more authentic percussive sounds.

    The various channels are controlled via an assortment of memory-mapped hardware registers. These registers are fed values such as frequency, volume, and duty cycle. My idea is to modify the music playback engine to track when various events occur. Whenever a channel is turned on or off, that corresponds to a MIDI key on or off event. If a channel is already playing but a new frequency is written, that would likely count as a note change, so log a key off event followed by a new key on event.
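
    A sketch of what that event tracking might look like, assuming the playback engine can call a hook on every relevant register write; all names here (log_register_write, freq_to_midi_note, the channel bookkeeping) are hypothetical, and the frequency-to-note mapping is deferred to the sketch further down:

    channel_state = {}   # channel id -> currently sounding MIDI note, or None
    midi_events = []     # (tick, "on"/"off", channel, note)

    def freq_to_midi_note(freq_hz):
        raise NotImplementedError   # see the frequency-mapping sketch below

    def log_register_write(tick, channel, enabled, freq_hz):
        """Called by the playback engine whenever a channel's registers change."""
        current = channel_state.get(channel)
        if not enabled:
            if current is not None:                      # channel turned off -> key off
                midi_events.append((tick, "off", channel, current))
                channel_state[channel] = None
            return
        note = freq_to_midi_note(freq_hz)
        if current is None:                              # channel turned on -> key on
            midi_events.append((tick, "on", channel, note))
        elif note != current:                            # new frequency -> note change
            midi_events.append((tick, "off", channel, current))
            midi_events.append((tick, "on", channel, note))
        channel_state[channel] = note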

    There is the major obstacle of what specific note is represented by a channel in a particular state. The MIDI standard defines 128 different notes spanning 11 octaves. Empirically, I wonder if I could create a table which maps the assorted frequencies to different MIDI notes?
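
    Rather than an empirical table, the mapping can be computed: with A4 = 440 Hz defined as MIDI note 69 and 12 notes per octave, the nearest note for a frequency f is round(69 + 12 * log2(f / 440)). A quick sketch of that mapping:

    import math

    def freq_to_midi_note(freq_hz):
        """Nearest MIDI note number (0-127) for a frequency, with A4 = 440 Hz = note 69."""
        note = round(69 + 12 * math.log2(freq_hz / 440.0))
        return max(0, min(127, note))   # clamp to the valid MIDI range

    print(freq_to_midi_note(110.0))     # 110 Hz (A2) -> 45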

    I think this strategy would only work with the square and triangle waves. Noise and digital channels? I’m not prepared to tackle that challenge.

    Prior Work?
    I have to wonder if there is any existing work in this area. I’m certain that people have wanted to do this before; I wonder if anyone has succeeded?

    Just like reverse engineering a binary program entails trying to obtain a higher level abstraction of a program from a very low level representation, this challenge feels like reverse engineering a piece of music as it is being performed and automatically expressing it in a higher level form.