
Other articles (91)
-
Contribute to a better visual interface
13 April 2011 — MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
-
Multilang : improving the interface for multilingual blocks
18 February 2011 — Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
Once it is activated, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is immediately operational, so no separate configuration step is required.
-
APPENDIX : Plugins used specifically for the farm
5 March 2010 — Beyond the plugins used by the channels, the central/master site of the farm needs several additional plugins to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to handle registrations and requests to create a shared instance when users sign up; the verifier plugin, which provides a field-verification API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
On other sites (3648)
-
Metal Gear Solid VP3 Easter Egg
4 August 2011, by Multimedia Mike — Game Hacking
Metal Gear Solid: The Twin Snakes for the Nintendo GameCube is very heavy on the cutscenes. Most of them are animated in real time, but there are a bunch of clips — normally of a more photo-realistic nature — that the developers needed to compress using a conventional video codec. What did they decide to use for this task? On2 VP3 (forerunner of Theora) in a custom transport format. This is only the second game I have seen in the wild that uses pure On2 VP3 (the first was a horse game). Reimar and I sorted out most of the details some time ago. I sat down today and wrote an FFmpeg / Libav demuxer for the format, mostly to prove to myself that I still could.
Things went pretty smoothly. We suspected that an integer field held the frame rate, but 18 fps would have been a bit strange. I kept fixating on a header field that read 0x41F00000. Where have I seen that number before? Oh, of course — it’s the number 30.0 expressed as an IEEE 32-bit float. The 4XM format pulled the same trick.
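For anyone who wants to verify that trick, a minimal standalone C check (nothing FFmpeg-specific; the value is simply the header field above) looks like this :

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint32_t field = 0x41F00000;   /* the mystery header field */
    float fps;

    /* reinterpret the 32 bits as an IEEE 754 single-precision float */
    memcpy(&fps, &field, sizeof(fps));
    printf("0x%08X -> %.1f\n", (unsigned)field, fps);   /* prints 30.0 */
    return 0;
}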
Hexadecimal Easter Egg
I know I finished the game years ago, but I really can’t recall any of the clips present in the samples directory. The file mgs1-60.vp3 contains a computer screen granting the player access and illustrates this with a hexdump. It looks something like this :
Funny, there are only 22 bytes on a line when there should be 32 according to the offsets. But, leave it to me to try to figure out what the file type is, regardless. I squinted and copied the first 22 bytes into a file :
1F 8B 08 00 85 E2 17 38 00 03 EC 3A 0D 78 54 D5 38 00 03 EC 3A 0D
And the answer to the big question :
$ file mgsfile
mgsfile: gzip compressed data, from Unix, last modified: Wed Oct 27 22:43:33 1999
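For fun, the same conclusion can be pulled straight out of those header bytes. The following is a rough sketch that follows the gzip header layout from RFC 1952 (ID bytes, compression method, flags, then a 4-byte little-endian modification time); the byte values are the ones transcribed above :

#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
    /* first bytes as transcribed from the on-screen hexdump */
    const uint8_t hdr[10] = { 0x1F, 0x8B, 0x08, 0x00, 0x85, 0xE2, 0x17, 0x38, 0x00, 0x03 };

    if (hdr[0] != 0x1F || hdr[1] != 0x8B) {
        printf("not gzip data\n");
        return 1;
    }

    /* RFC 1952: ID1 ID2 CM FLG MTIME[4] XFL OS; MTIME is seconds since the epoch */
    uint32_t mtime = hdr[4] | (hdr[5] << 8) | (hdr[6] << 16) | ((uint32_t)hdr[7] << 24);
    time_t t = (time_t)mtime;

    printf("method=%u (8=deflate), OS=%u (3=Unix)\n", hdr[2], hdr[9]);
    printf("last modified: %s", ctime(&t));   /* lands in late October 1999 */
    return 0;
}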
A gzip’d file from 1999. I don’t know why I find this stuff so interesting, but I do. I guess it’s no more or less strange than writing playback systems like this.
-
Notes on Linux for Dreamcast
23 February 2011, by Multimedia Mike — Sega Dreamcast, VP8
I wanted to write down some notes about compiling Linux for the Dreamcast (an effort I have yet to follow through to success). But before I do, allow me to follow up on my last post, where I got Google’s libvpx library decoding VP8 video on the DC. Remember when I said the graphics hardware could only process variations of RGB color formats? I was mistaken. Reading over some old documentation, I noticed that the DC’s PowerVR hardware can also handle packed YUV textures (UYVY, specifically) :
The video looks pretty sharp in the small photo. Up close, less so, due to the low resolution and high quantization of the test vector combined with the naive chroma upscaling. For the curious, the grey box surrounding the image highlights the 256-square texture that the video frame gets plotted on. Texture dimensions have to be powers of 2.
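For reference, plotting a frame into such a texture amounts to packing UYVY macropixels, roughly as in the sketch below. This is not the actual code from this project; it assumes the chroma has already been upsampled horizontally to 4:2:2, a little-endian host (which the DC’s SH-4 is), and a power-of-two texture width, and plot_uyvy is just an illustrative name :

#include <stdint.h>

/* Copy a frame_w x frame_h 8-bit YUV 4:2:2 frame into the top-left corner of a
 * power-of-two UYVY texture that is tex_w pixels wide (e.g. 256x256). */
void plot_uyvy(uint16_t *tex, int tex_w,
               const uint8_t *y, const uint8_t *u, const uint8_t *v,
               int frame_w, int frame_h)
{
    for (int row = 0; row < frame_h; row++) {
        uint16_t *dst = tex + row * tex_w;
        for (int col = 0; col < frame_w; col += 2) {
            /* each UYVY macropixel covers two luma samples: U Y0 V Y1 in memory */
            dst[col]     = (uint16_t)((y[row * frame_w + col]     << 8) | u[row * (frame_w / 2) + col / 2]);
            dst[col + 1] = (uint16_t)((y[row * frame_w + col + 1] << 8) | v[row * (frame_w / 2) + col / 2]);
        }
    }
}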
Notes on Linux for Dreamcast
I’ve occasionally dabbled with Linux on my Dreamcast. There’s an ancient (circa 2001) distro out there based around a build of kernel 2.4.5, but I wanted to try to get something more current compiled. Thus far, I have figured out how to cross-compile kernels pretty handily but have been unsuccessful in making them run.
Here are notes on the compilation portion :
- kernel.org provides a very useful set of cross compiling toolchains
- get the gcc 4.5.1 cross toolchain for SH-4 (the gcc 4.3.3 one won’t work because the binutils is too old ; it will fail to assemble certain instructions as described in this post)
- working off of Linux kernel 2.6.37, edit the top-level Makefile ; find the ARCH and CROSS_COMPILE variables and set appropriately :
ARCH          ?= sh
CROSS_COMPILE ?= /path/to/gcc-4.5.1-nolibc/sh4-linux/bin/sh4-linux-
$ make dreamcast_defconfig
$ make menuconfig
... if any changes to the default configuration are desired
- manually edit arch/sh/Makefile, changing :
cflags-$(CONFIG_CPU_SH4) := $(call cc-option,-m4,) \
        $(call cc-option,-mno-implicit-fp,-m4-nofpu)
to :
cflags-$(CONFIG_CPU_SH4) := $(call cc-option,-m4,) \
        $(call cc-option,-mno-implicit-fp)
I.e., remove the '-m4-nofpu' option. According to the gcc man page, this option will "Generate code for the SH4 without a floating-point unit." Why this is the default is a mystery, since the DC’s SH-4 has an FPU and compilation fails when the option is enabled.
- On that note, I was always under the impression that the DC sported an SH-4 CPU with the model number SH7750. According to this LinuxSH wiki page as well as the Linux kernel help, it actually has an SH7091 variant. This photo of the physical DC hardware corroborates the model number.
$ make
... to build a Linux kernel for the Sega Dreamcast
Running
So I can compile the kernel, but running the kernel (the resulting vmlinux ELF file) gives me trouble. The default kernel ELF file reports an entry point of 0x8c002000. Attempting to upload this through the serial uploading facility I have available to me triggers a system reset almost immediately, probably because that’s the same place that the bootloader calls home. I have attempted to alter the starting address via ’make menuconfig’ -> System type -> Memory management options -> Physical memory start address. This allows the upload to complete, but it still does not run. It’s worth noting that the 2.4.5 vmlinux file from the old distribution can be executed when uploaded through the serial loader, and it begins at 0x8c210000.
-
Creating A Lossless SMC Encoder
26 April 2011, by Multimedia Mike — General
Look, I can’t explain how or why I come up with this stuff. For some reason, I thought it would be interesting to write a new encoder for the Apple SMC video codec. I can’t even remember why. I just sat down the other day, started writing, and now I have a lossless SMC encoder that I’m not sure what to do with. Maybe this is to be my new thing — writing encoders for marginal multimedia formats.
Introduction
SMC is a vector quantizer (a lossy method), but I decided to attack it from the angle of lossless encoding. SMC, a.k.a. the Apple Graphics Codec, operates on 4x4 blocks in an 8-bit paletted colorspace. Each 4x4 block can be encoded with 1, 2, 4, 8, or 16 colors. Blocks can also be skipped (copied from the previous frame) or copied from blocks rendered immediately prior within the same frame.
Step 1 : Validating Infrastructure
The goal of this step is to encode the most braindead SMC frame possible and see if FFmpeg/libav’s QuickTime muxer can create a valid file. I think the simplest frame would be one in which each vector is encoded with the single-color mode, starting with color 0 and incrementing through the palette.
Status : Successful. The only ’trick’ was to set avctx->bits_per_coded_sample to 8. (For fun, this can also be set to 40 (8 | 0x20) to specify a grayscale palette.)
Step 2 : Preprocessing
The video frames will arrive at the encoder as 32-bit RGB. These will need to be converted to a paletted colorspace before encoding. I don’t want to use FFmpeg’s default dithering approach as this will result in a substantial loss of quality as described in this post. I would rather maintain a palette built from observed colors throughout successive frames. If the total number of unique observed colors ever exceeds 256, error out.
That’s what I would like to do. However, I noticed that FFmpeg/libav’s QuickTime muxer has never taken into account the possibility of encoding palettes. The path of least resistance in this case is to dither the input to match QuickTime’s default 8-bit palette (if a paletted QuickTime file does not specify a palette, a default 1-, 2-, 4-, or 8-bit palette is selected).
Status : Successful, if slow. I definitely need to optimize this step later.
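As a sketch of the palette-building idea described above (the approach I would like to take, not what the code currently does), a hypothetical helper could accumulate observed colors and give up once a 257th unique color appears :

#include <stdint.h>

#define MAX_PALETTE 256

typedef struct {
    uint32_t colors[MAX_PALETTE];   /* packed 0x00RRGGBB */
    int count;
} ObservedPalette;

/* Return the palette index for an RGB color, adding it if unseen,
 * or -1 once more than 256 unique colors have been observed. */
int palette_lookup_or_add(ObservedPalette *pal, uint32_t rgb)
{
    for (int i = 0; i < pal->count; i++)
        if (pal->colors[i] == rgb)
            return i;
    if (pal->count >= MAX_PALETTE)
        return -1;                  /* too many unique colors: error out */
    pal->colors[pal->count] = rgb;
    return pal->count++;
}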
Step 3 : Most Naive Encoding
The most basic encoding is to "encode" each block as a 16-color block. This will actually result in a slightly larger frame size than a raw encoding, since each 4x4 block will be prepended by a byte opcode (0xE0 in this case) to indicate the encoding mode. This should demonstrate that the encoder is functioning at the most basic level.
Status : Successful. Try not to laugh too hard at the Big Buck Bunny dithered to an 8-bit palette :
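In code, the naive pass amounts to little more than the sketch below. It assumes the blocks have already been gathered into 16-element vectors (see the next step), glosses over the finer points of the real opcode byte, and encode_16color_blocks is just an illustrative name :

#include <stdint.h>

/* Emit every 4x4 block as a 16-color block: the 0xE0 opcode noted above,
 * followed by the block's 16 raw palette indices. Returns bytes written. */
int encode_16color_blocks(uint8_t *out, const uint8_t blocks[][16], int num_blocks)
{
    uint8_t *p = out;
    for (int b = 0; b < num_blocks; b++) {
        *p++ = 0xE0;                 /* 16-color block opcode */
        for (int i = 0; i < 16; i++)
            *p++ = blocks[b][i];     /* raw palette indices */
    }
    return (int)(p - out);
}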
Step 4 : Better Representation
It seems to me that encoding this format (losslessly) will entail performing vector operations on lots of 16-element (4x4-pixel) vectors. These could be done on the frame as-is, but it strikes me as more efficient and perhaps less error-prone to rearrange the input images into a vector of vectors (or array of arrays if you prefer) :

 0  1  2  3  w ...
 4  5  6  7  x ...
 8  9  A  B  y ...
 C  D  E  F  z ...

becomes

0 : [0 1 2 3 4 5 6 7 8 9 A B C D E F]
1 : [...]
Status : Successful.
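A sketch of that rearrangement, assuming an 8-bit paletted frame whose dimensions are multiples of 4 (frame_to_vectors is just an illustrative name) :

#include <stdint.h>

/* Gather each 4x4 block of the frame into one contiguous 16-element vector,
 * with blocks taken in raster order and pixels in the order diagrammed above. */
void frame_to_vectors(uint8_t vectors[][16], const uint8_t *frame,
                      int width, int height)
{
    int v = 0;
    for (int by = 0; by < height; by += 4)
        for (int bx = 0; bx < width; bx += 4, v++)
            for (int y = 0; y < 4; y++)
                for (int x = 0; x < 4; x++)
                    vectors[v][y * 4 + x] = frame[(by + y) * width + bx + x];
}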
Step 5 : Add Interframe Skip Codes
Time to add a bit of brainpower to the proceedings : on non-keyframes, compare the current vector to the vector at the same position from the previous frame.
Test this by encoding a pair of identical frames. Ideally, all codes should be skip codes.
Status : Successful, though my vector matching function could probably be improved.
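The matching function itself can be as simple as the sketch below; vector_is_skippable is a made-up name and the comparison is a plain memcmp() over the 16-element vectors :

#include <stdint.h>
#include <string.h>

/* On a non-keyframe, a vector identical to the one at the same position in
 * the previous frame can be covered by a skip code. */
int vector_is_skippable(const uint8_t cur[16], const uint8_t prev[16])
{
    return memcmp(cur, prev, 16) == 0;
}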
Step 6 : Analyze Blocks For Optimal Color Coding
This is where things get potentially interesting, algorithmically. At least, I need to figure out (or look up) an algorithm to count the unique elements in a vector.
Naive algorithm (i.e., the first thing I can think of; sketched in C just after this list) :
- initialize a count variable to 0
- initialize an array of 256 flags to false
- for each 8-bit element in the vector :
- if the element’s flag is false, set it to true and increment count
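Transcribed into C, that algorithm looks like this (the name matches the count_distinct() function referenced in the later steps) :

#include <stdint.h>

/* Count the distinct palette indices in a 16-element vector
 * using a 256-entry flag array. */
int count_distinct(const uint8_t vector[16])
{
    uint8_t seen[256] = { 0 };
    int count = 0;

    for (int i = 0; i < 16; i++) {
        if (!seen[vector[i]]) {
            seen[vector[i]] = 1;
            count++;
        }
    }
    return count;
}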
Status : Successful. Here is the distribution for the 640x360 Big Buck Bunny title :
1194 4636 4113 2140 1138 568 325 154 80 36 9 5 2 0 0 0
Or, in pretty graph form, demonstrating that vectors with few distinct elements dominate :
Step 7 : Encode Monochrome Blocks
At this point, the structure is starting to come together pretty well. This phase involves encoding a 0x60 opcode and a palette index when the count_distinct() function returns 1.
Status : Absolutely no problem.
Step 8 : Encode 2-, 4-, and 8-color Modes
This step is a little more involved. This is where SMC’s 2-, 4-, and 8-color circular palette caches come into play. E.g., when the first 2-color block is encoded, the pair of colors it uses will be inserted into entry 0 of the 2-color cache. During the next 2-color block encoding, if the block uses a pair of colors that already occurs in the cache, the encoding can reference that cache entry. Otherwise, it adds the pair to the next available cache entry, looping back around to 0 as necessary.
I think I should modify the count_distinct() function to also return a 16-byte array that contains a sorted list of the palette indices used in the vector. The color pair cache will contain 256 16-bit ints, with 32-bit ints for the quads and 64-bit ints for the octets. This will allow a slightly faster linear cache search.
Status : The 2-color encoding wasn’t too much trouble and I was able to adapt it to the 4-color mode pretty quickly afterward. I’m still having trouble with the insane 8-color coding mode, though. So that’s commented out for the time being.
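For the 2-color case, the circular cache logic described above might look something like this sketch. PairCache and pair_cache_lookup are illustrative names, and details such as the byte order within a packed pair are arbitrary here :

#include <stdint.h>

#define CACHE_SIZE 256

typedef struct {
    uint16_t pairs[CACHE_SIZE];   /* color0 in the high byte, color1 in the low byte */
    uint8_t  valid[CACHE_SIZE];
    int      next;                /* next slot to overwrite, wraps modulo 256 */
} PairCache;

/* Return the cache index to reference for a color pair. If the pair is new,
 * store it at the next slot (looping back around to 0 as necessary) and set
 * *added so the caller knows to emit the pair's colors explicitly. */
int pair_cache_lookup(PairCache *c, uint8_t color0, uint8_t color1, int *added)
{
    uint16_t key = (uint16_t)((color0 << 8) | color1);

    for (int i = 0; i < CACHE_SIZE; i++) {
        if (c->valid[i] && c->pairs[i] == key) {
            *added = 0;
            return i;             /* reuse the existing cache entry */
        }
    }
    int slot = c->next;
    c->pairs[slot] = key;
    c->valid[slot] = 1;
    c->next = (c->next + 1) % CACHE_SIZE;
    *added = 1;
    return slot;
}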
Step 9 : Run Encoding and Putting It All Together
For each frame, convert the input pixels to a paletted format via one method or another (match to the default QuickTime palette for the first pass). Then, preprocess each vector to determine the minimum number of elements that can be used to represent it, storing the sorted list of distinct colors in a separate array. The number of elements can either be 0 (only for interframes, indicating a skip block), 1, 2, 4, 8, or 16. Also during this phase, for each vector after the first, test whether the vector is the same as the previous vector. If it is, denote this fact in the preprocessed encoding (set the high bit of the element count number).
Finally, pack it into the bytestream. Iterate through the element count array and search for the longest runs of elements that are encoded with the same mode (up to 256 for skip modes, up to 16 for other modes). If the high bit of an element count is set, that indicates that a copy mode can be encoded. Look for the longest run of element counts with the high bit set and encode a copy mode.
Status : In-process. Will finish this as motivation strikes.
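The run-finding part of that packing pass might end up looking something like the following sketch; find_mode_run is an illustrative name and the mode numbering is arbitrary, with 0 standing in for a skip block :

#include <stdint.h>

/* Starting at block index 'start', count how many consecutive blocks share the
 * same coding mode, capped at the longest run a single opcode can express
 * (up to 256 for skips, up to 16 for the other modes). */
int find_mode_run(const uint8_t *modes, int num_blocks, int start)
{
    uint8_t mode = modes[start];
    int max_run = (mode == 0) ? 256 : 16;
    int run = 1;

    while (start + run < num_blocks &&
           modes[start + run] == mode &&
           run < max_run)
        run++;
    return run;
}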