Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (56)

  • Retrieving information from the master site when installing an instance

    26 November 2010, by

    Purpose
    On the main site, a mutualisation (shared-hosting) instance is defined by several things: its data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table, corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the mutualisation instance.
    It can therefore be quite sensible to want to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)

  • Submitting improvements and additional plugins

    10 April 2011

    If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the official distribution will be considered.
    You can use the development mailing list to announce it, or to ask for help with building the plugin. Since MediaSPIP is based on SPIP, it is also possible to use SPIP's SPIP-zone mailing list to (...)

  • Support for all media types

    10 April 2011

    Unlike many other modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp, and others); audio (MP3, Ogg, Wav, and others); video (Avi, MP4, Ogv, mpg, mov, wmv, and others); or textual content, code, and more (OpenOffice, Microsoft Office (spreadsheets, presentations), web (html, css), LaTeX, Google Earth) (...)

On other sites (5354)

  • Announcing the world’s fastest VP8 decoder: ffvp8

    24 July 2010, by Dark Shikari — VP8, ffmpeg, google, speed

    Back when I originally reviewed VP8, I noted that the official decoder, libvpx, was rather slow. While there was no particular reason that it should be much faster than a good H.264 decoder, it shouldn’t have been that much slower either! So, I set out with Ronald Bultje and David Conrad to make a better one in FFmpeg. This one would be community-developed and free from the beginning, rather than the proprietary code-dump that was libvpx. A few weeks ago the decoder was complete enough to be bit-exact with libvpx, making it the first independent free implementation of a VP8 decoder. Now, with the first round of optimizations complete, it should be ready for primetime. I’ll go into some detail about the development process, but first, let’s get to the real meat of this post: the benchmarks.

    We tested on two 1080p clips: Parkjoy, a live-action 1080p clip, and the Sintel trailer, a CGI 1080p clip. Testing was done using “time ffmpeg -vcodec libvpx -i input -vsync 0 -an -f null -” (and likewise with -vcodec vp8 to select ffvp8). We all used the latest SVN FFmpeg at the time of this posting; the last revision optimizing the VP8 decoder was r24471.

    [Graphs: Parkjoy and Sintel decoding speed, ffvp8 vs. libvpx; raw numbers in the appendix below.]

    As these benchmarks show, ffvp8 is clearly much faster than libvpx, particularly on 64-bit. It’s even faster by a large margin on Atom, despite the fact that we haven’t even begun optimizing for it. In many cases, ffvp8’s extra speed can make the difference between a video that plays and one that doesn’t, especially in modern browsers with software compositing engines taking up a lot of CPU time. Want to get faster playback of VP8 videos? The next versions of FFmpeg-based players, like VLC, will include ffvp8. Want to get faster playback of WebM in your browser? Lobby your browser developers to use ffvp8 instead of libvpx. I expect Chrome to switch first, as they already use libavcodec for most of their playback system.

    Keep in mind ffvp8 is not “done” — we will continue to improve it and make it faster. We still have a number of optimizations in the pipeline that aren’t committed yet.

    Developing ffvp8

    The initial challenge, primarily pioneered by David and Ronald, was constructing the core decoder and making it bit-exact to libvpx. This was rather challenging, especially given the lack of a real spec. Many parts of the spec were outright misleading and contradicted libvpx itself. It didn’t help that the suite of official conformance tests didn’t even cover all the features used by the official encoder! We’ve already started adding our own conformance tests to deal with this. But I’ve complained enough in past posts about the lack of a spec; let’s get onto the gritty details.

    The next step was adding SIMD assembly for all of the important DSP functions. VP8’s motion compensation and deblocking filter are by far the most CPU-intensive parts, much the same as in H.264. Unlike H.264, the deblocking filter relies on a lot of internal saturation steps, which are free in SIMD but costly in a normal C implementation, making the plain C code even slower. Of course, none of this is a particularly large problem; any sane video decoder has all this stuff in SIMD.

    I tutored Ronald in x86 SIMD and wrote most of the motion compensation, intra prediction, and some inverse transforms. Ronald wrote the rest of the inverse transforms and a bit of the motion compensation. He also did the most difficult part: the deblocking filter. Deblocking filters are always a bit difficult because every one is different. Motion compensation, by comparison, is usually very similar regardless of video format; a 6-tap filter is a 6-tap filter, and most of the variation going on is just the choice of numbers to multiply by.
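
    To make that concrete, here is a minimal sketch of H.264’s half-pel luma interpolation, whose taps (1, -5, 20, 20, -5, 1) and rounding are fixed by the spec; VP8’s sub-pel filters plug different numbers into the same 6-tap structure:

    ```c
    #include <stdint.h>

    static inline uint8_t clamp255(int v)
    {
        return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v);
    }

    /* H.264 luma half-pel interpolation: one 6-tap filter application
       over six consecutive pixels, with rounding and a clip back to
       8-bit. p points at the pixel just left of the half-pel position. */
    static uint8_t sixtap_halfpel(const uint8_t *p)
    {
        int v = p[-2] - 5 * p[-1] + 20 * p[0] + 20 * p[1] - 5 * p[2] + p[3];
        return clamp255((v + 16) >> 5);
    }
    ```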

    The biggest challenge in a SIMD deblocking filter is to avoid unpacking, that is, going from 8-bit to 16-bit. Many operations in deblocking filters would naively appear to require more than 8-bit precision. A simple example in the case of x86 is abs(a-b), where a and b are 8-bit unsigned integers. The result of “a-b” requires a 9-bit signed integer (it can be anywhere from -255 to 255), so it can’t fit in 8 bits. But this is quite possible to do without unpacking: (satsub(a,b) | satsub(b,a)), where “satsub” performs a saturating subtract on the two values. If the difference is positive, it yields the result; if the difference is negative, it yields zero. OR-ing the two together yields the desired result. This requires 4 ops on x86; unpacking would probably require at least 10, including the unpack and pack steps.
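
    On x86 this maps directly onto the packed saturating-subtract instruction (psubusb). A minimal SSE2 version of the trick, processing 16 pixels at a time:

    ```c
    #include <emmintrin.h>  /* SSE2 intrinsics */

    /* Per-byte |a - b| without unpacking to 16-bit: the saturating
       subtract is zero wherever the difference is negative, so OR-ing
       the two one-sided differences yields the absolute difference. */
    static inline __m128i abs_diff_u8(__m128i a, __m128i b)
    {
        return _mm_or_si128(_mm_subs_epu8(a, b), _mm_subs_epu8(b, a));
    }
    ```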

    After the SIMD came optimizing the C code, which still took a significant portion of the total runtime. One of my biggest optimizations was adding aggressive “smart” prefetching to reduce cache misses. ffvp8 prefetches the reference frames (PREVIOUS, GOLDEN, and ALTREF)… but only the ones which have been used reasonably often this frame. This lets us prefetch everything we need without prefetching things that we probably won’t use. libvpx very often encodes frames that almost never (but not quite never) use GOLDEN or ALTREF, so this optimization greatly reduces time spent prefetching in a lot of real videos. There are of course countless other optimizations we made, too numerous to list here, such as David’s entropy decoder optimizations. I’d also like to thank Eli Friedman for his invaluable help in benchmarking a lot of these changes.
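
    Here is a hedged sketch of the idea; the data structure, threshold, and names are illustrative, not ffvp8’s actual code:

    ```c
    #include <stdint.h>

    enum { REF_PREVIOUS, REF_GOLDEN, REF_ALTREF, NUM_REFS };

    /* Prefetch upcoming reference-frame data, but only for references
       this frame has actually been using; this skips the prefetch for
       refs like GOLDEN/ALTREF that many libvpx streams almost never
       touch. The 1/16 threshold is an assumption for illustration. */
    static void prefetch_hot_refs(const uint8_t *ref_row[NUM_REFS],
                                  const int use_count[NUM_REFS],
                                  int mbs_decoded)
    {
        for (int i = 0; i < NUM_REFS; i++) {
            if (use_count[i] * 16 > mbs_decoded)
                __builtin_prefetch(ref_row[i]);
        }
    }
    ```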

    What next? AltiVec (PPC) assembly is almost nonexistent, with the only functions being David’s motion compensation code. NEON (ARM) is completely nonexistent: we’ll need that to be fast on mobile devices as well. Of course, all this will come in due time — and as always — patches welcome!

    Appendix: the raw numbers

    Here are the raw numbers (in fps, with standard error) for the graphs at the start of this post:

    Core i7 620QM (1.6 GHz), Windows 7, 32-bit:
    Parkjoy ffvp8: 44.58 ± 0.44
    Parkjoy libvpx: 33.06 ± 0.23
    Sintel ffvp8: 74.26 ± 1.18
    Sintel libvpx: 56.11 ± 0.96

    Core i5 520M (2.4 GHz), Linux, 64-bit:
    Parkjoy ffvp8: 68.29 ± 0.06
    Parkjoy libvpx: 41.06 ± 0.04
    Sintel ffvp8: 112.38 ± 0.37
    Sintel libvpx: 69.64 ± 0.09

    Core 2 T9300 (2.5 GHz), Mac OS X 10.6.4, 64-bit:
    Parkjoy ffvp8: 54.09 ± 0.02
    Parkjoy libvpx: 33.68 ± 0.01
    Sintel ffvp8: 87.54 ± 0.03
    Sintel libvpx: 52.74 ± 0.04

    Core Duo (2 GHz), Mac OS X 10.6.4, 32-bit:
    Parkjoy ffvp8: 21.31 ± 0.02
    Parkjoy libvpx: 17.96 ± 0.00
    Sintel ffvp8: 41.24 ± 0.01
    Sintel libvpx: 29.65 ± 0.02

    Atom N270 (1.6 GHz), Linux, 32-bit:
    Parkjoy ffvp8: 15.29 ± 0.01
    Parkjoy libvpx: 12.46 ± 0.01
    Sintel ffvp8: 26.87 ± 0.05
    Sintel libvpx: 20.41 ± 0.02

  • Inside WebM Technology: VP8 Intra and Inter Prediction

    20 July 2010, by noreply@blogger.com (Lou Quillio)
    Continuing our series on WebM technology, I will discuss the use of prediction methods in the VP8 video codec, with special attention to the TM_PRED and SPLITMV modes, which are unique to VP8.

    First, some background. To encode a video frame, block-based codecs such as VP8 first divide the frame into smaller segments called macroblocks. Within each macroblock, the encoder can predict redundant motion and color information based on previously processed blocks. The redundant data can be subtracted from the block, resulting in more efficient compression.

    [Image by Fido Factor, licensed under a Creative Commons Attribution License, based on a work at www.flickr.com]

    A VP8 encoder uses two classes of prediction:
    • Intra prediction uses data within a single video frame
    • Inter prediction uses data from previously encoded frames
    The residual signal data is then encoded using other techniques, such as transform coding.

    VP8 Intra Prediction Modes
    VP8 intra prediction modes are used with three types of macroblocks:
    • 4x4 luma
    • 16x16 luma
    • 8x8 chroma
    Four common intra prediction modes are shared by these macroblocks:
    • H_PRED (horizontal prediction). Fills each column of the block with a copy of the left column, L.
    • V_PRED (vertical prediction). Fills each row of the block with a copy of the above row, A.
    • DC_PRED (DC prediction). Fills the block with a single value: the average of the pixels in the row above (A) and the column to the left (L).
    • TM_PRED (TrueMotion prediction). A mode that gets its name from a compression technique developed by On2 Technologies. In addition to the row A and column L, TM_PRED uses the pixel P above and to the left of the block. Horizontal differences between pixels in A (starting from P) are propagated using the pixels from L to start each row.
    For 4x4 luma blocks, there are six additional intra modes similar to V_PRED and H_PRED, but corresponding to predicting pixels in different directions. These modes are outside the scope of this post, but if you want to learn more, see the VP8 Bitstream Guide.

    As mentioned above, the TM_PRED mode is unique to VP8. The following figure uses an example 4x4 block of pixels to illustrate how the TM_PRED mode works:

    [Figure: a 4x4 block X00–X33 with its above row (A0–A3), left column (L0–L3), and above-left corner pixel C]

    Here C, the As, and the Ls represent reconstructed pixel values from previously coded blocks, and X00 through X33 represent predicted values for the current block. TM_PRED uses the following equation to calculate X_ij:

    X_ij = L_i + A_j - C   (i, j = 0, 1, 2, 3)

    Although the above example uses a 4x4 block, the TM_PRED mode for 8x8 and 16x16 blocks works in the same fashion.
    TM_PRED is one of the more frequently used intra prediction modes in VP8, and for common video sequences it is typically used by 20% to 45% of all blocks that are intra coded. Overall, together with other intra prediction modes, TM_PRED helps VP8 to achieve very good compression efficiency, especially for key frames, which can only use intra modes (key frames by their very nature cannot refer to previously encoded frames).
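
    In code, TM_PRED is just a couple of loops. Here is a minimal sketch for the 4x4 case, assuming (as in the reference implementation) that the result is clamped back to the 8-bit range; this is an illustration, not the actual reconintra.c code:

    ```c
    #include <stdint.h>

    static inline uint8_t clamp255(int v)
    {
        return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v);
    }

    /* TM_PRED for a 4x4 block. A[] is the row above, L[] is the column
       to the left, C is the pixel above and to the left. Each predicted
       pixel propagates A's horizontal gradient (relative to C) down the
       block, starting each row from L[i]. */
    static void tm_pred_4x4(uint8_t X[4][4],
                            const uint8_t A[4], const uint8_t L[4],
                            uint8_t C)
    {
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                X[i][j] = clamp255(L[i] + A[j] - C);
    }
    ```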

    VP8 Inter Prediction Modes

    In VP8, inter prediction modes are used only on inter frames (non-key frames). For any VP8 inter frame, there are typically three previously coded reference frames that can be used for prediction. A typical inter prediction block is constructed using a motion vector to copy a block from one of the three frames. The motion vector points to the location of a pixel block to be copied. In most video compression schemes, a good portion of the bits are spent on encoding motion vectors; the portion can be especially large for video encoded at lower data rates.
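
    In the simplest case (a whole-pel motion vector), constructing the prediction is just a strided copy from the reference frame; sub-pel vectors add an interpolation filter on top. A minimal sketch, with bounds checking and edge extension omitted:

    ```c
    #include <stdint.h>

    /* Build a 16x16 inter prediction by copying from a reference frame
       at the offset given by a whole-pel motion vector (mv_x, mv_y). */
    static void inter_predict_16x16(uint8_t *dst, int dst_stride,
                                    const uint8_t *ref, int ref_stride,
                                    int x, int y, int mv_x, int mv_y)
    {
        const uint8_t *src = ref + (y + mv_y) * ref_stride + (x + mv_x);
        for (int i = 0; i < 16; i++)
            for (int j = 0; j < 16; j++)
                dst[i * dst_stride + j] = src[i * ref_stride + j];
    }
    ```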

    Like previous VPx codecs, VP8 encodes motion vectors very efficiently by reusing vectors from neighboring macroblocks (a macroblock includes one 16x16 luma block and two 8x8 chroma blocks). VP8 uses a similar strategy in the overall design of inter prediction modes. For example, the prediction modes "NEAREST" and "NEAR" make use of the last and second-to-last non-zero motion vectors from neighboring macroblocks. These inter prediction modes can be used in combination with any of the three different reference frames.

    In addition, VP8 has a very sophisticated, flexible inter prediction mode called SPLITMV. This mode was designed to enable flexible partitioning of a macroblock into sub-blocks to achieve better inter prediction. SPLITMV is very useful when objects within a macroblock have different motion characteristics. Within a macroblock coded using SPLITMV mode, each sub-block can have its own motion vector. Similar to the strategy of reusing motion vectors at the macroblock level, a sub-block can also use motion vectors from neighboring sub-blocks above or to the left of the current block. This strategy is very flexible and can effectively encode any shape of sub-macroblock partitioning, and does so efficiently. Here is an example of a macroblock with 16x16 luma pixels that is partitioned into 16 4x4 blocks:


    [Figure: a 4x4 grid of sub-blocks, each labeled New, Left, or Above]

    where New represents a 4x4 block coded with a new motion vector, and Left and Above represent a 4x4 block coded using the motion vector from the sub-block to the left or above, respectively. This example effectively partitions the 16x16 macroblock into 3 different segments with 3 different motion vectors (represented below by 1, 2 and 3):

    [Figure: the same 4x4 grid with each sub-block labeled by its segment, 1, 2, or 3]
    Through effective use of intra and inter prediction modes, WebM encoder implementations can achieve great compression quality on a wide range of source material. If you want to delve further into VP8 prediction modes, read the VP8 Bitstream Guide or examine the reconintra.c and rdopt.c files in the VP8 source tree.

    Yaowu Xu, Ph.D. is a codec engineer at Google.

  • Anatomy of an optimization: H.264 deblocking

    26 May 2010, by Dark Shikari — H.264, assembly, development, speed, x264

    As mentioned in the previous post, H.264 has an adaptive deblocking filter. But what exactly does that mean — and more importantly, what does it mean for performance? And how can we make it as fast as possible? In this post I’ll try to answer these questions, particularly in relation to my recent deblocking optimizations in x264.

    H.264’s deblocking filter has two steps: strength calculation and the actual filter. The first step calculates the parameters for the second step. The filter runs on all the edges in each macroblock. That’s 4 vertical edges of length 16 pixels and 4 horizontal edges of length 16 pixels. The vertical edges are filtered first, from left to right, then the horizontal edges, from top to bottom (order matters!). The leftmost edge is the one between the current macroblock and the left macroblock, while the topmost edge is the one between the current macroblock and the top macroblock.

    Here’s the formula for the strength calculation in progressive mode. The highest strength that applies is always selected.

    If we’re on the edge between an intra macroblock and any other macroblock: Strength 4
    If we’re on an internal edge of an intra macroblock: Strength 3
    If either side of a 4-pixel-long edge has residual data: Strength 2
    If the motion vectors on opposite sides of a 4-pixel-long edge are at least a pixel apart (in either the x or y direction) or the reference frames aren’t the same: Strength 1
    Otherwise: Strength 0 (no deblocking)

    These values are then thrown into a lookup table depending on the quantizer: higher quantizers have stronger deblocking. Then the actual filter is run with the appropriate parameters. Note that Strength 4 is actually a special deblocking mode that performs a much stronger filter and affects more pixels.
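
    The rule list above translates almost directly into code. Here is a sketch of the per-edge decision, with illustrative names (this is not x264’s actual deblock_strength_c) and motion vectors assumed to be in quarter-pel units, so “at least a pixel apart” means a difference of 4 or more:

    ```c
    #include <stdlib.h>

    /* Boundary-strength decision for one 4-pixel edge segment,
       following the priority order above (highest strength wins). */
    static int edge_strength(int edge_touches_intra_mb,
                             int internal_intra_edge,
                             int either_side_has_residual,
                             int mvx0, int mvy0, int ref0,
                             int mvx1, int mvy1, int ref1)
    {
        if (edge_touches_intra_mb)    return 4;
        if (internal_intra_edge)      return 3;
        if (either_side_has_residual) return 2;
        /* MVs a full pel apart (>= 4 quarter-pels) or different refs */
        if (abs(mvx0 - mvx1) >= 4 || abs(mvy0 - mvy1) >= 4 || ref0 != ref1)
            return 1;
        return 0;  /* no deblocking */
    }
    ```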

    One can see somewhat intuitively why these strengths are chosen. The deblocker exists to get rid of sharp edges caused by the block-based nature of H.264, and so the strength depends on what exists that might cause such sharp edges. The strength calculation is a way to use existing data from the video stream to make better decisions during the deblocking process, improving compression and quality.

    Both the strength calculation and the actual filter (not described here) are very complex if naively implemented. The latter can be SIMD’d with not too much difficulty; no H.264 decoder can get away with reasonable performance without such a thing. But what about optimizing the strength calculation? A quick analysis shows that this can be beneficial as well.

    Since we have to check both horizontal and vertical edges, we have to check up to 32 pairs of coefficient counts (for residual), 16 pairs of reference frame indices, and 128 motion vector values (counting x and y as separate values). This is a lot of calculation; a naive implementation can take 500-1000 clock cycles on a modern CPU. Of course, there are a lot of shortcuts we can take. Here are some examples:

    • If the macroblock uses the 8×8 transform, we only need to check 2 edges in each direction instead of 4, because we don’t deblock inside of the 8×8 blocks.
    • If the macroblock is a P-skip, we only have to check the first edge in each direction, since there’s guaranteed to be no motion vector differences, reference frame differences, or residual inside of the macroblock.
    • If the macroblock has no residual at all, we can skip that check.
    • If we know the partition type of the macroblock, we can do motion vector checks only along the edges of the partitions.
    • If the effective quantizer is so low that no deblocking would be performed no matter what, don’t bother calculating the strength.

    But even all of this doesn’t save us from ourselves. We still have to iterate over a ton of edges, checking each one. Stuff like the partition-checking logic greatly complicates the code and adds overhead even as it reduces the number of checks. And in many cases decoupling the checks to add such logic will make it slower : if the checks are coupled, we can avoid doing a motion vector check if there’s residual, since Strength 2 overrides Strength 1.

    But wait. What if we could do this in SIMD, just like the actual loopfilter itself? Sure, it seems more of a problem for C code than assembly, but there aren’t any obvious things in the way. Many years ago, Loren Merritt (pengvado) wrote the first SIMD implementation that I know of (for ffmpeg’s decoder); it is quite fast, so I decided to work on porting the idea to x264 to see if we could eke out a bit more speed here as well.

    Before I go over what I had to do to make this change, let me first describe how deblocking is implemented in x264. Since the filter is a loopfilter, it acts “in loop” and must be done in both the encoder and decoder — hence why x264 has it too, not just decoders. At the end of encoding one row of macroblocks, x264 goes back and deblocks the row, then performs half-pixel interpolation for use in encoding the next frame.

    We do it per-row for reasons of cache coherency : deblocking accesses a lot of pixels and a lot of code that wouldn’t otherwise be used, so it’s more efficient to do it in a single pass as opposed to deblocking each macroblock immediately after encoding. Then half-pixel interpolation can immediately re-use the resulting data.

    Now to the change. First, I modified deblocking to implement a subset of the macroblock_cache_load function: spend an extra bit of effort loading the necessary data into a data structure which is much simpler to address — as an assembly implementation would need (x264_macroblock_cache_load_deblock). Then I massively cleaned up deblocking to move all of the core strength-calculation logic into a single, small function that could be converted to assembly (deblock_strength_c). Finally, I wrote the assembly functions and worked with Loren to optimize them. Here’s the result.

    And the timings for the resulting assembly function on my Core i7, in cycles:

    deblock_strength_c: 309
    deblock_strength_mmx: 79
    deblock_strength_sse2: 37
    deblock_strength_ssse3: 33

    Now that is a seriously nice improvement. 33 cycles on average to perform that many comparisons is absurdly low, especially considering the SIMD takes no branchy shortcuts: it always checks every single edge! I walked over to my performance chart and happily crossed off a box.

    But I had a hunch that I could do better. Remember, as mentioned earlier, we’re reloading all that data back into our data structures in order to address it. This isn’t that slow, but takes enough time to significantly cut down on the gain of the assembly code. And worse, less than a row ago, all this data was in the correct place to be used (when we just finished encoding the macroblock)! But if we did the deblocking right after encoding each macroblock, the cache issues would make it too slow to be worth it (yes, I tested this). So I went back to other things, a bit annoyed that I couldn’t get the full benefit of the changes.

    Then, yesterday, I was talking with Pascal, a former Xvid dev and current video hacker over at Google, about various possible x264 optimizations. He had seen my deblocking changes and we discussed that a bit as well. Then two lines hit me like a pile of bricks:

    <_skal_> tried computing the strength at least?
    <_skal_> while it’s fresh

    Why hadn’t I thought of that? Do the strength calculation immediately after encoding each macroblock, save the result, and then go pick it up later for the main deblocking filter. Then we can use the data right there and then for strength calculation, but we don’t have to do the whole deblock process until later.

    I went and implemented it and, after working my way through a horde of bugs, eventually got a working implementation. A big catch was that of slices: deblocking normally acts between slices even though normal encoding does not, so I had to perform extra munging to get that to work. By midday today I was able to cross yet another box off on the performance chart. And now it’s committed.

    Sometimes chatting for 10 minutes with another developer is enough to spot the idea that your brain somehow managed to miss for nearly a straight week.

    NB: the performance chart is for a specific test clip at a specific set of settings (super fast settings) relevant to the company I work at, so it isn’t accurate or complete for, say, default settings.

    Update: Here’s a higher-resolution version of the current chart, as requested in the comments.