
Media (2)

- Granite de l’Aber Ildut
9 September 2011
Updated: September 2011
Language: French
Type: Text

- Géodiversité
9 September 2011
Updated: August 2018
Language: French
Type: Text
Other articles (77)

- Organising by category
17 May 2013
In MediaSPIP, a rubric has two names: category and rubric.
The various documents stored in MediaSPIP can be filed under different categories. A category can be created by clicking on "publish a category" in the "publish" menu at the top right (after logging in). A category can itself be filed under another category, so a tree of categories can be built.
The next time a document is published, the newly created category will be offered (...)
- Retrieving information from the master site when installing an instance
26 November 2010
Purpose
On the main site, a mutualisation instance is defined by several things: its data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table matching an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the mutualisation instance.
It can therefore make perfect sense to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)
- Support for all media types
10 April 2011
Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether images (png, gif, jpg, bmp and others); audio (MP3, Ogg, Wav and others); video (Avi, MP4, Ogv, mpg, mov, wmv and others); or textual content, code and more (OpenOffice, Microsoft Office (spreadsheet, presentation), web (HTML, CSS), LaTeX, Google Earth) (...)
On other sites (4995)

- IJG swings again, and misses
1 February 2010, by Mans — Multimedia
Earlier this month the IJG unleashed version 8 of its ubiquitous libjpeg library on the world. Eager to try out the “major breakthrough in image coding technology” promised in the README file accompanying v7, I downloaded the release. A glance at the README file suggests something major indeed is afoot:
Version 8.0 is the first release of a new generation JPEG standard to overcome the limitations of the original JPEG specification.
The text also hints at the existence of a document detailing these marvellous new features, and a Google search later a copy has found its way onto my monitor. As I read, however, my state of mind shifts from an initial excited curiosity, through bewilderment and disbelief, finally arriving at pure merriment.
Already on the first page it becomes clear no new JPEG standard in fact exists. All we have is an unsolicited proposal sent to the ITU-T by members of the IJG. Realising that even the most brilliant of inventions must start off as mere proposals, I carry on reading. The summary informs me that I am about to witness the introduction of three extensions to the T.81 JPEG format:
- An alternative coefficient scan sequence for DCT coefficient serialization
- A SmartScale extension in the Start-Of-Scan (SOS) marker segment
- A Frame Offset definition in or in addition to the Start-Of-Frame (SOF) marker segment
Together these three extensions will, it is promised, “bring DCT based JPEG back to the forefront of state-of-the-art image coding technologies.”
Alternative scan
The first of the proposed extensions introduces an alternative DCT coefficient scan sequence to be used in place of the zigzag scan employed in most block transform based codecs.
[Image: alternative scan sequence]
The advantage of this scan would be that combined with the existing progressive mode, it simplifies decoding of an initial low-resolution image which is enhanced through subsequent passes. The author of the document calls this scheme “image-pyramid/hierarchical multi-resolution coding.” It is not immediately obvious to me how this constitutes even a small advance in image coding technology.
At this point I am beginning to suspect that our friend from the IJG has been trapped in a half-world between interlaced GIF images transmitted down noisy phone lines and today’s inferno of SVC, MVC, and other buzzwords.
(Not so) SmartScale
Disguised behind this camel-cased moniker we encounter a method which, we are told, will provide better image quality at high compression ratios. The author has combined two well-known (to us) properties in a (to him) clever way.
The first property concerns the perceived impact of different types of distortion in an image. When encoding with JPEG, as the quantiser is increased, the decoded image becomes ever more blocky. At a certain point, a better subjective visual quality can be achieved by down-sampling the image before encoding it, thus allowing a lower quantiser to be used. If the decoded image is scaled back up to the original size, the unpleasant, blocky appearance is replaced with a smooth blur.
The second property belongs to the DCT where, as we all know, the top-left (DC) coefficient is the average of the entire block, its neighbours represent the lowest frequency components etc. A top-left-aligned subset of the coefficient block thus represents a low-resolution version of the full block in the spatial domain.
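This is easy to verify numerically. Below is a minimal sketch of that property using my own naive O(n^4) transforms (nothing from libjpeg; the file name and build line, gcc dct_demo.c -lm, are illustrative): take the orthonormal 8×8 DCT of a block, keep the top-left 4×4 corner, renormalise, and apply a 4×4 inverse DCT. The result is, per block, an approximate half-size version of the input.

/* dct_demo.c -- the top-left 4x4 corner of an 8x8 DCT, rescaled and
   inverse-transformed at size 4, approximates a 2:1 downscale of the
   block.  Naive transforms for clarity; build with: gcc dct_demo.c -lm */
#include <math.h>
#include <stdio.h>

static double C(int k) { return k ? 1.0 : M_SQRT1_2; }

/* Orthonormal n x n forward 2D DCT-II. */
static void fdct(int n, const double *in, double *out)
{
    for (int u = 0; u < n; u++)
        for (int v = 0; v < n; v++) {
            double s = 0;
            for (int x = 0; x < n; x++)
                for (int y = 0; y < n; y++)
                    s += in[x * n + y]
                       * cos(M_PI * (2 * x + 1) * u / (2 * n))
                       * cos(M_PI * (2 * y + 1) * v / (2 * n));
            out[u * n + v] = 2.0 / n * C(u) * C(v) * s;
        }
}

/* Matching inverse (2D DCT-III). */
static void idct(int n, const double *in, double *out)
{
    for (int x = 0; x < n; x++)
        for (int y = 0; y < n; y++) {
            double s = 0;
            for (int u = 0; u < n; u++)
                for (int v = 0; v < n; v++)
                    s += C(u) * C(v) * in[u * n + v]
                       * cos(M_PI * (2 * x + 1) * u / (2 * n))
                       * cos(M_PI * (2 * y + 1) * v / (2 * n));
            out[x * n + y] = 2.0 / n * s;
        }
}

int main(void)
{
    double block[64], coef[64], low[16], half[16];
    for (int i = 0; i < 64; i++)        /* a smooth gradient block */
        block[i] = (i / 8) + (i % 8);
    fdct(8, block, coef);
    for (int u = 0; u < 4; u++)         /* keep the top-left 4x4 corner; */
        for (int v = 0; v < 4; v++)     /* 0.5 renormalises 8x8 -> 4x4   */
            low[u * 4 + v] = 0.5 * coef[u * 8 + v];
    idct(4, low, half);
    for (int i = 0; i < 16; i++)        /* ~half-size version of block */
        printf("%7.3f%s", half[i], (i % 4 == 3) ? "\n" : " ");
    return 0;
}

Note that each 8×8 block is resampled independently of its neighbours, which is precisely why the block-edge artefacts described next appear.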
In his flash of genius, our hero came up with the idea of using the DCT for down-scaling the image. Unfortunately, he appears to possess precious little knowledge of sampling theory and human visual perception. Any block-based resampling will inevitably produce sharp artefacts along the block edges. The human visual system is particularly sensitive to sharp edges, so this is one of the most unwanted types of distortion in an encoded image.
Despite the obvious flaws in this approach, I decided to give it a try. After all, the software is already written, allowing downscaling by factors of 8/8 down to 8/16.
Using a 1280×720 test image, I encoded it with each of the nine scaling options, from unity to half size, each time adjusting the quality parameter for a final encoded file size of no more than 200000 bytes. The following table presents the encoded file size, the libjpeg quality parameter used, and the SSIM metric for each of the images.
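For reference, the SSIM figures below are the usual structural similarity index of Wang et al., where 1.0 means the images are identical:

\mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}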
Scale   Size     Quality   SSIM
8/8     198462   59        0.940
8/9     196337   70        0.936
8/10    196133   79        0.934
8/11    197179   84        0.927
8/12    193872   89        0.915
8/13    197153   92        0.914
8/14    188334   94        0.899
8/15    198911   96        0.886
8/16    197190   97        0.869

Although the smaller images allowed a higher quality setting to be used, the SSIM value drops significantly. Numbers may of course be misleading, but the images below speak for themselves. These are cut-outs from the full image, the original on the left, unscaled JPEG-compressed in the middle, and JPEG with 8/16 scaling to the right.
Looking at these images, I do not need to hesitate before picking the JPEG variant I prefer.
Frame offset
The third and final extension proposed is quite simple and also quite pointless: a top-left cropping to be applied to the decoded image. The alleged utility of this feature would be to enable lossless cropping of a JPEG image. In a typical image workflow, however, JPEG is only used for the final published version, so the need for this feature appears quite far-fetched.
The grand finale
Throughout the text, the author makes references to “the fundamental DCT property for image representation.” In his own words:
This property was found by the author during implementation of the new DCT scaling features and is after his belief one of the most important discoveries in digital image coding after releasing the JPEG standard in 1992.
The secret is to be revealed in an annex to the main text. This annex quotes in full a post by the author to the comp.dsp Usenet group in a thread with the subject “why DCT.” Reading the entire thread proves quite amusing. A few excerpts follow.
The actual reason is much simpler, and therefore apparently very difficult to recognize by complicated-thinking people.
Here is the explanation:
What are people doing when they have a bunch of images and want a quick preview? They use thumbnails! What are thumbnails? Thumbnails are small downscaled versions of the original image! If you want more details of the image, you can zoom in stepwise by enlarging (upscaling) the image.
So with proper understanding of the fundamental DCT property, the MPEG folks could make their videos more scalable, but, as in the case of JPEG, they are unable to recognize this simple but basic property, unfortunately, and pursue rather inferior approaches in actual developments.
These are just phrases, and they don’t explain anything. But this is typical for the current state in this field : The relevant people ignore and deny the true reasons, and thus they turn in a circle and no progress is being made.
However, there are dark forces in action today which ignore and deny any fruitful advances in this field. That is the reason that we didn’t see any progress in JPEG for more than a decade, and as long as those forces dominate, we will see more confusion and less enlightenment. The truth is always simple, and the DCT *is* simple, but this fact is suppressed by established people who don’t want to lose their dubious position.
I believe a trip to the Total Perspective Vortex may be in order. Perhaps his tin-foil hat will save him.
- Beware the builtins
14 January 2010, by Mans — Compilers
GCC includes a large number of builtin functions allegedly providing optimised code for common operations not easily expressed directly in C. Rather than taking such claims at face value (this is GCC after all), I decided to conduct a small investigation to see how well a few of these functions are actually implemented for various targets.
For my test, I selected the following functions:

__builtin_bswap32: Byte-swap a 32-bit word.
__builtin_bswap64: Byte-swap a 64-bit word.
__builtin_clz: Count leading zeros in a word.
__builtin_ctz: Count trailing zeros in a word.
__builtin_prefetch: Prefetch data into cache.
To test the quality of these builtins, I wrapped each in a normal function, then compiled the code for these targets:
- ARMv7
- AVR32
- MIPS
- MIPS64
- PowerPC
- PowerPC64
- x86
- x86_64
In all cases the compiler flags were -O3 -fomit-frame-pointer plus any flags required to select a modern CPU model.
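Concretely, the wrappers need be no more elaborate than something like this (a sketch; the exact test file is not reproduced here):

/* builtins.c -- each builtin behind a plain function, so its expansion
   is easy to find in the generated assembly.  Example build:
   gcc -O3 -fomit-frame-pointer -S builtins.c */
unsigned bswap32(unsigned x) { return __builtin_bswap32(x); }
unsigned long long bswap64(unsigned long long x) { return __builtin_bswap64(x); }
int clz(unsigned x) { return __builtin_clz(x); }
int ctz(unsigned x) { return __builtin_ctz(x); }
void prefetch(const void *p) { __builtin_prefetch(p); }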
ARM
Both __builtin_clz and __builtin_prefetch generate the expected CLZ and PLD instructions respectively. The code for __builtin_ctz is reasonable for ARMv6 and earlier:

rsb r3, r0, #0
and r0, r3, r0
clz r0, r0
rsb r0, r0, #31
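In C terms, that sequence is the standard identity ctz(x) = 31 - clz(x & -x): negate, isolate the lowest set bit, count its leading zeros, subtract from 31. A minimal sketch:

/* Equivalent of the ARMv6 sequence above; valid for x != 0. */
static int ctz_via_clz(unsigned x)
{
    return 31 - __builtin_clz(x & -x);  /* x & -x isolates the lowest set bit */
}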
For ARMv7 (in fact v6T2), however, using the new bit-reversal instruction would have been better:
rbit r0, r0
clz r0, r0
I suspect this is simply a matter of the function not yet having been updated for ARMv7, which is perhaps even excusable given the relatively rare use cases for it.
The byte-reversal functions are where it gets shocking. Rather than use the REV instruction found from ARMv6 on, both of them generate external calls to __bswapsi2 and __bswapdi2 in libgcc, which is plain C code:

SItype
__bswapsi2 (SItype u)
{
  return ((((u) & 0xff000000) >> 24)
          | (((u) & 0x00ff0000) >> 8)
          | (((u) & 0x0000ff00) << 8)
          | (((u) & 0x000000ff) << 24));
}

DItype
__bswapdi2 (DItype u)
{
  return ((((u) & 0xff00000000000000ull) >> 56)
          | (((u) & 0x00ff000000000000ull) >> 40)
          | (((u) & 0x0000ff0000000000ull) >> 24)
          | (((u) & 0x000000ff00000000ull) >> 8)
          | (((u) & 0x00000000ff000000ull) << 8)
          | (((u) & 0x0000000000ff0000ull) << 24)
          | (((u) & 0x000000000000ff00ull) << 40)
          | (((u) & 0x00000000000000ffull) << 56));
}
While the 32-bit version compiles to a reasonable-looking shift/mask/or job, the 64-bit one is a real WTF. Brace yourselves:

push {r4, r5, r6, r7, r8, r9, sl, fp}
mov r5, #0
mov r6, #65280 ; 0xff00
sub sp, sp, #40 ; 0x28
and r7, r0, r5
and r8, r1, r6
str r7, [sp, #8]
str r8, [sp, #12]
mov r9, #0
mov r4, r1
and r5, r0, r9
mov sl, #255 ; 0xff
ldr r9, [sp, #8]
and r6, r4, sl
mov ip, #16711680 ; 0xff0000
str r5, [sp, #16]
str r6, [sp, #20]
lsl r2, r0, #24
and ip, ip, r1
lsr r7, r4, #24
mov r1, #0
lsr r5, r9, #24
mov sl, #0
mov r9, #-16777216 ; 0xff000000
and fp, r0, r9
lsr r6, ip, #8
orr r9, r7, r1
and ip, r4, sl
orr sl, r1, r2
str r6, [sp]
str r9, [sp, #32]
str sl, [sp, #36] ; 0x24
add r8, sp, #32
ldm r8, {r7, r8}
str r1, [sp, #4]
ldm sp, {r9, sl}
orr r7, r7, r9
orr r8, r8, sl
str r7, [sp, #32]
str r8, [sp, #36] ; 0x24
mov r3, r0
mov r7, #16711680 ; 0xff0000
mov r8, #0
and r9, r3, r7
and sl, r4, r8
ldr r0, [sp, #16]
str fp, [sp, #24]
str ip, [sp, #28]
stm sp, {r9, sl}
ldr r7, [sp, #20]
ldr sl, [sp, #12]
ldr fp, [sp, #12]
ldr r8, [sp, #28]
lsr r0, r0, #8
orr r7, r0, r7, lsl #24
lsr r6, sl, #24
orr r5, r5, fp, lsl #8
lsl sl, r8, #8
mov fp, r7
add r8, sp, #32
ldm r8, {r7, r8}
orr r6, r6, r8
ldr r8, [sp, #20]
ldr r0, [sp, #24]
orr r5, r5, r7
lsr r8, r8, #8
orr sl, sl, r0, lsr #24
mov ip, r8
ldr r0, [sp, #4]
orr fp, fp, r5
ldr r5, [sp, #24]
orr ip, ip, r6
ldr r6, [sp]
lsl r9, r5, #8
lsl r8, r0, #24
orr fp, fp, r9
lsl r3, r3, #8
orr r8, r8, r6, lsr #8
orr ip, ip, sl
lsl r7, r6, #24
and r5, r3, #16711680 ; 0xff0000
orr r7, r7, fp
orr r8, r8, ip
orr r4, r1, r7
orr r5, r5, r8
mov r9, r6
mov r1, r5
mov r0, r4
add sp, sp, #40 ; 0x28
pop {r4, r5, r6, r7, r8, r9, sl, fp}
bx lr
That’s right, 91 instructions to move 8 bytes around a bit. GCC definitely has a problem with 64-bit numbers. It is perhaps worth noting that the bswap_64 macro in glibc splits the 64-bit value into 32-bit halves which are then reversed independently, thus side-stepping this weakness of gcc.
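In outline, that approach looks like this (a sketch in the spirit of the glibc macro, not its exact code):

/* Swap each 32-bit half with plain shifts and masks -- a pattern gcc
   copes with -- then exchange the halves. */
static unsigned swap32(unsigned u)
{
    return (u >> 24) | ((u >> 8) & 0x0000ff00)
         | ((u << 8) & 0x00ff0000) | (u << 24);
}

static unsigned long long swap64(unsigned long long u)
{
    return ((unsigned long long)swap32((unsigned)u) << 32)
         | swap32((unsigned)(u >> 32));
}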
As a side note, ARM RVCT (armcc) compiles those functions perfectly into one and two REV instructions, respectively.

AVR32
There is not much to report here. The latest gcc version available is 4.2.4, which doesn’t appear to have the bswap functions. The other three are handled nicely, even using a bit-reverse for __builtin_ctz.

MIPS / MIPS64
The situation on MIPS is similar to ARM: both bswap builtins result in external libgcc calls, while the rest produce sensible code.
PowerPC
I can scarcely believe my eyes, but this one is actually not bad. The PowerPC has no byte-reversal instructions, yet someone seems to have taken the time to teach gcc a good instruction sequence for this operation. The PowerPC does have some powerful rotate-and-mask instructions which come in handy here. First the 32-bit version:
rotlwi r0,r3,8
rlwimi r0,r3,24,0,7
rlwimi r0,r3,24,16,23
mr r3,r0
blr
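For readers not fluent in PowerPC: rotlwi is a left rotate, and rlwimi rA,rS,SH,MB,ME inserts a rotated copy of rS into rA under the mask covering bits MB..ME (IBM numbering, bit 0 the most significant). In C, the sequence amounts to something like this sketch:

/* C rendering of the three-instruction sequence above. */
static unsigned rotl32(unsigned x, int n)
{
    return (x << n) | (x >> (32 - n));  /* n must be 1..31 here */
}

static unsigned ppc_bswap32(unsigned x)
{
    unsigned r = rotl32(x, 8);                               /* rotlwi r0,r3,8        */
    r = (r & ~0xff000000u) | (rotl32(x, 24) & 0xff000000u);  /* rlwimi r0,r3,24,0,7   */
    r = (r & ~0x0000ff00u) | (rotl32(x, 24) & 0x0000ff00u);  /* rlwimi r0,r3,24,16,23 */
    return r;                                                /* bytes now reversed    */
}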
The 64-bit byte-reversal simply applies the same sequence to each half of the value:
rotlwi r0,r3,8
rlwimi r0,r3,24,0,7
rlwimi r0,r3,24,16,23
rotlwi r3,r4,8
rlwimi r3,r4,24,0,7
rlwimi r3,r4,24,16,23
mr r4,r0
blr
Although I haven’t analysed that code carefully, it looks pretty good.
PowerPC64
Doing 64-bit operations is easier on a 64-bit CPU, right? For you and me perhaps, but not for gcc. Here __builtin_bswap64 gives us the now familiar __bswapdi2 call, and while not as bad as the ARM version, it is not pretty:

rldicr r0,r3,8,55
rldicr r10,r3,56,7
rldicr r0,r0,56,15
rldicl r11,r3,8,56
rldicr r9,r3,16,47
or r11,r10,r11
rldicr r9,r9,48,23
rldicl r10,r0,24,40
rldicr r0,r3,24,39
or r11,r11,r10
rldicl r9,r9,40,24
rldicr r0,r0,40,31
or r9,r11,r9
rlwinm r10,r3,0,0,7
rldicl r0,r0,56,8
or r0,r9,r0
rldicr r10,r10,8,55
rlwinm r11,r3,0,8,15
or r0,r0,r10
rldicr r11,r11,24,39
rlwinm r3,r3,0,16,23
or r0,r0,r11
rldicr r3,r3,40,23
or r3,r0,r3
blr
That is 6 times longer than the (presumably) hand-written 32-bit version.
x86 / x86_64
As one might expect, results on x86 are good. All the tested functions use the available special instructions. One word of caution though: the bit-counting instructions are very slow on some implementations, specifically the Atom, AMD chips, and the notoriously slow Pentium4E.
Conclusion
In conclusion, I would say gcc builtins can be useful to avoid fragile inline assembler. Before using them, however, one should make sure they are not in fact harmful on the required targets. Not even those builtins mapping directly to CPU instructions can be trusted.
- NAB 2010 wrapup
15 April 2010
Another year of NAB has come and gone. Making it out of Vegas with some remaining faith in humanity seems like a successful outcome. So, anything worth talking about at the show?
First off, there’s 3D. 3D is The Next Big Thing, and that was obvious to anyone who spent half a second on the show floor. Everything from camera rigs to post-production apps to display technology was all 3D, all the time. I’m not a huge fan of 3D in most cases, but the industry is at least feigning interest.
Luckily, at a show as big as NAB, there’s plenty of other cool stuff to see. So, what struck my fancy?
To start with, Avid and Adobe were showing new versions of Media Composer and Premiere. Both sounded pretty amazing on paper, but I must say I was somewhat underwhelmed by both in reality. Premiere felt a little rough around the edges - the Mercury Engine wasn’t the sort of next-generation tech that I expected. Media Composer 5 has some nice new tweaks, but it’s still rather Avid-y - which is good for Avid people, less interesting for the rest of us.
In other software news, Blackmagic Design was showing off some of what they’re doing with the DaVinci technology they acquired. Software-only DaVinci Resolve for $999 is a pretty amazing deal, and the demos were quite nice. That said, color correction is an art, so just making the technology cheaper isn’t necessarily going to dramatically change the number of folks who do it well - see Color.
Blackmagic also has a pile of new USB 3.0 hardware devices, including the absolutely gorgeous UltraStudio Pro. Makes me pine for USB 3.0 on the Mac.
On the production side, we saw new cameras from just about everyone. To start at the high end, the Arri Alexa was absolutely stunning. Perhaps the nicest digital cinema footage I’ve seen. Not only that, but they’ve worked out a usable workflow, recording to ProRes plus RAW. At the price point they’re promising, the world is going to get a lot more difficult for RED.
Sony’s new XDCAM EX gear is another good step forward for that format. Nothing groundbreaking, but another nice progression. I was kind of hoping we’d see 4:2:2 EX gear from them, but I suppose they need to justify the disc-based formats for a while longer.
The Panasonic AG-AF100 is another interesting camera, bringing Micro Four Thirds into video. The only strange thing is the recording side: AVCHD to SD cards. While I’m thrilled to see them using SD instead of P2, it sure would have been nice to have an AVC-Intra option.
Finally, Canon’s 4:2:2 XF cams are a nice option for the ENG/EFP market. Nothing groundbreaking, aside from the extra color sampling, but it’s a nice step up from what they’ve been doing.
Speaking of Canon, it’s interesting to see the ways that the 5D and 7D have made their way into mainstream filmmaking. At one point, I thought they’d be relegated to the indie community - folks looking for nice DoF on a budget. Instead, they seem to have been adopted by a huge range of productions, from episodic TV to features. While they’re not right for everyone, the price and quality make them an easy choice in many cases.
One of the stars of the show for me was the GoPro, a small waterproof HD camera that ships with a variety of mounts, designed to be used in places where you couldn’t or wouldn’t use a more full featured camera. No LCD, just a record button and a wide angle lens. I bought two.
Those are the things that stand out for me. While there was plenty of interesting stuff to be seen, given the current economic conditions at the University, I wasn’t exactly in a shopping mindset. The show definitely felt more optimistic than it did last year, and companies are again pushing out new products. However, attendance was about 20% lower than in 2008, and that was definitely noticeable on the show floor.