
Other articles (28)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors can edit their own information on the authors page

  • Automatic backup of SPIP channels

    1 April 2010, by

    As part of setting up an open platform, it is important for hosts to have fairly regular backups available in order to guard against any potential problem.
    To accomplish this task, we rely on two SPIP plugins: Saveauto, which performs a regular backup of the database as a mysql dump (usable in phpmyadmin), and mes_fichiers_2, which creates a zip archive of the site’s important data (documents, elements (...)

  • What exactly does this script do?

    18 January 2011, by

    This script is written in bash, so it can easily be used on any server.
    It is only compatible with a specific list of distributions (see the List of compatible distributions).
    Installing MediaSPIP dependencies
    Its main role is to install all the software dependencies required on the server side, namely:
    the basic tools needed to install the remaining dependencies; the development tools: build-essential (via APT from the official repositories); (...)

On other sites (5180)

  • Introducing the Piwik Java Tracker – Analytics for your Java based applications

    18 November 2015, by Brett — Community, Development

    Hello Piwik Community!

    My name is Brett Csorba, a Software Engineer out of the US. I’d like to introduce the Piwik Java Tracker project, an easy way to track usage data within your Java applications!

    When would I need to track users in a Java application? What’s wrong with front end tracking?

    Absolutely nothing! We encourage users to track information where it makes the most sense for them! But in cases where

    • you have a 100% Java based application
    • you expose a REST layer where users can bypass your front end tracking code
    • you have valuable data you want to track that is unnecessary or too sensitive to pass back to the user

    the Piwik Java Tracker can help you track the data you need.

    What exactly can it track?

    We aim to provide the full Tracking HTTP API. If you find we’ve left something out by mistake, let us know!

    You’ve sparked my curiosity; how would I use such a thing?

    Well, once you’ve installed Piwik and set up your first website, you can grab the latest jar and include it in your project. The dependencies needed to both use and test this library can be found here.

    This library is intended to be used for projects that support Java 8. The released binaries are built, tested, and deployed from Oracle JDK 8.

    Using this API is as simple as creating a new request

    PiwikRequest request = new PiwikRequest(1, new URL("http://my-site.com/action"));

    setting some more information if you want to,

    request.setActionName("myAction");
    request.setPageCustomVariable("key", "value");

    and firing the request.

    PiwikTracker tracker = new PiwikTracker("http://your-piwik-domain.tld/piwik.php");
    HttpResponse response = tracker.sendRequest(request);

    Check out this guide to using the API for some more information!
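
    Putting those pieces together, here is a complete sketch of a tiny tracking program. The site ID and URLs are placeholders, and the import paths are guesses based on the class names rather than something this post confirms, so check them against the jar you downloaded:

    // Complete end-to-end sketch. The import paths below are assumptions
    // inferred from the class names; verify them against the actual jar.
    import java.net.URL;
    import org.apache.http.HttpResponse;
    import org.piwik.java.tracking.PiwikRequest;
    import org.piwik.java.tracking.PiwikTracker;

    public class TrackingExample {
        public static void main(String[] args) throws Exception {
            // Build a request for site ID 1 against the action URL being tracked
            PiwikRequest request = new PiwikRequest(1, new URL("http://my-site.com/action"));
            request.setActionName("myAction");
            request.setPageCustomVariable("key", "value");

            // Point the tracker at your Piwik install and fire the request
            PiwikTracker tracker = new PiwikTracker("http://your-piwik-domain.tld/piwik.php");
            HttpResponse response = tracker.sendRequest(request);
            System.out.println(response);   // inspect the HTTP result
        }
    }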

    Looks cool so far; can I help out?

    Yes! Absolutely! Download the project, try it, break it without mercy! (Just make sure you tell us how.) Contribute to the project, or let us know what we can do to improve it. As with all open source projects, we need your help.

  • H.264 and VP8 for still image coding : WebP ?

    http://x264.nl/developers/Dark_Shikari/imagecoding/output.ogv
    1 October 2010, by Dark Shikari — google, H.264, psychovisual optimizations, VP8

    Update: the post now contains a Theora comparison as well; see below.

    JPEG is a very old lossy image format. By today’s standards, it’s awful compression-wise: practically every video format since the days of MPEG-2 has been able to tie or beat JPEG at its own game. The reasons people haven’t switched to something more modern practically always boil down to a simple one — it’s just not worth the hassle. Even if JPEG can be beaten by a factor of 2, convincing the entire world to change image formats after 20 years is nigh impossible. Furthermore, JPEG is fast, simple, and practically guaranteed to be free of any intellectual property worries. It’s been tried before: first JPEG-2000, then Microsoft’s JPEG XR; both tried to unseat JPEG. Neither got much of anywhere.

    Now Google is trying to dump yet another image format on us, “WebP”. But really, it’s just a VP8 intra frame. There are some obvious practical problems with this new image format in comparison to JPEG; it doesn’t even support all of JPEG’s features, let alone many of the much-wanted features JPEG was missing (alpha channel support, lossless support). It only supports 4:2:0 chroma subsampling, while JPEG can handle 4:2:2 and 4:4:4. Google doesn’t seem interested in adding any of these features either.

    But let’s get to the meat and see how these encoders stack up on compressing still images. As I explained in my original analysis, VP8 has the advantage of H.264’s intra prediction, which is one of the primary reasons why H.264 has such an advantage in intra compression. It only has i4x4 and i16x16 modes, not i8x8, so it’s not quite as fancy as H.264’s, but it comes close.
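
    For intuition about that gap: JPEG’s only prediction is of each block’s DC coefficient, carried over from the previous block, while H.264/VP8-style intra prediction builds a full pixel-domain prediction from the reconstructed top and left borders and transforms only the residual. A toy sketch of the simplest such mode, 4x4 DC prediction (illustrative names, not codec-exact arithmetic):

    // Toy sketch of 4x4 DC-mode intra prediction (illustrative, not codec-exact).
    // Every pixel of the block is predicted as the rounded mean of the eight
    // reconstructed border pixels; only the residual goes on to the transform.
    static int[][] dcPredictResidual(int[][] block, int[] top, int[] left) {
        int sum = 0;
        for (int i = 0; i < 4; i++) sum += top[i] + left[i];
        int dc = (sum + 4) / 8;                    // rounded average of the border
        int[][] residual = new int[4][4];
        for (int y = 0; y < 4; y++)
            for (int x = 0; x < 4; x++)
                residual[y][x] = block[y][x] - dc; // this is what gets transformed
        return residual;
    }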

    The test files are all around 155KB; download them for the exact filesizes. For all three, I did a binary search of quality levels to get the file sizes close. For x264, I encoded with --tune stillimage --preset placebo. For libvpx, I encoded with --best. For JPEG, I encoded with ffmpeg, then applied jpgcrush, a lossless jpeg compressor. I suspect there are better JPEG encoders out there than ffmpeg; if you have one, feel free to test it and post the results. The source image is the 200th frame of Parkjoy, from derf’s page (fun fact: this video was shot here! More info on the video here.).
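
    The “binary search of quality levels” is nothing exotic: treat the quality setting as a knob, and search for the value whose output size lands on the byte target. A sketch of that loop, with the actual encoder run hidden behind a hypothetical encodeAtQuality callback since each encoder exposes a different flag:

    import java.util.function.IntToLongFunction;

    class QualitySearch {
        // encodeAtQuality is a hypothetical stand-in: run the encoder at the
        // given quantizer and return the resulting file size in bytes.
        static int findQuality(IntToLongFunction encodeAtQuality, long targetBytes) {
            int lo = 0, hi = 63;                      // e.g. libvpx's quantizer range
            while (lo < hi) {
                int mid = (lo + hi) / 2;
                long size = encodeAtQuality.applyAsLong(mid);
                if (size > targetBytes) lo = mid + 1; // higher quantizer -> smaller file
                else hi = mid;                        // small enough; try lower quantizers
            }
            return lo;                                // smallest quantizer at/under target
        }
    }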

    Files: (x264 [154KB], vp8 [155KB], jpg [156KB])

    Results (decoded to PNG): (x264, vp8, jpg)

    This seems rather embarrassing for libvpx. Personally I think VP8 looks by far the worst of the bunch, despite JPEG’s blocking. What’s going on here? VP8 certainly has better entropy coding than JPEG does (by far!). It has better intra prediction (JPEG has just DC prediction). How could VP8 look worse? Let’s investigate.

    VP8 uses a 4×4 transform, which tends to blur and lose more detail than JPEG’s 8×8 transform. But that alone certainly isn’t enough to create such a dramatic difference. Let’s investigate a hypothesis — that the problem is that libvpx is optimizing for PSNR and ignoring psychovisual considerations when encoding the image. To test it, I’ll encode with --tune psnr --preset placebo in x264, turning off all psy optimizations.
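
    As a reminder of what “optimizing for PSNR” actually measures: PSNR is a pure mean-squared-error metric, so an encoder chasing it will gladly trade grain and texture for smoothness. A minimal sketch of the computation for 8-bit samples:

    // PSNR in dB between two 8-bit images stored as flat sample arrays.
    // Higher is "better" numerically, but the metric rewards blur over detail.
    static double psnr(int[] reference, int[] test) {
        double mse = 0;
        for (int i = 0; i < reference.length; i++) {
            int d = reference[i] - test[i];
            mse += (double) d * d;
        }
        mse /= reference.length;
        return 10.0 * Math.log10(255.0 * 255.0 / mse); // 255 = max 8-bit sample
    }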

    Files: (x264, optimized for PSNR [154KB]) [Note for the technical people: because adaptive quantization is off, to get the filesize on target I had to use a CQM here.]

    Results (decoded to PNG): (x264, optimized for PSNR)

    What a blur! Only somewhat better than VP8, and still worse than JPEG. And that’s using the same encoder and the same level of analysis — the only thing done differently is dropping the psy optimizations. Thus we come back to the conclusion I’ve made over and over on this blog — the encoder matters more than the video format, and good psy optimizations are more important than anything else for compression. libvpx, a much more powerful encoder than ffmpeg’s jpeg encoder, loses because it tries too hard to optimize for PSNR.

    These results raise an obvious question — is Google nuts? I could understand the push for “WebP” if it were better than JPEG. And sure, technically as a file format it is, and an encoder could be made for it that’s better than JPEG. But note the word “could”. Why announce it now, when libvpx is still such an awful encoder? You’d have to be nuts to try to replace JPEG with this blurry mess as-is. Now, I don’t expect libvpx to be able to compete with x264, the best encoder in the world — but surely it should be able to beat an image format released in 1992?

    Earth to Google : make the encoder good first, then promote it as better than the alternatives. The reverse doesn’t work quite as well.

    Addendum (added Oct. 2, 03:51):

    maikmerten gave me a Theora-encoded image to compare as well. Here’s the PNG and the source (155KB). And yes, that’s Theora 1.2 (Ptalarbvorm) beating VP8 handily. Now that is embarrassing. Guess what the main new feature of Ptalarbvorm is? Psy optimizations…

    Addendum (added Apr. 20, 23:33):

    There’s a new webp encoder out, written from scratch by skal (available in libwebp). It’s significantly better than libvpx — not like that says much — but it should probably beat JPEG much more readily now. The encoder design is rather unique — it basically uses K-means for a large part of the encoding process. It still loses to x264, but that was expected.
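
    For readers who haven’t met the reference: K-means alternates between assigning each sample to its nearest of k centers and moving each center to the mean of its cluster. A minimal one-dimensional sketch of the primitive (purely illustrative; it says nothing about how libwebp actually structures its encoder):

    // Minimal 1-D K-means sketch: assign each value to its nearest center,
    // then move each center to the mean of the values assigned to it.
    static double[] kmeans(double[] data, int k, int iterations) {
        double[] centers = new double[k];
        for (int i = 0; i < k; i++)                   // crude initial spread
            centers[i] = data[i * data.length / k];
        for (int it = 0; it < iterations; it++) {
            double[] sum = new double[k];
            int[] count = new int[k];
            for (double v : data) {
                int best = 0;                         // nearest center wins
                for (int c = 1; c < k; c++)
                    if (Math.abs(v - centers[c]) < Math.abs(v - centers[best]))
                        best = c;
                sum[best] += v;
                count[best]++;
            }
            for (int c = 0; c < k; c++)               // update step
                if (count[c] > 0) centers[c] = sum[c] / count[c];
        }
        return centers;
    }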

  • Playing With Emscripten and ASM.js

    1 March 2014, by Multimedia Mike — General

    The last 5 years or so have provided a tremendous amount of hype about the capabilities of JavaScript. I think it really kicked off when Google announced their Chrome web browser in September, 2008 along with its V8 JS engine. This seemed to spark an arms race in JS engine performance along with much hyperbole that eventually all software could, would, and/or should be written in straight JavaScript for maximum portability and future-proofing, perhaps aided by Emscripten, a tool which magically transforms C and C++ code into JS. The latest round of rhetoric comes courtesy of something called asm.js which purports to narrow the gap between JS and native code performance.

    I haven’t been a believer, to express it charitably. But I wanted to be certain, so I set out to devise my own experiment to test modern JS performance.

    Up Front Summary
    I was extremely surprised that my experiment demonstrated JS performance FAR beyond my expectations. There might be something to these claims of magnificent JS speed in numerical applications. Basically, here were my thoughts during the process:

    • There’s no way that JavaScript can come anywhere close to C performance for a numerically intensive operation; a simple experiment should demonstrate this.
    • Here’s a straightforward C program to perform a simple yet numerically intensive operation.
    • Let’s compile the C program on gcc and get some baseline performance numbers.
    • Let’s use Emscripten to convert the C program to JavaScript and run it under Chrome.
    • Ha! Pitiful JS performance, just as I expected!
    • Try the same program under Firefox, since Firefox is supposed to have some crazy optimization for asm.js code, allegedly emitted by Emscripten.
    • LOL! Firefox performs even worse than Chrome!
    • Wait a minute… the Emscripten documentation mentioned using optimization levels for generating higher performance JS, so try ‘-O1’.
    • Umm… wow: Chrome’s performance increased dramatically! What about Firefox? Not only is Firefox faster than Chrome, it’s faster than the gcc-generated code!
    • As my faith in C is suddenly shaken to its core, I remembered to compile the gcc version with an explicit optimization level. The native C version pulled ahead of Firefox again, but the Firefox code is still close.
    • Aha! This is just desktop, but what about mobile? One of the leading arguments for converting everything to pure JavaScript is that such programs will magically run perfectly in mobile browsers. So I wager that this is where the experiment will fall over.
    • I proceed to try the same converted program on a variety of mobile platforms.
    • The mobile platforms perform rather admirably as well.
    • I am surprised.

    The Experiment
    I wanted to run a simple yet numerically intensive and relevant benchmark, and one I am familiar with. I settled on JPEG image decoding. Again, I wanted to keep this simple, ideally in a single file, because I didn’t know how hard it might be to deal with Emscripten. I found NanoJPEG, which is a straightforward JPEG decoder contained in a single C file.

    I altered nanojpeg.c (to a new file called nanojpeg-static.c) such that the main() program would always load a 1920×1080 (a.k.a. 1080p) JPEG file (“bbb-1080p-title.jpg”, the Big Buck Bunny title), rather than requiring a command line argument. Then I used gettimeofday() to profile the core decoding function (njDecode()).

    Compiling with gcc and profiling execution:

    gcc -Wall nanojpeg-static.c -o nanojpeg-static
    ./nanojpeg-static
    

    Optimization levels such as -O0, -O3, or -Os can be applied to the compilation command.

    For JavaScript conversion, I installed Emscripten and converted using:

    /path/to/emscripten/emcc nanojpeg-static.c -o nanojpeg.html \
      --preload-file bbb-1080p-title.jpg -s TOTAL_MEMORY=32000000
    

    The ‘--preload-file’ option makes the file available to the program via standard C-style file I/O functions. The ‘-s TOTAL_MEMORY’ option was necessary because the default of 16 MB wasn’t enough. Again, the -O optimization levels can be passed in.

    For running, the .html file is loaded (via webserver) in a web browser.

    Want To Try It Yourself ?
    I put the files here: http://multimedia.cx/emscripten/. That includes the .c file, the JPEG file, and the Emscripten-converted files built with -O0, -O1, -O2, -O3, -Os, and no optimization switch.

    Results and Charts
    Here is the spreadsheet with the raw results.

    I ran this experiment using Ubuntu Linux 12.04 on an Intel Atom N450-based netbook. For this part, I was able to compare the Chrome and Firefox browser results against the C results:

    [Chart: decode times for native C vs. Chrome vs. Firefox on the Atom netbook]

    These are the results for a 2nd generation Android Nexus 7 using both Chrome and Firefox:

    [Chart: decode times for Chrome vs. Firefox on the Nexus 7]

    Here is the result for an iPad 2 running iOS 7 and Safari; there is no Firefox for iOS, and while there is a version of Chrome for iOS, it apparently isn’t able to leverage an optimized JS engine. Chrome takes so long to complete this experiment that there’s no reason to muddy the graph with its results:

    [Chart: decode times for Safari on the iPad 2]

    Interestingly, -O1 tends to provide better optimization than levels 2 or 3, and -Os (optimize for size) seems to be a good all-around choice.

    Don’t Get Too Smug
    JavaScript can indeed get amazing performance in this day and age. Please be advised, however, that this isn’t the best that a C decoder implementation can possibly do. This version doesn’t leverage any SIMD extensions. According to profiling (using gprof against the C code), sample saturation in color conversion dominates, followed by the inverse DCT functions; both are common cases for SIMD ASM or intrinsics. Allegedly, there will be some support for JS SIMD optimizations some day. We’ll see.

    Implications For Development
    I’m still not especially motivated to try porting the entire Native Client game music player codebase to JavaScript. I’m still wondering about the recommended development flow. How are you supposed to develop for Emscripten and asm.js? From what I can tell, Emscripten is not designed as a simple aid for porting C/C++ code to JS. No, it reduces the code to JS that you can’t possibly maintain. This seems to imply that the C/C++ code needs to be developed and debugged in its entirety and then converted to JS, which seems arduous.