
Other articles (42)

  • Enabling/disabling features (plugins)

    18 February 2011, by

    To manage the addition and removal of extra features (plugins), MediaSPIP uses SVP as of version 0.2.
    SVP makes it easy to activate plugins from the MediaSPIP configuration area.
    To reach it, simply go to the configuration area and open the "Gestion des plugins" page.
    By default MediaSPIP ships with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work smoothly with each (...)

  • Enabling visitor registration

    12 April 2011, by

    It is also possible to enable visitor registration, which lets anyone open an account on the channel in question, for open projects for example.
    To do so, simply go to the site’s configuration area and choose the "Gestion des utilisateurs" submenu. The first form shown corresponds to this feature.
    By default, MediaSPIP created on initialization a menu item in the top menu of the page leading (...)

  • Encoding and processing into web-friendly formats

    13 avril 2011, par

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and in MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and in MP3 (supported by both HTML5 and Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search-engine indexing, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

On other sites (4857)

  • avcodec/jpeg2000 : replace naive pow call with smarter exp2fi

    8 December 2015, by Ganesh Ajjanagadde


    pow is a very wasteful function for this purpose. A low-hanging fruit
    would be simply to replace it with exp2f, and that does yield some speedup.
    However, there are 3 drawbacks to this:
    1. It does not exploit the integer nature of the argument.
    2. (minor) Some platforms lack a proper exp2f routine, making the benefit
    available only on non-broken libms.
    3. exp2f does not solve the issue that plagues pow, namely terrible
    worst-case performance. This is a fundamental issue known as the
    "table-maker’s dilemma", recognized by Prof. Kahan himself and
    subsequently elaborated and researched by many others. All this is clear from the benchmarks below.

    The new exp2fi exploits the IEEE-754 format to get very good performance even in
    the worst case for integer powers of 2. This solves all the issues noted
    above. The function was tested with clang’s UBSan over [-1000, 1000] (beyond the range
    relevant here, which is [-255, 255]); the patch itself was tested with FATE.

    Benchmarks obtained on x86-64 (Haswell, GNU/Linux) via 10^5 iterations of
    the pow call wrapped in START/STOP timers, running ffplay /samples/jpeg2000/chiens_dcinema2K.mxf.
    Results for low numbers of runs are also given to prove the point about the worst case:

    pow:
    216270 decicycles in pow, 1 runs, 0 skips
    110175 decicycles in pow, 2 runs, 0 skips
    56085 decicycles in pow, 4 runs, 0 skips
    29013 decicycles in pow, 8 runs, 0 skips
    15472 decicycles in pow, 16 runs, 0 skips
    8689 decicycles in pow, 32 runs, 0 skips
    5295 decicycles in pow, 64 runs, 0 skips
    3599 decicycles in pow, 128 runs, 0 skips
    2748 decicycles in pow, 256 runs, 0 skips
    2304 decicycles in pow, 511 runs, 1 skips
    2072 decicycles in pow, 1022 runs, 2 skips
    1963 decicycles in pow, 2044 runs, 4 skips
    1894 decicycles in pow, 4091 runs, 5 skips
    1860 decicycles in pow, 8184 runs, 8 skips

    exp2f:
    134140 decicycles in pow, 1 runs, 0 skips
    68110 decicycles in pow, 2 runs, 0 skips
    34530 decicycles in pow, 4 runs, 0 skips
    17677 decicycles in pow, 8 runs, 0 skips
    9175 decicycles in pow, 16 runs, 0 skips
    4931 decicycles in pow, 32 runs, 0 skips
    2808 decicycles in pow, 64 runs, 0 skips
    1747 decicycles in pow, 128 runs, 0 skips
    1208 decicycles in pow, 256 runs, 0 skips
    952 decicycles in pow, 512 runs, 0 skips
    822 decicycles in pow, 1024 runs, 0 skips
    765 decicycles in pow, 2047 runs, 1 skips
    722 decicycles in pow, 4094 runs, 2 skips
    693 decicycles in pow, 8190 runs, 2 skips

    exp2fi:
    2740 decicycles in pow, 1 runs, 0 skips
    1530 decicycles in pow, 2 runs, 0 skips
    955 decicycles in pow, 4 runs, 0 skips
    622 decicycles in pow, 8 runs, 0 skips
    477 decicycles in pow, 16 runs, 0 skips
    368 decicycles in pow, 32 runs, 0 skips
    317 decicycles in pow, 64 runs, 0 skips
    291 decicycles in pow, 128 runs, 0 skips
    277 decicycles in pow, 256 runs, 0 skips
    268 decicycles in pow, 512 runs, 0 skips
    265 decicycles in pow, 1024 runs, 0 skips
    263 decicycles in pow, 2048 runs, 0 skips
    263 decicycles in pow, 4095 runs, 1 skips
    260 decicycles in pow, 8191 runs, 1 skips

    Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
    Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>

    • [DH] libavcodec/jpeg2000.c
  • lavu/libm : add erf hack and make dynaudnorm available everywhere

    20 December 2015, by Ganesh Ajjanagadde


    Source code is from Boost:
    http://www.boost.org/doc/libs/1_46_1/boost/math/special_functions/erf.hpp
    with appropriate modifications for FFmpeg.

    Tested on interval -6 to 6 (beyond which it saturates), NAN, INFINITY
    under -fsanitize=undefined on clang to test for possible undefined behavior.

    This function turns out to be essentially as accurate as, and faster than, the
    libm versions (GNU’s, BSD’s, Mac OS X’s), and I can think of 3 reasons why upstream
    does not use it:
    1. They are not aware of it.
    2. They are concerned about licensing; this applies especially to GNU
    libm.
    3. They do not know and/or appreciate the benefits of rational
    approximations over polynomial approximations. Boost uses them to great
    effect; see e.g. swr/resample for a Bessel function derived from them, which is
    similarly superior to the libm variants.

    First, performance.
    Sample benchmark (clang -O3, Haswell, GNU/Linux):

    3e8 values evenly spaced from 0 to 6
    time (libm):
    ./test 13.39s user 0.00s system 100% cpu 13.376 total
    time (Boost-based):
    ./test 9.20s user 0.00s system 100% cpu 9.190 total

    Second, accuracy.
    1e8 evaluation points from 0 to 6
    maxdiff (absolute): 2.2204460492503131e-16,
    occurring at a point where the libm erf is correctly rounded and this one is not.

    Illustration of the superior rounding of this function:
    arg:   0.83999999999999997
    erf:   0.76514271145499457
    boost: 0.76514271145499446
    real:  0.76514271145499446

    i.e. libm is actually incorrectly rounded here. Note that this is clear from
    https://github.com/JuliaLang/openlibm/blob/master/src/s_erf.c (the Sun
    implementation used by both the BSD and GNU libms), where only 1 ulp is
    guaranteed.

    Reasons it is not easy/worthwhile to create a "correctly rounded"
    variant of this function (i.e. 0.5 ulp):
    1. Upstream libms don’t do it anyway, so we can’t guarantee this unless
    we force this implementation on all platforms. That is not easy, as the
    linker would complain unless measures are taken.
    2. Nothing in FFmpeg cares or can care about such things, due to the
    above and FFmpeg’s nature.
    3. Creating a correctly rounded function would in practice need some use of long
    double/fma. long double, although C89/C90, unfortunately has problems on
    ppc; fixing that requires toolchain flags/configure work. In any case this
    would be slower, for minuscule gain.

    Reviewed-by: James Almer <jamrial@gmail.com>
    Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>

    • [DH] configure
    • [DH] libavutil/libm.h
  • Inside WebM Technology : The VP8 Alternate Reference Frame

    15 June 2010, by noreply@blogger.com (John Luther) — inside webm, vp8

    Since the WebM project was open-sourced just a week ago, we’ve seen blog posts and articles about its capabilities. As an open project, we welcome technical scrutiny and contributions that improve the codec. We know from our extensive testing that VP8 can match or exceed other leading codecs, but to get the best results, it helps to understand more about how the codec works. In this first of a series of blog posts, I’ll explain some of the fundamental techniques in VP8, along with examples and metrics.

    The alternate reference frame is one of the most exciting quality innovations in VP8. Let’s delve into how VP8 uses these frames to improve prediction and thereby overall video quality.

    Alternate Reference Frames in VP8

    VP8 uses three types of reference frames for inter prediction: the last frame, a "golden" frame (one frame worth of decompressed data from the arbitrarily distant past) and an alternate reference frame. Overall, this design has a much smaller memory footprint on both encoders and decoders than designs with many more reference frames. In video compression it is very rare for more than three reference frames to provide significant quality benefit, while the undesirable increase in memory footprint from the extra frames is substantial.

    Unlike other types of reference frames used in video compression, which are displayed to the user by the decoder, the VP8 alternate reference frame is decoded normally but is never shown to the user. It is used solely as a reference to improve inter prediction for other coded frames. Because alternate reference frames are not displayed, VP8 encoders can use them to transmit any data that are helpful to compression. For example, a VP8 encoder can construct one alternate reference frame from multiple source frames, or it can create an alternate reference frame using different macroblocks from hundreds of different video frames.

    The current VP8 implementation enables two different types of usage for the alternate reference frame: noise-reduced prediction and past/future directional prediction.

    Noise-Reduced Prediction

    The alternate reference frame is transmitted and decoded similarly to other frames, so its usage adds no extra computation in decoding. The VP8 encoder, however, is free to use more sophisticated processing to create it in offline encoding. One application of the alternate reference frame is noise-reduced prediction. In this application, the VP8 encoder uses multiple input source frames to construct one reference frame through temporal or spatial noise filtering. This "noise-free" alternate reference frame is then used to improve prediction for encoding subsequent frames.

    You can make use of this feature by setting ARNR parameters in VP8 encoding, where ARNR stands for "Alternate Reference Noise Reduction." A sample two-pass encoding setting with the parameters:

    --arnr-maxframes=5 --arnr-strength=3

    enables the encoder to use up to 5 consecutive input source frames to produce one alternate reference frame, using a filtering strength of 3. Here is an example showing the quality benefit of using this experimental ARNR feature on the standard test clip "Hall Monitor." (Each line on the graph represents the quality of an encoded stream on a given clip at multiple datarates. Higher points on the Y axis (PSNR) indicate better quality.)


    The only difference between the two curves in the graph is that VP8_ARNR was produced by encoding with the ARNR parameters and VP8_NO_ARNR was not. As we can see from the graph, noise-reduced prediction is very helpful to compression quality when encoding noisy sources. We’ve just started to explore this idea but have already seen strong improvements on noisy input clips similar to "Hall Monitor." We feel there’s a lot more we can do in this area.

    Improving Prediction without B Frames

    The lack of B frames in VP8 has sparked some discussion about its ability to achieve competitive compression efficiency. VP8 encoders, however, can make intelligent use of the golden reference and the alternate reference frames to compensate. The VP8 encoder can choose to transmit an alternate reference frame that resembles a "future" frame, so that encoding of subsequent frames can make use of information from the past (last frame and golden frame) and from the future (alternate reference frame). Effectively, this lets the encoder achieve results similar to bidirectional (B frame) prediction without requiring frame reordering in the decoder. In two-pass encoding mode, compression can be improved in the VP8 encoder by using encoding parameters that enable lagged encoding and automatic placement of alternate reference frames:

    --auto-alt-ref=1 --lag-in-frames=16

    Used this way, the VP8 encoder can achieve improved prediction and compression efficiency without increasing the decoder’s complexity:


    In the video compression community, "Mobile and calendar" is known as a clip that benefits significantly from the use of B frames. The graph above illustrates that the alternate reference frame benefits VP8 significantly without using B frames.

    Keep an eye on this blog for more posts about VP8 encoding. You can find more information on the above encoding parameters, and other detailed instructions for our VP8 encoders, on our site, or join our discussion list.

    Yaowu Xu, Ph.D. is a codec engineer at Google.