Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (7)

  • Add notes and captions to images

    7 February 2011, by

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Supported formats

    28 January 2010, by

    The following commands provide information about the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    As a first step, we (...)
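
    For reference, a minimal C sketch of how similar information can be listed programmatically through libavcodec rather than the ffmpeg command line. This is an illustration only (not part of the original article) and assumes FFmpeg 4.0 or later, where av_codec_iterate() is available:

     #include <stdio.h>
     #include <libavcodec/avcodec.h>

     /* Enumerate every codec known to the linked libavcodec build,
        roughly the information `ffmpeg -codecs` prints. */
     int main(void)
     {
         void *opaque = NULL;
         const AVCodec *codec;

         while ((codec = av_codec_iterate(&opaque)) != NULL)
             printf("%-20s %-7s %s\n",
                    codec->name,
                    av_codec_is_encoder(codec) ? "encoder" : "decoder",
                    codec->long_name ? codec->long_name : "");
         return 0;
     }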

  • Videos

    21 April 2011, by

    As with "audio" documents, Mediaspip displays videos whenever possible using the HTML5 video tag.
    One drawback of this tag is that it is not handled correctly by some browsers (Internet Explorer, to name no names) and that each browser natively supports only certain video formats.
    Its main advantage, on the other hand, is that video playback is handled natively by the browser, which makes it possible to do without Flash and (...)

On other sites (2239)

  • Codec profile of AAC media file [closed]

    9 January 2013, by Pavle

    I need to find out the codec profile of an AAC media file. (I want to know whether the file is HE-AAC or some other codec profile.)

    I tried source code like this:

    #include <libavformat/avformat.h>

    AVFormatContext* pFormatCtx = NULL;
    avformat_open_input(&pFormatCtx, input_path, NULL, NULL);
    /* ... */
    avformat_find_stream_info(pFormatCtx, NULL);
    /* ... */
    /* I hope I'll get FF_PROFILE_AAC_MAIN or FF_PROFILE_AAC_LOW or FF_PROFILE_AAC_HE ... */
    int wanted_profile = pFormatCtx->streams[audio_stream_index]->codec->profile;

    /* but I always get FF_PROFILE_UNKNOWN :( */

    Can anyone help me figure out how to do this, please?
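
    For what it's worth, a minimal sketch of one way to query the profile with a current FFmpeg. This is an added illustration, not part of the original question; it assumes a build recent enough to have AVCodecParameters (codecpar) and av_get_profile_name(). Note that for HE-AAC the demuxer alone may still report FF_PROFILE_UNKNOWN or plain AAC-LC, because SBR can be signalled implicitly and only discovered once frames are actually decoded:

     #include <stdio.h>
     #include <libavformat/avformat.h>
     #include <libavcodec/avcodec.h>

     /* Print the codec id and profile of the first audio stream in `path`. */
     static int print_audio_profile(const char *path)
     {
         AVFormatContext *fmt = NULL;

         if (avformat_open_input(&fmt, path, NULL, NULL) < 0)
             return -1;
         if (avformat_find_stream_info(fmt, NULL) < 0) {
             avformat_close_input(&fmt);
             return -1;
         }

         int idx = av_find_best_stream(fmt, AVMEDIA_TYPE_AUDIO, -1, -1, NULL, 0);
         if (idx < 0) {
             avformat_close_input(&fmt);
             return -1;
         }

         const AVCodecParameters *par = fmt->streams[idx]->codecpar;
         const AVCodec *dec = avcodec_find_decoder(par->codec_id);
         const char *name = dec ? av_get_profile_name(dec, par->profile) : NULL;

         printf("codec_id=%d profile=%d (%s)\n",
                par->codec_id, par->profile, name ? name : "unknown");

         avformat_close_input(&fmt);
         return 0;
     }

    If the profile still comes back as FF_PROFILE_UNKNOWN, opening a decoder and decoding a few frames, then reading the profile from the decoder context, is typically the next thing to try.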

  • VP8 Documentation and Test Vector Contributions

    14 October 2010, by noreply@blogger.com (John Luther)

    Janne Salonen of the WebM team in Oulu, Finland (formerly On2 Finland) has added a tabular description of the VP8 syntax to the VP8 Bitstream Guide. The new annex provides a concise reference of the elements in the bitstream, and we hope it will make implementing and testing VP8 decoders easier. The updated document and source can be downloaded from our documentation page.

    We’re working on more improvements to the bitstream guide and invite other community members to help. As with the VP8 code, we gladly give attribution credit to documentation contributors and have added an AUTHORS file to the bitstream-guide Git repository.

    New VP8 Test Vectors

    The Oulu team has also produced some new VP8 test vectors. We analyzed a large set of WebM videos and produced two important corner use cases. The first produces the worst-case memory bandwidth (i.e., lots of global motion, all fractional motion vectors). The second produces the worst-case boolean decoder bin rate over dozens of consecutive frames. These vectors have been added to the VP8 test repository. Our team will consider other corner cases in the next batch of streams we add to the repository.

    Aki Kuusela is Hantro Embedded Engineering Manager at Google.

  • Tour of Part of the VP8 Process

    18 November 2010, by Multimedia Mike — VP8

    My toy VP8 encoder outputs a lot of textual data to illustrate exactly what it’s doing. For those who may not be exactly clear on how this or related algorithms operate, this may prove illuminating.

    Let’s look at subblock 0 of macroblock 0 of a luma plane:

     subblock 0 (original)
      92  91  89  86
      91  90  88  86
      89  89  89  88
      89  87  88  93
    

    Since it’s in the top-left corner of the image to be encoded, the phantom samples above and to the left are implicitly 128 for the purpose of intra prediction (in the VP8 algorithm).

     subblock 0 (original)
         128 128 128 128
     128  92  91  89  86
     128  91  90  88  86
     128  89  89  89  88
     128  89  87  88  93
    


    Using the 4×4 DC prediction mode means averaging the 4 top predictors and 4 left predictors. So, the predictor is 128. Subtract this from each element of the subblock:

     subblock 0, predictor removed
     -36 -37 -39 -42
     -37 -38 -40 -42
     -39 -39 -39 -40
     -39 -41 -40 -35
    

    Next, run the subblock through the forward transform:

     subblock 0, transformed
     -312   7   1   0
        1  12  -5   2
        2  -3   3  -1
        1   0  -2   1
    

    Quantize (integer divide) each element; the DC (first element) and AC (rest of the elements) quantizers are both 4:

     subblock 0, quantized
     -78   1   0   0
       0   3  -1   0
       0   0   0   0
       0   0   0   0
    

    The above block contains the coefficients that are actually transmitted (zigzagged and entropy-encoded) through the bitstream and decoded on the other end.
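
    As a small aside, here is what the zigzag reordering step looks like for the quantized subblock above. The scan order in this sketch is the conventional 4×4 zigzag, assumed here purely for illustration; the normative VP8 scan order is defined in the bitstream guide.

     #include <stdio.h>

     /* Conventional 4x4 zigzag scan order (assumed for illustration). */
     static const int zigzag[16] = { 0, 1, 4, 8, 5, 2, 3, 6,
                                     9, 12, 13, 10, 7, 11, 14, 15 };

     int main(void)
     {
         /* Quantized coefficients of subblock 0, row-major, as shown above. */
         const int quantized[16] = { -78, 1,  0, 0,
                                       0, 3, -1, 0,
                                       0, 0,  0, 0,
                                       0, 0,  0, 0 };

         /* Reorder so low-frequency coefficients come first; the long run of
            trailing zeros is what the entropy coder can represent cheaply. */
         for (int i = 0; i < 16; i++)
             printf("%d ", quantized[zigzag[i]]);
         printf("\n");   /* -78 1 0 0 3 0 0 -1 0 0 0 0 0 0 0 0 */
         return 0;
     }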

    The decoding process looks something like this: after the same coefficients are decoded and rearranged, they are dequantized (multiplied) by the original quantizers:

     subblock 0, dequantized
     -312   4   0   0
        0  12  -4   0
        0   0   0   0
        0   0   0   0
    

    Note that these coefficients are not exactly the same as the original, pre-quantized coefficients. This is a large part of where the “lossy” in “lossy video compression” comes from.
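
    To make the loss concrete, here is a tiny sketch of the quantize/dequantize round trip described above, applied to the transformed coefficients of subblock 0. It is an added illustration that simply follows the integer divide/multiply description in this post, with both quantizers set to 4:

     #include <stdio.h>

     int main(void)
     {
         /* Forward-transformed coefficients of subblock 0, as shown above. */
         const int coeff[16] = { -312,  7,  1,  0,
                                    1, 12, -5,  2,
                                    2, -3,  3, -1,
                                    1,  0, -2,  1 };
         const int q = 4;   /* DC and AC quantizers are both 4 in this example */

         for (int i = 0; i < 16; i++) {
             /* Integer division truncates toward zero, matching the worked example. */
             int quantized   = coeff[i] / q;
             /* The decoder only ever sees `quantized`, so multiplying back cannot
                recover the discarded low bits; that is the lossy step. */
             int dequantized = quantized * q;
             printf("%4d -> %4d -> %4d%s", coeff[i], quantized, dequantized,
                    (i % 4 == 3) ? "\n" : "  ");
         }
         return 0;
     }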

    Next, the decoder generates a base predictor subblock. In this case, it’s all 128 (DC prediction for top-left subblock):

     subblock 0, predictor
      128 128 128 128
      128 128 128 128
      128 128 128 128
      128 128 128 128
    

    Finally, the dequantized coefficients are shoved through the inverse transform and added to the base predictor block:

     subblock 0, reconstructed
      91  91  89  85
      90  90  89  87
      89  88  89  90
      88  88  89  92
    

    Again, not exactly the same as the original block, but an incredible facsimile thereof.
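
    To put a number on "incredible facsimile", here is a quick check of the reconstruction error for subblock 0, using the original and reconstructed values shown above (an added illustration, not from the original post):

     #include <stdio.h>
     #include <stdlib.h>

     int main(void)
     {
         /* Original and reconstructed samples of subblock 0, from the blocks above. */
         const int original[16]      = { 92, 91, 89, 86,
                                         91, 90, 88, 86,
                                         89, 89, 89, 88,
                                         89, 87, 88, 93 };
         const int reconstructed[16] = { 91, 91, 89, 85,
                                         90, 90, 89, 87,
                                         89, 88, 89, 90,
                                         88, 88, 89, 92 };
         int max_err = 0, sse = 0;

         for (int i = 0; i < 16; i++) {
             int d = original[i] - reconstructed[i];
             if (abs(d) > max_err)
                 max_err = abs(d);
             sse += d * d;
         }
         /* For this block every sample lands within 2 of the original. */
         printf("max abs error = %d, sum of squared error = %d\n", max_err, sse);
         return 0;
     }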

    Note that this decoding-after-encoding demonstration is not merely pedagogical: the encoder has to decode the subblock because the encoding of successive subblocks may depend on this subblock. The encoder can’t rely on the original representation of the subblock because the decoder won’t have that; it will have the reconstructed block.

    For example, here’s the next subblock:

     subblock 1 (original)
      84  84  87  90
      85  85  86  93
      86  83  83  89
      91  85  84  87
    

    Let’s assume DC prediction once more. The 4 top predictors are still all 128 since this subblock lies along the top row. However, the 4 left predictors are the right edge of the subblock reconstructed in the previous example:

     subblock 1 (original)
        128 128 128 128
     85  84  84  87  90
     87  85  85  86  93
     90  86  83  83  89
     92  91  85  84  87
    

    The DC predictor is computed as (128 + 128 + 128 + 128 + 85 + 87 + 90 + 92 + 4) / 8 = 108 (the extra +4 is for rounding considerations). (Note that in this case, using the original subblock’s right edge would also have resulted in 108, but that’s beside the point.)
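
    Here is a small sketch of that DC predictor computation (sum the 4 samples above and the 4 samples to the left, add 4 for rounding, then divide by 8), using the numbers from the two subblocks in this post:

     #include <stdio.h>

     /* DC prediction for a 4x4 subblock: rounded average of the 4 samples
        above and the 4 samples to the left, as described in the text. */
     static int dc_predict_4x4(const int above[4], const int left[4])
     {
         int sum = 4;   /* the +4 gives rounding before the divide by 8 */
         for (int i = 0; i < 4; i++)
             sum += above[i] + left[i];
         return sum / 8;
     }

     int main(void)
     {
         /* Subblock 0 is in the top-left corner: all phantom neighbours are 128. */
         const int above0[4] = { 128, 128, 128, 128 };
         const int left0[4]  = { 128, 128, 128, 128 };

         /* Subblock 1: still 128 above, but the left neighbours are the right
            edge of the *reconstructed* subblock 0. */
         const int above1[4] = { 128, 128, 128, 128 };
         const int left1[4]  = { 85, 87, 90, 92 };

         printf("subblock 0 predictor: %d\n", dc_predict_4x4(above0, left0)); /* 128 */
         printf("subblock 1 predictor: %d\n", dc_predict_4x4(above1, left1)); /* 108 */
         return 0;
     }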

    Continuing through the same process as in subblock 0:

     subblock 1, predictor removed
     -24 -24 -21 -18
     -23 -23 -22 -15
     -22 -25 -25 -19
     -17 -23 -24 -21
    

     subblock 1, transformed
     -173   -9   14   -1
        2  -11   -4    0
        1    6   -2    3
       -5    1    0    1

     subblock 1, quantized
      -43   -2    3    0
        0   -2   -1    0
        0    1    0    0
       -1    0    0    0

     subblock 1, dequantized
     -172   -8   12    0
        0   -8   -4    0
        0    4    0    0
       -4    0    0    0

     subblock 1, predictor
      108  108  108  108
      108  108  108  108
      108  108  108  108
      108  108  108  108

     subblock 1, reconstructed
       84   84   87   89
       86   85   87   91
       86   83   84   89
       90   85   84   88

    I hope this concrete example (straight from a working codec) clarifies this part of the VP8 process.