
Media (91)

Other articles (29)

  • Media quality after processing

    21 June 2013, by

    Properly configuring the software that processes media is important in order to strike a balance between the parties involved (the host's bandwidth, the media quality for the editor and the visitor, and accessibility for the visitor). How should you set the quality of your media?
    The higher the media quality, the more bandwidth is used, and visitors with a low-speed internet connection will have to wait longer. Conversely, the poorer the media quality, the more degraded the media becomes, or even (...)

  • Adding notes and captions to images

    7 February 2011, by

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Selection of projects using MediaSPIP

    2 May 2011, by

    The examples below are representative of specific uses of MediaSPIP for particular projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and, at the national level, among the half-dozen associations of this kind. Its members (...)

On other sites (4292)

  • VideoWriter does not write anything

    29 April 2015, by Nelson

    When I try to use VideoWriter to write a frame, it does not work. I have already tried lots of FOURCC codes, like the default, h264, mjpg, divx, xvid, etc. And yes, I have installed ffmpeg with all the necessary configuration options (--enable-shared, --enable-libx264, ...) and the OpenCV installation has ffmpeg support turned on.

    I have already googled a lot, and nothing I found comes even close to solving this problem, which is recurrent in OpenCV. The code is as simple as possible, and it worked a few weeks back, but now it does not.

    A few insights: get(CV_CAP_PROP_FPS) returns the "unknown property" message with a value of -1 (the same happens for set). The following part of OpenCV's cmake output is also interesting:

    -- Could NOT find Jasper (missing:  JASPER_LIBRARIES JASPER_INCLUDE_DIR)
    -- checking for module 'gstreamer-video-1.0'
    --   package 'gstreamer-video-1.0' not found
    -- checking for module 'gstreamer-app-1.0'
    --   package 'gstreamer-app-1.0' not found
    -- checking for module 'gstreamer-riff-1.0'
    --   package 'gstreamer-riff-1.0' not found
    -- checking for module 'gstreamer-pbutils-1.0'
    --   package 'gstreamer-pbutils-1.0' not found
    -- checking for module 'gstreamer-base-0.10'
    --   package 'gstreamer-base-0.10' not found
    -- checking for module 'gstreamer-video-0.10'
    --   package 'gstreamer-video-0.10' not found
    -- checking for module 'gstreamer-app-0.10'
    --   package 'gstreamer-app-0.10' not found
    -- checking for module 'gstreamer-riff-0.10'
    --   package 'gstreamer-riff-0.10' not found
    -- checking for module 'gstreamer-pbutils-0.10'
    --   package 'gstreamer-pbutils-0.10' not found
    -- Looking for linux/videodev.h
    -- Looking for linux/videodev.h - not found
    -- Looking for linux/videodev2.h
    -- Looking for linux/videodev2.h - found
    -- Looking for sys/videoio.h
    -- Looking for sys/videoio.h - not found
    -- Looking for libavformat/avformat.h
    -- Looking for libavformat/avformat.h - found
    -- Looking for ffmpeg/avformat.h
    -- Looking for ffmpeg/avformat.h - not found

    OpenCV 2.4.10
    Ubuntu 14.04
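
    One way to double-check, outside the original question, which video I/O back ends this particular OpenCV build actually includes is to print the build information and look at its "Video I/O" section (a minimal sketch, assuming only OpenCV 2.4's core module):

    #include <iostream>
    #include <opencv2/core/core.hpp>

    int main() {
        // Dumps the compile-time configuration of this OpenCV build; the
        // "Video I/O" section lists whether FFMPEG, GStreamer and V4L/V4L2
        // support were actually enabled.
        std::cout << cv::getBuildInformation() << std::endl;
        return 0;
    }

    If FFMPEG shows up as disabled there, VideoWriter will typically fail to open on Linux regardless of the FOURCC chosen.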

    EDIT: I found out that the problem is that the VideoWriter object isn't open, even after the constructor call:

    VideoWriter wr(outputFile, CV_FOURCC('D','I','V','X'), capture.get(CV_CAP_PROP_FPS), Size(capture.get(CV_CAP_PROP_FRAME_WIDTH), capture.get(CV_CAP_PROP_FRAME_HEIGHT)));

    And using CV_CAP_PROP_FPS makes OpenCV show the following message at execution time:

    HIGHGUI ERROR: V4L2: Unable to get property <unknown property string>(5) - Invalid argument

    Why does this happen, and how can I fix it?
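
    As a rough sketch of how the writer could be opened more defensively (the output path and the 25 fps fallback are assumptions, not part of the original post), one can check isOpened() explicitly and substitute a fixed frame rate whenever the V4L2 capture reports -1:

    #include <iostream>
    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture capture(0);                 // webcam; a file path works as well
        double fps = capture.get(CV_CAP_PROP_FPS);
        if (fps <= 0)
            fps = 25.0;                              // V4L2 often cannot report FPS, so assume a default
        cv::Size size((int)capture.get(CV_CAP_PROP_FRAME_WIDTH),
                      (int)capture.get(CV_CAP_PROP_FRAME_HEIGHT));
        // "out.avi" is a placeholder output file name
        cv::VideoWriter wr("out.avi", CV_FOURCC('D', 'I', 'V', 'X'), fps, size);
        if (!wr.isOpened()) {                        // this is the check that fails in the question
            std::cerr << "VideoWriter failed to open" << std::endl;
            return 1;
        }
        cv::Mat frame;
        while (capture.read(frame))
            wr.write(frame);
        return 0;
    }

    If the writer still refuses to open with a valid frame rate and size, the codec/container pairing (here DIVX in an .avi) is usually the next thing to vary.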

  • Simply beyond ridiculous

    7 May 2010, by Dark Shikari — H.265, speed

    For the past few years, various improvements on H.264 have been periodically proposed, ranging from larger transforms to better intra prediction. These finally came together in the JCT-VC meeting this past April, where over two dozen proposals were made for a next-generation video coding standard. Of course, all of these were in very rough-draft form; it will likely take years to filter it down into a usable standard. In the process, they’ll pick the most useful features (hopefully) from each proposal and combine them into something a bit more sane. But, of course, it all has to start somewhere.

    A number of features were common: larger block sizes, larger transform sizes, fancier interpolation filters, improved intra prediction schemes, improved motion vector prediction, increased internal bit depth, new entropy coding schemes, and so forth. A lot of these are potentially quite promising and resolve a lot of complaints I’ve had about H.264, so I decided to try out the proposal that appeared the most interesting: the Samsung+BBC proposal (A124), which claims compression improvements of around 40%.

    The proposal combines a bouillabaisse of new features, ranging from a 12-tap interpolation filter to 1/12th-pel motion compensation and transforms as large as 64×64. Overall, I would say it’s a good proposal and I don’t doubt their results given the sheer volume of useful features they’ve dumped into it. I was a bit worried about complexity, however, as 12-tap interpolation filters don’t exactly scream “fast”.

    I prepared myself for the slowness of an unoptimized encoder implementation, compiled their tool, and started a test encode with their recommended settings.

    I waited. The first frame, an I-frame, completed.

    I took a nap.

    I waited. The second frame, a P-frame, was done.

    I played a game of Settlers.

    I waited. The third frame, a B-frame, was done.

    I worked on a term paper.

    I waited. The fourth frame, a B-frame, was done.

    After a full 6 hours, 8 frames had encoded. Yes, at this rate, it would take a full two weeks to encode 10 seconds of HD video. On a Core i7. This is not merely slow; this is over 1000 times slower than x264 on “placebo” mode. This is so slow that it is not merely impractical; it is impossible to even test. This encoder is apparently designed for some sort of hypothetical future computer from space. And word from other developers is that the Intel proposal is even slower.

    This has led me to suspect that there is a great deal of cheating going on in the H.265 proposals. The goal of the proposals, of course, is to pick the best feature set for the next generation video compression standard. But there is an extra motivation: organizations whose features get accepted get patents on the resulting standard, and thus income. With such large sums of money in the picture, dishonesty becomes all the more profitable.

    There is a set of rules, of course, to limit how the proposals can optimize their encoders. If different encoders use different optimization techniques, the results will no longer be comparable — remember, they are trying to compare compression features, not methods of optimizing encoder-side decisions. Thus all encoders are required to use a constant quantizer, specified frame types, and so forth. But there are no limits on how slow an encoder can be or what algorithms it can use.

    It would be one thing if the proposed encoder was a mere 10 times slower than the current reference; that would be reasonable, given the low level of optimization and higher complexity of the new standard. But this is beyond ridiculous. With the prize given to whoever can eke out the most PSNR at a given quantizer at the lowest bitrate (with no limits on speed), we’re just going to get an arms race of slow encoders, with every company trying to use the most ridiculous optimizations possible, even if they involve encoding the frame 100,000 times over to choose the optimal parameters. And the end result will be as I encountered here: encoders so slow that they are simply impossible to even test.

    Such an arms race certainly does little good in optimizing for reality, where we don’t have 30 years to encode an HD movie: a feature that gives great compression improvements is useless if it’s impossible to optimize for in a reasonable amount of time. Certainly once the standard is finalized practical encoders will be written — but it makes no sense to optimize the standard for a use-case that doesn’t exist. And even attempting to “optimize” anything is difficult when encoding a few seconds of video takes weeks.

    Update: The people involved have contacted me and insist that there was in fact no cheating going on. This is probably correct; the problem appears to be that the rules that were set out were simply not strict enough, making many changes that I would intuitively consider “cheating” to be perfectly allowed, and thus everyone can do it.

    I would like to apologize if I implied that the results weren’t valid; they are — the Samsung-BBC proposal is definitely one of the best, which is why I picked it to test with. It’s just that I think any situation in which it’s impossible to test your own software is unreasonable, and thus the entire situation is an inherently broken one, given the lax rules, slow baseline encoder, and no restrictions on compute time.
