
Other articles (101)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Improvements to the base version
13 September 2013. Nicer multiple selection
The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
To use it, simply activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things):
- implementation costs to be shared between several different projects/individuals
- rapid deployment of multiple unique sites
- creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (7210)
-
FFmpeg get header size
25 June 2012, by DEgITx. The question is how to get the header size from the format context (AVFormatContext) in FFmpeg.
Currently I'm using the position of the first packet to get it:
avformat_open_input(&m_formatContext, m_openedFilePath.toStdString().c_str(), NULL, NULL);
//...
AVPacket packet;
if (av_read_frame(m_formatContext, &packet) >= 0)
    printf("Header size: %lld", (long long)packet.pos); // the first packet read starts right after the header
Is there a better way to do this without reading a frame?
-
Video recoding with ffmpeg
8 November 2011, by Aleks G. I asked in another question (http://stackoverflow.com/questions/8012494/sorry-this-video-cannot-be-played-streaming-mp4-to-android/8012874#8012874) about video playback on Android using VideoView. Apparently the problem there is due to the way my video is encoded, as another video (a 512 KB MP4 off the web) plays correctly with my code. Since videos are uploaded to the web site by my end users, I have no control over the videos themselves, but I do have control over re-encoding them. I re-encode them with ffmpeg to bring them to a standard MP4 (H.264 + AAC) format and scale them to the same size (320x240).
Here's the ffmpeg info of a video that would not play:
sh-3.2$ ffmpeg -i video.bad.mp4
FFmpeg version SVN-r25679-snapshot, Copyright (c) 2000-2010 the FFmpeg developers
built on Nov 5 2010 09:34:37 with gcc 4.3.2
configuration: --prefix=/usr --enable-shared --enable-libmp3lame --enable-gpl --enable-libvorbis --enable-pthreads --enable-libfaac --enable-libxvid --enable-postproc --enable-libgsm --enable-x11grab --enable-libx264 --enable-libtheora --extra-cflags=-Wall --enable-swscale --enable-libdc1394 --enable-nonfree --disable-mmx --disable-stripping --enable-avfilter --disable-altivec --disable-armv5te --disable-armv6 --disable-vis --enable-nonfree --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-version3
libavutil 50.32. 6 / 50.32. 6
libavcore 0.12. 0 / 0.12. 0
libavcodec 52.94. 3 / 52.94. 3
libavformat 52.84. 0 / 52.84. 0
libavdevice 52. 2. 2 / 52. 2. 2
libavfilter 1.56. 0 / 1.56. 0
libswscale 0.12. 0 / 0.12. 0
libpostproc 51. 2. 0 / 51. 2. 0
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.bad.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf52.84.0
Duration: 00:00:45.93, start: 0.000000, bitrate: 591 kb/s
Stream #0.0(und): Video: h264, yuv420p, 320x240 [PAR 1:1 DAR 4:3], 535 kb/s, 15 fps, 15 tbr, 15 tbn, 30 tbc
Stream #0.1(und): Audio: aac, 48000 Hz, stereo, s16, 51 kb/s
And here's the ffmpeg info of a video that plays correctly:
sh-3.2$ ffmpeg -i video.mp4
FFmpeg version SVN-r25679-snapshot, Copyright (c) 2000-2010 the FFmpeg developers
built on Nov 5 2010 09:34:37 with gcc 4.3.2
configuration: --prefix=/usr --enable-shared --enable-libmp3lame --enable-gpl --enable-libvorbis --enable-pthreads --enable-libfaac --enable-libxvid --enable-postproc --enable-libgsm --enable-x11grab --enable-libx264 --enable-libtheora --extra-cflags=-Wall --enable-swscale --enable-libdc1394 --enable-nonfree --disable-mmx --disable-stripping --enable-avfilter --disable-altivec --disable-armv5te --disable-armv6 --disable-vis --enable-nonfree --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-version3
libavutil 50.32. 6 / 50.32. 6
libavcore 0.12. 0 / 0.12. 0
libavcodec 52.94. 3 / 52.94. 3
libavformat 52.84. 0 / 52.84. 0
libavdevice 52. 2. 2 / 52. 2. 2
libavfilter 1.56. 0 / 1.56. 0
libswscale 0.12. 0 / 0.12. 0
libpostproc 51. 2. 0 / 51. 2. 0
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: mp41
title : crazytown - http://www.archive.org/details/Cartoon-Crazytown
encoder : Lavf51.10.0
Duration: 00:07:50.40, start: 0.000000, bitrate: 578 kb/s
Stream #0.0(und): Video: h264, yuv420p, 320x240, 510 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc
Stream #0.1(und): Audio: aac, 48000 Hz, stereo, s16, 63 kb/s
I actually have two questions here. First, which of the details of my "bad" video does Android not like? And second, what parameters should I use with ffmpeg to re-encode my videos? At present I use this:
ffmpeg -i $input_video_file -y -s 320x240 -vcodec libx264 -vpre medium -acodec libfaac -b 510K -ar 48000 -aspect 4:3 $tmpfile.mp4
qt-faststart $tmpfile.mp4 $output_video_file.mp4
But this produces a video that's not playable on Android. Any help is greatly appreciated.
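As an aside, the stream details being compared above (container brand, codecs, resolution, frame rate, bit rates) can also be read programmatically with the FFmpeg libraries instead of parsing ffmpeg -i output. Below is a minimal sketch, assuming a reasonably recent FFmpeg, that prints the same kind of summary for a file given on the command line.

// Sketch: print an "ffmpeg -i"-style summary of a file's container and streams.
// Assumes a reasonably recent FFmpeg; the file path comes from argv[1].
#include <stdio.h>
#include <libavformat/avformat.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <media file>\n", argv[0]);
        return 1;
    }

    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0) {
        fprintf(stderr, "could not open %s\n", argv[1]);
        return 1;
    }
    // Probe the streams so codec parameters (resolution, frame rate, ...) are filled in.
    if (avformat_find_stream_info(fmt, NULL) < 0) {
        fprintf(stderr, "could not read stream info\n");
        avformat_close_input(&fmt);
        return 1;
    }

    // Prints the same "Input #0 ... Stream ..." block shown by ffmpeg -i.
    av_dump_format(fmt, 0, argv[1], 0);

    avformat_close_input(&fmt);
    return 0;
}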
-
Recommendations for real-time pixel-level analysis of television (TV) video
6 December 2011, by Randall Cook. [Note: This is a rewrite of an earlier question that was considered inappropriate and closed.]
I need to do some pixel-level analysis of television (TV) video. The exact nature of this analysis is not pertinent, but it basically involves looking at every pixel of every frame of TV video, starting from an MPEG-2 transport stream. The host platform will be server-class, multiprocessor, 64-bit Linux machines.
I need a library that can handle the decoding of the transport stream and present me with the image data in real time. OpenCV and ffmpeg are two libraries that I am considering for this work. OpenCV is appealing because I have heard it has easy-to-use APIs and rich image analysis support, but I have no experience using it. I have used ffmpeg in the past for extracting video frame data from files for analysis, but it lacks image analysis support (though Intel's IPP can supplement it).
In addition to general recommendations for approaches to this problem (excluding the actual image analysis), I have some more specific questions that would help me get started:
- Are ffmpeg or OpenCV commonly used in industry as a foundation for real-time video analysis, or is there something else I should be looking at?
- Can OpenCV decode video frames in real time and still leave enough CPU left over to do nontrivial image analysis, also in real time?
- Is it sufficient to use ffmpeg for MPEG-2 transport stream decoding, or is it preferable to use an MPEG-2 decoding library directly (and if so, which one)?
- Are there particular pixel formats for the output frames that ffmpeg or OpenCV is particularly efficient at producing (such as RGB, YUV, or YUV422)?
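To make the decode-and-inspect workflow described above concrete, here is a minimal sketch of an FFmpeg-based decode loop that walks every frame of the first video stream and converts it to a fixed pixel format with libswscale, which is where per-pixel analysis would plug in. It assumes a reasonably recent FFmpeg (the send/receive decoding API); the frame counter stands in for whatever real analysis would run, and decoder draining at end of file is omitted for brevity.

// Sketch: decode every video frame of an input (e.g. an MPEG-2 transport stream)
// and convert it to RGB24 for per-pixel analysis.
// Assumes a reasonably recent FFmpeg (send/receive decoding API).
#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s <input>\n", argv[0]); return 1; }

    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0) return 1;
    if (avformat_find_stream_info(fmt, NULL) < 0) return 1;

    // Pick the best video stream and open a decoder for it.
    const AVCodec *dec = NULL;
    int vstream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);
    if (vstream < 0) return 1;

    AVCodecContext *ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(ctx, fmt->streams[vstream]->codecpar);
    if (avcodec_open2(ctx, dec, NULL) < 0) return 1;

    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    struct SwsContext *sws = NULL;
    long frames = 0;

    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vstream && avcodec_send_packet(ctx, pkt) >= 0) {
            while (avcodec_receive_frame(ctx, frame) >= 0) {
                // Lazily create a converter from the decoder's native format
                // (typically a planar YUV) to packed RGB24.
                if (!sws)
                    sws = sws_getContext(frame->width, frame->height,
                                         (enum AVPixelFormat)frame->format,
                                         frame->width, frame->height, AV_PIX_FMT_RGB24,
                                         SWS_BILINEAR, NULL, NULL, NULL);
                AVFrame *rgb = av_frame_alloc();
                rgb->format = AV_PIX_FMT_RGB24;
                rgb->width = frame->width;
                rgb->height = frame->height;
                av_frame_get_buffer(rgb, 0);
                sws_scale(sws, (const uint8_t * const *)frame->data, frame->linesize,
                          0, frame->height, rgb->data, rgb->linesize);

                // Per-pixel analysis would read rgb->data[0] here
                // (rgb->linesize[0] bytes per row).
                frames++;

                av_frame_free(&rgb);
            }
        }
        av_packet_unref(pkt);
    }

    printf("decoded %ld frames\n", frames);
    sws_freeContext(sws);
    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}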