
Media (91)
-
GetID3 - Additional buttons
9 April 2013
Updated: April 2013
Language: French
Type: Image
-
Core Media Video
4 April 2013
Updated: June 2013
Language: French
Type: Video
-
The Pirate Bay from Belgium
1 April 2013
Updated: April 2013
Language: French
Type: Image
-
Ogg detection bug
22 March 2013
Updated: April 2013
Language: French
Type: Video
-
Example of action buttons for a collaborative collection
27 February 2013
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013
Updated: February 2013
Language: English
Type: Image
Other articles (34)
-
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in MP4, OGV and WebM (supported by HTML5), with MP4 also readable by Flash.
Audio files are encoded in MP3 and OGG (supported by HTML5), with MP3 also readable by Flash.
Where possible, text is analyzed to retrieve the data needed for search engine indexing, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
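As an illustration of the kind of conversion involved (hypothetical commands, not MediaSPIP's actual invocations), encoding a source file into these formats with the ffmpeg command line could look like this:

ffmpeg -i source.mov -c:v libx264 -c:a aac video.mp4
ffmpeg -i source.mov -c:v libtheora -c:a libvorbis video.ogv
ffmpeg -i source.mov -c:v libvpx -c:a libvorbis video.webm
ffmpeg -i source.wav -c:a libmp3lame audio.mp3
ffmpeg -i source.wav -c:a libvorbis audio.ogg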
-
Emballe Médias: a simple way to put documents online
29 October 2010
The emballe médias plugin was developed primarily for the MediaSPIP distribution, but it is also used in other related projects, such as géodiversité.
Required and compatible plugins
To work, this plugin requires that other plugins be installed: CFG, Saisies, SPIP Bonux, Diogène, swfupload, jqueryui.
Other plugins can be used alongside it to extend its capabilities: Ancres douces, Légendes, photo_infos, spipmotion (...)
-
Accepted formats
28 January 2010
The following commands give information about the formats and codecs handled by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
Accepted input video formats
This list is not exhaustive; it highlights the main formats used:
h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
m4v: raw MPEG-4 video format
flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
Theora
wmv:
Possible output video formats
As a first step, (...)
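For example (assuming a standard ffmpeg build), the same mechanism can be used to inspect a specific encoder and its options:

ffmpeg -h encoder=libtheora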
On other sites (3852)
-
ffmpeg: programmatically use libavcodec to encode and decode a raw bitmap, all in just a few milliseconds and with a small compressed size, on Raspberry Pi 4
15 March 2023, by Jerry Switalski
We need to compress a 1024x2048 image we produce, down to roughly JPEG size (200-500 kB), from raw 32-bit RGBA (8 MB), on a Raspberry Pi 4, all in a C/C++ program.


The compression needs to take just a few milliseconds, otherwise it is pointless for us.


We decided to try encoding with the ffmpeg development library, using C/C++ code.


The problem we are facing is that when we edited the encoding example provided by the ffmpeg developers, the times we got were unacceptable.


Here you can see the edited code where the frames are created:


for (i = 0; i < 25; i++)
{
#ifdef MEASURE_TIME
 auto start_time = std::chrono::high_resolution_clock::now();
 std::cout << "START Encoding frame...\n";
#endif
 fflush(stdout);

 ret = av_frame_make_writable(frame);
 if (ret < 0)
 exit(1);

 // Here I try to convert our 32-bit RGBA image to the YUV pixel format:

 for (y = 0; y < c->height; y++)
 {
 for (x = 0; x < c->width; x++)
 {
 // index into the destination Y plane and into the packed RGBA source
 int imageIndexY = y * frame->linesize[0] + x;
 int srcIndex = y * c->width + x;

 uint32_t rgbPixel = ((uint32_t*)OutputDataImage)[srcIndex];

 double Y, U, V;
 // extract the channels (assuming R sits in the most significant byte)
 uint8_t R = (rgbPixel >> 24) & 0xFF;
 uint8_t G = (rgbPixel >> 16) & 0xFF;
 uint8_t B = (rgbPixel >> 8) & 0xFF;

 YUVfromRGB(Y, U, V, (double)R, (double)G, (double)B);
 frame->data[0][imageIndexY] = (uint8_t)Y;

 // YUV420P: one U and one V sample per 2x2 block of pixels
 if (y % 2 == 0 && x % 2 == 0)
 {
 int imageIndexU = (y / 2) * frame->linesize[1] + (x / 2);
 int imageIndexV = (y / 2) * frame->linesize[2] + (x / 2);

 frame->data[1][imageIndexU] = (uint8_t)U;
 frame->data[2][imageIndexV] = (uint8_t)V;
 }
 }
 }

 frame->pts = i;

 /* encode the image */
 encode(c, frame, pkt, f);

#ifdef MEASURE_TIME
 auto end_time = std::chrono::high_resolution_clock::now();
 auto time = end_time - start_time;
 std::cout << "FINISHED Encoding frame in: " << time / std::chrono::milliseconds(1) << "ms.\n";

#endif
 }
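For comparison, here is a minimal untested sketch of doing the same RGBA to YUV420P conversion with libswscale instead of the per-pixel double math above. It assumes OutputDataImage is tightly packed in the byte order AV_PIX_FMT_RGBA expects; if the actual channel order differs, the source pixel format has to be changed accordingly (AV_PIX_FMT_BGRA, AV_PIX_FMT_ABGR, etc.):

#include <libswscale/swscale.h>

// Create the scaler once and reuse it for every frame.
struct SwsContext* sws = sws_getContext(c->width, c->height, AV_PIX_FMT_RGBA,
 c->width, c->height, AV_PIX_FMT_YUV420P,
 SWS_FAST_BILINEAR, NULL, NULL, NULL);

const uint8_t* srcData[4] = { (const uint8_t*)OutputDataImage, NULL, NULL, NULL };
int srcLinesize[4] = { 4 * c->width, 0, 0, 0 };

// Convert and subsample the whole image directly into frame->data.
sws_scale(sws, srcData, srcLinesize, 0, c->height, frame->data, frame->linesize);

sws_freeContext(sws);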



Here are some important parts from earlier in that function:


codec_name = "mpeg4";

codec = avcodec_find_encoder_by_name(codec_name);

c = avcodec_alloc_context3(codec);
 
c->bit_rate = 1000000; 
c->width = IMAGE_WIDTH;
c->height = IMAGE_HEIGHT;
c->gop_size = 1;
c->max_b_frames = 1;
c->pix_fmt = AV_PIX_FMT_YUV420P; 



IMAGE_WIDTH and IMAGE_HEIGHT are 1024 and 2048 respectively.
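The parts of the setup not shown above presumably follow the standard ffmpeg encode_video example this code was derived from; roughly (a sketch, not the exact code used here):

c->time_base.num = 1;
c->time_base.den = 25;

if (avcodec_open2(c, codec, NULL) < 0)
 exit(1);

pkt = av_packet_alloc();

frame = av_frame_alloc();
frame->format = c->pix_fmt;
frame->width = c->width;
frame->height = c->height;

if (av_frame_get_buffer(frame, 0) < 0)
 exit(1);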


The results of a run on the Raspberry Pi 4 look like this:


START Encoding frame...
Send frame 0
FINISHED Encoding frame in: 40ms.
START Encoding frame...
Send frame 1
Write packet 0 (size=11329)
FINISHED Encoding frame in: 60ms.
START Encoding frame...
Send frame 2
Write packet 1 (size=11329)
FINISHED Encoding frame in: 58ms.



Since I am completely green at encoding and codecs, my question is how to do this the correct and best way, i.e. in a way that brings the timing down to a few milliseconds. I am also not sure the codec, or the pixel format, was the best choice for the job.


The rest of the meaningful code is shown here (the encode() function can be found in the ffmpeg developers' example linked above):


void RGBfromYUV(double& R, double& G, double& B, double Y, double U, double V)
{
 Y -= 16;
 U -= 128;
 V -= 128;
 R = 1.164 * Y + 1.596 * V;
 G = 1.164 * Y - 0.392 * U - 0.813 * V;
 B = 1.164 * Y + 2.017 * U;
}
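The YUVfromRGB() used in the frame loop is not shown in the post; assuming it is simply the inverse of the RGBfromYUV() above (BT.601 limited range), it would look roughly like this (a reconstruction, not the original code):

void YUVfromRGB(double& Y, double& U, double& V, double R, double G, double B)
{
 // Inverse of the RGBfromYUV() above: BT.601 limited-range RGB -> YUV
 Y = 16 + 0.257 * R + 0.504 * G + 0.098 * B;
 U = 128 - 0.148 * R - 0.291 * G + 0.439 * B;
 V = 128 + 0.439 * R - 0.368 * G - 0.071 * B;
}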



-
How to find video and audio stream times without the deprecated values? (libav 10.5)
23 November 2014, by Narayana James Emery
After looking into the documentation, AVStream.pts has been deprecated. Before, I was using this loop to try to get the video and audio time. It's the same as what's in api-example.c (which no longer works):
if (audioStream_)
{
 VideoTime = (double)videoStream_->pts.val * videoStream_->time_base.num / videoStream_->time_base.den;
 do
 {
 AudioTime = (double)audioStream_->pts.val * audioStream_->time_base.num / audioStream_->time_base.den;
 ret = WriteAudioFrame();
 }
 while (AudioTime < VideoTime && ret);
 if (ret < 0)
 return ret;
}

What is the current alternative? I haven't been able to get anything to work yet and I've been searching for around 3 hours.
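One possible alternative (a hedged sketch, not verified against libav 10.5 specifically; fmtCtx_, videoIndex_ and audioIndex_ are placeholder names) is to stop reading AVStream.pts and instead track the timestamps of the packets you read yourself, converted with av_q2d():

#include <libavformat/avformat.h>

// After av_read_frame(fmtCtx_, &pkt) succeeds:
AVStream* st = fmtCtx_->streams[pkt.stream_index];

if (pkt.pts != AV_NOPTS_VALUE)
{
 double t = pkt.pts * av_q2d(st->time_base); // stream time in seconds

 if (pkt.stream_index == videoIndex_)
 VideoTime = t;
 else if (pkt.stream_index == audioIndex_)
 AudioTime = t;
}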
-
avfilter/vf_ssim360: Use correct type in sizeof
12 March 2023, by Andreas Rheinhardt
avfilter/vf_ssim360: Use correct type in sizeof
SSIM360Context.ssim360_hist is an array of four pointers to double;
so sizeof(*ssim360_hist[0]) (= sizeof(double)) is the correct size
to use to calculate the amount of memory to allocate, not
sizeof(*ssim360_hist) (which is sizeof(double*)). Use FF_ALLOCZ_TYPED_ARRAY to avoid this issue altogether.
Fixes Coverity issue #1520671.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Reviewed-by: Jan Ekström <jeebjp@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
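For illustration only, a generic sketch of the pattern the commit describes (not the actual vf_ssim360 code); FF_ALLOCZ_TYPED_ARRAY avoids the mistake by deriving the element size from the pointer being assigned:

#include <stdlib.h>

/* hist mimics SSIM360Context.ssim360_hist: an array of four pointers to double. */
static int alloc_hist(double* hist[4], size_t nelems)
{
 /* Wrong: sizeof(*hist) is sizeof(double*), so too little memory would be
  * allocated wherever a pointer is smaller than a double (e.g. 32-bit builds):
  * hist[0] = calloc(nelems, sizeof(*hist)); */

 /* Right: sizeof(*hist[0]) is sizeof(double), the size of one element. */
 hist[0] = calloc(nelems, sizeof(*hist[0]));
 return hist[0] ? 0 : -1;
}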