
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (69)
-
List of compatible distributions
26 April 2011, by
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

Distribution name    Version name            Version number
Debian               Squeeze                 6.x.x
Debian               Wheezy                  7.x.x
Debian               Jessie                  8.x.x
Ubuntu               The Precise Pangolin    12.04 LTS
Ubuntu               The Trusty Tahr         14.04

If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above, or send us the fixes needed to add (...) -
MediaSPIP Core: Configuration
9 November 2010, by
By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin to work): a page for the general configuration of the template set; a page for the configuration of the site's home page; a page for the configuration of sections.
It also provides an additional page, which only appears when certain plugins are enabled, for controlling their display and specific features (...) -
Managing rights for creating and editing objects
8 February 2011, by
By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, in particular: writing content on the site, adjustable in the form-template management; adding notes to articles; adding captions and annotations to images.
On other sites (5092)
-
RGB to YUV conversion with libav (ffmpeg) triplicates image
17 April 2021, by José Tomás Tocino
I'm building a small program to capture the screen (using the X11 MIT-SHM extension) into a video. It works well if I create individual PNG files of the captured frames, but now I'm trying to integrate libav (ffmpeg) to create the video and I'm getting... funny results.


The furthest I've been able to reach is this. The expected result (a PNG created directly from the RGB data of the XImage) is this:

[expected screenshot]

However, the result I'm getting is this:

[actual screenshot]
As you can see, the colors are funky and the image appears cropped and repeated three times. I have a loop where I capture the screen; first I generate the individual PNG files (currently commented out in the code below), and then I try to use libswscale to convert from RGB24 to YUV420:


while (gRunning) {
    printf("Processing frame framecnt=%i \n", framecnt);

    if (!XShmGetImage(display, RootWindow(display, DefaultScreen(display)), img, 0, 0, AllPlanes)) {
        printf("\n Ooops.. Something is wrong.");
        break;
    }

    // PNG generation
    // snprintf(imageName, sizeof(imageName), "salida_%i.png", framecnt);
    // writePngForImage(img, width, height, imageName);

    unsigned long red_mask = img->red_mask;
    unsigned long green_mask = img->green_mask;
    unsigned long blue_mask = img->blue_mask;

    // Write image data
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            unsigned long pixel = XGetPixel(img, x, y);

            unsigned char blue = pixel & blue_mask;
            unsigned char green = (pixel & green_mask) >> 8;
            unsigned char red = (pixel & red_mask) >> 16;

            pixel_rgb_data[y * width + x * 3] = red;
            pixel_rgb_data[y * width + x * 3 + 1] = green;
            pixel_rgb_data[y * width + x * 3 + 2] = blue;
        }
    }

    uint8_t* inData[1] = { pixel_rgb_data };
    int inLinesize[1] = { in_w };

    printf("Scaling frame... \n");
    int sliceHeight = sws_scale(sws_context, inData, inLinesize, 0, height, pFrame->data, pFrame->linesize);

    printf("Obtained slice height: %i \n", sliceHeight);
    pFrame->pts = framecnt * (pVideoStream->time_base.den) / ((pVideoStream->time_base.num) * 25);

    printf("Frame pts: %li \n", pFrame->pts);
    int got_picture = 0;

    printf("Encoding frame... \n");
    int ret = avcodec_encode_video2(pCodecCtx, &pkt, pFrame, &got_picture);

    // int ret = avcodec_send_frame(pCodecCtx, pFrame);

    if (ret != 0) {
        printf("Failed to encode! Error: %i\n", ret);
        return -1;
    }

    printf("Succeed to encode frame: %5d - size: %5d\n", framecnt, pkt.size);

    framecnt++;

    pkt.stream_index = pVideoStream->index;
    ret = av_write_frame(pFormatCtx, &pkt);

    if (ret != 0) {
        printf("Error writing frame! Error: %i \n", ret);
        return -1;
    }

    av_packet_unref(&pkt);
}



I've placed the entire code at this gist. This question looks pretty similar to mine, but not quite the same, and its solution did not work for me, although I think the problem has something to do with the way the line stride is calculated.


-
bash: receive single frames from ffmpeg pipe
30 August 2014, by manu
I'm trying to achieve single-frame handling in a pipe where the j2c encoder "kdu_compress" (Kakadu) only accepts single files, in order to save hard-drive space. I didn't manage to pipe frames directly, so I'm trying to handle them via a bash script: create each picture, process it, and overwrite it with the next.
Here is my approach. Thanks for your advice, I really want to climb this mountain, though I'm a bit fresh here.
Is it possible to pipe ffmpeg output to a bash script, save each individual frame, and run further commands on the file before the next frame is handled?
The best result so far is that ALL frames end up appended to the intermediate file, because the end of a frame is never recognized.
I used this ffmpeg setting to pipe, in this example with .ppm:
ffmpeg -y -i "/path/to/source.mov" -an -c:v ppm -updatefirst 1 -f image2 - \
    | /path/to/receiver.sh
and this script as receiver.sh:
#!/bin/bash
while read a;
do
    cat /dev/null > "/path/to/tempfile.ppm"; # empty the file first
    cat $a >> "/path/to/tempfile.ppm";       # fill one picture
    kdu_compress -i /path/to/tempfile.ppm -otherparams # process this intermediate file
done
exit;
Thank you very much.
-
How can I best utilize an AWS service to segment a video into smaller chunks and then combine them back together? [on hold]
19 April 2018, by Justin Malin
I am trying to process videos uploaded to AWS S3 using an AWS Lambda function in Python. However, FFmpeg and ffmpeg-python (as far as I am aware) cannot operate on S3 objects directly and must work on stored files. Lambda only allows 500 MB of storage in the /tmp/ folder, which limits the size of video I can process.
If there is an alternative to FFmpeg, unknown to me, that can work on object files directly, that would be a reasonable solution, because I can scale up the memory of the Lambda function (although there is still a limit).
Alternatively, I have looked into segmenting the video using AWS Elastic Transcoder, but I do not think I can dynamically segment the video using that service. If there is a service similar to this that could segment the video into individual frames (and back), that would be even better.
I have also considered using AWS EC2, but I would only use it to segment videos sporadically, so it would be wasteful to keep an instance that capable running constantly. If I use AWS Elastic Beanstalk, would it automatically start a more powerful EC2 instance to do the video segmentation (and reassembly) when called, and revert to a much smaller instance when dormant?
Essentially, I would like to know whether any services (preferably within AWS) allow me to segment a video into shorter videos, or into individual frames, at will.
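Whichever service ends up running it, the splitting and rejoining steps themselves are standard ffmpeg operations: the segment muxer cuts a video into fixed-length chunks without re-encoding, and the concat demuxer joins them back. A minimal sketch (file names are placeholders; the first command just synthesizes a test clip with ffmpeg's built-in testsrc so the example is self-contained):

```shell
# Generate a 4-second synthetic clip to work with, with a keyframe every
# second (-g 10 at 10 fps) so stream-copy cuts can land on second marks.
ffmpeg -v error -y -f lavfi -i testsrc=duration=4:size=64x64:rate=10 \
    -pix_fmt yuv420p -g 10 input.mp4

# Split into ~1-second chunks without re-encoding (cuts land on keyframes).
ffmpeg -v error -y -i input.mp4 -f segment -segment_time 1 -c copy \
    -reset_timestamps 1 chunk_%03d.mp4

# Rejoin: list the chunks in order, then concat without re-encoding.
for f in chunk_*.mp4; do echo "file '$f'"; done > list.txt
ffmpeg -v error -y -f concat -safe 0 -i list.txt -c copy rejoined.mp4
```

Because both directions use `-c copy`, the work is mostly I/O rather than CPU, which matters for the Lambda/EC2 sizing question above.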