
Media (1)
-
Collections - Quick creation form
19 February 2013, by
Updated: February 2013
Language: French
Type: Image
Other articles (92)
-
Accepted formats
28 January 2010, by
The following commands give information about the formats and codecs supported by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use:
h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 Part 10
m4v: raw MPEG-4 video format
flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
Theora
wmv:
Possible output video formats
To begin with, we (...)
-
Contribute to translation
13 April 2011
You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, allowing it to spread to new linguistic communities.
To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
MediaSPIP is currently available in French and English (...)
-
Keeping control of your media in your hands
13 April 2011, by
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)
On other sites (7059)
-
Saving scatterplot animations with matplotlib produces blank video file
1 April 2013, by user2175850
I am having a very similar problem to this question, but the suggested solution doesn't work for me.
I have set up an animated scatter plot using the matplotlib animation module. This works fine when displayed live, but I would like to save it to an AVI file or something similar. The code I have written to do this does not error out, but the video it produces just shows a blank set of axes or a black screen. I've done several checks: the data is being generated and the figure updated; it's just not getting saved to video...
I tried removing "animated=True" and "blit=True" as suggested in this question, but that did not fix the problem.
I have placed the relevant code below but can provide more if necessary. Could anyone suggest what I should do to get this working?
# (These are methods of a larger class; matplotlib, matplotlib.animation
#  and scipy are assumed to be imported at module level.)
def initAnimation(self):
    rs, cfgs = next(self.jumpingDataStreamIterator)
    #self.scat = self.axAnimation.scatter(rs[0], rs[1], c=cfgs[0], marker='o')
    self.scat = self.axAnimation.scatter(rs[0], rs[1], c=cfgs[0], marker='o', animated=True)
    return self.scat,

def updateAnimation(self, i):
    """Update the scatter plot."""
    rs, cfgs = next(self.jumpingDataStreamIterator)
    # Set x and y data...
    self.scat.set_offsets(rs[:2,].transpose())
    #self.scat = self.axAnimation.scatter(rs[0], rs[1], c=cfgs[0], animated=True)
    # Set sizes...
    #self.scat._sizes = 300 * abs(data[2])**1.5 + 100
    # Set colors..
    #self.scat.set_array(cfgs[0])
    # We need to return the updated artist for FuncAnimation to draw..
    # Note that it expects a sequence of artists, thus the trailing comma.
    matplotlib.pyplot.draw()
    return self.scat,

def animate2d(self, steps=None, showEvery=50, size=25):
    self.figAnimation, self.axAnimation = matplotlib.pyplot.subplots()
    self.axAnimation.set_aspect("equal")
    self.axAnimation.axis([-size, size, -size, size])
    self.jumpingDataStreamIterator = self.jumpingDataStream(showEvery)
    self.universeAnimation = matplotlib.animation.FuncAnimation(
        self.figAnimation, self.updateAnimation,
        init_func=self.initAnimation, blit=True)
    matplotlib.pyplot.show()

def animate2dVideo(self, fileName=None, steps=10000, showEvery=50, size=25):
    self.figAnimation, self.axAnimation = matplotlib.pyplot.subplots()
    self.axAnimation.set_aspect("equal")
    self.axAnimation.axis([-size, size, -size, size])
    self.Writer = matplotlib.animation.writers['ffmpeg']
    self.writer = self.Writer(fps=1, metadata=dict(artist='Universe Simulation'))
    self.jumpingDataStreamIterator = self.jumpingDataStream(showEvery)
    self.universeAnimation = matplotlib.animation.FuncAnimation(
        self.figAnimation, self.updateAnimation, scipy.arange(1, 25),
        init_func=self.initAnimation)
    self.universeAnimation.save('C:/universeAnimation.mp4', writer=self.writer)
-
Writing a series of images into a video file using libavcodec (ffmpeg)
19 November 2013, by user2978372
Requirements:
I have a bunch of images (to be more specific, 1024x768, 24bpp RGB PNG files) that I want to encode into a video file.
I need to use the 'libavcodec' library, not the 'ffmpeg' tool. (I know they share the same origin; I emphasize this because someone may answer 'use the ffmpeg tool', but that is not the solution I am looking for.)
I am using the h264 encoder.
Target:
A high-quality video with the same resolution (1024 x 768), YUV420P
each image has a duration of 1 second
24 fps
Problems:
I've tried with many different PNG images (all of the same resolution and bit depth), and all have failed to produce a good video. For some series of images, only the frames of the first second appeared in good shape; the remaining frames were distorted and their colors changed (lighter).
For some series of images, the images seemed zoomed in and, again, distorted.
And so on.
Question:
I am a total AV newbie and I need someone to verify my encoding steps:
1) av_register_all()
2) avcodec_register_all()
3) avcodec_find_encoder()
4) avcodec_alloc_context3()
5) set the codec configuration on the context
6) avcodec_open2()
7) open an output file using fopen_s()
8)
for (int second = 1; second <= 10; ++second)
{
    /* Read an image from disk using Gdiplus,
       create a Gdiplus bitmap and draw the image onto it,
       get the raw bytes using LockBits,
       convert the raw RGB bytes into a YUV420 frame using 'swscontext' / 'sws_scale' */
    for (int f = 0; f < 24; ++f)
    {
        av_init_packet(&pkt);
        pkt.data = NULL;
        pkt.size = 0;
        pFrame->pts = f;
        ret = avcodec_encode_video2(pContext, &pkt, pFrame, &got_output);
        if (got_output)
        {
            fwrite(pkt.data, 1, pkt.size, outputFile);
            av_free_packet(&pkt);
        }
    }
}

/* get the delayed frames */
for (got_output = 1; got_output; i++) {
    fflush(stdout);
    ret = avcodec_encode_video2(pContext, &pkt, NULL, &got_output);
    if (ret < 0) {
        fprintf(stderr, "Error encoding frame\n");
        exit(1);
    }
    if (got_output) {
        printf("Write frame %3d (size=%5d)\n", i, pkt.size);
        fwrite(pkt.data, 1, pkt.size, outputFile);
        av_free_packet(&pkt);
    }
}
9) close everything required
I am sure I have misunderstood some steps of the ffmpeg API. The above pseudocode is based on ffmpeg's 'encoding' example. Which part am I doing wrong? Can someone please help me?
P.S. Sorry about the broken English; it is not my native language. I tried my best =P
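A detail worth checking in the inner loop above (an observation on the symptoms described, not a verified fix): pFrame->pts is reset to f at the start of every second, so the encoder receives the timestamps 0..23 ten times over instead of a monotonically increasing sequence, which can produce exactly the kind of distortion that appears after the first second. Below is a minimal sketch of a strictly increasing pts, reusing the question's variable names and the same (now deprecated) avcodec_encode_video2() API; next_pts is a hypothetical counter added here:
/* Sketch: keep pts strictly increasing across the whole encode.
   Assumes pContext->time_base is one tick per frame, e.g. {1, 24}. */
int64_t next_pts = 0;                      /* hypothetical running counter */
for (int second = 1; second <= 10; ++second) {
    /* ...load and convert the image for this second... */
    for (int f = 0; f < 24; ++f) {
        av_init_packet(&pkt);
        pkt.data = NULL;
        pkt.size = 0;
        pFrame->pts = next_pts++;          /* never reset between seconds */
        ret = avcodec_encode_video2(pContext, &pkt, pFrame, &got_output);
        if (got_output) {
            fwrite(pkt.data, 1, pkt.size, outputFile);
            av_free_packet(&pkt);
        }
    }
}
Note also that fwrite() of the raw packets yields a bare H.264 elementary stream rather than a playable container; if the target is an .mp4 or .avi file, the packets need to go through libavformat (avformat_write_header() / av_interleaved_write_frame()) instead.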
-
How to convert ffmpeg video frame to YUV444?
21 October 2019, by Edward Severinsen
I have been following a tutorial on how to use ffmpeg and SDL to make a simple video player with no audio (yet). While working through the tutorial I realized it was out of date, and many of the functions it used, for both ffmpeg and SDL, were deprecated. So I searched for an up-to-date solution and found a Stack Overflow answer that completed what the tutorial was missing.
However, it uses YUV420, which is of lower quality. I want to implement YUV444, and after studying chroma subsampling for a bit and looking at the different YUV formats, I am confused about how to implement it. From what I understand, YUV420 carries a quarter of the chroma information of YUV444. YUV444 means every pixel has its own chroma sample and is therefore more detailed, while YUV420 means pixels are grouped together and share a chroma sample, and is therefore less detailed.
And from what I understand, the different YUV formats (420, 422, 444) differ in the way they order Y, U, and V. All of this is a bit overwhelming because I haven't done much with codecs, conversions, etc. Any help would be much appreciated, and if additional info is needed please let me know before downvoting.
Here is the code from the answer I mentioned concerning the conversion to YUV420:
texture = SDL_CreateTexture(
    renderer,
    SDL_PIXELFORMAT_YV12,
    SDL_TEXTUREACCESS_STREAMING,
    pCodecCtx->width,
    pCodecCtx->height
);
if (!texture) {
    fprintf(stderr, "SDL: could not create texture - exiting\n");
    exit(1);
}

// initialize SWS context for software scaling
sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
    pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
    AV_PIX_FMT_YUV420P,
    SWS_BILINEAR,
    NULL,
    NULL,
    NULL);

// set up YV12 pixel array (12 bits per pixel)
yPlaneSz = pCodecCtx->width * pCodecCtx->height;
uvPlaneSz = pCodecCtx->width * pCodecCtx->height / 4;
yPlane = (Uint8*)malloc(yPlaneSz);
uPlane = (Uint8*)malloc(uvPlaneSz);
vPlane = (Uint8*)malloc(uvPlaneSz);
if (!yPlane || !uPlane || !vPlane) {
    fprintf(stderr, "Could not allocate pixel buffers - exiting\n");
    exit(1);
}

uvPitch = pCodecCtx->width / 2;
while (av_read_frame(pFormatCtx, &packet) >= 0) {
    // Is this a packet from the video stream?
    if (packet.stream_index == videoStream) {
        // Decode video frame
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

        // Did we get a video frame?
        if (frameFinished) {
            AVPicture pict;
            pict.data[0] = yPlane;
            pict.data[1] = uPlane;
            pict.data[2] = vPlane;
            pict.linesize[0] = pCodecCtx->width;
            pict.linesize[1] = uvPitch;
            pict.linesize[2] = uvPitch;

            // Convert the image into the YUV format that SDL uses
            sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                pFrame->linesize, 0, pCodecCtx->height, pict.data,
                pict.linesize);

            SDL_UpdateYUVTexture(
                texture,
                NULL,
                yPlane,
                pCodecCtx->width,
                uPlane,
                uvPitch,
                vPlane,
                uvPitch
            );

            SDL_RenderClear(renderer);
            SDL_RenderCopy(renderer, texture, NULL, NULL);
            SDL_RenderPresent(renderer);
        }
    }

    // Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);
    SDL_PollEvent(&event);
    switch (event.type) {
        case SDL_QUIT:
            SDL_DestroyTexture(texture);
            SDL_DestroyRenderer(renderer);
            SDL_DestroyWindow(screen);
            SDL_Quit();
            exit(0);
            break;
        default:
            break;
    }
}

// Free the YUV frame
av_frame_free(&pFrame);
free(yPlane);
free(uPlane);
free(vPlane);

// Close the codec
avcodec_close(pCodecCtx);
avcodec_close(pCodecCtxOrig);

// Close the video file
avformat_close_input(&pFormatCtx);

EDIT:
After more research I learned that YUV420 is stored with all the Y bytes first, then the U and V bytes one after another, as illustrated by this image:
(image of the YUV420 byte layout; source: wikimedia.org)
However, I also learned that YUV444 is stored in the order U, Y, V, repeating, as a second picture showed (image not reproduced here).
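One point that may help untangle the layouts (an editorial aside, not part of the original question): the packed U-Y-V ordering described above corresponds to packed formats such as AV_PIX_FMT_UYVY422, whereas ffmpeg's AV_PIX_FMT_YUV444P is planar, i.e. three separate full-size planes, just like YUV420P but without subsampling. A small sketch using the libavutil helper av_image_get_buffer_size() makes the size difference concrete; the dimensions are arbitrary examples:
/* Sketch: per-frame buffer sizes for planar YUV420P vs YUV444P. */
#include <libavutil/imgutils.h>
#include <stdio.h>

int main(void)
{
    int w = 1024, h = 768;   /* arbitrary example dimensions */
    /* YUV420P: w*h luma plus two (w/2)*(h/2) chroma planes = w*h*3/2 bytes */
    printf("YUV420P: %d bytes\n",
           av_image_get_buffer_size(AV_PIX_FMT_YUV420P, w, h, 1));
    /* YUV444P: three full w*h planes = w*h*3 bytes */
    printf("YUV444P: %d bytes\n",
           av_image_get_buffer_size(AV_PIX_FMT_YUV444P, w, h, 1));
    return 0;
}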
I tried changing some things around in the code:
// I changed SDL_PIXELFORMAT_YV12 to SDL_PIXELFORMAT_UYVY
// to reflect the order of YUV444
texture = SDL_CreateTexture(
    renderer,
    SDL_PIXELFORMAT_UYVY,
    SDL_TEXTUREACCESS_STREAMING,
    pCodecCtx->width,
    pCodecCtx->height
);
if (!texture) {
    fprintf(stderr, "SDL: could not create texture - exiting\n");
    exit(1);
}

// Changed AV_PIX_FMT_YUV420P to AV_PIX_FMT_YUV444P
// for rather obvious reasons
sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
    pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
    AV_PIX_FMT_YUV444P,
    SWS_BILINEAR,
    NULL,
    NULL,
    NULL);

// There are as many Y, U and V bytes as pixels, so I
// made yPlaneSz and uvPlaneSz equal to the number of pixels
yPlaneSz = pCodecCtx->width * pCodecCtx->height;
uvPlaneSz = pCodecCtx->width * pCodecCtx->height;
yPlane = (Uint8*)malloc(yPlaneSz);
uPlane = (Uint8*)malloc(uvPlaneSz);
vPlane = (Uint8*)malloc(uvPlaneSz);
if (!yPlane || !uPlane || !vPlane) {
    fprintf(stderr, "Could not allocate pixel buffers - exiting\n");
    exit(1);
}

uvPitch = pCodecCtx->width * 2;
while (av_read_frame(pFormatCtx, &packet) >= 0) {
    // Is this a packet from the video stream?
    if (packet.stream_index == videoStream) {
        // Decode video frame
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

        // Rearranged the order of the planes to reflect UYV order,
        // then set linesize to the number of Y, U and V bytes per row
        if (frameFinished) {
            AVPicture pict;
            pict.data[0] = uPlane;
            pict.data[1] = yPlane;
            pict.data[2] = vPlane;
            pict.linesize[0] = pCodecCtx->width;
            pict.linesize[1] = pCodecCtx->width;
            pict.linesize[2] = pCodecCtx->width;

            // Convert the image into the YUV format that SDL uses
            sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                pFrame->linesize, 0, pCodecCtx->height, pict.data,
                pict.linesize);

            SDL_UpdateYUVTexture(
                texture,
                NULL,
                yPlane,
                1,
                uPlane,
                uvPitch,
                vPlane,
                uvPitch
            );
            //.................................................

But now I get an access violation at the call to SDL_UpdateYUVTexture... I'm honestly not sure what's wrong. I think it may have to do with setting AVPicture pict's members data and linesize improperly, but I'm not positive.
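A hedged note on the crash, based on the SDL documentation rather than on testing this exact code: SDL_UpdateYUVTexture is only specified for planar YV12/IYUV textures, and SDL_PIXELFORMAT_UYVY is a packed 4:2:2 format, so SDL2 offers no texture format that matches planar YUV444 at all; calling the planar update on a packed texture is a plausible source of the access violation. A common workaround that preserves full chroma resolution is to have swscale output RGB24 and upload it with the generic SDL_UpdateTexture. The sketch below reuses the question's variable names and assumes the surrounding setup from the original code:
// Sketch: full-chroma display path via RGB24 instead of a YUV444
// layout SDL cannot accept. Variable names follow the question.
texture = SDL_CreateTexture(renderer,
    SDL_PIXELFORMAT_RGB24,              // packed RGB, no chroma subsampling
    SDL_TEXTUREACCESS_STREAMING,
    pCodecCtx->width, pCodecCtx->height);

sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
    pCodecCtx->pix_fmt,
    pCodecCtx->width, pCodecCtx->height,
    AV_PIX_FMT_RGB24,                   // full-resolution chroma
    SWS_BILINEAR, NULL, NULL, NULL);

int rgbPitch = pCodecCtx->width * 3;    // 3 bytes per pixel, one packed plane
uint8_t *rgbBuf = (uint8_t *)malloc((size_t)rgbPitch * pCodecCtx->height);

// Inside the decode loop, once a frame is finished:
uint8_t *dst[1] = { rgbBuf };
int dstStride[1] = { rgbPitch };
sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
          pFrame->linesize, 0, pCodecCtx->height, dst, dstStride);
SDL_UpdateTexture(texture, NULL, rgbBuf, rgbPitch);
If planar YUV444 data is needed for processing rather than display, the sws_scale call to AV_PIX_FMT_YUV444P in the edited code above is fine as such (with all three linesize entries equal to the width); only the SDL upload path has no matching texture format.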