
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (45)
-
Keeping control of your media in your hands
13 April 2011, by
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid referencing Web 2.0 and the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible, and development is based on expanding the (...)
-
The plugin: Podcasts.
14 July 2010, by
The problem of podcasting is, once again, a problem that reveals the state of standardization of data transport on the Internet.
Two interesting formats exist: the one developed by Apple, heavily geared towards the use of iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "free" and is notably supported by Yahoo and the Miro software.
File types supported in the feeds
Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)
-
Images
15 May 2013
On other sites (4714)
-
Using OpenMAX (IL?) for audio/video decoding on Android
14 September 2012, by Christopher Corsi
Many of the newer hardware platforms running Android, in particular NVIDIA's Tegra 2, support OpenMAX for media acceleration. It's effectively impossible on today's devices to decode 720p video without this support, but the number of demuxers supported on Android is quite slim. The only public API I've been able to find has been through the MediaPlayer class in the Android SDK. There are multiple places in the Android source tree with OpenMAX-related tidbits, however.
On my device (Samsung Galaxy Tab 10.1) I've got access to hardware decoders through a multitude of OpenMAX libs in /system/lib, and it would be great to interface my video application with these. Can anyone point me to information on implementing a decoder powered by OpenMAX? I've found the documentation from Khronos, but nothing in the way of example code or tutorials. I've already got demuxing and even software decoding taken care of (via libavcodec/libavformat); I'd just like to put hooks in to enable hardware decoding. I'm also assuming it would be necessary to link directly to the libraries available on the device, which makes it pretty lackluster in terms of portability, but it works.
Alternatively, I'm interested in anything anyone knows about private APIs for accessing the video decoding available on Tegra 2 devices, especially if there's a VDPAU-like interface such as the one NVIDIA implements for desktop Linux distributions, since there's plenty available for that - but I wasn't able to find shared libraries that indicate such support.
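For reference, a minimal sketch of the usual first step when talking to a vendor OpenMAX IL core directly: dlopen() the core library and resolve the standard IL entry points. The library path and the component name below are device-specific guesses, not documented values; real code would enumerate components with OMX_ComponentNameEnum() instead.
#include <dlfcn.h>
#include <OMX_Core.h> // Khronos OpenMAX IL headers (not part of the NDK; ship them with the project)

// Device-specific guesses - the IL core library and the decoder component name differ per vendor.
static const char *core_lib = "/system/lib/libnvomx.so"; // hypothetical name on Tegra 2
static const char *component = "OMX.Nvidia.h264.decode"; // hypothetical; enumerate with OMX_ComponentNameEnum()

typedef OMX_ERRORTYPE (*OmxInitFn)(void);
typedef OMX_ERRORTYPE (*OmxGetHandleFn)(OMX_HANDLETYPE *, OMX_STRING, OMX_PTR, OMX_CALLBACKTYPE *);

int open_il_decoder(OMX_HANDLETYPE *out, OMX_CALLBACKTYPE *callbacks, void *app_data)
{
    void *dl = dlopen(core_lib, RTLD_NOW);
    if (!dl)
        return -1;

    OmxInitFn omx_init = (OmxInitFn)dlsym(dl, "OMX_Init");
    OmxGetHandleFn omx_get_handle = (OmxGetHandleFn)dlsym(dl, "OMX_GetHandle");
    if (!omx_init || !omx_get_handle || omx_init() != OMX_ErrorNone)
        return -1;

    // callbacks must fill in EventHandler / EmptyBufferDone / FillBufferDone; after this,
    // port parameters are negotiated with OMX_GetParameter/OMX_SetParameter and demuxed
    // H.264 NAL units are fed in via OMX_EmptyThisBuffer.
    return omx_get_handle(out, (OMX_STRING)component, app_data, callbacks) == OMX_ErrorNone ? 0 : -1;
}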
-
How to publish a self-made stream with ffmpeg and C++ to an RTMP server?
25 October 2013, by Alexandr R
Have a nice day to you, people!
I am writing an application for Windows that will capture the screen and send the stream to a Wowza server over RTMP (for broadcasting). My application uses ffmpeg and Qt.
I capture the screen with WinAPI, convert the buffer to YUV444 (because it is the simplest) and encode each frame as described in decoding_encoding.c (from the FFmpeg examples):
///////////////////////////
//Encoder initialization
///////////////////////////
avcodec_register_all();
codec = avcodec_find_encoder(AV_CODEC_ID_H264);
c = avcodec_alloc_context3(codec);
c->width = scr_width;
c->height = scr_height;
c->bit_rate = 400000;
int base_num = 1;
int base_den = 1; // for one frame per second
c->time_base = (AVRational){base_num, base_den};
c->gop_size = 10;
c->max_b_frames = 1;
c->pix_fmt = AV_PIX_FMT_YUV444P;
av_opt_set(c->priv_data, "preset", "slow", 0);
avcodec_open2(c, codec, NULL); // open the encoder before feeding it frames
frame = avcodec_alloc_frame();
frame->format = c->pix_fmt;
frame->width = c->width;
frame->height = c->height;
av_image_alloc(frame->data, frame->linesize, c->width, c->height, c->pix_fmt, 32); // picture buffer that RGBtoYUV() writes into (freed with av_freep below)
AVPacket pkt;
int got_output = 0, ret = 0;
for (int counter = 0; counter < 10; counter++)
{
    ///////////////////////////
    // Capture the screen
    ///////////////////////////
    GetCapScr(shotbuf, scr_width, scr_height); // result: shotbuf is filled with screen data from the HBITMAP
    ///////////////////////////
    // Convert the buffer to YUV444 (standard formula)
    // Hand-made function, because of problems preparing the HBITMAP buffer for swscale
    ///////////////////////////
    RGBtoYUV(shotbuf, frame->linesize, frame->data, scr_width, scr_height); // result in frame->data
    ///////////////////////////
    // Encode the screenshot
    ///////////////////////////
    av_init_packet(&pkt);
    pkt.data = NULL; // packet data will be allocated by the encoder
    pkt.size = 0;
    frame->pts = counter;
    avcodec_encode_video2(c, &pkt, frame, &got_output);
    if (got_output)
    {
        // I think that sending the packet over RTMP must happen here!
        av_free_packet(&pkt);
    }
}
// Get the delayed frames
for (int got_output = 1, i = 0; got_output; i++)
{
    ret = avcodec_encode_video2(c, &pkt, NULL, &got_output);
    if (ret < 0)
    {
        fprintf(stderr, "Error encoding frame\n");
        exit(1);
    }
    if (got_output)
    {
        // I think that sending the packet over RTMP must happen here!
        av_free_packet(&pkt);
    }
}
///////////////////////////
// Deinitialize the encoder
///////////////////////////
avcodec_close(c);
av_free(c);
av_freep(&frame->data[0]);
avcodec_free_frame(&frame);

I need to send the video stream generated by this code to an RTMP server.
In other words, I need a C/C++ analog of this command:
ffmpeg -re -i "sample.h264" -f flv rtmp://sample.url.com/screen/test_stream
That command works, but I don't want to save the stream to a file; I want to use the ffmpeg libraries to encode the screen capture in real time and send the encoded frames to the RTMP server from inside my own application.
Please give me a small example of how to initialize an AVFormatContext properly and how to send my encoded video AVPackets to the server. Thanks.
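A rough sketch of the missing muxing side, using the same generation of FFmpeg API as the code above; the URL is the placeholder from the command line, error checking is omitted, and the ordering around avcodec_open2() matters because FLV wants the H.264 extradata as a global header:
///////////////////////////
// Muxer initialization (sketch only; done instead of writing to a file)
///////////////////////////
const char *url = "rtmp://sample.url.com/screen/test_stream"; // placeholder from the command above

av_register_all();
avformat_network_init(); // required once before any rtmp:// I/O

AVFormatContext *oc = NULL;
avformat_alloc_output_context2(&oc, NULL, "flv", url); // "flv" is what -f flv selects

// FLV needs the SPS/PPS as global extradata, so set this on the encoder
// context before calling avcodec_open2(c, codec, NULL):
if (oc->oformat->flags & AVFMT_GLOBALHEADER)
    c->flags |= CODEC_FLAG_GLOBAL_HEADER;

// ... open the encoder here (avcodec_open2), then describe it to the muxer:
AVStream *st = avformat_new_stream(oc, codec);
avcodec_copy_context(st->codec, c); // copies width/height/extradata etc. from the opened encoder
st->time_base = c->time_base;

avio_open(&oc->pb, url, AVIO_FLAG_WRITE); // connects to the RTMP server
avformat_write_header(oc, NULL);          // starts the stream

///////////////////////////
// Where the loops above say the packet should be sent over RTMP:
///////////////////////////
if (got_output)
{
    pkt.stream_index = st->index;
    // rescale timestamps from the encoder time base to the muxer's stream time base
    pkt.pts = av_rescale_q(pkt.pts, c->time_base, st->time_base);
    pkt.dts = av_rescale_q(pkt.dts, c->time_base, st->time_base);
    av_interleaved_write_frame(oc, &pkt); // takes ownership of the packet data
}

///////////////////////////
// After the delayed-frames loop, before deinitializing the encoder
///////////////////////////
av_write_trailer(oc);
avio_close(oc->pb);
avformat_free_context(oc);
With frame->pts counted in a 1/1 time base the stream would play back at one frame per second; for a real capture, the time base and pts values need to reflect the actual capture rate.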
-
Multiple video sources combined into one
28 September 2011, by Oded
I am looking for an efficient way to do the following:
Using several source videos (of approximately the same length), I need to generate an output video that is composed of all of the original sources, each running in its own area (like a bunch of PiPs in several different sizes). So the end result is that all the originals are running side by side, each in its own area/box.
The source and output need to be flv, and the platform I am using is Windows (dev on Windows 7 64-bit, deployment to Windows Server 2008). I have looked at AviSynth, but unfortunately it can't handle flv, and none of the plugins and FLV splitters I have tried worked.
My current process uses ffmpeg in the following manner:
- Use ffmpeg to generate 25 PNGs per second per video, resizing the originals as needed.
- Use the System.Drawing namespace to combine each set of frames into a new image, starting with a static background, then loading each frame into an Image and drawing it onto the background's Graphics object - this gives me the combined frames.
- Use ffmpeg to combine the generated images into a video.
All this is very I/O-intensive (which is my processing bottleneck at the moment), and I feel there must be a more efficient way to reach my goal. I do not have much experience with video processing and don't know what options are out there.
Can anyone suggest a more efficient way of processing these?
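For reference, the first and last steps of the pipeline above correspond to ffmpeg invocations along these lines (the file names, size and frame rate are placeholders for illustration only):
ffmpeg -i source1.flv -r 25 -s 320x240 frames1/%05d.png
ffmpeg -r 25 -i combined/%05d.png -vcodec flv output.flv
The compositing step in between is where most of the disk I/O described above comes from, since every frame is written out as a PNG and then read back.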