
Media (1)
-
The Pirate Bay from Belgium
1 April 2013
Updated: April 2013
Language: French
Type: Image
Other articles (105)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Contribute to a better visual interface
13 April 2011. MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
-
Encoding and processing into web-friendly formats
13 April 2011. MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for search engine indexing, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
On other sites (6774)
-
Write audio packet to file using ffmpeg
27 February 2017, by iamyz. I am trying to write audio packets to a file using ffmpeg. The source device sends a packet at a fixed interval, e.g.:
The first packet has a timestamp of 00:00:00
The second packet has a timestamp of 00:00:00.5000000
The third packet has a timestamp of 00:00:01
And so on... That means two packets per second.
I want to encode those packets and write them to a file.
I am following the FFmpeg example muxing.c.
There is no error while encoding and writing, but the output file has only 2 seconds of audio and it plays back far too fast.
The video frames are correct according to the settings.
I think the problem is related to the calculation of the packet's pts, dts and duration.
How should I calculate proper values for pts, dts and duration? Or is the problem caused by something else?
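To illustrate the bookkeeping involved, here is a pure-Python sketch of what FFmpeg's av_rescale_q does for audio pts; the 48000 Hz sample rate, the 1/90000 stream time base and the 1024-sample frame size are made-up illustration values, not taken from the question:

```python
from fractions import Fraction

def rescale_q(value, src_tb, dst_tb):
    # Rescale a timestamp between two time bases, as av_rescale_q does.
    return int(value * Fraction(*src_tb) / Fraction(*dst_tb))

sample_rate = 48000
codec_tb = (1, sample_rate)   # audio encoders typically tick once per sample
stream_tb = (1, 90000)        # a hypothetical stream time base

# A frame's pts is the number of samples already written, expressed in the
# codec time base, so samples_count must advance by nb_samples every frame.
samples_count = 0
nb_samples = 1024
pts_values = []
for _ in range(3):
    pts_values.append(rescale_q(samples_count, (1, sample_rate), codec_tb))
    samples_count += nb_samples

print(pts_values)                              # [0, 1024, 2048]

# On muxing, the packet timestamps are rescaled again from the codec time
# base into the stream time base (the av_packet_rescale_ts step):
print(rescale_q(1024, codec_tb, stream_tb))    # 1920
```

Note that if the source really delivers half a second of audio per packet, each frame should advance samples_count by sample_rate / 2 samples; advancing it by too few samples per frame produces exactly the short, sped-up output described above.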
Code:
void AudioWriter::WriteAudioChunk(IntPtr chunk, int length, TimeSpan timestamp)
{
    int buffer_size = av_samples_get_buffer_size(NULL, outputStream->tmp_frame->channels,
                                                 outputStream->tmp_frame->nb_samples,
                                                 outputStream->AudioStream->codec->sample_fmt, 0);
    uint8_t *audioData = reinterpret_cast<uint8_t*>(chunk.ToPointer());
    int ret = avcodec_fill_audio_frame(outputStream->tmp_frame, outputStream->Channels,
                                       outputStream->AudioStream->codec->sample_fmt,
                                       audioData, buffer_size, 1);
    if (ret < 0)
        throw gcnew System::IO::IOException("An audio file was not opened yet.");
    write_audio_frame(outputStream->FormatContext, outputStream, audioData);
}

static int write_audio_frame(AVFormatContext *oc, AudioWriterData^ ost, uint8_t *audioData)
{
    AVCodecContext *c;
    AVPacket pkt = { 0 };
    int ret;
    int got_packet;
    int dst_nb_samples;

    av_init_packet(&pkt);
    c = ost->AudioStream->codec;
    AVFrame *frame = ost->tmp_frame;

    if (frame)
    {
        /* source and destination rates are identical here, so this should
           equal frame->nb_samples */
        dst_nb_samples = av_rescale_rnd(swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples,
                                        c->sample_rate, c->sample_rate, AV_ROUND_UP);
        if (dst_nb_samples != frame->nb_samples)
            throw gcnew Exception("dst_nb_samples != frame->nb_samples");

        ret = av_frame_make_writable(ost->AudioFrame);
        if (ret < 0)
            throw gcnew Exception("Unable to make the frame writable.");

        ret = swr_convert(ost->swr_ctx, ost->AudioFrame->data, dst_nb_samples,
                          (const uint8_t **)frame->data, frame->nb_samples);
        if (ret < 0)
            throw gcnew Exception("Unable to convert to the destination format.");

        frame = ost->AudioFrame;
        /* pts is the running sample count, expressed in the codec time base */
        AVRational timebase = { 1, c->sample_rate };
        frame->pts = av_rescale_q(ost->samples_count, timebase, c->time_base);
        ost->samples_count += dst_nb_samples;
    }

    ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
    if (ret < 0)
        throw gcnew Exception("Error encoding audio frame.");

    if (got_packet)
    {
        ret = write_frame(oc, &c->time_base, ost->AudioStream, &pkt);
        if (ret < 0)
            throw gcnew Exception("Audio is not written.");
    }
    else
        throw gcnew Exception("Audio packet encode failed.");

    return (ost->AudioFrame || got_packet) ? 0 : 1;
}

static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
{
    /* rescale packet timestamps from the codec to the stream time base */
    av_packet_rescale_ts(pkt, *time_base, st->time_base);
    pkt->stream_index = st->index;
    return av_interleaved_write_frame(fmt_ctx, pkt);
}
-
How to fetch both live video frame and timestamp from ffmpeg to python on Windows
6 March 2017, by vijiboy. Searching for an alternative, as OpenCV does not provide timestamps for a live camera stream (on Windows), which are required in my computer vision algorithm, I found ffmpeg and this excellent article https://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
The solution uses ffmpeg, accessing its standard output (stdout) stream. I extended it to read the standard error (stderr) stream as well. Working up the Python code on Windows, I received the video frames from ffmpeg's stdout, but stderr freezes after delivering the showinfo video filter details (timestamp) for the first frame.
I recall seeing somewhere on the ffmpeg forum that video filters like showinfo are bypassed when output is redirected. Is this why the following code does not work as expected?
Expected: it should write video frames to disk as well as print the timestamp details.
Actual: it writes the video frames but does not get the timestamp (showinfo) details. Here's the code I tried:
import subprocess as sp
import numpy
import cv2

command = ['ffmpeg',
           '-i', 'e:\sample.wmv',
           '-pix_fmt', 'rgb24',
           '-vcodec', 'rawvideo',
           '-vf', 'showinfo',       # video filter - showinfo will provide frame timestamps
           '-an', '-sn',            # -an, -sn disable audio and subtitle processing respectively
           '-f', 'image2pipe', '-'] # we need to output to a pipe

# TODO someone on the ffmpeg forum said video filters (e.g. showinfo) are
# bypassed when stdout is redirected to pipes???
pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE)

for i in range(10):
    raw_image = pipe.stdout.read(1280*720*3)
    img_info = pipe.stderr.read(244)  # 244 characters is the current output of the showinfo video filter
    print "showinfo output", img_info
    image1 = numpy.fromstring(raw_image, dtype='uint8')
    image2 = image1.reshape((720, 1280, 3))
    # write the video frame to file just to verify
    videoFrameName = 'Video_Frame{0}.png'.format(i)
    cv2.imwrite(videoFrameName, image2)
    # throw away the data in the pipe's buffer.
    pipe.stdout.flush()
    pipe.stderr.flush()

So how can I still get the frame timestamps from ffmpeg into my Python code so that they can be useded in my computer vision algorithm?
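One likely culprit is blocking, fixed-size stderr reads rather than the filter being bypassed: pipe.stderr.read(244) blocks until exactly 244 bytes arrive, and ffmpeg's stderr output is buffered and variable-length. A common workaround is to drain stderr line by line on a background thread and parse pts_time with a regex. Below is a Python 3 sketch; the child process only mimics showinfo's stderr lines (its format string and values are illustrative assumptions), and with real ffmpeg you would pass the command list from the question instead:

```python
import re
import subprocess
import sys
import threading

# showinfo writes lines like "[Parsed_showinfo_0 @ 0x...] n:0 pts:0 pts_time:0.04 ..."
PTS_TIME = re.compile(r'pts_time:(\d+(?:\.\d+)?)')

def drain_stderr(stream, timestamps):
    # Read stderr line by line so the pipe never fills up and stalls the
    # child process, collecting every pts_time value that appears.
    for raw in iter(stream.readline, b''):
        match = PTS_TIME.search(raw.decode('utf-8', errors='replace'))
        if match:
            timestamps.append(float(match.group(1)))

# Stand-in child process that mimics showinfo's stderr output; replace this
# with the real ffmpeg command list to use it for actual capture.
child = [sys.executable, '-c',
         "import sys\n"
         "for i in range(3):\n"
         "    sys.stderr.write('[Parsed_showinfo_0 @ 0x0] n:%d pts:%d pts_time:%g\\n' % (i, i * 512, i * 0.04))\n"]
pipe = subprocess.Popen(child, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

timestamps = []
reader = threading.Thread(target=drain_stderr, args=(pipe.stderr, timestamps))
reader.start()

frames = pipe.stdout.read()   # the main thread reads raw frames from stdout here
reader.join()
pipe.wait()
print(timestamps)             # [0.0, 0.04, 0.08]
```

Because stdout (frames) and stderr (timestamps) are consumed independently, neither pipe can back up and freeze ffmpeg, which matches the symptom of stderr stalling after the first frame.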