
Media (1)
-
Collections - Quick creation form
19 February 2013, by
Updated: February 2013
Language: French
Type: Image
Other articles (34)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, provided your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out.
-
Accepted formats
28 January 2010, by
The following commands give information about the formats and codecs supported by the local ffmpeg installation (a short programmatic sketch of the same check follows this excerpt):
ffmpeg -codecs
ffmpeg -formats
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use:
h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
m4v: raw MPEG-4 video format
flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
Theora
wmv:
Possible output video formats
Initially, (...)
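The excerpt above only covers the command-line checks; as a hedged companion (an addition, not something the article describes), the same availability check can be done programmatically against the local libavcodec build. A minimal sketch, assuming FFmpeg 4.0 or later (so avcodec_register_all() is not needed) and using "h264" and "libx264" purely as example codec names:

// Hedged sketch: look up codec names against the local libavcodec build.
// Assumes FFmpeg 4.0+; the codec names below are illustrative only.
extern "C" {
#include <libavcodec/avcodec.h>
}
#include <cstdio>

int main()
{
    // The names are the same ones printed by "ffmpeg -codecs".
    const AVCodec *dec = avcodec_find_decoder_by_name("h264");
    const AVCodec *enc = avcodec_find_encoder_by_name("libx264");
    std::printf("h264 decoder: %s\n", dec ? "available" : "missing");
    std::printf("libx264 encoder: %s\n", enc ? "available" : "missing");
    return 0;
}

-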
List of compatible distributions
26 April 2011, by
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.
Distribution name / Version name / Version number
Debian / Squeeze / 6.x.x
Debian / Wheezy / 7.x.x
Debian / Jessie / 8.x.x
Ubuntu / The Precise Pangolin / 12.04 LTS
Ubuntu / The Trusty Tahr / 14.04
If you want to help us improve this list, you can give us access to a machine running a distribution that is not mentioned above, or send us the fixes needed to add it (...)
On other sites (3719)
-
OpenCV's VideoCapture::open Video Source Dialog
13 November 2015, by swtdrgn
In my current project, when I call VideoCapture::open(camera device index) while the camera is in use by another program, a "Video Source" dialog appears, and the call returns true when I select a device that is already in use. However, in my previous experimental project, calling VideoCapture::open(camera device index) does not show this dialog. I want to know what causes the "Video Source" dialog to appear and why this program behaves differently from the experimental one. This is the source code of the experimental project:
// Headers this snippet appears to need (not shown in the original post;
// assuming an OpenCV 2.4-era build, since the old C constants are used):
#include <opencv2/opencv.hpp>
#include <boost/thread.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

using namespace cv;
using namespace std;
using namespace boost::posix_time;

// captureFunc is defined elsewhere in the project (not shown); its signature is
// assumed from its usage below. It runs on a separate thread and keeps grabbing
// frames from 'capture' into 'frame'.
void captureFunc(Mat *frame, VideoCapture *capture);

int main (int argc, char *argv[])
{
//vars
time_duration td, td1;
ptime nextFrameTimestamp, currentFrameTimestamp, initialLoopTimestamp, finalLoopTimestamp;
int delayFound = 0;
int totalDelay= 0;
// initialize capture on default source
VideoCapture capture;
std::cout << "capture.open(0): " << capture.open(0) << std::endl;
std::cout << "NOOO" << std::endl;
namedWindow("video", 1);
// set framerate to record and capture at
int framerate = 15;
// Get the properties from the camera
double width = capture.get(CV_CAP_PROP_FRAME_WIDTH);
double height = capture.get(CV_CAP_PROP_FRAME_HEIGHT);
// print camera frame size
//cout << "Camera properties\n";
//cout << "width = " << width << endl <<"height = "<< height << endl;
// Create a matrix to keep the retrieved frame
Mat frame;
// Create the video writer
VideoWriter video("capture.avi",0, framerate, cvSize((int)width,(int)height) );
// initialize initial timestamps
nextFrameTimestamp = microsec_clock::local_time();
currentFrameTimestamp = nextFrameTimestamp;
td = (currentFrameTimestamp - nextFrameTimestamp);
// start thread to begin capture and populate Mat frame
boost::thread captureThread(captureFunc, &frame, &capture);
// loop infinitely
for(bool q=true;q;)
{
if(frame.empty()){continue;}
//if(cvWaitKey( 5 ) == 'q'){ q=false; }
// wait for X microseconds until 1second/framerate time has passed after previous frame write
while(td.total_microseconds() < 1000000/framerate){
//determine current elapsed time
currentFrameTimestamp = microsec_clock::local_time();
td = (currentFrameTimestamp - nextFrameTimestamp);
if(cvWaitKey( 5 ) == 'q'){
std::cout << "B" << std::endl;
q=false;
boost::posix_time::time_duration timeout = boost::posix_time::milliseconds(0);
captureThread.timed_join(timeout);
break;
}
}
// determine time at start of write
initialLoopTimestamp = microsec_clock::local_time();
// Save frame to video
video << frame;
imshow("video", frame);
//write previous and current frame timestamp to console
cout << nextFrameTimestamp << " " << currentFrameTimestamp << " ";
// add 1second/framerate time for next loop pause
nextFrameTimestamp = nextFrameTimestamp + microsec(1000000/framerate);
// reset time_duration so while loop engages
td = (currentFrameTimestamp - nextFrameTimestamp);
//determine and print out delay in ms, should be less than 1000/FPS
//occasionally, if delay is larger than said value, correction will occur
//if delay is consistently larger than said value, then CPU is not powerful
// enough to capture/decompress/record/compress that fast.
finalLoopTimestamp = microsec_clock::local_time();
td1 = (finalLoopTimestamp - initialLoopTimestamp);
delayFound = td1.total_milliseconds();
cout << delayFound << endl;
//output will be in following format
//[TIMESTAMP OF PREVIOUS FRAME] [TIMESTAMP OF NEW FRAME] [TIME DELAY OF WRITING]
if(!q || cvWaitKey( 5 ) == 'q'){
std::cout << "C" << std::endl;
q=false;
boost::posix_time::time_duration timeout = boost::posix_time::milliseconds(0);
captureThread.timed_join(timeout);
break;
}
}
// Exit
return 0;
}
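As a side note (an assumption about the cause, not something the post establishes): on Windows the "Video Source" window is the Video for Windows (VFW) source-selection dialog, so whether it appears can depend on which capture backend OpenCV ends up using. A minimal sketch of requesting a backend explicitly, assuming OpenCV 3.x, where the documented convention for VideoCapture::open(int) is the device index plus a CAP_* domain offset:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture capture;

    // Request the DirectShow backend explicitly: device 0 plus the
    // CAP_DSHOW domain offset (OpenCV's documented convention for open(int)).
    bool opened = capture.open(0 + cv::CAP_DSHOW);
    std::cout << "opened via DirectShow: " << opened << std::endl;

    // For comparison, 0 + cv::CAP_VFW would request the Video for Windows
    // backend, which is the one that owns the "Video Source" dialog.
    return 0;
}

-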
FFmpeg encoding produces slightly incompatible MKV/MP4 container
11 June 2018, by james2048
I've been using the FFmpeg libraries to read and write media files through the C API.
So far, reading has been fairly straightforward. I can read frames, convert them to RGB, process them, and convert them back to YUV420 for encoding.
The encoded files play back fine in VLC media player, and in Windows Media Player if I have a codec pack installed. However, they behave strangely: the stock Windows 10 player won't play them, and neither will Adobe Premiere. Thumbnailers don't work on them either. Basically, nothing other than VLC or FFmpeg itself seems able to play or process the files. I have seen this with both MP4 and MKV, so it is not a format-specific issue.
The problems go away once you remux the file with FFmpeg, for example "ffmpeg -i input.mkv -c copy output.mkv". Everything then plays the file correctly. The "remuxing.c" sample from the official examples works as well, with the same library version and compilers that I'm using (Visual Studio 2017, FFmpeg compiled with MinGW); it also fixes the file and makes it playable in all software.
I'm not sure what could be causing this, and I don't understand what the remuxing "fixed". It must be a container issue, as the frames aren't touched by remuxing.
I have analysed the output MKVs with ffprobe -show_packets. Remuxing seems to have nudged the packet timestamps by a small constant amount, and the output stream now has is_avc=true and nal_length_size=4 instead of is_avc=false and nal_length_size=0, but apart from that the files are identical.
Below is the ffprobe output for the last 3 test packets, plus the stream and format info, for both files. As you can see, they are identical except for a couple of fields, but something in here must have been "fixed" during remuxing to make the file work.
[PACKET]
codec_type=video
stream_index=0
pts=59050
pts_time=59.050000
dts=58890
dts_time=58.890000
duration=1
duration_time=0.001000
convergence_duration=N/A
convergence_duration_time=N/A
size=427
pos=277358
flags=__
[/PACKET]
[PACKET]
codec_type=video
stream_index=0
pts=58970
pts_time=58.970000
dts=58970
dts_time=58.970000
duration=1
duration_time=0.001000
convergence_duration=N/A
convergence_duration_time=N/A
size=205
pos=277792
flags=__
[/PACKET]
[PACKET]
codec_type=video
stream_index=0
pts=59130
pts_time=59.130000
dts=59050
dts_time=59.050000
duration=1
duration_time=0.001000
convergence_duration=N/A
convergence_duration_time=N/A
size=268
pos=278004
flags=__
[/PACKET]
[STREAM]
index=0
codec_name=h264
codec_long_name=H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
profile=Main
codec_type=video
codec_time_base=1/2000
codec_tag_string=[0][0][0][0]
codec_tag=0x0000
width=720
height=576
coded_width=720
coded_height=576
has_b_frames=2
sample_aspect_ratio=N/A
display_aspect_ratio=N/A
pix_fmt=yuv420p
level=50
color_range=unknown
color_space=unknown
color_transfer=unknown
color_primaries=unknown
chroma_location=left
field_order=progressive
timecode=N/A
refs=1
is_avc=false
nal_length_size=0
id=N/A
r_frame_rate=299/12
avg_frame_rate=1000/1
time_base=1/1000
start_pts=0
start_time=0.000000
duration_ts=N/A
duration=N/A
bit_rate=N/A
max_bit_rate=N/A
bits_per_raw_sample=8
nb_frames=N/A
nb_read_frames=N/A
nb_read_packets=737
DISPOSITION:default=1
DISPOSITION:dub=0
DISPOSITION:original=0
DISPOSITION:comment=0
DISPOSITION:lyrics=0
DISPOSITION:karaoke=0
DISPOSITION:forced=0
DISPOSITION:hearing_impaired=0
DISPOSITION:visual_impaired=0
DISPOSITION:clean_effects=0
DISPOSITION:attached_pic=0
DISPOSITION:timed_thumbnails=0
TAG:DURATION=00:00:59.211000000
[/STREAM]
[FORMAT]
filename=testEncLeft.mkv
nb_streams=1
nb_programs=0
format_name=matroska,webm
format_long_name=Matroska / WebM
start_time=0.000000
duration=59.211000
size=278349
bit_rate=37607
probe_score=100
TAG:COMMENT=Slickline Player Export
TAG:ENCODER=Lavf57.83.100
[/FORMAT]

And here is the info after remuxing, which works:
[PACKET]
codec_type=video
stream_index=0
pts=59050
pts_time=59.050000
dts=58890
dts_time=58.890000
duration=1
duration_time=0.001000
convergence_duration=N/A
convergence_duration_time=N/A
size=427
pos=277418
flags=__
[/PACKET]
[PACKET]
codec_type=video
stream_index=0
pts=58970
pts_time=58.970000
dts=58970
dts_time=58.970000
duration=1
duration_time=0.001000
convergence_duration=N/A
convergence_duration_time=N/A
size=205
pos=277852
flags=__
[/PACKET]
[PACKET]
codec_type=video
stream_index=0
pts=59130
pts_time=59.130000
dts=59050
dts_time=59.050000
duration=1
duration_time=0.001000
convergence_duration=N/A
convergence_duration_time=N/A
size=268
pos=278064
flags=__
[/PACKET]
[STREAM]
index=0
codec_name=h264
codec_long_name=H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
profile=Main
codec_type=video
codec_time_base=1/2000
codec_tag_string=[0][0][0][0]
codec_tag=0x0000
width=720
height=576
coded_width=720
coded_height=576
has_b_frames=2
sample_aspect_ratio=N/A
display_aspect_ratio=N/A
pix_fmt=yuv420p
level=50
color_range=unknown
color_space=unknown
color_transfer=unknown
color_primaries=unknown
chroma_location=left
field_order=progressive
timecode=N/A
refs=1
is_avc=true
nal_length_size=4
id=N/A
r_frame_rate=299/12
avg_frame_rate=1000/1
time_base=1/1000
start_pts=0
start_time=0.000000
duration_ts=N/A
duration=N/A
bit_rate=N/A
max_bit_rate=N/A
bits_per_raw_sample=8
nb_frames=N/A
nb_read_frames=N/A
nb_read_packets=737
DISPOSITION:default=1
DISPOSITION:dub=0
DISPOSITION:original=0
DISPOSITION:comment=0
DISPOSITION:lyrics=0
DISPOSITION:karaoke=0
DISPOSITION:forced=0
DISPOSITION:hearing_impaired=0
DISPOSITION:visual_impaired=0
DISPOSITION:clean_effects=0
DISPOSITION:attached_pic=0
DISPOSITION:timed_thumbnails=0
TAG:DURATION=00:00:59.212000000
[/STREAM]
[FORMAT]
filename=fixedLeft.mkv
nb_streams=1
nb_programs=0
format_name=matroska,webm
format_long_name=Matroska / WebM
start_time=0.000000
duration=59.212000
size=278409
bit_rate=37615
probe_score=100
TAG:COMMENT=Slickline Player Export
TAG:ENCODER=Lavf58.12.100
[/FORMAT]

Here is how I'm setting up the output context, for reference: it's pretty standard, following the sample code.
int ret;
avformat_alloc_output_context2(&outputFormatCtx, nullptr, nullptr, outFilePath.c_str());
// Check the allocation before touching the context
if (!outputFormatCtx)
{
LOG_AND_THROW("Could not allocate output context");
}
av_dict_set(&outputFormatCtx->metadata, "comment", "FFmpeg Export", 0);
outputVideoStream = avformat_new_stream(outputFormatCtx, nullptr);
if (!outputVideoStream)
{
LOG_AND_THROW("Failed allocating output stream");
}
outputVideoStream->time_base = AVRational{ 1, AV_TIME_BASE }; // Stream timebase will be used by codec
// defaults to "libx264"
AVCodec *outCodec = avcodec_find_encoder_by_name(selectedCodecName.c_str());
if (!outCodec)
{
LOG_AND_THROW("Failed finding output codec");
}
AVDictionary *opts = nullptr;
if (selectedCodecName == "libx264")
{
opts = getX264CodecOptions();
}
encoderCtx = avcodec_alloc_context3(outCodec);
if (!encoderCtx)
{
LOG_AND_THROW("Failed to allocate the encoder context");
}
encoderCtx->width = width;
encoderCtx->height = height;
encoderCtx->pix_fmt = AV_PIX_FMT_YUV420P;
// time base for the frames we will provide to the encoder
encoderCtx->time_base = AVRational{ 1, AV_TIME_BASE };
// convert framerate from double to rational
encoderCtx->framerate = AVRational{ (int)(frameRate * AV_TIME_BASE), AV_TIME_BASE};
// Match encoderCtx time base for the stream
outputVideoStream->time_base = encoderCtx->time_base;
ret = avcodec_open2(encoderCtx, outCodec, &opts);
if (ret < 0)
{
LOG_AND_THROW_PARAM("Cannot open video encoder for stream: %d", ret);
}
// Fill in some params for MP4 stream, details about encoder
ret = avcodec_parameters_from_context(outputVideoStream->codecpar, encoderCtx);
if (ret < 0)
{
LOG_AND_THROW_PARAM("Failed to copy encoder parameters to output stream: %d", ret);
}
if (outputFormatCtx->oformat->flags & AVFMT_GLOBALHEADER)
{
encoderCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
av_dump_format(outputFormatCtx, 0, filePath.c_str(), 1);
// End of encoder settings, setting up MP4
if (!(outputFormatCtx->oformat->flags & AVFMT_NOFILE))
{
ret = avio_open(&outputFormatCtx->pb, outFilePath.c_str(), AVIO_FLAG_WRITE);
if (ret < 0)
{
LOG_AND_THROW_PARAMSTR("Could not open output file '%s'", outFilePath.c_str());
}
}
ret = avformat_write_header(outputFormatCtx, nullptr);
if (ret < 0)
{
LOG_AND_THROW_PARAM("Error occurred when opening output file for writing: %d", ret);
}

Can anyone help me figure out why the container is not playing properly?
Thanks in advance.
-James
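One hedged reading of the probe output (an assumption, not something the question confirms): is_avc=false with nal_length_size=0 suggests the stream was muxed without global extradata, and in the setup code above AV_CODEC_FLAG_GLOBAL_HEADER is only set after avcodec_open2() and after avcodec_parameters_from_context(), so it cannot take effect. A minimal sketch of the ordering that FFmpeg's own muxing example uses, reusing the variable names from the snippet above:

// Sketch only; error handling omitted. Variable names follow the question's snippet.

// 1) Ask for global headers BEFORE opening the encoder, so the encoder
//    produces extradata (SPS/PPS) instead of relying on in-band headers.
if (outputFormatCtx->oformat->flags & AVFMT_GLOBALHEADER)
{
    encoderCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}

// 2) Open the encoder; with the flag set, encoderCtx->extradata is filled in.
ret = avcodec_open2(encoderCtx, outCodec, &opts);

// 3) Copy the parameters (including extradata) to the stream only now, so the
//    muxer can write proper codec private data (e.g. is_avc=true in Matroska).
ret = avcodec_parameters_from_context(outputVideoStream->codecpar, encoderCtx);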
-
Extracting audio from video command fails for some videos
16 June 2018, by Java Android
I am using the FFmpeg command below to extract audio from a video:

String[] complexCommand = new String[]{"-y", "-ss", "" + startMs / 1000,
"-t", "" + (endMs - startMs) / 1000, "-i", inputFileAbsolutePath,
"-vn", "-ar", "44100", "-ac", "2", "-b:a", "256k", "-f", "mp3",
outputFileAbsolutePath};

It works fine on my device, but through analytics I learned that it fails for many users or for many videos. I can't figure out what is wrong with the command or why it fails so often.