
Other articles (94)
-
Customizing by adding your logo, banner or background image
5 September 2013
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013
Present the changes to your MédiaSPIP, or news about your projects on your MédiaSPIP, using the news section.
In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News item creation form. For a document of the news type, the fields offered by default are: publication date (customize the publication date) (...)
-
Publishing on MédiaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MédiaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MédiaSPIP to find out.
Sur d’autres sites (5922)
-
Could not open encoder using ffmpeg C API for MOV format
22 June 2016, by lupod
Context
I am writing a program for doing some video processing on an input file. I wrote two classes for handling the "reading/writing frames" part that essentially wrap the functions of ffmpeg. These classes may be instantiated by providing an input and output file name, and in their constructor I initialize everything that is needed (or at least I hope so).
These are the two routines that are called inside the constructors:
// InputVideoHandler.cpp
void InputVideoHandler::init(char* name) {
    streamIndex = -1;
    int numStreams;
    if (avformat_open_input(&formatCtx, name, NULL, NULL) != 0)
        throw std::exception("Invalid input file name.");
    if (avformat_find_stream_info(formatCtx, NULL) < 0)
        throw std::exception("Could not find stream information.");
    numStreams = formatCtx->nb_streams;
    if (numStreams < 0)
        throw std::exception("No streams in input video file.");
    for (int i = 0; i < numStreams; i++) {
        if (formatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            streamIndex = i;
            break;
        }
    }
    if (streamIndex < 0)
        throw std::exception("No video stream in input video file.");
    // find decoder using id
    codec = avcodec_find_decoder(formatCtx->streams[streamIndex]->codec->codec_id);
    if (codec == nullptr)
        throw std::exception("Could not find suitable decoder for input file.");
    // copy context from input stream
    codecCtx = avcodec_alloc_context3(codec);
    if (avcodec_copy_context(codecCtx, formatCtx->streams[streamIndex]->codec) != 0)
        throw std::exception("Could not copy codec context from input stream.");
    if (avcodec_open2(codecCtx, codec, NULL) < 0)
        throw std::exception("Could not open decoder.");
    codecCtx->refcounted_frames = 1;
}
// OutputVideoBuilder.cpp
void OutputVideoBuilder::init(char* name, AVCodecContext* inputCtx) {
    if (avformat_alloc_output_context2(&formatCtx, NULL, NULL, name) < 0)
        throw std::exception("Could not determine file extension from provided name.");
    codec = avcodec_find_encoder(inputCtx->codec_id);
    if (codec == nullptr) {
        throw std::exception("Could not find suitable encoder.");
    }
    codecCtx = avcodec_alloc_context3(codec);
    if (avcodec_copy_context(codecCtx, inputCtx) < 0)
        throw std::exception("Could not copy output codec context from input");
    codecCtx->time_base = inputCtx->time_base;
    if (avcodec_open2(codecCtx, codec, NULL) < 0)
        throw std::exception("Could not open encoder.");
    stream = avformat_new_stream(formatCtx, codec);
    if (stream == nullptr) {
        throw std::exception("Could not allocate stream.");
    }
    stream->id = formatCtx->nb_streams - 1;
    stream->codec = codecCtx;
    stream->time_base = codecCtx->time_base;
    av_dump_format(formatCtx, 0, name, 1);
    if (!(formatCtx->oformat->flags & AVFMT_NOFILE)) {
        if (avio_open(&formatCtx->pb, name, AVIO_FLAG_WRITE) < 0) {
            throw std::exception("Could not open output file.");
        }
    }
    if (avformat_write_header(formatCtx, NULL) < 0) {
        throw std::exception("Error occurred when opening output file.");
    }
}

As you see, the init function of the output handler requires that an AVCodecContext be provided. In my code, I pass to the constructor the AVCodecContext that is stored in the input handler and was created earlier.
Question:
The two functions work fine when I test my program with some video formats, like .mpg or .avi. When I try to process .mov/.mp4 files, however, my code throws the exception that I labeled "Could not open encoder." in OutputVideoBuilder::init(). Why is this happening? I read in the general documentation of ffmpeg that this format should be supported as well. I assume I am doing something wrong in my code, which I do not completely understand because it was put together by adapting the tutorials in the documentation to my specific case. Also for this reason, any comments on things that are useless or missing will be greatly appreciated.
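A hedged diagnostic sketch, not necessarily the fix: the MOV/MP4 muxers declare AVFMT_GLOBALHEADER, so an encoder feeding them is normally opened with AV_CODEC_FLAG_GLOBAL_HEADER set, and in any case the error code returned by avcodec_open2 says far more than a generic message. The fragment below assumes the same member names as OutputVideoBuilder::init and an ffmpeg 3.x build:

// Sketch only: replacement for the avcodec_open2 call inside OutputVideoBuilder::init.
// MP4/MOV store codec extradata in the container header, so request global headers first.
if (formatCtx->oformat->flags & AVFMT_GLOBALHEADER)
    codecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

int ret = avcodec_open2(codecCtx, codec, NULL);
if (ret < 0) {
    // Log the real reason (e.g. "Invalid argument") before throwing the generic exception.
    char errbuf[AV_ERROR_MAX_STRING_SIZE];
    av_strerror(ret, errbuf, sizeof(errbuf));
    fprintf(stderr, "avcodec_open2 failed: %s\n", errbuf);
    throw std::exception("Could not open encoder.");
}

If the printed error points at the parameters rather than the flags, the usual suspect is copying the whole decoder context into the encoder: fields such as time_base or pix_fmt taken from the input stream are not always acceptable to the encoder, and setting them explicitly instead of relying on avcodec_copy_context is the next thing to try.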
-
Encoder (codec png) not found for output stream #0:0 [duplicate]
7 June 2016, by Anubhav Dhawan
This question already has an answer here:
-
ffmpeg & png watermark issue
2 answers
I’m trying to create a NodeJS app that converts a video into a GIF image.
I’m using node-gify plugin for this purpose, which uses FFmpeg and GraphicsMagick.
Here’s my sample code :
var gify = require('./');
var http = require('http');
var fs = require('fs');

var opts = {
  height: 300,
  rate: 10
};

console.time('convert');
gify('out.mp4', 'out.gif', opts, function(err) {
  if (err) throw err;
  console.timeEnd('convert');
  var s = fs.statSync('out.gif');
  console.log('size: %smb', s.size / 1024 / 1024 | 0);
});

And here's my console error:
> gify@0.2.0 start /home/daffodil/repos/node-gify-master
> node example.js
/home/daffodil/repos/node-gify-master/example.js:24
if (err) throw err;
^
Error: Command failed: /bin/sh -c ffmpeg -i out.mp4 -filter:v scale=-1:300 -r 10 /tmp/IP5OXJZELd/%04d.png
ffmpeg version 3.0.2 Copyright (c) 2000-2016 the FFmpeg developers
built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04)
configuration: --disable-yasm
libavutil 55. 17.103 / 55. 17.103
libavcodec 57. 24.102 / 57. 24.102
libavformat 57. 25.100 / 57. 25.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 31.100 / 6. 31.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'out.mp4':
Metadata:
major_brand : mp42
minor_version : 1
compatible_brands: mp42mp41
creation_time : 2005-02-25 02:35:57
Duration: 00:01:10.00, start: 0.000000, bitrate: 106 kb/s
Stream #0:0(eng): Audio: aac (LC) (mp4a / 0x6134706D), 8000 Hz, stereo, fltp, 19 kb/s (default)
Metadata:
creation_time : 2005-02-25 02:35:57
handler_name : Apple Sound Media Handler
Stream #0:1(eng): Video: mpeg4 (Advanced Simple Profile) (mp4v / 0x7634706D), yuv420p, 192x242 [SAR 1:1 DAR 96:121], 76 kb/s, 15 fps, 15 tbr, 600 tbn, 1k tbc (default)
Metadata:
creation_time : 2005-02-25 02:35:57
handler_name : Apple Video Media Handler
Stream #0:2(eng): Data: none (rtp / 0x20707472), 4 kb/s (default)
Metadata:
creation_time : 2005-02-25 02:35:57
handler_name : hint media handler
Stream #0:3(eng): Data: none (rtp / 0x20707472), 3 kb/s (default)
Metadata:
creation_time : 2005-02-25 02:35:57
handler_name : hint media handler
Output #0, image2, to '/tmp/IP5OXJZELd/%04d.png':
Metadata:
major_brand : mp42
minor_version : 1
compatible_brands: mp42mp41
Stream #0:0(eng): Video: png, none, q=2-31, 128 kb/s (default)
Metadata:
creation_time : 2005-02-25 02:35:57
handler_name : Apple Video Media Handler
Stream mapping:
Stream #0:1 -> #0:0 (mpeg4 (native) -> ? (?))
Encoder (codec png) not found for output stream #0:0
at ChildProcess.exithandler (child_process.js:213:12)
at emitTwo (events.js:100:13)
at ChildProcess.emit (events.js:185:7)
at maybeClose (internal/child_process.js:827:16)
at Socket.<anonymous> (internal/child_process.js:319:11)
at emitOne (events.js:90:13)
at Socket.emit (events.js:182:7)
at Pipe._onclose (net.js:471:12)
PS: I had a couple of problems installing FFmpeg on my Ubuntu 14.04.
- First, FFmpeg is removed from Ubuntu 14.04 (legal issues AFAIK), but I managed to apt-get it through this.
- Second, when I tried to ./configure (as mentioned in its README.md), I got this error: "yasm/nasm not found or too old. Use --disable-yasm for a crippled build." So I used ./configure --disable-yasm instead, and it (somehow) worked (see the probe sketched below).
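A likely reason the PNG encoder is missing from this build: ffmpeg's PNG encoder depends on zlib, so configuring on a machine without the zlib headers quietly drops it (see also Update #2 below). A quick way to check from code which encoders the installed libavcodec actually provides is a small probe like the one below; the file name and build command are just assumptions:

// png_probe.cpp: check whether this libavcodec build ships the PNG encoder.
// Assumed build command: g++ png_probe.cpp -o png_probe $(pkg-config --cflags --libs libavcodec libavutil)
extern "C" {
#include <libavcodec/avcodec.h>
}
#include <cstdio>

int main() {
    avcodec_register_all(); // required before codec lookups on ffmpeg 3.x
    AVCodec* png = avcodec_find_encoder(AV_CODEC_ID_PNG);
    std::printf("PNG encoder: %s\n", png ? png->name : "not compiled in");
    return png ? 0 : 1;
}

If the probe prints "not compiled in", rebuilding ffmpeg after apt-get install zlib1g-dev yasm (and without --disable-yasm) is the usual way out.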
Update #1
After reading this log a couple of times, I managed to produce a sample GIF from my mp4 file by changing the command that example.js tries to run:
From
ffmpeg -i out.mp4 -filter:v scale=-1:300 -r 10 /tmp/Lz43nx6wv1/%04d.png
To
ffmpeg -i out.mp4 -filter:v scale=-1:300 -r 10 out.gif
But it’s still using command line, I need to do this by code.
So I dived into the code and found that this wrong url is coming from the plugin’s index.js :
...
// tmpfile(s)
var id = uid(10);
var dir = path.resolve('/tmp/' + id);
var tmp = path.join(dir, '/%04d.png');
...

Is this an issue with the plugin, or am I doing something wrong here?
In any case, please put the correct stub here, because I don't want to touch this part unless I know what I'm doing.
Update #2
Now I have installed zlib1g-dev and reinstalled both FFmpeg and GraphicsMagick, and I see this error:
gm convert: No decode delegate for this image format (/tmp/ZQbEAynAcf/0702.png).
Thanks in advance :)
-
Losing quality when encoding with ffmpeg
22 May 2016, by lupod
I am using the C libraries of ffmpeg to read frames from a video and create an output file that is supposed to be identical to the input.
However, somewhere during this process some quality gets lost and the result is "less sharp". My guess is that the problem is the encoding and that the frames are compressed too much (the size of the file also decreases quite significantly). Is there some parameter in the encoder that lets me control the quality of the result? I found that AVCodecContext has a compression_level member, but changing it does not seem to have any effect.
I post here part of my code in case it could help. I would say that something must be changed in the init function of OutputVideoBuilder where I set up the codec. The AVCodecContext that is passed to the method is the same one held by InputVideoHandler.
Here are the two main classes that I created to wrap the ffmpeg functionalities:

// This class opens the video files and sets the decoder
class InputVideoHandler {
public:
    InputVideoHandler(char* name);
    ~InputVideoHandler();
    AVCodecContext* getCodecContext();
    bool readFrame(AVFrame* frame, int* success);
private:
    InputVideoHandler();
    void init(char* name);
    AVFormatContext* formatCtx;
    AVCodec* codec;
    AVCodecContext* codecCtx;
    AVPacket packet;
    int streamIndex;
};

void InputVideoHandler::init(char* name) {
    streamIndex = -1;
    int numStreams;
    if (avformat_open_input(&formatCtx, name, NULL, NULL) != 0)
        throw std::exception("Invalid input file name.");
    if (avformat_find_stream_info(formatCtx, NULL) < 0)
        throw std::exception("Could not find stream information.");
    numStreams = formatCtx->nb_streams;
    if (numStreams < 0)
        throw std::exception("No streams in input video file.");
    for (int i = 0; i < numStreams; i++) {
        if (formatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            streamIndex = i;
            break;
        }
    }
    if (streamIndex < 0)
        throw std::exception("No video stream in input video file.");
    // find decoder using id
    codec = avcodec_find_decoder(formatCtx->streams[streamIndex]->codec->codec_id);
    if (codec == nullptr)
        throw std::exception("Could not find suitable decoder for input file.");
    // copy context from input stream
    codecCtx = avcodec_alloc_context3(codec);
    if (avcodec_copy_context(codecCtx, formatCtx->streams[streamIndex]->codec) != 0)
        throw std::exception("Could not copy codec context from input stream.");
    if (avcodec_open2(codecCtx, codec, NULL) < 0)
        throw std::exception("Could not open decoder.");
}

// frame must be initialized with av_frame_alloc() before!
// Returns true if there are other frames, false if not.
// success == 1 if frame is valid, 0 if not.
bool InputVideoHandler::readFrame(AVFrame* frame, int* success) {
    *success = 0;
    if (av_read_frame(formatCtx, &packet) < 0)
        return false;
    if (packet.stream_index == streamIndex) {
        avcodec_decode_video2(codecCtx, frame, success, &packet);
    }
    av_free_packet(&packet);
    return true;
}
// This class opens the output and writes frames to it
class OutputVideoBuilder {
public:
    OutputVideoBuilder(char* name, AVCodecContext* inputCtx);
    ~OutputVideoBuilder();
    void writeFrame(AVFrame* frame);
    void writeVideo();
private:
    OutputVideoBuilder();
    void init(char* name, AVCodecContext* inputCtx);
    void logMsg(AVPacket* packet, AVRational* tb);
    AVFormatContext* formatCtx;
    AVCodec* codec;
    AVCodecContext* codecCtx;
    AVStream* stream;
};

void OutputVideoBuilder::init(char* name, AVCodecContext* inputCtx) {
    if (avformat_alloc_output_context2(&formatCtx, NULL, NULL, name) < 0)
        throw std::exception("Could not determine file extension from provided name.");
    codec = avcodec_find_encoder(inputCtx->codec_id);
    if (codec == nullptr) {
        throw std::exception("Could not find suitable encoder.");
    }
    codecCtx = avcodec_alloc_context3(codec);
    if (avcodec_copy_context(codecCtx, inputCtx) < 0)
        throw std::exception("Could not copy output codec context from input");
    codecCtx->time_base = inputCtx->time_base;
    codecCtx->compression_level = 0;
    if (avcodec_open2(codecCtx, codec, NULL) < 0)
        throw std::exception("Could not open encoder.");
    stream = avformat_new_stream(formatCtx, codec);
    if (stream == nullptr) {
        throw std::exception("Could not allocate stream.");
    }
    stream->id = formatCtx->nb_streams - 1;
    stream->codec = codecCtx;
    stream->time_base = codecCtx->time_base;
    av_dump_format(formatCtx, 0, name, 1);
    if (!(formatCtx->oformat->flags & AVFMT_NOFILE)) {
        if (avio_open(&formatCtx->pb, name, AVIO_FLAG_WRITE) < 0) {
            throw std::exception("Could not open output file.");
        }
    }
    if (avformat_write_header(formatCtx, NULL) < 0) {
        throw std::exception("Error occurred when opening output file.");
    }
}

void OutputVideoBuilder::writeFrame(AVFrame* frame) {
    AVPacket packet = { 0 };
    int success;
    av_init_packet(&packet);
    if (avcodec_encode_video2(codecCtx, &packet, frame, &success))
        throw std::exception("Error encoding frames");
    if (success) {
        av_packet_rescale_ts(&packet, codecCtx->time_base, stream->time_base);
        packet.stream_index = stream->index;
        logMsg(&packet, &stream->time_base);
        av_interleaved_write_frame(formatCtx, &packet);
    }
    av_free_packet(&packet);
}

This is the part of the main function that reads and writes frames:
while (inputHandler->readFrame(frame, &gotFrame)) {
    if (gotFrame) {
        try {
            outputBuilder->writeFrame(frame);
        }
        catch (std::exception e) {
            std::cout << e.what() << std::endl;
            return -1;
        }
    }
}
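On the question of a quality parameter: only a handful of encoders (PNG, FLAC and similar lossless codecs) honor compression_level, which would explain why changing it has no visible effect. Video encoders respond to the rate-control fields set on the context before avcodec_open2(). The lines below are a hedged sketch of the usual knobs, meant for OutputVideoBuilder::init and not part of the code above; which of them applies depends on the encoder that avcodec_find_encoder() actually returned, and option 3 assumes libx264 plus an include of libavutil/opt.h:

// Option 1: ask for a higher average bitrate than the one copied from the input context.
codecCtx->bit_rate = 4000000;                    // e.g. 4 Mbit/s; tune for your material

// Option 2: for MPEG-1/2/4-style encoders, constrain the quantizer range instead.
codecCtx->qmin = 2;                              // best (lowest) quantizer the encoder may use
codecCtx->qmax = 5;                              // worst quantizer it may fall back to

// Option 3: for libx264 (H.264), quality is normally driven by the private "crf" option.
av_opt_set(codecCtx->priv_data, "crf", "18", 0); // lower CRF = higher quality, larger file

Even with these, a decode and re-encode round trip is never lossless; if the goal really is an output identical to the input, remuxing the packets unchanged (av_read_frame straight into av_interleaved_write_frame, with the timestamps rescaled) avoids the generation loss entirely.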