
Media (1)
-
The Slip - Artworks
26 September 2011, by
Updated: September 2011
Language: English
Type: Text
Other articles (58)
-
Authorizations overridden by plugins
27 April 2010, by
Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page -
MediaSPIP v0.2
21 June 2013, by
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources in standalone form.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...) -
Making files available
14 April 2011, by
By default, when it is initialized, MediaSPIP does not allow visitors to download files, whether they are originals or the result of transformation or encoding; it only allows them to be viewed.
However, it is both possible and easy to give visitors access to these documents in various forms.
All of this is handled on the template configuration page: go to the channel's administration area and choose in the navigation (...)
On other sites (9065)
-
Adding a custom filter to FFmpeg
16 July 2024, by Abhimanyu
I am looking to configure and compile FFmpeg with custom filters.
The custom filters are available in this Git repository, which was built against an older FFmpeg version:
https://github.com/numberwolf/FFmpeg-PlusPlus


I am following the steps described in the official FFmpeg documentation:
https://github.com/FFmpeg/FFmpeg/blob/master/doc/writing_filters.txt


I am getting a warning that the plusglshader filter is unknown, and the same for all the other filters.


Here are my steps:


First, create your filter source files in the libavfilter directory:


`libavfilter/vf_plusglshader.c
libavfilter/vf_pipglshader.c
libavfilter/vf_lutglshader.c
libavfilter/vf_fadeglshader.c`






Update the FFmpeg build system by adding your new filter files to libavfilter/Makefile:


`OBJS-$(CONFIG_PLUSGLSHADER_FILTER) += vf_plusglshader.o
OBJS-$(CONFIG_PIPGLSHADER_FILTER) += vf_pipglshader.o
OBJS-$(CONFIG_LUTGLSHADER_FILTER) += vf_lutglshader.o
OBJS-$(CONFIG_FADEGLSHADER_FILTER) += vf_fadeglshader.o`



In libavfilter/allfilters.c, add your filter declarations:


`extern const AVFilter ff_vf_plusglshader;
extern const AVFilter ff_vf_pipglshader;
extern const AVFilter ff_vf_lutglshader;
extern const AVFilter ff_vf_fadeglshader;`



In libavfilter/filter_list.c (not allfilters.c), add your filters to the filter_list array:


`static const AVFilter * const filter_list[] = {
 // ... existing filters ...
 &ff_vf_plusglshader,
 &ff_vf_pipglshader,
 &ff_vf_lutglshader,
 &ff_vf_fadeglshader,
 NULL
};`
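
For context, here is a minimal pass-through skeleton showing the shape such a source file takes against a recent (5.x/6.x-era) FFmpeg tree. This is a sketch of mine, not the FFmpeg-PlusPlus code; only the file and filter names are taken from the steps above:

`// libavfilter/vf_plusglshader.c (sketch, not the real FFmpeg-PlusPlus source)
#include "libavutil/internal.h"
#include "avfilter.h"
#include "internal.h"
#include "video.h"

static int filter_frame(AVFilterLink *inlink, AVFrame *frame)
{
    /* A real filter would run its GLSL shader on the frame here;
     * this sketch just passes the frame through unchanged. */
    return ff_filter_frame(inlink->dst->outputs[0], frame);
}

static const AVFilterPad plusglshader_inputs[] = {
    { .name = "default", .type = AVMEDIA_TYPE_VIDEO, .filter_frame = filter_frame },
};

static const AVFilterPad plusglshader_outputs[] = {
    { .name = "default", .type = AVMEDIA_TYPE_VIDEO },
};

const AVFilter ff_vf_plusglshader = {
    .name        = "plusglshader",
    .description = NULL_IF_CONFIG_SMALL("Pass-through skeleton for a GLSL shader filter"),
    FILTER_INPUTS(plusglshader_inputs),
    FILTER_OUTPUTS(plusglshader_outputs),
};`

Note that in recent FFmpeg trees libavfilter/filter_list.c is generated by configure from the CONFIG_*_FILTER settings, so after editing the Makefile and allfilters.c you generally need to re-run ./configure and rebuild before the filters stop being reported as unknown.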





-
FFmpeg Concatenation Filters: Stream specifier ':0' in filtergraph matches no streams
7 February 2017, by Anthony Eden
I am developing an application that relies heavily on FFmpeg to perform various transformations on audio files. I am currently testing my FFmpeg configuration on the command line.
I am trying to concatenate multiple audio files which are in different formats (primarily MP3, MP2 and WAV). I have been using the official TRAC documentation (https://trac.ffmpeg.org/wiki/How%20to%20concatenate%20(join%2C%20merge)%20media%20files#differentcodec) to help me with this and have created the following command:
ffmpeg -i OHIn.wav -i OHOut.wav -filter_complex '[0:0] [1:0] concat=n=2:a=1 [a]' -map '[a]' output.wav
However, when I run this on Mac OS X using version 2.0.1 of FFmpeg, I get the following error message:
Stream specifier ':0' in filtergraph description [0:0] [1:0] concat=n=2:a=1 [a] matches no streams.
Here is my full output from the terminal:
~/ffmpeg -i OHIn.wav -i OHOut.wav -filter_complex '[0:0] [1:0] concat=n=2:a=1 [a]' -map '[a]' output.wav
ffmpeg version 2.0.1 Copyright (c) 2000-2013 the FFmpeg developers
built on Aug 15 2013 10:56:46 with llvm-gcc 4.2.1 (LLVM build 2336.11.00)
configuration: --prefix=/Volumes/Ramdisk/sw --enable-gpl --enable-pthreads --enable-version3 --enable-libspeex --enable-libvpx --disable-decoder=libvpx --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-avfilter --enable-libopencore_amrwb --enable-libopencore_amrnb --enable-filters --enable-libgsm --arch=x86_64 --enable-runtime-cpudetect
libavutil 52. 38.100 / 52. 38.100
libavcodec 55. 18.102 / 55. 18.102
libavformat 55. 12.100 / 55. 12.100
libavdevice 55. 3.100 / 55. 3.100
libavfilter 3. 79.101 / 3. 79.101
libswscale 2. 3.100 / 2. 3.100
libswresample 0. 17.102 / 0. 17.102
libpostproc 52. 3.100 / 52. 3.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, wav, from 'OHIn.wav':
Duration: 00:00:06.71, bitrate: 1411 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, wav, from 'OHOut.wav':
Duration: 00:00:07.19, bitrate: 1411 kb/s
Stream #1:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
Stream specifier ':0' in filtergraph description [0:0] [1:0] concat=n=2:a=1 [a] matches no streams.
I do not understand why this does not work. FFmpeg shows that streams 0:0 and 1:0 exist in the source files. The only other similar problems I found online surrounded the use of single quotes on Windows, but testing confirms that this does not apply to my Mac command line.
Any help would be much appreciated.
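For what it is worth, a likely fix (an assumption on my part, based on the concat filter's defaults rather than on anything stated above) is to declare that the segments carry no video, since concat's v option defaults to 1 while the WAV inputs are audio-only:
ffmpeg -i OHIn.wav -i OHOut.wav -filter_complex '[0:0] [1:0] concat=n=2:v=0:a=1 [a]' -map '[a]' output.wav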
-
Why is ffmpeg so fast?
8 July 2017, by jx.xu
I have written an FFmpeg-based C++ program that converts YUV to RGB using libswscale, similar to the official example.
I simply copied and modified the official example before building it with Visual Studio 2017 on Windows 10. However, it is much slower than the ffmpeg.exe executable: 36 seconds versus 12 seconds.
I already know that ffmpeg uses optimization techniques such as SIMD instructions. In my profiling, however, the bottleneck is disk I/O writing, which takes at least two thirds of the total time.
I then developed a concurrent version in which a dedicated thread handles all I/O, but the situation did not improve. It is worth noting that I use the Boost C++ libraries for multithreading and asynchronous events. So I would like to know how I can modify my program using the FFmpeg libraries, or whether the performance gap with ffmpeg.exe simply cannot be closed.
As requested in the answers, I am posting my code. The compiler is MSVC in VS2017, and I turn on full optimization (/Ox).
To supplement my main question, I ran another plain disk I/O test that merely copies a file of the same size. Surprisingly, plain sequential disk I/O costs 28 seconds, while the code below costs 36 seconds in total... Does anyone know how ffmpeg finishes the same job in only 12 seconds? It must use some optimization technique, such as random disk I/O or memory-buffer reuse?
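A minimal sketch of the kind of stdio buffering change I mean by "memory-buffer reuse" (an assumption on my part, not something I have verified; the 8 MiB size is an arbitrary illustration):
#include <stdio.h>

int main(void)
{
    FILE *out = fopen("out.rgb", "wb");
    if (!out)
        return 1;
    /* Replace the default, small stdio buffer with an 8 MiB one so that
     * many small fwrite() calls reach the disk in large chunks. */
    static char buf[8 << 20];
    if (setvbuf(out, buf, _IOFBF, sizeof(buf)) != 0)
        return 1;
    /* ... fwrite() frame data here, as in the full program below ... */
    fclose(out);
    return 0;
}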
#include "stdafx.h"
#define __STDC_CONSTANT_MACROS
extern "C" {
#include <libavutil/imgutils.h>
#include <libavutil/parseutils.h>
#include <libswscale/swscale.h>
}
#ifdef _WIN64
#pragma comment(lib, "avformat.lib")
#pragma comment(lib, "avcodec.lib")
#pragma comment(lib, "avutil.lib")
#pragma comment(lib, "swscale.lib")
#endif
#include <common/cite.hpp> // just includes headers of C++ STL/Boost
int main(int argc, char **argv)
{
chrono::duration<double> period;
auto pIn = fopen("G:/Panorama/UHD/originalVideos/DrivingInCountry_3840x1920_30fps_8bit_420_erp.yuv", "rb");
auto time_mark = chrono::steady_clock::now();
int src_w = 3840, src_h = 1920, dst_w, dst_h;
enum AVPixelFormat src_pix_fmt = AV_PIX_FMT_YUV420P, dst_pix_fmt = AV_PIX_FMT_RGB24;
const char *dst_filename = "G:/Panorama/UHD/originalVideos/out.rgb";
const char *dst_size = "3840x1920";
FILE *dst_file;
int dst_bufsize;
struct SwsContext *sws_ctx;
int i, ret;
if (av_parse_video_size(&dst_w, &dst_h, dst_size) < 0) {
fprintf(stderr,
"Invalid size '%s', must be in the form WxH or a valid size abbreviation\n",
dst_size);
exit(1);
}
dst_file = fopen(dst_filename, "wb");
if (!dst_file) {
fprintf(stderr, "Could not open destination file %s\n", dst_filename);
exit(1);
}
/* create scaling context */
sws_ctx = sws_getContext(src_w, src_h, src_pix_fmt,
dst_w, dst_h, dst_pix_fmt,
SWS_BILINEAR, NULL, NULL, NULL);
if (!sws_ctx) {
fprintf(stderr,
"Impossible to create scale context for the conversion "
"fmt:%s s:%dx%d -> fmt:%s s:%dx%d\n",
av_get_pix_fmt_name(src_pix_fmt), src_w, src_h,
av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h);
ret = AVERROR(EINVAL);
exit(1);
}
io_service srv; // Boost.Asio class
auto work = make_shared<io_service::work>(srv); // keeps the io_service running
thread t{ bind(&io_service::run,&srv) }; // I/O worker thread
vector<unique_future<void>> result; // one future per queued frame task
/* utilize function class so that the lambda can capture itself */
function<void(int, unique_future<bool>)> recursion;
recursion = [&](int left, unique_future<bool> writable)
{
if (left <= 0)
{
writable.wait();
return;
}
uint8_t *src_data[4], *dst_data[4];
int src_linesize[4], dst_linesize[4];
/* promise-future pair used for thread synchronizing so that the file part is written in the correct sequence */
promise<bool> sync;
/* allocate source and destination image buffers */
if ((ret = av_image_alloc(src_data, src_linesize,
src_w, src_h, src_pix_fmt, 16)) < 0) {
fprintf(stderr, "Could not allocate source image\n");
}
/* buffer is going to be written to rawvideo file, no alignment */
if ((ret = av_image_alloc(dst_data, dst_linesize,
dst_w, dst_h, dst_pix_fmt, 1)) < 0) {
fprintf(stderr, "Could not allocate destination image\n");
}
dst_bufsize = ret;
fread(src_data[0], src_h*src_w, 1, pIn);
fread(src_data[1], src_h*src_w / 4, 1, pIn);
fread(src_data[2], src_h*src_w / 4, 1, pIn);
result.push_back(async([&] {
/* convert to destination format */
sws_scale(sws_ctx, (const uint8_t * const*)src_data,
src_linesize, 0, src_h, dst_data, dst_linesize);
if (left>0)
{
assert(writable.get() == true);
srv.post([=]
{
/* write scaled image to file */
fwrite(dst_data[0], 1, dst_bufsize, dst_file);
av_freep((void*)&dst_data[0]);
});
}
sync.set_value(true);
av_freep(&src_data[0]);
}));
recursion(left - 1, sync.get_future());
};
promise<bool> root;
root.set_value(true);
recursion(300, root.get_future()); // .yuv file only has 300 frames
wait_for_all(result.begin(), result.end()); // wait for all unique_future to callback
work.reset(); // io_service::work releses
srv.stop(); // io_service stops
t.join(); // I/O thread joins
period = steady_clock::now() - time_mark; // calculate valid time
fprintf(stderr, "Scaling succeeded. Play the output file with the command:\n"
"ffplay -f rawvideo -pix_fmt %s -video_size %dx%d %s\n",
av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h, dst_filename);
cout << "period" << et << period << en;
end:
fclose(dst_file);
// av_freep(&src_data[0]);
// av_freep(&dst_data[0]);
sws_freeContext(sws_ctx);
return ret < 0;
}
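For comparison, the single-command ffmpeg.exe equivalent of this program would presumably be invoked along these lines; the exact command behind the 12-second figure is an assumption of mine:
ffmpeg -f rawvideo -pix_fmt yuv420p -video_size 3840x1920 -i DrivingInCountry_3840x1920_30fps_8bit_420_erp.yuv -f rawvideo -pix_fmt rgb24 out.rgb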