
Media (91)
-
Les Miserables
9 December 2019, by
Updated: December 2019
Language: French
Type: Text
-
VideoHandle
8 November 2019, by
Updated: November 2019
Language: French
Type: Video
-
Somos millones 1
21 July 2014, by
Updated: June 2015
Language: French
Type: Video
-
Un test - mauritanie
3 April 2014, by
Updated: April 2014
Language: French
Type: Text
-
Pourquoi Obama lit il mes mails ?
4 February 2014, by
Updated: February 2014
Language: French
-
IMG 0222
6 October 2013, by
Updated: October 2013
Language: French
Type: Image
Other articles (28)
-
Contribute to translation
13 April 2011
You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, which lets MediaSPIP spread to new linguistic communities.
To do this, we use the SPIP translation interface, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and ask for further information on translation.
MediaSPIP is currently available in French and English (...) -
Configuring language support
15 November 2010, by
Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administer" section of the site.
From there, the navigation menu gives access to a "Language management" section where support for new languages can be enabled.
Each newly added language can still be deactivated as long as no object has been created in that language. Once one exists, the language is greyed out in the configuration and (...) -
Supported formats
28 January 2010, by
The following commands give information about the formats and codecs handled by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
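As a rough programmatic counterpart (an illustrative sketch, not part of the article; it assumes a reasonably recent FFmpeg, 4.0 or later, where codec registration is automatic and codec IDs use the AV_CODEC_ID_ prefix), the same kind of check can be made against libavcodec directly:

#include <stdio.h>
#include <libavcodec/avcodec.h>

int main(void)
{
    // Ask libavcodec whether an H.264 decoder and encoder are present in this build.
    const AVCodec *dec = avcodec_find_decoder(AV_CODEC_ID_H264);
    const AVCodec *enc = avcodec_find_encoder(AV_CODEC_ID_H264);
    printf("H.264 decoder: %s\n", dec ? dec->name : "not available");
    printf("H.264 encoder: %s\n", enc ? enc->name : "not available");
    return 0;
}

A hypothetical build line would be something like: gcc check_h264.c $(pkg-config --cflags --libs libavcodec)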
Supported input video formats
This list is not exhaustive; it highlights the main formats in use:
h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
m4v: raw MPEG-4 video format
flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
Theora
wmv:
Possible output video formats
To begin with, we (...)
On other sites (4546)
-
Stream Video from ios device
29 April 2014, by iJose
I have integrated the FFmpeg libraries into my project.
Now I want to stream video captured with my iOS device (iPhone, iPad, iPod) to an RTMP server using FFmpeg. I posted a similar question and searched the web, but did not find a solution.
Can anyone suggest a tutorial, or at least point me in the right direction? I am badly stuck here and unable to move ahead.
Kindly share your knowledge.
Thank you in advance.
-
CUDA Memory Management: re-using device memory from C calls (multithreaded, ffmpeg), but failing on cudaMemcpy
4 March 2013, by Nuke Stollak
I'm trying to CUDA-fy my ffmpeg filter, which was taking over 90% of the CPU time according to gprof. I first went from one core to OpenMP on 4 cores and got a 3.8x increase in frames encoded per second, but it's still too slow. CUDA seemed like the next natural step.
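For context, the OpenMP step mentioned above typically amounts to splitting independent per-row work across cores, roughly like this (an illustrative sketch with made-up names such as process_plane, not the actual filter code):

#include <omp.h>

// Illustrative per-frame loop: each row of a video plane is independent,
// so rows can be distributed across the available cores.
static void process_plane(unsigned char *plane, int width, int height)
{
    #pragma omp parallel for
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            // placeholder per-pixel work
            plane[y * width + x] = 255 - plane[y * width + x];
        }
    }
}

This would be built with -fopenmp, which matches the CFLAGS shown further down.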
I've gotten a modest (20%?) increase by replacing one of my filter's functions with a CUDA kernel call, and just to get things up and running, I was cudaMalloc'ing and cudaMemcpy'ing on each frame. I suspected I would get better results if I weren't doing this each frame, so before I go ahead and move the rest of my code to CUDA, I wanted to fix this by allocating the memory before my filter is called and freeing it afterwards, but the device memory isn't having it. I'm only storing the device memory locations outside of code that knows about CUDA; I'm not trying to use the data there, just save it for the next time I call a CUDA-aware function that needs it.
Here's where I am so far:
Environment: the latest Amazon Linux AMI on EC2's GPU cluster instances, with the latest updates installed. Everything is fairly standard.
My filter is split into two files: vf_myfilter.c (compiled by gcc, like almost every other file in ffmpeg) and vf_myfilter_cu.cu (compiled by nvcc). My Makefile's link step includes
-lcudart
and both .o files. I build vf_myfilter_cu.o using (as one line):
nvcc -I. -I./ -I/opt/nvidia/cuda/include $(CPPFLAGS)
-Xcompiler "$(CFLAGS)"
-c -o libavfilter/vf_myfilter_cu.o libavfilter/vf_myfilter_cu.cu
When the variables (set by configure) are expanded, here's what I get, again all on one line but split up here for easier reading. I just noticed the duplicate include-path directives, but that shouldn't hurt.
nvcc -I. -I./ -I/opt/nvidia/cuda/include -I. -I./ -D_ISOC99_SOURCE
-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_POSIX_C_SOURCE=200112
-D_XOPEN_SOURCE=600 -DHAVE_AV_CONFIG_H
-Xcompiler "-fopenmp -std=c99 -fomit-frame-pointer -pthread -g
-Wdeclaration-after-statement -Wall -Wno-parentheses
-Wno-switch -Wno-format-zero-length -Wdisabled-optimization
-Wpointer-arith -Wredundant-decls -Wno-pointer-sign
-Wwrite-strings -Wtype-limits -Wundef -Wmissing-prototypes
-Wno-pointer-to-int-cast -Wstrict-prototypes -O3 -fno-math-errno
-fno-signed-zeros -fno-tree-vectorize
-Werror=implicit-function-declaration -Werror=missing-prototypes
-Werror=vla "
-c -o libavfilter/vf_myfilter_cu.o libavfilter/vf_myfilter_cu.cu
vf_myfilter.c calls three functions from the vf_myfilter_cu.cu file, which handle memory and call the CUDA kernel code. I thought I would be able to save the device pointers from my memory initialization, which runs once per ffmpeg run, and re-use that space each time I call the wrapper for my kernel function, but when I cudaMemcpy from my host memory to the device memory I stored, it fails with cudaErrorInvalidValue. If I cudaMalloc my device memory on every frame, I'm fine.
I plan on using pinned host memory, once I have everything up in CUDA code and have minimized the number of times I need to return to the main ffmpeg code.
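A minimal sketch of what that pinned-memory step could look like (illustrative only; cudaMallocHost and cudaFreeHost are the standard runtime calls, but this helper and its names are assumptions, not code from the question):

#include <stdio.h>
#include <cuda_runtime.h>

// Page-locked (pinned) host memory: host<->device copies from pinned buffers are
// typically faster and can overlap with kernels when used with streams.
static unsigned char *alloc_pinned_frame_buffer(size_t mysize)
{
    unsigned char *pinned_buf = NULL;
    if (cudaMallocHost((void **)&pinned_buf, mysize) != cudaSuccess) {
        printf("cudaMallocHost failed\n");
        return NULL;
    }
    return pinned_buf;
}
// Later, release it with cudaFreeHost(pinned_buf);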
Steps taken:
First sign of trouble: searching the web. I found "Passing a pointer to device memory between classes in CUDA" and printed out the pointers at various places in my execution to ensure that the device memory values were the same everywhere, and they are. FWIW, they seem to start around 0x90010000.
ffmpeg's configure gave me -pthreads, so I checked whether my filter was being called from multiple threads, following "how can I tell if pthread_self is the main (first) thread in the process?" and checking
syscall(SYS_gettid) == getpid()
to make sure I'm not calling CUDA from different threads; I'm indeed in the primary thread at every step, according to those two checks. I am still using OpenMP later, around some for loops in the main .c filter function, but the calls to CUDA don't occur in those loops.
Code Overview:
ffmpeg provides me with a MyFilterContext structure pointer on each frame, as well as in the filter's config_input and uninit routines (called once per file), so I added some *host_var and *dev_var variables (a few of each, float and unsigned char).
There is a whole lot of code I skipped for this post, but most of it has to do with my algorithm and details involved in writing an ffmpeg filter. I'm actually using about 6 host variables and 7 device variables right now, but for demonstration I limited it to one of each.
Here is, broadly, what my vf_myfilter.c looks like.
// declare my functions from vf_myfilter_cu.cu
extern void cudaMyInit(unsigned char **dev_var, size_t mysize);
extern void cudaMyUninit(unsigned char *dev_var);
extern void cudaMyFunction(unsigned char *host_var, unsigned char *dev_var, size_t mysize);
// part of the MyFilterContext structure, which ffmpeg keeps track of for me.
typedef struct {
unsigned char *host_var;
unsigned char *dev_var;
} MyFilterContext;
// ffmpeg calls this function once per file, before any frames are processed.
static int config_input(AVFilterLink *inlink) {
// how ffmpeg passes me my context, fairly standard.
MyFilterContext *myContext = inlink->dst->priv;
// compute the size one video plane of one frame of video
size_t mysize = sizeof(unsigned char) * inlink->w * inlink->h;
// av_mallocz is a malloc wrapper provided and required by ffmpeg
myContext->host_var = (unsigned char*) av_mallocz(mysize);
// Here's where I attempt to allocate my device memory.
cudaMyInit( & myContext->dev_var, mysize);
}
// Called once per frame of video
static int filter_frame(AVFilterLink *inlink, AVFilterBufferRef *frame) {
MyFilterContext *myContext = inlink->dst->priv;
// sanity check to make sure that this isn't part of the multithreaded code
if ( syscall(SYS_gettid) != getpid() )
av_log(.... ); // This line never runs, so it's not threaded?
// ...fill host_var with data from frame,
// set mysize to the size of the buffer
// Call my wrapper function defined in the .cu file
cudaMyFunction(myContext->host_var, myContext->dev_var, mysize);
// ... take the results from host_var and apply them to frame
// ... and return the processed frame to ffmpeg
}
// called after everything else has happened: free up the memory.
static av_cold void uninit(AVFilterContext *ctx) {
MyFilterContext *myContext = ctx->priv;
// free my host_var
if(myContext->host_var!=NULL) {
av_free(myContext->host_var);
myContext->host_var=NULL;
}
// free my dev_var
cudaMyUninit(myContext->dev_var);
}
Here is, broadly, what my vf_myfilter_cu.cu looks like:
// my kernel function that does the work.
__global__ void myfunc(unsigned char *dev_var, size_t mysize) {
// find the offset for this particular GPU thread to process
// exit this function if the block/thread combo points to somewhere
// outside the frame
// make sure we're less than mysize bytes from the beginning of dev_var
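// (Illustrative only, not the question's code) a typical 1-D version of the
// indexing and bounds check described above would be:
//     size_t some_offset = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
//     if (some_offset >= mysize) return;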
// do things to dev_var[some_offset]
}
// Allocate the device memory
extern "C" void cudaMyInit(unsigned char **dev_var, size_t mysize) {
if(cudaMalloc( (void**) dev_var, mysize) != cudaSuccess) {
printf("Cannot allocate the memory\n");
}
}
// Free the device memory.
extern "C" void cudaMyUninit(unsigned char *dev_var) {
cudaFree(dev_var);
}
// Copy data from the host to the device,
// Call the kernel function, and
// Copy data from the device to the host.
extern "C" void cudaMyFunction(
unsigned char *host_var,
unsigned char *dev_var,
size_t mysize )
{
cudaError_t cres;
// dev_works is what I want to get rid of, but
// to make sure that there's not something more obvious going
// on, I made sure that my cudaMemcpy works if I'm allocating
// the device memory in every frame.
unsigned char *dev_works;
if(cudaMalloc( (void **) &dev_works, mysize)!=cudaSuccess) {
// I don't see this message
printf("failed at per-frame malloc\n");
}
// THIS PART WORKS, copying host_var to dev_works
cres=cudaMemcpy( (void *) dev_works, host_var, mysize, cudaMemcpyHostToDevice);
if(cres!=cudaSuccess) {
if(cres==cudaErrorInvalidValue) {
// I don't see this message.
printf("cudaErrorInvalidValue at per-frame cudaMemcpy\n");
}
}
// THIS PART FAILS, copying host_var to dev_var
cres=cudaMemcpy( (void *) dev_var, host_var, mysize, cudaMemcpyHostToDevice);
if(cres!=cudaSuccess) {
if(cres==cudaErrorInvalidValue) {
// this is the error code that prints.
printf("cudaErrorInvalidValue at per-frame cudaMemcpy\n");
}
// I check for other error codes, but they're not being hit.
}
// and this works with dev_works
myfunc<<< ... >>>(dev_works, mysize);
if(cudaMemcpy(host_var, dev_works, mysize, cudaMemcpyDeviceToHost)!=cudaSuccess) {
// I don't see this message.
printf("Failed to copy post-kernel func\n");
}
cudaFree(dev_works);
}
Any ideas?
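As a generic debugging aid (not part of the question, and with a hypothetical helper name), printing CUDA's own error string usually makes failures like the one above easier to read than a fixed message:

#include <stdio.h>
#include <cuda_runtime.h>

// Copy host data to a device buffer and report the exact CUDA error on failure.
static void checked_copy_to_device(unsigned char *dev_ptr, const unsigned char *host_ptr, size_t n)
{
    cudaError_t err = cudaMemcpy((void *)dev_ptr, host_ptr, n, cudaMemcpyHostToDevice);
    if (err != cudaSuccess) {
        // cudaGetErrorString() converts the error code into a readable message.
        printf("cudaMemcpy failed: %s\n", cudaGetErrorString(err));
    }
}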
-
Benefits and Shortcomings of Multi-Touch Attribution
13 March 2023, by Erin — Analytics Tips