
Other articles (98)

  • User profiles

    12 April 2011, by

    Each user has a profile page on which they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in to the site.
    The user can reach profile editing from their author page; a link in the navigation, "Modifier votre profil" (edit your profile), is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administrer" (administer) section of the site.
    From there, in the navigation menu, you can reach a "Gestion des langues" (language management) section that lets you enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language; once that is the case, it appears greyed out in the configuration and (...)

  • XMP PHP

    13 May 2011, by

    According to Wikipedia, XMP stands for:
    Extensible Metadata Platform, or XMP, an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it manages a set of dynamic tags for use in the context of the Semantic Web.
    XMP makes it possible to record, as an XML document, information about a file: title, author, history (...)

On other sites (3783)

  • ffmpeg: mix audio and video of different length

    24 October 2016, by user3445678

    I have two files: a video file (without sound) that is 6 seconds long, and an audio file that is 10 seconds long.
    Both contain the same conversation, but the audio starts 4 seconds before the video does.

    [----------] audio
       [------] video

    So I want to mix them into a single 10-second video: the first 4 seconds are a black screen with the audio, and then the real video plays along with the audio.

    [====------] audio+video (where '=' is black screen)

    I hope my description is clear enough.
    How can I do this with ffmpeg or gstreamer?
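
    A minimal sketch of one possible ffmpeg approach (file names, codecs, and the use of the tpad filter are assumptions, not taken from the question): pad the start of the video stream with 4 seconds of black via tpad, then map the padded video together with the full audio track.

    ffmpeg -i video.mp4 -i audio.wav \
           -filter_complex "[0:v]tpad=start_duration=4:color=black[v]" \
           -map "[v]" -map 1:a -c:v libx264 -c:a aac output.mp4

    Since 4 seconds of black plus the 6-second clip matches the 10-second audio, the output should come out at roughly 10 seconds without any extra trimming.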

  • avcodec/rv34: Don't needlessly copy VLC length and symbol arrays

    22 October 2020, by Andreas Rheinhardt

    Most of the VLCs used by RealVideo 3 and 4 obey three simple rules:
    shorter codes are on the left of the tree; for each length, the symbols
    are ascending from left to right; and the symbols form a permutation of
    either 1..size or 0..(size - 1). For the latter case, one just
    needs to store the length of each symbol and create the codes according
    to the other rules; no explicit code or symbol array needs to be stored.
    The former case is also treated in much the same way by artificially
    assigning a length of zero to the symbol 0; when a length of zero was
    encountered, the element was ignored except that the symbol counter was
    still incremented. If the length was nonzero, the symbol would be
    assigned via the symbol counter and the length copied over into a new
    array.

    Yet this is unnecessary, as ff_init_vlc_sparse() follows exactly the
    same pattern: If a length of zero is encountered, the element is ignored
    and only the symbol counter incremented. So one can directly forward the
    length array and also need not create a symbol table oneself, because
    ff_init_vlc_sparse() will infer the same symbol table in this case.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>

    • [DH] libavcodec/rv34.c
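
    The rule the patch relies on can be illustrated with a small standalone sketch in plain C (this is not FFmpeg's implementation, just the construction described above): given only the per-symbol code lengths, the left-aligned codes are recovered by walking the lengths in increasing order and handing out ascending codes, skipping zero-length entries just as ff_init_vlc_sparse() does.

    #include <stdint.h>

    /* Illustrative only: rebuild codes that are ascending within each length,
     * with shorter codes to the left of the tree. Entries with length 0 are
     * skipped. The maximum length of 16 is an assumption for this sketch. */
    static void codes_from_lengths(const uint8_t *lens, int nb, uint32_t *codes)
    {
        uint32_t next = 0;                      /* left-aligned in 32 bits */
        for (int len = 1; len <= 16; len++) {
            for (int i = 0; i < nb; i++) {
                if (lens[i] != len)
                    continue;
                codes[i] = next >> (32 - len);  /* the len most significant bits */
                next    += 1u << (32 - len);    /* advance one leaf at this depth */
            }
        }
    }

    Because the symbols within one length ascend with the array index, the implied symbol table is just the identity (possibly shifted by one), which is why no explicit symbol array has to be passed along either.
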
  • Cuda Memory Management: re-using device memory from C calls (multithreaded, ffmpeg), but failing on cudaMemcpy

    4 March 2013, by Nuke Stollak

    I'm trying to CUDA-fy my ffmpeg filter that was taking over 90% of the CPU time, according to gprof. I first went from one core to OpenMP on 4 cores and got a 3.8x increase in frames encoded per second, but it's still too slow. CUDA seemed like the next natural step.

    I've gotten a modest (20% ?) increase by replacing one of my filter's functions with a CUDA kernel call, and just to get things up and running, I was cudaMalloc'ing and cudaMemcpy'ing on each frame. I suspected I would get better results if I weren't doing this each frame, so before I go ahead and move the rest of my code to CUDA, I wanted to fix this by allocating the memory before my filter is called and freeing it afterwards, but the device memory isn't having it. I'm only storing the device memory locations outside of code that knows about CUDA ; I'm not trying to use the data there, just save it for the next time I call a CUDA-aware function that needs it.

    Here's where I am so far:

    Environment: the latest Amazon Linux AMI on EC2's GPU cluster, with the latest updates installed. Everything is fairly standard.

    My filter is split into two files: vf_myfilter.c (compiled by gcc, like almost every other file in ffmpeg) and vf_myfilter_cu.cu (compiled by nvcc). My Makefile's link step includes -lcudart and both .o files. I build vf_myfilter_cu.o using (as one line)

    nvcc -I. -I./ -I/opt/nvidia/cuda/include $(CPPFLAGS)
        -Xcompiler "$(CFLAGS)"
         -c -o libavfilter/vf_myfilter_cu.o libavfilter/vf_myfilter_cu.cu

    When the variables (set by configure) are expanded, here's what I get, again all in one line but split up here for easier reading. I just noticed the duplicate include path directives, but it shouldn't hurt.

    nvcc -I. -I./ -I/opt/nvidia/cuda/include -I. -I./ -D_ISOC99_SOURCE
       -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_POSIX_C_SOURCE=200112
       -D_XOPEN_SOURCE=600 -DHAVE_AV_CONFIG_H
       -Xcompiler "-fopenmp -std=c99 -fomit-frame-pointer -pthread -g
                    -Wdeclaration-after-statement -Wall -Wno-parentheses
                    -Wno-switch -Wno-format-zero-length -Wdisabled-optimization
                    -Wpointer-arith -Wredundant-decls -Wno-pointer-sign
                    -Wwrite-strings -Wtype-limits -Wundef -Wmissing-prototypes
                    -Wno-pointer-to-int-cast -Wstrict-prototypes -O3 -fno-math-errno
                   -fno-signed-zeros -fno-tree-vectorize
                   -Werror=implicit-function-declaration -Werror=missing-prototypes
                   -Werror=vla "
       -c -o libavfilter/vf_myfilter_cu.o libavfilter/vf_myfilter_cu.cu

    vf_myfilter.c calls three functions from the vf_myfilter_cu.cu file, which handle memory and call the CUDA kernel code. I thought I would be able to save the device pointers from my memory initialization, which runs once per ffmpeg run, and re-use that space each time I call the wrapper for my kernel function. But when I cudaMemcpy from my host memory to the device memory I stored, it fails with cudaErrorInvalidValue. If I cudaMalloc my device memory on every frame, I'm fine.
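
    For comparison, here is a minimal standalone sketch of the pattern being attempted (all names are made up; this is not the poster's code): cudaMalloc once, keep the device pointer in a host-side struct, and reuse it for a host-to-device copy on every "frame". In isolation this pattern is valid, provided the same struct instance and the same CUDA context/thread are used for every call.

    // build with: nvcc -o demo demo.cu
    #include <stdio.h>
    #include <cuda_runtime.h>

    typedef struct {
        unsigned char *dev_buf;   // device pointer kept across calls
        size_t         size;
    } DemoContext;

    static int demo_init(DemoContext *ctx, size_t size)
    {
        ctx->size = size;
        return cudaMalloc((void **)&ctx->dev_buf, size) == cudaSuccess ? 0 : -1;
    }

    static int demo_frame(DemoContext *ctx, const unsigned char *host_buf)
    {
        // Reusing the stored device pointer should succeed on every call.
        cudaError_t err = cudaMemcpy(ctx->dev_buf, host_buf, ctx->size,
                                     cudaMemcpyHostToDevice);
        if (err != cudaSuccess)
            printf("copy failed: %s\n", cudaGetErrorString(err));
        return err == cudaSuccess ? 0 : -1;
    }

    static void demo_uninit(DemoContext *ctx)
    {
        cudaFree(ctx->dev_buf);
        ctx->dev_buf = NULL;
    }

    int main(void)
    {
        unsigned char frame[64] = { 0 };
        DemoContext ctx;
        if (demo_init(&ctx, sizeof(frame)) == 0) {
            for (int i = 0; i < 3; i++)   // simulate three frames
                demo_frame(&ctx, frame);
            demo_uninit(&ctx);
        }
        return 0;
    }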

    I plan on using pinned host memory, once I have everything up in CUDA code and have minimized the number of times I need to return to the main ffmpeg code.

    Steps taken:

    First sign of trouble: searching the web. I found "Passing a pointer to device memory between classes in CUDA" and printed out the pointers at various places in my execution to ensure that the device memory values were the same everywhere, and they are. FWIW, they seem to start around 0x90010000.
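
    A somewhat stronger check than eyeballing the printed value is to ask the runtime whether it still recognizes the stored pointer. A hedged sketch (the helper name and messages are made up, not from the post) using cudaPointerGetAttributes:

    // Hypothetical diagnostic helper: report whether the CUDA runtime still
    // considers p a known device allocation in the current context.
    #include <stdio.h>
    #include <cuda_runtime.h>

    static void check_dev_ptr(const void *p, const char *where)
    {
        struct cudaPointerAttributes attr;
        cudaError_t err = cudaPointerGetAttributes(&attr, p);
        if (err != cudaSuccess) {
            printf("%s: %p not recognized: %s\n", where, p, cudaGetErrorString(err));
            cudaGetLastError();   // clear the error so later calls are unaffected
        } else {
            printf("%s: %p is a known allocation on device %d\n", where, p, attr.device);
        }
    }

    Calling something like check_dev_ptr(myContext->dev_var, "filter_frame") right before the failing cudaMemcpy would show whether the stored pointer itself or the copy parameters are the problem.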

    ffmpeg's configure gave me -pthreads, so I checked whether my filter was being called from multiple threads, following "How can I tell if pthread_self is the main (first) thread in the process?" and checking syscall(SYS_gettid) == getpid(); according to those two checks, I am indeed in the primary thread at every step. I am still using OpenMP later, around some for loops in the main .c filter function, but the calls to CUDA don't occur in those loops.

    Code Overview:

    ffmpeg hands me a MyFilterContext structure pointer on each frame, as well as in the filter's config_input and uninit routines (called once per file), so I added some *host_var and *dev_var variables (a few of each, float and unsigned char).

    There is a whole lot of code I skipped for this post, but most of it has to do with my algorithm and details involved in writing an ffmpeg filter. I'm actually using about 6 host variables and 7 device variables right now, but for demonstration I limited it to one of each.

    Here is, broadly, what my vf_myfilter.c looks like.

    // declare my functions from vf_myfilter_cu.cu
    extern void cudaMyInit(unsigned char **dev_var, size_t mysize);
    extern void cudaMyUninit(unsigned char *dev_var);
    extern void cudaMyFunction(unsigned char *host_var, unsigned char *dev_var, size_t mysize);

    // part of the MyFilterContext structure, which ffmpeg keeps track of for me.
    typedef struct {
       unsigned char *host_var;
       unsigned char *dev_var;
    } MyFilterContext;

    // ffmpeg calls this function once per file, before any frames are processed.
    static int config_input(AVFilterLink *inlink) {
           // how ffmpeg passes me my context, fairly standard.
       MyFilterContext *myContext = inlink->dst->priv;
           // compute the size of one video plane of one frame of video
       size_t mysize = sizeof(unsigned char) * inlink->w * inlink->h;
           // av_mallocz is a malloc wrapper provided and required by ffmpeg
       myContext->host_var = (unsigned char*) av_mallocz(mysize);
           // Here's where I attempt to allocate my device memory.
       cudaMyInit( &myContext->dev_var, mysize);
       return 0;
    }

    // Called once per frame of video
    static int filter_frame(AVFilterLink *inlink, AVFilterBufferRef *frame) {
       MyFilterContext *myContext = inlink->dst->priv;

       // sanity check to make sure that this isn't part of the multithreaded code:
       // log only if we are NOT in the main thread
       if ( syscall(SYS_gettid) != getpid() )
           av_log(.... ); // This line never runs, so it's not threaded?

       // ...fill host_var with data from frame,
       // set mysize to the size of the buffer

       // Call my wrapper function defined in the .cu file
       cudaMyFunction(myContext->host_var, myContext->dev_var, mysize);

       // ... take the results from host_var and apply them to frame
       // ... and return the processed frame to ffmpeg
    }

    // called after everything else has happened:  free up the memory.
    static av_cold void uninit(AVFilterContext *ctx) {
       MyFilterContext *myContext = ctx->priv;
       // free my host_var
       if(myContext->host_var!=NULL) {
           av_free(myContext->host_var);
           myContext->host_var=NULL;
       }
       // free my dev_var
       cudaMyUninit(myContext->dev_var);
    }

    Here is, broadly, what my vf_myfilter_cu.cu looks like:

    // my kernel function that does the work.
    __global__ void myfunc(unsigned char *dev_var, size_t mysize) {
       // find the offset for this particular GPU thread to process
       // exit this function if the block/thread combo points to somewhere
       //     outside the frame
       // make sure we're less than mysize bytes from the beginning of dev_var
       // do things to dev_var[some_offset]
    }
    // Allocate the device memory
    extern "C" void cudaMyInit(unsigned char **dev_var, size_t mysize) {
       if(cudaMalloc( (void**) dev_var, mysize) != cudaSuccess) {
           printf("Cannot allocate the memory\n");
       }
    }

    // Free the device memory.
    extern "C" void cudaMyUninit(unsigned char *dev_var) {
       cudaFree(dev_var);
    }

    // Copy data from the host to the device,
    // Call the kernel function, and
    // Copy data from the device to the host.
    extern "C" void cudaMyFunction(
           unsigned char *host_var,
           unsigned char *dev_var,
           size_t mysize         )
    {
       cudaError_t cres;

       // dev_works is what I want to get rid of, but
       // to make sure that there's not something more obvious going
       // on, I made sure that my cudaMemcpy works if I'm allocating
       // the device memory in every frame.
       unsigned char *dev_works;
       if(cudaMalloc( (void **) &dev_works, mysize)!=cudaSuccess) {
           // I don't see this message
           printf("failed at per-frame malloc\n");
       }

       // THIS PART WORKS, copying host_var to dev_works
       cres=cudaMemcpy( (void *) dev_works, host_var, mysize, cudaMemcpyHostToDevice);
       if(cres!=cudaSuccess) {
           if(cres==cudaErrorInvalidValue) {
               // I don't see this message.
               printf("cudaErrorInvalidValue at per-frame cudaMemcpy\n");
           }
       }

       // THIS PART FAILS, copying host_var to dev_var
       cres=cudaMemcpy( (void *) dev_var, host_var, mysize, cudaMemcpyHostToDevice);
       if(cres!=cudaSuccess) {
           if(cres==cudaErrorInvalidValue) {
               // this is the error code that prints.
               printf("cudaErrorInvalidValue at per-frame cudaMemcpy\n");
           }
           // I check for other error codes, but they're not being hit.
       }

       // and this works with dev_works
       myfunc<<<blocks, threads>>>(dev_works, mysize);  // "blocks" and "threads" stand in for the launch configuration, which is not shown here

       if(cudaMemcpy(host_var, dev_works, mysize, cudaMemcpyDeviceToHost)!=cudaSuccess) {
           // I don't see this message.
           printf("Failed to copy post-kernel func\n");
       }

       cudaFree(dev_works);

    }

    Any ideas?