
Media (91)

Other articles (111)

  • Use, discuss, criticize

    13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Contribute to its documentation

    10 April 2011

    Documentation is one of the most important and most demanding tasks in building a technical tool.
    Any outside contribution on this front is essential: critiquing what already exists; helping to write articles aimed at users (MediaSPIP administrators or simply content producers) or at developers; creating explanatory screencasts; translating the documentation into a new language.
    To do so, you can register on (...)

  • Adding notes and captions to images

    7 February 2011

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, editing and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

On other sites (3769)

  • Filename in Android rejected by ffmpeg command

    28 April 2020, by ark1974

    Planning to use ffmpeg in Android for A/V conversion. I installed Android Studio 3.5.3. I am fairly new to Android development, and the folder names, unlike on a Windows system, are fairly confusing to me. I am able to build with Gradle without any error, but the fetched pathname is rejected by the ffmpeg command line.

    Questions:

    1) The resulting path_name shows both the path and the filename, which is cool. Is the resulting path_name correct or expected? However, ffmpeg raised an error saying that the directory/file corresponding to the resulting path_name does not exist.

    2) In the Android file properties the path starts with "Device storage/...", but the value obtained in Android Studio starts with "/document/". Why do I see this variation?

    3) onActivityResult() does not work when declared @Override private, but does work as @Override public. Is this expected? Many examples on the internet use private, though.

    4) The MediaStore.Audio.Media.DATA code does not work at all; is it deprecated in Android 3.5.5?

    Java code:

    @Override
    public void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == 7 && resultCode == RESULT_OK) {
            path_name = data.getData().getPath();
        }
    }

    Result:

    path_name = "/document/primary:WhatsApp/Media/WhatsApp Audio/AUD-20200402-WA0006.mp3"
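
    A note on question 1, as a possible direction rather than a confirmed fix: a path like "/document/primary:..." is a Storage Access Framework document identifier, not a filesystem path, which would explain why ffmpeg cannot open it. A minimal sketch of one common workaround, assuming the code lives in the same Activity and reuses the question's path_name field and request code 7 (the "input.mp3" name is hypothetical), is to copy the picked content into the app's cache directory and hand ffmpeg that real path:

    import android.content.Intent;
    import android.net.Uri;
    import java.io.*;

    @Override
    public void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == 7 && resultCode == RESULT_OK && data != null) {
            Uri uri = data.getData();
            // Hypothetical target name for this sketch.
            File out = new File(getCacheDir(), "input.mp3");
            try (InputStream in = getContentResolver().openInputStream(uri);
                 OutputStream os = new FileOutputStream(out)) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) > 0) {
                    os.write(buf, 0, n);
                }
                // A real path ffmpeg can open, unlike the SAF document path.
                path_name = out.getAbsolutePath();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }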

  • Use FFMPEG to blend streaming overlay onto second stream

    31 August 2017, by Louwrens Benade

    I’m trying to build a form of monitoring that can be superimposed onto a live stream.

    Monitor Overlay

    ffmpeg -i rtmp://localhost/pool/livestream -filter_complex \
     "nullsrc=1024x576[1:v]; \
     [0:a]showvolume=v=0:o=1:t=0:f=0.1,drawbox=x=ih-40:y=0:w=40:h=ih[volume]; \
     [1:v]drawtext=x=(main_w/2)-(text_w/2):y=text_h:fontsize=30:fontcolor=white:borderw=1:text='Stream Label',scale=-1:-1[label]; \
     [label][volume]overlay=x=main_w-40:y=0[output]" \
     -map "[output]" -f flv rtmp://localhost/pool/livestream_overlay

    What I would like to accomplish is that this stream be superimposed onto the original stream and pushed to a third RTMP endpoint, like this:

    ffmpeg -i rtmp://localhost/pool/livestream -i rtmp://localhost/pool/livestream_overlay \
     -filter_complex "[0:v][1:v]overlay=shortest=1[output]" \
     -map "[output]" -f flv rtmp://localhost/pool/livestream_monitor

    While the workflow seems to be working, the overlay is not blending (subtracted?) onto the original video:

    Actual output (image)

    Expected output (image)

    Note: codec options have been removed for brevity's sake.
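
    A possible explanation, offered as an assumption rather than something confirmed in the post: FLV/H.264 carries no alpha channel, so the intermediate overlay stream arrives fully opaque and its black background covers the original video instead of blending with it. One common workaround is to key out the near-black background before overlaying, e.g. with the colorkey filter (the threshold values here are illustrative):

    ffmpeg -i rtmp://localhost/pool/livestream -i rtmp://localhost/pool/livestream_overlay \
     -filter_complex "[1:v]colorkey=black:0.1:0.1[keyed]; \
     [0:v][keyed]overlay=shortest=1[output]" \
     -map "[output]" -f flv rtmp://localhost/pool/livestream_monitor

    Alternatively, the whole pipeline could run in a single ffmpeg invocation, which would avoid the lossy, alpha-less intermediate stream entirely.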

  • How would I assign multiple mmaps from a single file descriptor?

    9 June 2011, by Alex Stevens

    So, for my final year project, I'm using Video4Linux2 to pull YUV420 images from a camera, pass them through to x264 (which uses these images natively), and then send the encoded stream via Live555 to an RTP/RTCP-compliant video player on a client over a wireless network. All of this I'm trying to do in real time, so there'll be a control algorithm, but that's not in the scope of this question. All of this - except Live555 - is being written in C. Currently, I'm near the end of encoding the video, but want to improve performance.

    To say the least, I've hit a snag... I'm trying to avoid user-space pointers for V4L2 and use mmap() instead. I'm encoding video, but since it's YUV420, I've been malloc'ing new memory to hold the Y', U and V planes in three separate variables for x264 to read from. I would like those variables to point into an mmap'ed piece of memory instead.

    However, the V4L2 device has one single file descriptor for the buffered stream, and I need to split the stream into three mmap'ed variables adhering to the YUV420 standard, like so...

    buffers[n_buffers].y_plane = mmap(NULL, (2 * width * height) / 3,
                                       PROT_READ | PROT_WRITE, MAP_SHARED,
                                       fd, buf.m.offset);
    buffers[n_buffers].u_plane = mmap(NULL, width * height / 6,
                                       PROT_READ | PROT_WRITE, MAP_SHARED,
                                       fd, buf.m.offset +
                                       ((2 * width * height) / 3 + 1) /
                                       sysconf(_SC_PAGE_SIZE));
    buffers[n_buffers].v_plane = mmap(NULL, width * height / 6,
                                       PROT_READ | PROT_WRITE, MAP_SHARED,
                                       fd, buf.m.offset +
                                       ((2 * width * height) / 3 +
                                       width * height / 6 + 1) /
                                       sysconf(_SC_PAGE_SIZE));

    Where "width" and "height" are the resolution of the video (e.g. 640x480).
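
    For context, a constraint worth noting (an assumption added here, not part of the original post): mmap() offsets must be page-aligned, and V4L2 expects each buffer to be mapped exactly once at the buf.m.offset reported by VIDIOC_QUERYBUF, so partial maps at derived offsets like the above cannot work. A minimal sketch of the usual approach, mapping each buffer once and deriving the plane pointers arithmetically under the standard planar YUV420 layout (Y is width*height bytes, followed by U and V at width*height/4 each):

    #include <stdint.h>
    #include <sys/mman.h>

    /* Map the whole driver buffer once, at the offset V4L2 reports. */
    uint8_t *base = mmap(NULL, buf.length,
                         PROT_READ | PROT_WRITE, MAP_SHARED,
                         fd, buf.m.offset);
    if (base == MAP_FAILED) {
        /* handle the error */
    }

    /* Point into the single mapping instead of mapping three times. */
    buffers[n_buffers].y_plane = base;                          /* width*height bytes   */
    buffers[n_buffers].u_plane = base + width * height;         /* width*height/4 bytes */
    buffers[n_buffers].v_plane = base + width * height * 5 / 4; /* width*height/4 bytes */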

    From what I understand... mmap() seeks through a file, kind of like this (pseudoish-code):

    fd = v4l2_open(...);
    lseek(fd, buf.m.offset + (2 * width * height) / 3, SEEK_SET);
    read(fd, buffers[n_buffers].u_plane, width * height / 6);

    My code is located in a Launchpad repo here (for more background):
    http://bazaar.launchpad.net/~alex-stevens/+junk/spyPanda/files (Revision 11)

    And the YUV420 format can be seen clearly in this Wiki illustration: http://en.wikipedia.org/wiki/File:Yuv420.svg (I essentially want to split the Y, U, and V bytes into each mmap'ed memory area.)

    Anyone care to explain a way to mmap three variables to memory from the one file descriptor, or to say why I went wrong? Or even hint at a better way to pass the YUV420 buffer to x264? :P

    Cheers! ^^