Media (91)

Other articles (19)

  • Selection of projects using MediaSPIP

    2 May 2011, by

    The examples below are representative of specific uses of MediaSPIP for particular projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen such associations. Its members (...)

  • List of compatible distributions

    26 April 2011, by

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution   Version name           Version number
    Debian         Squeeze                6.x.x
    Debian         Wheezy                 7.x.x
    Debian         Jessie                 8.x.x
    Ubuntu         The Precise Pangolin   12.04 LTS
    Ubuntu         The Trusty Tahr        14.04

    If you want to help us improve this list, you can give us access to a machine running a distribution not mentioned above, or send us the fixes needed to add it (...)

  • Selection of projects using MediaSPIP

    29 April 2011, by

    The examples cited below are representative of specific uses of MediaSPIP for certain projects.
    Do you think you have built a "remarkable" site with MediaSPIP? Let us know here.
    MediaSPIP farm @ Infini
    The Infini association develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. In this area it plays a unique role (...)

On other sites (4366)

  • Building FFmpeg for android to run command line args

    11 September 2012, by Zargoon

    I am trying to build the FFmpeg library to use in my Android app with the NDK. The reason is that I am using Android's native video capture feature, because I really don't want to write my own video recorder. However, native video capture only allows either high-quality or low-quality encoding. I want something in between, and I believe the solution is to use the FFmpeg library to re-encode the high-quality video into something lighter.

    So far I have been able to build the FFmpeg library according to this guide: http://www.roman10.net/how-to-build-ffmpeg-for-android/ and, with a few tweaks, I have been able to get it to work.

    However, everything I've found seems to be about writing your own encoder, which seems like overkill to me. All I really want to do is send a string in command-line format to FFmpeg's main() function and re-encode my video. However, I can't seem to figure out how to build FFmpeg so that I have access to the main method. I found this post: Compile ffmpeg.c and call its main() via JNI, which links to a project doing more or less what I want, but for the life of me I cannot figure out what is going on. It also seems like he is compiling more than I need, and I would really like to keep my application as lightweight as possible.
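
    To make the idea concrete, here is roughly what I imagine (a minimal sketch, not the linked project's actual code; the class name com.example.FFmpegBridge and the renamed entry point ffmpeg_main are my own placeholders): rename main() in ffmpeg.c when building libffmpeg.so, then call it through a small JNI wrapper.

    // ffmpeg_jni.cpp -- hypothetical JNI bridge. Assumes ffmpeg.c's main()
    // was renamed to ffmpeg_main() when libffmpeg.so was built.
    #include <jni.h>
    #include <vector>

    extern "C" int ffmpeg_main(int argc, char **argv); // renamed main() from ffmpeg.c

    extern "C" JNIEXPORT jint JNICALL
    Java_com_example_FFmpegBridge_run(JNIEnv *env, jclass, jobjectArray args)
    {
        int argc = env->GetArrayLength(args);
        std::vector<char *> argv(argc + 1, nullptr);
        for (int i = 0; i < argc; i++) {
            jstring s = static_cast<jstring>(env->GetObjectArrayElement(args, i));
            argv[i] = const_cast<char *>(env->GetStringUTFChars(s, nullptr));
        }
        // args would hold e.g. {"ffmpeg", "-i", "in.mp4", "-b:v", "1M", "out.mp4"}.
        // Caveats: ffmpeg's real main() may call exit() on error, which would
        // kill the whole app process, and a real wrapper must also
        // ReleaseStringUTFChars each element afterwards.
        return ffmpeg_main(argc, argv.data());
    }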

    Some additional direction would be extremely helpful. Thank you.

  • Broken output from libavcodec/swscale, depending on resolution

    3 June 2014, by dtumaykin

    I am writing video conference software. I have an H.264 stream decoded with libavcodec into IYUV and then rendered into a window with VMR9 in windowless mode. I use a DirectShow graph to do so.

    To avoid an unnecessary conversion to RGB and back (see link), I convert the IYUV video into YUY2 before passing it to VMR9, using libswscale.

    I noticed that with a video resolution of 848x480 the output video was broken, so I investigated further and found that for some resolutions the video is always broken. To rule out libswscale, I added support for IYUV+padding to IYUV conversion, and it worked for all resolutions.

    Still, I wanted to avoid the slow IYUV path, so I implemented support for NV12 (with libswscale) and YV12 (manually, essentially the same as IYUV). After running tests on two different computers, I got strange results.

    resolution  YUY2    NV12    IYUV    YV12
    PC 1 (my laptop)                
    640x360     ok      broken  ok      broken
    848x480     broken  broken  ok      broken
    960x540     broken  broken  ok      broken
    1024x576    ok      ok      ok      ok
    1280x720    ok      ok      ok      broken
    1920x1080   ok      broken  ok      broken

    PC 2                
    640x360     ok      ok      ok      ok
    848x480     ok      broken  ok      broken
    960x540     ok      ok      ok      ok
    1024x576    ok      ok      ok      ok
    1280x720    ok      broken  ok      ok
    1920x1080   ok      ok      ok      ok

    To rule out a fault in VMR9, I substituted it with EVR, but got the same results.

    I know that padding is needed for memory alignment, and that the size of the padding depends on the CPU used (libavcodec doc), which may explain the difference between the two computers (the first has an Intel i7-3820QM, the second an Intel Core 2 Quad Q6600). I suppose it has something to do with padding, because the images are corrupted in a particular way.
    [Screenshot 1: my blue t-shirt is visible in the lower part of the image.]
    [Screenshot 2: my blue t-shirt in the lower part of the image, and my face in the upper one.]

    The conversion code follows. The NV12 and YUY2 conversions are performed with libswscale, while IYUV and YV12 are copied manually.

    int pixels = _outputFrame->width * _outputFrame->height;
    if (_outputFormat == "YUY2") {
       int stride = _outputFrame->width * 2;
       sws_scale(_convertCtx, _outputFrame->data, _outputFrame->linesize, 0, _outputFrame->height, &out, &stride);
    }
    else if (_outputFormat == "NV12") {
       int stride[] = { _outputFrame->width, _outputFrame->width };
       uint8_t * dst[] = { out, out + pixels };
       sws_scale(_convertCtx, _outputFrame->data, _outputFrame->linesize, 0, _outputFrame->height, dst, stride);
    }
    else if (_outputFormat == "IYUV") { // clean ffmpeg padding
       for (int i = 0; i < _outputFrame->height; i++) // copy Y
           memcpy(out + i * _outputFrame->width, _outputFrame->data[0] + i * _outputFrame->linesize[0] , _outputFrame->width);
       for (int i = 0; i < _outputFrame->height / 2; i++) // copy U
           memcpy(out + pixels + i * _outputFrame->width / 2, _outputFrame->data[1] + i * _outputFrame->linesize[1] , _outputFrame->width / 2);            
       for (int i = 0; i < _outputFrame->height / 2; i++) // copy V
           memcpy(out + pixels + pixels/4 + i * _outputFrame->width / 2, _outputFrame->data[2] + i * _outputFrame->linesize[2] , _outputFrame->width / 2);
    }
    else if (_outputFormat == "YV12") { // like IYUV, but U is inverted with V plane
       for (int i = 0; i < _outputFrame->height; i++) // copy Y
           memcpy(out + i * _outputFrame->width, _outputFrame->data[0] + i * _outputFrame->linesize[0], _outputFrame->width);
       for (int i = 0; i < _outputFrame->height / 2; i++) // copy V
           memcpy(out + pixels + i * _outputFrame->width / 2, _outputFrame->data[2] + i * _outputFrame->linesize[2], _outputFrame->width / 2);
       for (int i = 0; i < _outputFrame->height / 2; i++) // copy U
           memcpy(out + pixels + pixels / 4 + i * _outputFrame->width / 2, _outputFrame->data[1] + i * _outputFrame->linesize[1], _outputFrame->width / 2);
    }

    out is the output buffer. _outputFrame is the output AVFrame from libavcodec. _convertCtx is initialized as follows.

    if (_outputFormat == "YUY2")
       _convertCtx = sws_getContext(_width, _height, AV_PIX_FMT_YUV420P,
                                    _width, _height, AV_PIX_FMT_YUYV422, SWS_FAST_BILINEAR, nullptr, nullptr, nullptr);
    else if (_outputFormat == "NV12")
       _convertCtx = sws_getContext(_width, _height, AV_PIX_FMT_YUV420P,
                                    _width, _height, AV_PIX_FMT_NV12, SWS_FAST_BILINEAR, nullptr, nullptr, nullptr);
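
    To test the padding hypothesis, one thing I could try (a sketch; the alignment value 16 is a guess, and it assumes the renderer can consume an arbitrary stride) is to let FFmpeg choose aligned destination strides with av_image_alloc instead of assuming stride == width:

    extern "C" {
    #include <libavutil/imgutils.h>
    #include <libswscale/swscale.h>
    }

    uint8_t *dst_data[4];
    int dst_linesize[4];
    // Let av_image_alloc pick suitably aligned strides for NV12.
    if (av_image_alloc(dst_data, dst_linesize,
                       _outputFrame->width, _outputFrame->height,
                       AV_PIX_FMT_NV12, 16) >= 0) {
        sws_scale(_convertCtx, _outputFrame->data, _outputFrame->linesize,
                  0, _outputFrame->height, dst_data, dst_linesize);
        // ... hand dst_data[0] and dst_linesize[0..1] to the renderer, then:
        av_freep(&dst_data[0]);
    }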

    Questions:

    1. Are the manual conversions correct?
    2. Are my assumptions correct?
    3. If the previous two answers are positive, where is the problem? And especially...
    4. Why does it occur only with some resolutions and not others?
    5. What additional info can I provide?

  • ffmpeg get frame time stamp

    19 November 2018, by Anshul G.

    I am trying to record a webcam video using ffmpeg. I have a Logitech C922 Pro Stream Webcam. This is the command I use:

    ffmpeg -f v4l2 -framerate 60 -video_size 1280x720 -input_format mjpeg -i /dev/video1 out.mp4

    My application requires me to get the exact timestamp for every frame. While I could use my knowledge of the framerate and frame number to add the required interval to the start time, I am afraid that this might not be completely accurate.
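
    (For example, at a nominal 60 fps, frame n would be stamped start_time + n/60 seconds, so frame 177 would fall 2.95 s in; but if frames are dropped or duplicated, those stamps describe the output timeline rather than the actual capture instants.)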

    Firstly, I have noticed that while recording, the console initially displays a far higher fps than the one I have set:

    Press [q] to stop, [?] for help
    frame=  177 fps= 85 q=-1.0 Lsize=     502kB time=00:00:02.91 bitrate=1410.8kbits/s dup=144 drop=0    

    Also, I think that ffmpeg sometimes drops frames in between.

    However, my videos seem to have the correct number of frames, so I think the fps value displayed could instead refer to the encoding/decoding speed. I am not sure about the frame dropping.

    I would be happy if you could let me know what you think, or suggest an alternative so that I can timestamp my frames accurately. Thanks!

    Edit:

    I have understood that the frame rate is correlated with ambient light, which can lead to heavy frame duplication. I am currently recording on Windows and have set frame rate as the priority in the Logitech Gaming Software. However, there is still the occasional dropped or duplicated frame. Does this affect the timestamps of the frames? Or can I extrapolate from the start time?
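
    For reference, one way to inspect the timestamps ffmpeg actually wrote into the file (an illustrative ffprobe invocation; note these are container timestamps, not wall-clock capture times):

    ffprobe -v error -select_streams v:0 -show_entries frame=best_effort_timestamp_time -of csv=p=0 out.mp4

    This prints one timestamp per frame of the video stream. If wall-clock capture times are what matters, the generic demuxer option -use_wallclock_as_timestamps 1 (placed before -i) may be worth trying, though I have not verified how it interacts with the v4l2 input.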