
Other articles (70)

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and other formats (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013 and it is announced here.
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)

  • The farm's regular cron tasks

    1 December 2010

    Managing the farm relies on running several repetitive tasks, known as cron tasks, at regular intervals.
    The super cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the cron of all the instances of the farm on a regular basis. Combined with a system cron on the farm's central site, this makes it easy to generate regular visits to the various sites and to prevent the tasks of rarely visited sites from being too (...)
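
    By way of illustration only (a minimal sketch; the URL is a hypothetical placeholder, not the actual address of the farm's central site), the system cron mentioned above could be a crontab line that fetches the central site once a minute so that its own cron gets triggered:

     # hypothetical crontab entry: visit the central site every minute
     * * * * * wget -q -O /dev/null http://central-site.example.org/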

On other sites (4717)

  • RTP/UDP or RTSP for accessing stream and passing frame to OpenCV?

    15 January 2020, by xor31four

    Apologies for my inexperience in this domain. I am trying to implement an algorithm that detects the occurrence of a particular event in real time. The particular event is a steady growth of motion across 5 consecutive frames, roughly analogous to a growing sphere or beach ball.

    I am able to detect the event on pre-recorded video in .avi format (MJPEG frames) with EmguCV (a C# wrapper for OpenCV). The method I use is based on background subtraction, outlined here: https://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/

    The problem is that the live video stream is usually delivered at a URL of the form rtsp://XXX.XXX.X.XX/stream1.sdp

    EmguCV on Windows can't decode this H.264 stream for a reason I am still trying to figure out. I tried the same URL using Python and OpenCV and received a non-matching transport error, similar to the one described in the question '"Nonmatching transport in server reply" when cv2.VideoCapture rtsp onvif camera, how to fix?' - the answer there didn't work for me.
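
    As a sanity check on the transport mismatch (just a guess on my side; the address is the same placeholder as above), forcing the RTSP transport from the command line shows whether ffmpeg itself can read the stream over TCP:

     ffmpeg -rtsp_transport tcp -i rtsp://XXX.XXX.X.XX/stream1.sdp -t 5 -f null -

    If that works, the OpenCV side may simply be requesting the wrong transport; OpenCV's FFmpeg backend can reportedly be told which transport to use via the OPENCV_FFMPEG_CAPTURE_OPTIONS environment variable (e.g. the value rtsp_transport;tcp), although I haven't verified that on Windows.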

    I can open the rtsp URL using VLCPlayer and its corresponding C# library - from my understanding it is using ffmpeg, although I may be wrong. FFmpeg on the command line can access the stream.

    EmguCV also uses ffmpeg as a backend which is why I am very confused as to why it can’t open the rtsp URL.

    Here is an image of the module tree when VLCPlayer opens the RTSP stream (screenshot not reproduced here).

    From my understanding, EmguCV doesn't use live555 or avcodec.

    I've noticed that if I change the streamer configuration to use UDP or RTP rather than RTSP, EmguCV can access the H.264 URL, although the URL is now of the form rtp://XXX.XXX.X.XX:XXXXX or udp://XXX.XXX.X.XX:XXXXX - with no .sdp extension.

    I would highly appreciate it if someone with more experience could give me some pointers.
    I have a great deal to learn even though I have spent a lot of time researching this topic. With regard to keeping the detections successful, would it be recommended to process H.264 frames with possible distortion, or MJPEG frames?

    I can’t afford a delay longer than 1-2 seconds, and would ideally like to continue with the current method used to detect the event.

    From my current understanding, here are the routes I can take:

    1) Use RTP/UDP and process the H.264 video using EmguCV - there is some distortion in the video when there is a large amount of movement, and I also receive several h264 error messages during the stream:

    [h264 @ 00000124f13a5080] SPS unavailable in decode_picture_timing
    [h264 @ 00000124f13a5080] non-existing PPS 0 referenced
    [h264 @ 00000124f13a5080] decode_slice_header error
    [h264 @ 00000124f13a5080] no frame!
    [h264 @ 00000124f135eac0] Missing reference picture, default is 0
    [h264 @ 00000124f135eac0] decode_slice_header error
    [h264 @ 00000124f13a5080] cbp too large (6929) at 11 20
    [h264 @ 00000124f13a5080] error while decoding MB 11 20
    [h264 @ 00000124f135eac0] top block unavailable for requested intra mode -1
    [h264 @ 00000124f135eac0] error while decoding MB 3 0
    [h264 @ 00000124f124e580] cbp too large (96) at 33 0
    [h264 @ 00000124f124e580] error while decoding MB 33 0
    [h264 @ 00000124f19940c0] top block unavailable for requested intra mode
    [h264 @ 00000124f19940c0] error while decoding MB 1 1


    2) Keep the RTSP protocol, use libav to decode the frames and pass them to EmguCV, following this answer: https://www.raspberrypi.org/forums/viewtopic.php?t=83127 - I'm not sure whether this would introduce a huge delay.

    3) Keep the RTSP protocol, use ffmpeg to convert the H.264 stream to MJPEG and access that URL instead? Again, I'm not sure this will be feasible if it introduces a large delay (see the command-line sketch after this list).

    4) Use a Linux machine rather than Windows and configure a GStreamer backend - not ideal.
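
    For route 3, a rough command-line sketch of what I have in mind (untested; placeholder address and port) would be to pull the RTSP feed with ffmpeg and re-serve it as multipart MJPEG over HTTP, which could then be opened like any MJPEG URL:

     ffmpeg -rtsp_transport tcp -i rtsp://XXX.XXX.X.XX/stream1.sdp -an -c:v mjpeg -q:v 5 -f mpjpeg -listen 1 http://127.0.0.1:8090/stream.mjpg

    I have no idea yet how much latency the extra transcode would add.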

    Thank you for taking the time to read this post.

  • FFmpeg on Ubuntu won't start. Process is just killed [on hold]

    8 November 2018, by AK47

    I am trying to run a command with ffmpeg. When I run it on my local computer it works fine, but when running on my Ubuntu server it won't start. It just keeps giving an error saying the process is killed when it tries to start. Is this something related to memory or a different issue?

    Killed   14 fps=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A dup=2 drop=0 speed=   0x
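
    If this is the kernel's out-of-memory killer (my first guess given the bare "Killed" message, though I haven't confirmed it), the kernel log right after the failed run should say so:

     dmesg | grep -i -E "out of memory|killed process"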

    Any help would be greatly appreciated

    Below are the logs that are produced.

    root@kickpush:/var/www/kick-push.co.uk/public_html/allblacks# ffmpeg -loop 1 -i q.png -r 30 -t 10 -i a.png -r 30 -t 5 image2.mp4
    ffmpeg version 3.3.3 Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
      configuration: --extra-libs=-ldl --prefix=/opt/ffmpeg --mandir=/usr/share/man --enable-avresample --disable-debug --enable-nonfree --enable-gpl --enable-version3 --enable-libopencore-amrnb --enable-libopencore-amrwb --disable-decoder=amrnb --disable-decoder=amrwb --enable-libpulse --enable-libfreetype --enable-gnutls --disable-ffserver --enable-libx264 --enable-libx265 --enable-libfdk-aac --enable-libvorbis --enable-libtheora --enable-libmp3lame --enable-libopus --enable-libvpx --enable-libspeex --enable-libass --enable-avisynth --enable-libsoxr --enable-libxvid --enable-libvidstab --enable-libwavpack --enable-nvenc --enable-libzimg
     libavutil      55. 58.100 / 55. 58.100
     libavcodec     57. 89.100 / 57. 89.100
     libavformat    57. 71.100 / 57. 71.100
     libavdevice    57.  6.100 / 57.  6.100
     libavfilter     6. 82.100 /  6. 82.100
     libavresample   3.  5.  0 /  3.  5.  0
     libswscale      4.  6.100 /  4.  6.100
     libswresample   2.  7.100 /  2.  7.100
     libpostproc    54.  5.100 / 54.  5.100
    Input #0, png_pipe, from 'q.png':
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: png, rgba(pc), 1080x1920, 25 fps, 25 tbr, 25 tbn, 25 tbc
    Input #1, png_pipe, from 'a.png':
     Duration: N/A, bitrate: N/A
       Stream #1:0: Video: png, rgba(pc), 1080x1920, 25 tbr, 25 tbn, 25 tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (png (native) -> h264 (libx264))
    Press [q] to stop, [?] for help
    No pixel format specified, yuv444p for H.264 encoding chosen.
    Use -pix_fmt yuv420p for compatibility with outdated media players.
    [libx264 @ 0x31a4a40] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 AVX2 LZCNT BMI2
    [libx264 @ 0x31a4a40] profile High 4:4:4 Predictive, level 4.0, 4:4:4 8-bit
    [libx264 @ 0x31a4a40] 264 - core 148 r2643 5c65704 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=1 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to 'image2.mp4':
     Metadata:
       encoder         : Lavf57.71.100
       Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv444p, 1080x1920, q=-1--1, 30 fps, 15360 tbn, 30 tbc
       Metadata:
         encoder         : Lavc57.89.100 libx264
       Side data:
         cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
    Killed   14 fps=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A dup=2 drop=0 speed=   0x
  • FFmpeg (Android): av_read_frame or avcodec_decode_video2 returning same colour

    5 December 2012, by Cehm

    I've been experimenting with FFmpeg for the past 2 weeks and I'm having a bit of trouble...
    First I was working with a Galaxy S3, which worked fine and gave me great pictures, but I recently switched to a Galaxy Nexus, which gave me a bunch of problems...

    What I'm doing: I just extract frames from a video.

    How I'm doing it:

     while(av_read_frame(gFormatCtx, &packet) >= 0)
     {
         // Is this a packet from the video stream?
         if(packet.stream_index == videoStream)
         {
             // Decode the video frame
             avcodec_decode_video2(gVideoCodecCtx, pFrame, &frameFinished, &packet);
             // Did we get a complete video frame?
             if(frameFinished)
             { // ...and so on. But our problem is already here...

    OK, now pFrame holds a YUV representation of my frame... So, in order to check what I'm getting from the avcodec_decode_video2(...) function, I just write pFrame to a file so that I can view it with any YUV viewer available on the web.

     char yuvFileName[100];
     sprintf(yuvFileName, "/storage/sdcard0/yuv%d.yuv", index);
     FILE* fp = fopen(yuvFileName, "wb");
     int y;
     // Write pixel data: full-resolution Y plane, then the half-resolution U and V planes (assumes YUV420P)
     for(y = 0; y < gVideoCodecCtx->height; y++)
     {
        fwrite(pFrame->data[0] + y * pFrame->linesize[0], 1, gVideoCodecCtx->width, fp);
     }
     for(y = 0; y < gVideoCodecCtx->height / 2; y++)
     {
        fwrite(pFrame->data[1] + y * pFrame->linesize[1], 1, gVideoCodecCtx->width / 2, fp);
     }
     for(y = 0; y < gVideoCodecCtx->height / 2; y++)
     {
        fwrite(pFrame->data[2] + y * pFrame->linesize[2], 1, gVideoCodecCtx->width / 2, fp);
     }
     fclose(fp);

    OK, so here I now have my result in a file stored at /storage/sdcard0/blabla.YUV in my Galaxy Nexus's internal storage.

    But if I open the file with a viewer (for example XnView, which is supposed to display YUV files properly), I only see dark green in the picture.
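
    One thing I'm wondering about (purely an assumption on my part, not something I've verified): my dump code assumes the decoded frame is planar YUV420P, and if gVideoCodecCtx->pix_fmt is something else on the Nexus then the planes I write out would be wrong. A sketch of normalising the frame with swscale before writing it (error handling omitted; on older FFmpeg builds the names would be PIX_FMT_YUV420P and avcodec_alloc_frame instead):

     #include <libswscale/swscale.h>

     // Convert whatever pixel format the decoder produced into planar YUV420P
     struct SwsContext* sws = sws_getContext(
             gVideoCodecCtx->width, gVideoCodecCtx->height, gVideoCodecCtx->pix_fmt,
             gVideoCodecCtx->width, gVideoCodecCtx->height, AV_PIX_FMT_YUV420P,
             SWS_BILINEAR, NULL, NULL, NULL);

     AVFrame* yuvFrame = av_frame_alloc();
     uint8_t* buffer = (uint8_t*) av_malloc(
             avpicture_get_size(AV_PIX_FMT_YUV420P, gVideoCodecCtx->width, gVideoCodecCtx->height));
     avpicture_fill((AVPicture*) yuvFrame, buffer, AV_PIX_FMT_YUV420P,
                    gVideoCodecCtx->width, gVideoCodecCtx->height);

     // Rescale/convert into the YUV420P frame, then write yuvFrame->data[0..2]
     // with yuvFrame->linesize[0..2] instead of pFrame's planes
     sws_scale(sws, (const uint8_t* const*) pFrame->data, pFrame->linesize,
               0, gVideoCodecCtx->height, yuvFrame->data, yuvFrame->linesize);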

    What bothers me is that everything worked properly on Galaxy S3 but something failed on GNexus...

    So here's my question: why doesn't it work on the Galaxy Nexus?

    A compatibility problem between the Galaxy Nexus and armeabi-v7a?

    I don't know !

    Regards,
    Cehm