
Media (91)

Other articles (43)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all types.
    It creates "media" items; that is, a "media" is a SPIP article created automatically when a document is uploaded, whether audio, video, image or text, and only a single document can be linked to a "media" article;

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

  • Emballe Médias: a simple way to put documents online

    29 October 2010, by

    The emballe médias plugin was developed mainly for the mediaSPIP distribution, but it is also used in other related projects, such as géodiversité. Required and compatible plugins
    For this plugin to work, other plugins must be installed: CFG Saisies SPIP Bonux Diogène swfupload jqueryui
    Other plugins can be used alongside it to extend its capabilities: Ancres douces Légendes photo_infos spipmotion (...)

On other sites (5546)

  • ffplay + color-grading filter settings: live-adjust filter settings during playback?

    5 October 2019, by raven

    Following up on a previous topic, a color-grading ffmpeg Windows batch script which you guys brilliantly helped me set up (link: ffmpeg - color-grading video material AND display original source as picture-in-picture, using -filter_complex), I now have another daring question :))

    The script renders one input video file and displays the original (non-graded) source material as a PIP overlay (lower-right corner), based on some pre-defined filter settings. These filter settings are hard-coded in the Windows batch file. This works really nicely, and I am quite happy with it; thanks again to the contributors who lent a hand!

    Could this now be taken one step further, i.e. having the filter parameters change dynamically during playback? Being able to adjust color-grading parameters on the fly, during playback, would certainly make it quicker to find the correct color-grading values for later rendering with the aforementioned script.

    Maybe one has to ditch the PIP for this. So let's go playback-only (no PIP encoding) and use ffplay, with some adjustable filter settings while the video material plays. Is such a dynamic filter adjustment even possible with ffplay during video playback?

    I'll remain curious, and I'd like to thank you guys in advance for your time and support! Cheers, raven.
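    One possible route, not part of the original post and only a sketch: ffmpeg/ffplay builds compiled with --enable-libzmq include the zmq pass-through filter, which accepts runtime commands for command-capable filters (the eq filter's brightness, contrast, saturation and gamma among them) via the zmqsend tool shipped in ffmpeg's tools/ directory. The input file name and the starting values below are placeholders.

    ```shell
    # Terminal 1: play the clip with a zmq control point ahead of a named eq instance.
    ffplay -vf "zmq,eq@grade=contrast=1.0:saturation=1.0" input.mp4

    # Terminal 2: push new values while playback continues.
    # Message format is "<target> <option> <value>".
    echo "eq@grade contrast 1.3" | ./zmqsend
    echo "eq@grade saturation 0.8" | ./zmqsend
    ```

    Once the values look right, they can be copied back into the batch script for the final render.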

  • Stacking different length videos not working with ffmpeg and -itsoffset

    3 April 2019, by Lucas Madalozzo

    I developed a video conferencing app that records the video streams separately, and I am now looking for a way to merge them. At the moment I am experimenting with -itsoffset and hstack to stack the two videos side by side using this command:

    ffmpeg \
    -itsoffset 17 -i smaller.mp4 \
    -itsoffset 0 -i bigger.mp4 \
    -filter_complex hstack=inputs=2 \
    -c:v libx264 -crf 23 out.mp4

    The result is a side-by-side video in which both streams stay frozen for 17 seconds and then start playing, even bigger.mp4, which should start at time 0.

    Any help would be really appreciated!

    ffmpeg verbose output:

    ffmpeg version 4.1 Copyright (c) 2000-2018 the FFmpeg developers
     built with gcc 4.9.2 (Debian 4.9.2-10+deb8u1)
     configuration: --enable-gpl --enable-postproc --enable-swscale --enable-avfilter --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-libx264 --enable-libspeex --enable-shared --enable-pthreads --enable-libopenjpeg --enable-nonfree --enable-libopus --enable-libvorbis --enable-libvpx
     libavutil      56. 22.100 / 56. 22.100
     libavcodec     58. 35.100 / 58. 35.100
     libavformat    58. 20.100 / 58. 20.100
     libavdevice    58.  5.100 / 58.  5.100
     libavfilter     7. 40.101 /  7. 40.101
     libswscale      5.  3.100 /  5.  3.100
     libswresample   3.  3.100 /  3.  3.100
     libpostproc    55.  3.100 / 55.  3.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'smaller.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf58.20.100
     Duration: 00:00:05.16, start: 0.000000, bitrate: 444 kb/s
       Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 480x360 [SAR 1:1 DAR 4:3], 330 kb/s, 32 fps, 32 tbr, 16384 tbn, 64 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 103 kb/s (default)
       Metadata:
         handler_name    : SoundHandler
    Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 'bigger.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf56.36.100
     Duration: 00:00:22.03, start: 0.000000, bitrate: 290 kb/s
       Stream #1:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 480x360 [SAR 1:1 DAR 4:3], 177 kb/s, 32 fps, 32 tbr, 16384 tbn, 64 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Stream #1:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 103 kb/s (default)
       Metadata:
         handler_name    : SoundHandler
    File 'out.mp4' already exists. Overwrite ? [y/N] y
    Stream mapping:
     Stream #0:0 (h264) -> hstack:input0 (graph 0)
     Stream #1:0 (h264) -> hstack:input1 (graph 0)
     hstack (graph 0) -> Stream #0:0 (libx264)
     Stream #0:1 -> #0:1 (aac (native) -> aac (native))
    Press [q] to stop, [?] for help
    [libx264 @ 0x206ed00] using SAR=1/1
    [libx264 @ 0x206ed00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 AVX2 LZCNT BMI2
    [libx264 @ 0x206ed00] profile High, level 3.1
    [libx264 @ 0x206ed00] 264 - core 146 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to 'out.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf58.20.100
       Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 960x360 [SAR 1:1 DAR 8:3], q=-1--1, 32 fps, 16384 tbn, 32 tbc (default)
       Metadata:
         encoder         : Lavc58.35.100 libx264
       Side data:
         cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
       Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 128 kb/s (default)
       Metadata:
         handler_name    : SoundHandler
         encoder         : Lavc58.35.100 aac
    frame=  709 fps=130 q=-1.0 Lsize=     573kB time=00:00:22.12 bitrate= 212.2kbits/s dup=544 drop=0 speed=4.05x
    video:478kB audio:81kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 2.445685%
    [libx264 @ 0x206ed00] frame I:3     Avg QP:16.86  size: 38355
    [libx264 @ 0x206ed00] frame P:190   Avg QP:18.22  size:  1633
    [libx264 @ 0x206ed00] frame B:516   Avg QP:16.23  size:   123
    [libx264 @ 0x206ed00] consecutive B-frames:  1.3%  2.8%  6.8% 89.1%
    [libx264 @ 0x206ed00] mb I  I16..4:  6.6% 54.8% 38.6%
    [libx264 @ 0x206ed00] mb P  I16..4:  0.4%  1.5%  0.2%  P16..4:  8.7%  4.0%  1.9%  0.0%  0.0%    skip:83.3%
    [libx264 @ 0x206ed00] mb B  I16..4:  0.0%  0.1%  0.0%  B16..8:  4.2%  0.3%  0.0%  direct: 0.0%  skip:95.4%  L0:37.2% L1:58.9% BI: 4.0%
    [libx264 @ 0x206ed00] 8x8 transform intra:66.2% inter:63.4%
    [libx264 @ 0x206ed00] coded y,uvDC,uvAC intra: 66.1% 65.6% 21.6% inter: 1.7% 1.1% 0.0%
    [libx264 @ 0x206ed00] i16 v,h,dc,p: 21% 26% 11% 42%
    [libx264 @ 0x206ed00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 25% 23% 13%  5%  7%  8%  7%  7%  6%
    [libx264 @ 0x206ed00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 28% 24%  9%  6%  7%  7%  6%  7%  7%
    [libx264 @ 0x206ed00] i8c dc,h,v,p: 46% 25% 20%  9%
    [libx264 @ 0x206ed00] Weighted P-Frames: Y:1.1% UV:0.0%
    [libx264 @ 0x206ed00] ref P L0: 66.0% 19.4% 12.0%  2.6%  0.0%
    [libx264 @ 0x206ed00] ref B L0: 87.2% 11.8%  1.0%
    [libx264 @ 0x206ed00] ref B L1: 95.2%  4.8%
    [libx264 @ 0x206ed00] kb/s:176.52
    [aac @ 0x204aa00] Qavg: 247.398
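    A sketch of one likely fix, not part of the original post: -itsoffset only shifts timestamps, and hstack's frame synchronization then holds the first frame of every input until all inputs have data, which produces the shared 17-second freeze. Padding the shorter video inside the filter graph instead, e.g. with the tpad filter (available in FFmpeg releases newer than the 4.1 shown in the log), delays only that input:

    ```shell
    # Prepend 17 s of black to smaller.mp4 before stacking; filenames from the question.
    ffmpeg -i smaller.mp4 -i bigger.mp4 \
      -filter_complex "[0:v]tpad=start_duration=17:start_mode=add:color=black[l];[l][1:v]hstack=inputs=2[v]" \
      -map "[v]" -map 1:a -c:v libx264 -crf 23 out.mp4
    ```

    With start_mode=clone instead of add, the pad repeats the first frame rather than showing black.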
  • How to match the video bitrate of OpenCV with ffmpeg?

    24 April 2019, by user10890282

    I am trying to read three cameras simultaneously, one of which goes through a video grabber card. I want to be able to stream the data and also write out videos from these sources. I used FFmpeg to write out the data from the video grabber card and the OpenCV writer for the normal USB cameras. But the bit rates of the files do not match, nor do their sizes, which becomes a problem when I post-process the files. I tried converting the OpenCV-written files with FFmpeg afterwards, but the durations of the files still differ even though the bit rates have changed.

    I really would appreciate pointers on how to handle this from the script itself. The ideal case would be to use FFmpeg for all three sources with the same settings. But how can I do this without spilling all the log information onto the console while the files are being written, given that three sources would have to be written out simultaneously? Could someone tell me how to keep multiple FFmpeg processes going for writing out files while streaming videos using OpenCV?

    I have attached my current code, with the OpenCV writer and FFmpeg for one source:

    from threading import Thread
    import cv2
    import time
    import sys
    import subprocess as sp
    import os
    import datetime
    old_stdout=sys.stdout



    maindir= "E:/Trial1/"
    os.chdir(maindir)
    maindir=os.getcwd()

    class VideoWriterWidget(object):
       def __init__(self, video_file_name, src=0):
           if (src==3):
               self.frame_name= "Cam_"+str(src)+"(Right)"
           if(src==2):
               self.frame_name= "Cam_"+str(src)+"(Left)"
           # Create a VideoCapture object
           #self.frame_name =str(src)
           self.video_file = video_file_name+"_"
           self.now=datetime.datetime.now()
           self.ts=datetime.datetime.now()
           self.video_file_nameE="{}.avi".format("Endo_"+self.ts.strftime("%Y%m%d_%H-%M-%S"))
           self.video_file_name = "{}.avi".format(video_file_name+self.ts.strftime("%Y%m%d_%H-%M-%S"))
           self.FFMPEG_BIN = "C:/ffmpeg/bin/ffmpeg.exe"
           self.command=[self.FFMPEG_BIN,'-y','-f','dshow','-rtbufsize','1024M','-video_size','640x480','-i', 'video=Datapath VisionAV Video 01','-pix_fmt', 'bgr24', '-r','60', self.video_file_name]
           self.capture = cv2.VideoCapture(src)

           # Default resolutions of the frame are obtained (system dependent)

           self.frame_width = int(self.capture.get(cv2.CAP_PROP_FRAME_WIDTH))   # e.g. 640
           self.frame_height = int(self.capture.get(cv2.CAP_PROP_FRAME_HEIGHT)) # e.g. 480


           # Set up codec and output video settings
           if(src==2 or src==3):
               self.codec = cv2.VideoWriter_fourcc('M','J','P','G')
               self.output_video = cv2.VideoWriter(self.video_file_name, self.codec, 30, (self.frame_width, self.frame_height))

           # Start the thread to read frames from the video stream
               self.thread = Thread(target=self.update, args=(src,))
               self.thread.daemon = True
               self.thread.start()

           # Start another thread to show/save frames
               self.start_recording()
               print('initialized {}'.format(self.video_file))

           if (src==0):
               self.e_recording_thread = Thread(target=self.endo_recording_thread, args=())
               self.e_recording_thread.daemon = True
               self.e_recording_thread.start()
               print('initialized endo recording')


       def update(self,src):
           # Read the next frame from the stream in a different thread
           while True:
               if self.capture.isOpened():
                   (self.status, self.frame) = self.capture.read()
                   if (src==3):
                       self.frame= cv2.flip(self.frame,-1)


       def show_frame(self):
           # Display frames in main program
           if self.status:
               cv2.namedWindow(self.frame_name, cv2.WINDOW_NORMAL)
               cv2.imshow(self.frame_name, self.frame)


           # Press Q on the keyboard to stop recording
           key = cv2.waitKey(1)
           if key == ord('q'):#
               self.capture.release()
               self.output_video.release()
               cv2.destroyAllWindows()
               exit(1)

       def save_frame(self):
           # Save obtained frame into video output file
           self.output_video.write(self.frame)

       def start_recording(self):
           # Create another thread to show/save frames
           def start_recording_thread():
               while True:
                   try:
                       self.show_frame()
                       self.save_frame()
                   except AttributeError:
                       pass
           self.recording_thread = Thread(target=start_recording_thread, args=())
           self.recording_thread.daemon = True
           self.recording_thread.start()

       def endo_recording_thread(self):
            self.Pr1=sp.call(self.command)



    if __name__ == '__main__':
       src1 = 'Your link1'
       video_writer_widget1 = VideoWriterWidget('Endo_', 0)
       src2 = 'Your link2'
       video_writer_widget2 = VideoWriterWidget('Camera2_', 2)
       src3 = 'Your link3'
       video_writer_widget3 = VideoWriterWidget('Camera3_', 3)

       # Since each video player is in its own thread, we need to keep the main thread alive.
       # Keep spinning using time.sleep() so the background threads keep running
       # Threads are set to daemon=True so they will automatically die
       # when the main thread dies
       while True:
         time.sleep(5)
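    A minimal sketch of the direction asked about, not from the original post: each capture can be its own ffmpeg child process started with subprocess.Popen, which returns immediately, so three recorders run in parallel; passing -loglevel error plus -nostats and sending the pipes to DEVNULL keeps the console quiet while OpenCV keeps streaming. The helper names and device strings below are placeholders, not part of the asker's code.

    ```python
    import subprocess

    def build_capture_cmd(ffmpeg_bin, device, out_file, size="640x480", fps=30):
        """Assemble a DirectShow capture command; -loglevel error and -nostats
        suppress the startup banner and the rolling progress line."""
        return [ffmpeg_bin, "-y", "-loglevel", "error", "-nostats",
                "-f", "dshow", "-rtbufsize", "1024M",
                "-video_size", size, "-framerate", str(fps),
                "-i", "video=" + device, out_file]

    def start_capture(cmd):
        """Launch ffmpeg without blocking; discarded pipes keep the console clean."""
        return subprocess.Popen(cmd, stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)

    # e.g. procs = [start_capture(build_capture_cmd("C:/ffmpeg/bin/ffmpeg.exe",
    #                                               name, name + ".avi"))
    #               for name in ("Datapath VisionAV Video 01", "USB Cam 2")]
    # ...and on shutdown: for p in procs: p.terminate()
    ```

    Using the same container, codec, and frame rate in every command should also keep the three files' bit rates and durations consistent for post-processing.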