Other articles (42)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

On other sites (7379)

  • lavc/h264dsp: optimise R-V V weight for shorter heights

    1 September 2024, by Rémi Denis-Courmont

    The height is a power of two of up to 16 rows. The current code was
    optimised for large sample counts.

    T-Head C908:
    h264_weight2_8_c: 211.7 ( 1.00x)
    h264_weight2_8_rvv_i32: before 184.0 ( 1.15x)
    h264_weight2_8_rvv_i32: after 54.2 ( 3.90x)
    h264_weight4_8_c: 285.7 ( 1.00x)
    h264_weight4_8_rvv_i32: before 341.2 ( 0.86x)
    h264_weight4_8_rvv_i32: after 82.2 ( 3.47x)
    h264_weight8_8_c: 498.7 ( 1.00x)
    h264_weight8_8_rvv_i32: before 683.7 ( 0.73x)
    h264_weight8_8_rvv_i64: after 128.5 ( 3.95x)
    h264_weight16_8_c: 878.2 ( 1.00x)
    h264_weight16_8_rvv_i32: unchanged 239.5 ( 3.67x)

    SpacemiT X60:
    h264_weight2_8_c: 207.2 ( 1.00x)
    h264_weight2_8_rvv_i32: before 259.6 ( 0.80x)
    h264_weight2_8_rvv_i32: after 82.2 ( 2.52x)
    h264_weight4_8_c: 290.8 ( 1.00x)
    h264_weight4_8_rvv_i32: before 509.6 ( 0.57x)
    h264_weight4_8_rvv_i32: after 61.5 ( 4.73x)
    h264_weight8_8_c: 498.8 ( 1.00x)
    h264_weight8_8_rvv_i32: before 1019.8 ( 0.49x)
    h264_weight8_8_rvv_i64: after 71.8 ( 6.95x)
    h264_weight16_8_c: 874.0 ( 1.00x)
    h264_weight16_8_rvv_i32: unchanged 249.0 ( 3.51x)

    • [DH] libavcodec/riscv/h264dsp_init.c
    • [DH] libavcodec/riscv/h264dsp_rvv.S
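
    The figures above are checkasm benchmark timings (with speedup relative to the C reference). For context, a typical way to reproduce such numbers from a built FFmpeg source tree is sketched below; the flag names are those of recent FFmpeg and may differ in older trees, so treat this as illustrative:

    # Build FFmpeg's checkasm tool, then test and benchmark the h264dsp functions
    make checkasm
    ./tests/checkasm/checkasm --test=h264dsp --bench
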
  • How to take snapshots with multiple web cameras at the same time using PHP on CentOS?

    13 November 2014, by galengodis

    I’m trying to take still photos / snapshots with multiple web cameras at the same time through PHP / shell_exec.

    This is what I use so I can run the cameras in the background.

    shell_exec('ffmpeg -f video4linux2 -s 1280x960 -i /dev/video0 -q:v 0 -b:v 10000k -vcodec mjpeg -vframes 1 /var/www/html/cam1.jpg -y > /dev/null 2>/dev/null &');

    shell_exec('ffmpeg -f video4linux2 -s 1280x960 -i /dev/video1 -q:v 0 -b:v 10000k -vcodec mjpeg -vframes 1 /var/www/html/cam2.jpg -y > /dev/null 2>/dev/null &');

    It outputs only one image from the cameras. If I run them one at a time with the same code, everything works. FYI, the "&" at the end makes the PHP command run in the background. Read more about shell_exec background processes here: Is there a way to use shell_exec without waiting for the command to complete?

    > [root@localhost ~]# lsusb
    > Bus 001 Device 002: ID 8087:8000 Intel Corp.
    > Bus 002 Device 004: ID 1a40:0201 Terminus Technology Inc. FE 2.1 7-port Hub
    > Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    > Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    > Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    > Bus 002 Device 006: ID 046d:0825 Logitech, Inc. Webcam C270
    > Bus 002 Device 005: ID 0c45:6340 Microdia
    >
    > [root@localhost ~]# find /dev/bus/
    > /dev/bus/
    > /dev/bus/usb
    > /dev/bus/usb/003
    > /dev/bus/usb/003/001
    > /dev/bus/usb/002
    > /dev/bus/usb/002/006
    > /dev/bus/usb/002/005
    > /dev/bus/usb/002/004
    > /dev/bus/usb/002/001
    > /dev/bus/usb/001
    > /dev/bus/usb/001/002
    > /dev/bus/usb/001/001

    I’ve tried alternating between ffmpeg and streamer, so the problem seems to be USB-related. Both cameras are plugged into a USB hub (with an external power supply). The cameras are of different brands.

    I’m on CentOS 7, 64-bit.
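
    To isolate the problem, the same two captures can be launched concurrently from a plain shell script, taking PHP out of the loop entirely. This is just the question's own capture commands, minus the PHP-specific output redirection:

    #!/bin/sh
    # Start both captures in the background, then wait for both to finish.
    ffmpeg -f video4linux2 -s 1280x960 -i /dev/video0 -q:v 0 -b:v 10000k -vcodec mjpeg -vframes 1 /var/www/html/cam1.jpg -y &
    ffmpeg -f video4linux2 -s 1280x960 -i /dev/video1 -q:v 0 -b:v 10000k -vcodec mjpeg -vframes 1 /var/www/html/cam2.jpg -y &
    wait

    If this also yields only one image, the bottleneck is below PHP, which fits the USB suspicion above.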

  • Is this ffmpeg command optimized?

    22 June 2017, by Bob Ramsey

    I have a requirement to take a video, add some plain text, and then add some rotated text at different times, locations, and durations. I want to use processor power in the most efficient way, because this will run 20,000 times (yes, really, we’re personalizing a video for students at a university). This is what I finally came up with:

    ffmpeg -y -i INPUT.mp4 -filter_complex
     "drawtext=enable='between(t,14,16)':fontfile=tahoma.ttf:fontsize=54:fontcolor=green:x=10:y=text_h + 10:text='Dana Scully',
      drawtext=enable='between(t,19,23)':fontfile=tahoma.ttf:fontsize=16:fontcolor=red:x=150:y=220:text='Dana Scully  \<dana.scully\@fbi.gov\>',
      drawtext=enable='between(t,99,104)':fontfile=tahoma.ttf:fontsize=28:fontcolor=green:x=480:y=text_h + 160:text='Dana Scully',
      drawtext=enable='between(t,14,16)':fontfile=tahoma.ttf:fontsize=16:fontcolor=yellow:x=40:y=25:text='Dana Scully  \<dana.scully\@fbi.gov\>',
      drawtext=enable='between(t,180,186)':fontfile=tahoma.ttf:fontsize=88:fontcolor=green:x=20:y=430:text='Dana Scully'[text];
      color=c=#111111:s=1280x720:d=1,format=yuv444p[colorbk];
      [colorbk]drawtext=fontfile=tahoma.ttf:fontsize=16:fontcolor=purple:x=(w-text_w)/2:y=(h-text_h)/2:text='by',drawtext=fontfile=tahoma.ttf:fontsize=32:fontcolor=green:x=(w-text_w)/2:y=((h-text_h)/2)+50:text='Dana Scully',rotate=(-.5):ow=1280:oh=720:c=#111111,chromakey=#111111:similarity=0.01,format=yuva444p,colorkey=#111111:0.1[rotated];
      [text][rotated]overlay=eval=frame:x='if(gte(t,134),(if(lte(t,137),20,NAN)), NAN)':y=100[out];[out]scale=iw*.25:-1"
      -crf 20 test.mp4

    Is that about as optimized as it is going to get? I thought ffmpeg would already handle the threads based on the computer’s processor, so there’s no real need to mess with that. The processing will all be done on AWS VMs.
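
    For reference, ffmpeg does expose explicit threading knobs for both the filter graph and the encoder; whether they help here depends on the filters involved, so the values below are purely illustrative and worth measuring before adopting:

    # Illustrative only: explicit thread counts for the filter graph and the
    # x264 encoder, plus a faster preset that trades compression for speed.
    ffmpeg -y -filter_threads 4 -filter_complex_threads 4 -i INPUT.mp4 \
           -filter_complex "<same graph as above>" \
           -preset veryfast -threads 4 -crf 20 test.mp4

    Note that drawtext and rotate do not necessarily scale with more filter threads, so measuring on the target AWS instance type is the only reliable check.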

    Rotating the text is what really slows it down.

    Any ideas?