
Other articles (32)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

  • Adding notes and captions to images

    7 February 2011

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only the site's administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several plugins, in addition to those of the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to handle registrations and requests to create a shared-hosting instance when users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (5005)

  • Shaking/trembling in video slideshow generated by frames from an image

    18 May 2017, by raziel

    I have a PHP program that generates a video slideshow from a series of images. Basically, I just need to 'move' smoothly from one image area to another, according to specified top/left coordinates and a width/height area of the image. To make the movement smooth, I use easing functions when calculating the coordinates for each video frame. I render a JPEG frame from these calculations with PHP's Imagick library, then combine all the generated frames into a single video with an ffmpeg command.

    <?php
    const TEMP_FRAMES_DIR = __DIR__;
    const VIDEO_WIDTH = 1080;
    const VIDEO_HEIGHT = 720;
    const FPS = 30;
    const MOVEMENT_DURATION_SECONDS = 3;

    const IMAGE_PATH = __DIR__ . '/test_image.png';

    $start_coords = [
       'x' => 100,
       'y' => 100,
       'width'  => 480,
       'height' => 270
    ];
    $end_coords = [
       'x' => 400,
       'y' => 200,
       'width'  => 480,
       'height' => 270
    ];

    $timeline = make_timeline($start_coords, $end_coords);
    render_frames(IMAGE_PATH, $timeline);
    render_video_from_frames();

    function make_timeline($start_coords, $end_coords) {
       $timeline = [];

       $total_frames = MOVEMENT_DURATION_SECONDS * FPS;

       $x_change      = $end_coords['x']      - $start_coords['x'];
       $y_change      = $end_coords['y']      - $start_coords['y'];
       $width_change  = $end_coords['width']  - $start_coords['width'];
       $height_change = $end_coords['height'] - $start_coords['height'];

       for ($i = 0; $i < $total_frames; $i++) {
           $timeline[$i] = [
               'x'      => easingOutExpo($i, $start_coords['x'], $x_change, $total_frames),
               'y'      => easingOutExpo($i, $start_coords['y'], $y_change, $total_frames),
               'width'  => easingOutExpo($i, $start_coords['width'], $width_change, $total_frames),
               'height' => easingOutExpo($i, $start_coords['height'], $height_change, $total_frames)
           ];
       }
       return $timeline;
    }

    function render_frames($image_path, $timeline) {
       $image = new Imagick($image_path);
       //remove frames from the previous render
       array_map('unlink', glob( TEMP_FRAMES_DIR . "/frame*" ));

       foreach ($timeline as $frame_number => $frame) {
           $frame_img = clone $image;
           $frame_img->cropImage($frame["width"],$frame["height"], $frame["x"],$frame["y"]);
           $frame_img->resizeImage(VIDEO_WIDTH, VIDEO_HEIGHT, Imagick::FILTER_LANCZOS, 0.9);
           $frame_img->writeImage(TEMP_FRAMES_DIR. "/frame$frame_number.jpg");
       }
    }

    function render_video_from_frames() {
       $fps = FPS;
       $frames_dir = TEMP_FRAMES_DIR;
       $SEP = DIRECTORY_SEPARATOR;

       $video_file = $frames_dir. $SEP . 'video.mp4';

       if (file_exists($video_file)) unlink($video_file);

       system("ffmpeg -framerate $fps -i $frames_dir{$SEP}frame%01d.jpg $video_file");
    }

    function easingOutExpo($t, $b, $c, $d) {
       return $c * ( -pow( 2, -10 * $t/$d ) + 1 ) + $b;
    }

    The problem is that I get annoying shaking/trembling whenever the movement is slow (like at the end of the ease-out expo function).
    Here you can get the test video with the problem, the test image which was used and the PHP script :
    https://drive.google.com/drive/u/1/folders/0B9FOrF6IlWaGeHJCS1h6djhVZ28

    You can see this shaking starting from the middle of the test video (1.5 sec).

    How can I avoid shaking in situations like this? Thanks in advance!
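    For what it's worth, the symptom is consistent with sub-pixel rounding: once the ease-out curve slows below one pixel per frame, cropImage() receives integer coordinates that advance by 0 or 1 pixels from frame to frame. A small Python sketch (the easing function ported from the script above; 3 s × 30 fps and the x range 100→400 are the question's own numbers) shows the per-frame step collapsing near the end:

```python
import math

def easing_out_expo(t, b, c, d):
    # Same curve as the PHP easingOutExpo(): start b, total change c, duration d.
    return c * (-math.pow(2, -10 * t / d) + 1) + b

total_frames = 3 * 30  # MOVEMENT_DURATION_SECONDS * FPS from the script
xs = [easing_out_expo(i, 100, 300, total_frames) for i in range(total_frames)]

# Integer crop offsets, as Imagick's cropImage() effectively receives them.
int_xs = [int(x) for x in xs]
steps = [b - a for a, b in zip(int_xs, int_xs[1:])]

# Early frames advance ~20 px per frame; late frames advance 0 or 1 px,
# and that 0/1 alternation is what shows up as trembling.
print("first steps:", steps[:3], "last steps:", steps[-5:])
```

    Rendering at a higher "virtual" resolution and letting the final resize absorb the fractional offsets, or keeping the coordinates fractional with a sub-pixel-capable crop, are the usual ways around this, though I have not benchmarked either in Imagick.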

  • FFMPEG being killed when processing H264 video

    4 May 2017, by Stuart Clarke

    I have a program that, given a video (video.mp4), extracts the audio, a thumbnail image and the video into separate files; adds some background music to the audio file to create a new file (combined.mp3); increases the resolution of the video to 1080p and saves it to a new file (videoHD.mp4); adds an intro and credits (already in 1080p) to the start and end and saves that to a new file (merged.mp4); and finally combines the processed video and audio into an output file (videoExt.mp4). I'm using ffmpeg through Python and subprocess.call to do all this, but the raw ffmpeg commands are as follows.

    ffmpeg -y -i video.mp4 -acodec mp3 audioTrack.mp3

    ffmpeg -y -ss 00:00:00 -i video.mp4 -vframes 1 -q:v 2 thumb.jpg

    ffmpeg -y -i video.mp4 -an videoTrack.mp4

    ffmpeg -y -i audioTrack.mp3 -i musicTrack.mp3 -filter_complex amerge -c:a libmp3lame -q:a 4 combined.mp3

    ffmpeg -y -i videoTrack.mp4 -vf 'scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:x=(1920-iw)/2:y=(1080-ih)/2:color=black' videoHD.mp4

    ffmpeg -y -f concat -i vids.ini -an merged.mp4

    ffmpeg -y -i merged.mp4 -i combined.mp3 -strict -2 -shortest videoExt.mp4

    Between extracting the audio and combining it with the music, I add a silent period matching the intro length to the beginning of the audio, lower the volume of the music for the duration of the video section, and crop the music to the combined length of the intro, video and credits. This is all done with AudioSegment and works well.

    My problem is that the ffmpeg process keeps being killed. It happens in various places at various times, but always when processing the video at 1080p. I'm sure it has something to do with the H264 codec, but when I use mpeg4 the quality is terrible. The videos are always less than a minute long (Instagram videos) and at most 25 MB, and I'm using a VPS with Ubuntu Server 15.04 installed, which should surely be able to handle this. If not, is there a way around it, such as processing the video in parts?
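    If the plain "Killed" comes from the kernel's OOM killer (a likely reading on a small VPS), one workaround in the spirit of "processing the video in parts" is to run the expensive 1080p step over fixed-length segments and rejoin them with the concat demuxer the script already uses. A sketch of how the command list could be built (the file names seg0.mp4/segs.ini are hypothetical and the chunk length is arbitrary):

```python
def segment_commands(src, seconds, chunk=10):
    """Build one ffmpeg command per chunk-second segment of `src`, plus a
    final concat command, so each ffmpeg run touches less of the video.
    Output names (seg0.mp4, segs.ini, videoHD.mp4) are illustrative."""
    vf = ("scale=1920:1080:force_original_aspect_ratio=decrease,"
          "pad=1920:1080:x=(1920-iw)/2:y=(1080-ih)/2:color=black")
    cmds = []
    for n, start in enumerate(range(0, seconds, chunk)):
        # -ss/-t cut out one segment; each segment is upscaled on its own.
        cmds.append(["ffmpeg", "-y", "-ss", str(start), "-t", str(chunk),
                     "-i", src, "-vf", vf, "seg%d.mp4" % n])
    # Segments are then re-joined with the concat demuxer, as in the
    # question's own `-f concat -i vids.ini` step.
    cmds.append(["ffmpeg", "-y", "-f", "concat", "-i", "segs.ini",
                 "-c", "copy", "videoHD.mp4"])
    return cmds

cmds = segment_commands("videoTrack.mp4", 25)
```

    Reducing x264's footprint (e.g. -preset veryfast, fewer -threads) is another lever, and the dmesg output would confirm whether the OOM killer is actually responsible.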

    Here is an example of the error, but as I said, it can happen in various places; sometimes a single command will complete, but I have never had all of them complete.

    # ffmpeg -y -f concat -i vids.ini -an merged.mp4

    ffmpeg version 2.5.10-0ubuntu0.15.04.1 Copyright (c) 2000-2016 the FFmpeg developers
     built with gcc 4.9.2 (Ubuntu 4.9.2-10ubuntu13)
     configuration: --prefix=/usr --extra-version=0ubuntu0.15.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --shlibdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --enable-shared --disable-stripping --enable-avresample --enable-avisynth --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libshine --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libwavpack --enable-libwebp --enable-libxvid --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzvbi --enable-libzmq --enable-frei0r --enable-libvpx --enable-libx264 --enable-libsoxr --enable-gnutls --enable-openal --enable-libopencv --enable-librtmp --enable-libx265
     libavutil      54. 15.100 / 54. 15.100
     libavcodec     56. 13.100 / 56. 13.100
     libavformat    56. 15.102 / 56. 15.102
     libavdevice    56.  3.100 / 56.  3.100
     libavfilter     5.  2.103 /  5.  2.103
     libavresample   2.  1.  0 /  2.  1.  0
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  3.100 / 53.  3.100
    Input #0, concat, from 'vids.ini':
     Duration: N/A, start: 0.000000, bitrate: 2999 kb/s
       Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 2996 kb/s, 29.97 fps, 29.97 tbr, 11988 tbn, 59.94 tbc
       Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 2 kb/s
    [libx264 @ 0x134fd00] using SAR=1/1
    [libx264 @ 0x134fd00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
    [libx264 @ 0x134fd00] profile High, level 4.0
    [libx264 @ 0x134fd00] 264 - core 142 r2495 6a301b6 - H.264/MPEG-4 AVC codec - Copyleft 2003-2014 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to 'merged.mp4':
     Metadata:
       encoder         : Lavf56.15.102
       Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=-1--1, 29.97 fps, 11988 tbn, 29.97 tbc
       Metadata:
         encoder         : Lavc56.13.100 libx264
    Stream mapping:
     Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
    Press [q] to stop, [?] for help
    frame=   13 fps=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A dup=1 dro
    frame=   28 fps= 27 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A dup=1 dro
    frame=   46 fps= 21 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A dup=1 dro
    frame=   51 fps= 16 q=29.0 size=     120kB time=00:00:00.-3 bitrate=N/A dup=1 dr
    frame=   55 fps= 15 q=29.0 size=     154kB time=00:00:00.10 bitrate=12574.5kbits
    frame=   60 fps= 14 q=29.0 size=     195kB time=00:00:00.26 bitrate=5981.0kbits/
    frame=   65 fps= 13 q=29.0 size=     254kB time=00:00:00.43 bitrate=4797.0kbits/
    frame=   69 fps= 12 q=29.0 size=     287kB time=00:00:00.56 bitrate=4142.1kbits/
    frame=   73 fps= 12 q=29.0 size=     375kB time=00:00:00.70 bitrate=4384.2kbits/
    frame=   76 fps= 11 q=29.0 size=     418kB time=00:00:00.80 bitrate=4278.4kbits/
    frame=   81 fps= 11 q=29.0 size=     441kB time=00:00:00.96 bitrate=3732.4kbits/
    frame=   85 fps= 10 q=29.0 size=     472kB time=00:00:01.10 bitrate=3511.0kbits/
    Killed1 drop=0

    Any ideas, or if anyone knows a better way to do this, please let me know.

    Cheers,

    Stu

  • avfilter's anull says "Rematrix is needed between stereo and 0 channels but there is not enough information to do it"

    24 May 2017, by kuanyui

    I’m trying to write a transcoder based on FFmpeg’s official example, using ffmpeg 3.2.4 (official prebuilt Win32 binaries), and to transcode a video with a stereo audio stream as source (from avformat’s dshow).

    In the example code, "anull" is passed to avfilter_graph_parse_ptr() for the audio stream and "time_base=1/44100:sample_rate=44100:sample_fmt=s16:channels=2:channel_layout=0x3" is passed to avfilter_graph_create_filter(); the subsequent avfilter_graph_config() call then fails:

    [auto-inserted scaler 0 @ 32f77600] w:iw h:ih flags:'bilinear' interl:0
    [Parsed_null_0 @ 2e9d79a0] auto-inserting filter 'auto-inserted scaler 0' between the filter 'in' and the filter 'Parsed_null_0'
    [swscaler @ 3331bfe0] deprecated pixel format used, make sure you did set range correctly
    [auto-inserted scaler 0 @ 32f77600] w:1920 h:1080 fmt:yuvj422p sar:1/1 -> w:1920 h:1080 fmt:yuv420p sar:1/1 flags:0x2
    [libmp3lame @ 2e90a360] Channel layout not specified
    [in @ 3866e8a0] tb:1/44100 samplefmt:s16 samplerate:44100 chlayout:0x3
    [Parsed_anull_0 @ 330e8820] auto-inserting filter 'auto-inserted resampler 0' between the filter 'in' and the filter 'Parsed_anull_0'
    [auto-inserted resampler 0 @ 330e8dc0] [SWR @ 3809b620] Rematrix is needed between stereo and 0 channels but there is not enough information to do it
    [auto-inserted resampler 0 @ 330e8dc0] Failed to configure output pad on auto-inserted resampler 0

    I’ve googled for days but haven’t found any clue. Isn’t anull supposed to simply "Pass the audio source unchanged to the output"? Why does libav want to resample stereo to 0 channels? What’s going wrong?
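    The numbers in the error message can be read off FFmpeg's channel-layout bitmasks (libavutil/channel_layout.h): a layout is a bitmask of speaker positions, and 0 means "unset". The buffersrc side carries 0x3 (stereo), while the libmp3lame warning "Channel layout not specified" suggests the encoder/buffersink side never received a layout, so the auto-inserted resampler sees "stereo → 0 channels". A small sketch of the masks involved (Python used purely for illustration; the constants mirror the C header):

```python
# FFmpeg channel layouts are bitmasks of speaker positions; these values
# mirror libavutil/channel_layout.h.
AV_CH_FRONT_LEFT = 0x1
AV_CH_FRONT_RIGHT = 0x2
AV_CH_LAYOUT_STEREO = AV_CH_FRONT_LEFT | AV_CH_FRONT_RIGHT  # == 0x3

# The buffersrc args in the question declare channel_layout=0x3, i.e. stereo.
src_layout = 0x3
assert src_layout == AV_CH_LAYOUT_STEREO

# A layout of 0 means "unknown": swresample cannot derive a remix matrix
# toward an unknown layout, hence "stereo and 0 channels but there is not
# enough information".
sink_layout = 0x0
print(f"src={src_layout:#x}, sink={sink_layout:#x}")
```

    In FFmpeg's official transcoding example this situation is avoided by copying the decoder's channel layout into the encoder context before the filter graph is built, which may be the step missing here.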