Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (100)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, you need to create a SPIP article and attach the "source" video document to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (5984)

  • Create mp4 thumbnail in node.js

    21 May 2015, by trdavidson

    I'm new to node.js and the AWS framework, so I apologize in advance. I am trying to configure the AWS DB of my app to automatically create thumbnails using AWS Lambda. This works great using the example provided by Amazon for regular .jpg images (walkthrough here: https://alestic.com/2014/11/aws-lambda-cli/).

    However, trying to do the same operation for mp4 files seems exponentially more difficult. After some searching, it seems the way to do this is to use the ffmpeg module. The problem is that I do not understand the response object returned by AWS at all, and thus am not sure how to manipulate it so that ffmpeg can use it.

    Current code:

    // dependencies
    var async = require('async');
    var AWS = require('aws-sdk');
    var gm = require('gm')
               .subClass({ imageMagick: true }); // Enable ImageMagick integration.
    var util = require('util');
    var ffmpeg = require('ffmpeg');
    var stream = require('stream');

    // constants
    var MAX_WIDTH  = 250;
    var MAX_HEIGHT = 250;

    // get reference to S3 client
    var s3 = new AWS.S3();

    exports.handler = function(event, context) {
        // Read options from the event.
        console.log("Reading options from event:\n", util.inspect(event, {depth: 5}));
        var srcBucket = event.Records[0].s3.bucket.name;
        // Object key may have spaces or unicode non-ASCII characters.
        var srcKey    =
            decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
        var dstBucket = srcBucket + "small";
        var dstKey    = "small-" + srcKey;

        // Sanity check: validate that source and destination are different buckets.
        if (srcBucket == dstBucket) {
            console.error("Destination bucket must not match source bucket.");
            return;
        }

        // Infer the file type from the key's extension.
        var typeMatch = srcKey.match(/\.([^.]*)$/);
        if (!typeMatch) {
            console.error('unable to infer file type for key ' + srcKey);
            return;
        }
        var imageType = typeMatch[1];
        if (imageType != "mp4" && imageType != "avi") {
            console.log('skipping non-video ' + srcKey);
            return;
        }

        // Download the video from S3, transform, and upload to a different S3 bucket.
        async.waterfall([
            function download(next) {
                // Download the object from S3 into a buffer.
                s3.getObject({
                        Bucket: srcBucket,
                        Key: srcKey
                    },
                    next);
            },
            function transform(response, next) {
                var instream = new stream.Readable();
                instream.push(response.Body);
                instream.push(null);

                var outstream = new stream();

                ffmpeg(instream)
                    .screenshots({timestamps: 1, size: '200x200'})
                    .output('screenshot.png')
                    .output(outstream)
                    .on('end', function(){
                        console.log('screenshots finished processing son!');
                    });

                gm(outstream, 'screenshot.png').size(function(err, size) {
                    // Infer the scaling factor to avoid stretching the image unnaturally.
                    var scalingFactor = Math.min(
                        MAX_WIDTH / size.width,
                        MAX_HEIGHT / size.height
                    );
                    var width  = scalingFactor * size.width;
                    var height = scalingFactor * size.height;

                    // Transform the image buffer in memory.
                    this.resize(width, height)
                        .toBuffer(imageType, function(err, buffer) {
                            if (err) {
                                next(err);
                            } else {
                                next(null, response.ContentType, buffer);
                            }
                        });
                });
            },
            function upload(contentType, data, next) {
                // Stream the transformed image to a different S3 bucket.
                s3.putObject({
                        Bucket: dstBucket,
                        Key: dstKey,
                        Body: data,
                        ContentType: contentType
                    },
                    next);
            }
        ], function (err) {
            if (err) {
                console.error(
                    'Unable to resize ' + srcBucket + '/' + srcKey +
                    ' and upload to ' + dstBucket + '/' + dstKey +
                    ' due to an error: ' + err
                );
            } else {
                console.log(
                    'Successfully resized ' + srcBucket + '/' + srcKey +
                    ' and uploaded to ' + dstBucket + '/' + dstKey
                );
            }

            context.done();
        });
    };

    Any suggestions are welcome! Thanks.
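
    As an illustration only (this is not part of the original question or of any answer), here is a minimal sketch of one common approach: download the video to Lambda's /tmp directory, let fluent-ffmpeg extract a single frame there, and upload the resulting image. The fluent-ffmpeg module, the presence of an ffmpeg binary in the Lambda environment, and the bucket/key naming used here are all assumptions.

    // Hypothetical sketch only: extract one thumbnail from an S3 video with fluent-ffmpeg.
    // Assumes an ffmpeg binary is available to the function (for example via a Lambda layer)
    // and that fluent-ffmpeg and aws-sdk are bundled with the deployment package.
    var fs = require('fs');
    var AWS = require('aws-sdk');
    var ffmpeg = require('fluent-ffmpeg');

    var s3 = new AWS.S3();

    exports.handler = function(event, context) {
        var srcBucket = event.Records[0].s3.bucket.name;
        var srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
        var localVideo = '/tmp/source.mp4';   // Lambda functions may only write under /tmp
        var localThumb = '/tmp/thumbnail.png';

        // 1. Download the source video to local disk instead of juggling in-memory streams.
        s3.getObject({ Bucket: srcBucket, Key: srcKey })
            .createReadStream()
            .pipe(fs.createWriteStream(localVideo))
            .on('close', function() {
                // 2. Ask ffmpeg for a single frame, one second in, resized to fit a 250px width.
                ffmpeg(localVideo)
                    .screenshots({ timestamps: [1], filename: 'thumbnail.png', folder: '/tmp', size: '250x?' })
                    .on('error', function(err) { context.fail(err); })
                    .on('end', function() {
                        // 3. Upload the generated thumbnail to the destination bucket.
                        s3.putObject({
                            Bucket: srcBucket + 'small',      // assumed destination bucket
                            Key: 'small-' + srcKey + '.png',
                            Body: fs.readFileSync(localThumb),
                            ContentType: 'image/png'
                        }, function(err) {
                            if (err) { context.fail(err); } else { context.succeed(); }
                        });
                    });
            });
    };

    Working from a file in /tmp also sidesteps the confusion about the getObject response: in the Node.js aws-sdk, response.Body is simply a Buffer holding the object's bytes.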

  • How to make drawtext work in AWS Lambda ffmpeg ?

    22 March 2020, by codeul

    I have set up an AWS Lambda function to use ffmpeg via the layer https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:145266761615:applications~ffmpeg-lambda-layer.

    Some ffmpeg commands work, but I noticed that when I use drawtext or drawbox I do not get a proper mp4 file. The output looks corrupted and is very small. (FYI: the output file is /tmp/test2.mp4, which I then copy to an S3 bucket.)

    What's wrong here? I would appreciate any help. Thanks.

    The ffmpeg command:

    ffmpeg -f lavfi -i color=0x142d3d:s=1280*720:d=10 -vf  "drawtext=fontcolor=white:fontsize=50:fontfile=aladin.ttf:text='test':y=10:x=10"  -movflags +faststart    -y /tmp/test2.mp4

    From the log:

    o --cc=gcc-6 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzvbi --enable-libzimg
           libavutil      56. 22.100 / 56. 22.100
           libavcodec     58. 35.100 / 58. 35.100
           libavformat    58. 20.100 / 58. 20.100
           libavdevice    58.  5.100 / 58.  5.100
           libavfilter     7. 40.101 /  7. 40.101
           libswscale      5.  3.100 /  5.  3.100
           libswresample   3.  3.100 /  3.  3.100
           libpostproc    55.  3.100 / 55.  3.100
       Input #0, lavfi, from 'color=0x142d3d:s=1280*720:d=10':
           Duration: N/A, start: 0.000000, bitrate: N/A
           Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 25 tbr, 25 tbn, 25 tbc
       Stream mapping:
           Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
       Press [q] to stop, [?] for help
       [Parsed_drawtext_0 @ 0x5852500] Using "/var/task/fonts/aladin.ttf"
       [libx264 @ 0x5850080] using SAR=1/1
       [libx264 @ 0x5850080] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
       [libx264 @ 0x5850080] profile Progressive High, level 3.1, 4:2:0, 8-bit
       [libx264 @ 0x5850080] 264 - core 157 r2969 d4099dd - H.264/MPEG-4 AVC codec - Copyleft 2003-2019 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
       Output #0, mp4, to '/tmp/test2.mp4':
           Metadata:
           encoder         : Lavf58.20.100
           Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], q=-1--1, 25 fps, 12800 tbn, 25 tbc
           Metadata:
               encoder         : Lavc58.35.100 libx264
           Side data:
               cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
       frame=    2 fps=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x    
       frame=    9 fps=7.5 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x    
       frame=   17 fps=9.8 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x    
       frame=   25 fps= 11 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x    
       frame=   30 fps=7.4 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x    
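
    For reference only (this is not part of the original question), here is a minimal sketch of how such a command might be invoked from a Node.js handler so that the copy to S3 only happens after ffmpeg has exited. The /opt/bin/ffmpeg path, the font location and the bucket name are assumptions about this particular setup.

    // Hypothetical sketch only: run the drawtext command, then upload the finished file.
    // Assumes the ffmpeg layer exposes the binary at /opt/bin/ffmpeg and that aladin.ttf
    // ships with the function code (for example under /var/task/fonts/).
    var execFile = require('child_process').execFile;
    var fs = require('fs');
    var AWS = require('aws-sdk');

    var s3 = new AWS.S3();

    exports.handler = function(event, context, callback) {
        var args = [
            '-f', 'lavfi', '-i', 'color=0x142d3d:s=1280x720:d=10',
            '-vf', "drawtext=fontcolor=white:fontsize=50:fontfile=/var/task/fonts/aladin.ttf:text='test':y=10:x=10",
            '-movflags', '+faststart',
            '-y', '/tmp/test2.mp4'
        ];

        // execFile passes the arguments without a shell, so no extra quoting is needed,
        // and its callback only fires once the ffmpeg process has exited.
        execFile('/opt/bin/ffmpeg', args, function(err, stdout, stderr) {
            if (err) { return callback(err); }
            s3.putObject({
                Bucket: 'my-output-bucket',   // assumed bucket name
                Key: 'test2.mp4',
                Body: fs.readFileSync('/tmp/test2.mp4'),
                ContentType: 'video/mp4'
            }, callback);
        });
    };

    (If the handler returns before such a callback runs, the file in /tmp may still be incomplete, which is one way to end up with a small, corrupted upload.)
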
  • How to write unit tests for your plugin – Introducing the Piwik Platform

    17 November 2014, by Thomas Steur — Development

    This is the next post in our blog series introducing the capabilities of the Piwik platform (our previous post was How to verify user permissions). This time you'll learn how to write unit tests in Piwik. For this tutorial you will need basic knowledge of PHP, PHPUnit and the Piwik platform.

    When is a test a unit test?

    There are many different opinions on this and it can sometimes be hard to decide. At Piwik we consider a test a unit test if only a single method or class is being tested and if the test has no dependency on the filesystem, web, config, database or any other plugin.

    If a test is slow, that can be an indicator that it is not a unit test. "Slow" is of course a bit vague. We will cover how to write other types of tests, such as integration tests, in one of our next blog posts.

    Getting started

    In this post, we assume that you have already installed Piwik 2.9.0 or later via git, set up your development environment and created a plugin. If not, visit the Piwik Developer Zone, where you'll find the tutorial Setting up Piwik and other guides that help you develop a plugin.

    Let’s create a unit test

    We start by using the Piwik Console to create a new unit test:

    ./console generate:test --testtype unit

    The command will ask you to enter the name of the plugin the created test should belong to. I will use the plugin name "Insights". Next, it will ask you for the name of the test. Here you usually enter the name of the class you want to test; I will use "Widgets" in this example. There should now be a file plugins/Insights/tests/Unit/WidgetsTest.php which already contains an example to get you started:

    /**
     * @group Insights
     * @group WidgetsTest
     * @group Plugins
     */
    class WidgetsTest extends \PHPUnit_Framework_TestCase
    {
        public function testSimpleAddition()
        {
            $this->assertEquals(2, 1+1);
        }
    }


    We don't want to cover how you should write your unit test; this is totally up to you. If you have no experience writing unit tests yet, we recommend reading articles on the topic, or a book, or watching videos, or anything else that will help you learn best.

    Running a test

    To run a test we will use the command tests:run, which allows you to execute a test suite, a specific file or a group of tests.

    To verify whether the created test works, we will run it as follows:

    ./console tests:run WidgetsTest

    This will run all tests having the group WidgetsTest. As other tests can use the same group, you might want to pass the path to your test file instead:

    ./console tests:run plugins/Insights/tests/Unit/Widgets.php

    If you want to run all tests within your plugin, pass the name of your plugin as an argument:

    ./console tests:run insights

    Of course you can also pass multiple arguments:

    ./console tests:run insights WidgetsTest

    This will execute all tests within the Insights plugin that have the group WidgetsTest. If you only want to run unit tests within your plugin, you can do the following:

    ./console tests:run insights unit

    Advanced features

    Isn't it easy to create a unit test? We never even had to create a file by hand! You can accomplish even more if you want: you can generate other types of tests, run tests on Amazon's AWS, and more. Unfortunately, not everything is documented yet, so we recommend discovering more features by executing the commands ./console list tests and ./console help tests:run.

    If you have any feedback regarding our APIs or our guides in the Developer Zone, feel free to send it to us.