Keyword: - Tags -/Christian Nold

Other articles (56)

  • XMP PHP

    13 May 2011, by

    According to Wikipedia, XMP means:
    Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it handles a set of dynamic tags for use within the Semantic Web.
    XMP makes it possible to record, as an XML document, information about a file: title, author, history (...)
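
    As an illustration of how such a packet is embedded, here is a minimal, hypothetical Node.js sketch that extracts the raw XMP XML from a file by scanning for the standard <x:xmpmeta ... </x:xmpmeta> markers; the file name is a placeholder, and a real application should use a proper XMP/XML parser rather than this string search.

     // Minimal sketch: pull the embedded XMP packet (an XML document) out of a file.
     // Assumes the packet is wrapped in the usual <x:xmpmeta ... </x:xmpmeta> markers;
     // "photo.jpg" is a placeholder file name.
     var fs = require('fs');

     var text = fs.readFileSync('photo.jpg').toString('latin1'); // 1 byte per character
     var start = text.indexOf('<x:xmpmeta');
     var end = text.indexOf('</x:xmpmeta>');

     if (start !== -1 && end !== -1) {
       var xmp = text.slice(start, end + '</x:xmpmeta>'.length);
       console.log(xmp); // the XML document: title, author, history, etc.
     } else {
       console.log('No XMP packet found');
     }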

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Installation en mode ferme

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP sites while installing their shared functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge, since SPIP's usual private area is no longer used.
    First of all, you must have installed the same files as the installation (...)

On other sites (5093)

  • FFmpeg inaccurate outputs [closed]

    27 July 2012, by user1557780

    Possible Duplicate:
    ffmpeg: videos before and after conversion aren't the same length

    Recently, I've been trying to use FFmpeg for an application which requires VERY accurate manipulation when it comes to the time parameter (millisecond resolution). Unfortunately, I was surprised to find out that FFmpeg's manipulation features return somewhat inaccurate results.

    Here is the output of 'ffmpeg':

    ffmpeg version 0.11.1 Copyright (c) 2000-2012 the FFmpeg developers
     built on Jul 25 2012 19:55:05 with gcc 4.2.1 (Apple Inc. build 5664)
     configuration: --enable-gpl --enable-shared --enable-pthreads --enable-libx264 --enable-libmp3lame
     libavutil      51. 54.100 / 51. 54.100
     libavcodec     54. 23.100 / 54. 23.100
     libavformat    54.  6.100 / 54.  6.100
     libavdevice    54.  0.100 / 54.  0.100
     libavfilter     2. 77.100 /  2. 77.100
     libswscale      2.  1.100 /  2.  1.100
     libswresample   0. 15.100 /  0. 15.100
     libpostproc    52.  0.100 / 52.  0.100

    Now, let's assume I want to rip the audio track of 'foo.mov'. Here is the relevant output of 'ffmpeg -i foo.mov':

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'foo.mov':
     Metadata:
       major_brand     : qt  
       minor_version   : 0
       compatible_brands: qt  
       creation_time   : 2012-07-24 23:16:08
     Duration: 00:00:40.38, start: 0.000000, bitrate: 805 kb/s
       Stream #0:0(und): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p, 480x360, 733 kb/s, 24.46 fps, 29.97 tbr, 600 tbn, 1200 tbc
       Metadata:
         rotate          : 90
         creation_time   : 2012-07-24 23:16:08
         handler_name    : Core Media Data Handler
       Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, mono, s16, 63 kb/s
       Metadata:
         creation_time   : 2012-07-24 23:16:08
         handler_name    : Core Media Data Handler

    As you probably noticed, the video file's duration is 00:00:40.38. Using the following command, I ripped its audio track:

    'ffmpeg -i foo.mov foo.wav'

    Output:

    Output #0, wav, to 'foo.wav':
     Metadata:
       major_brand     : qt  
       minor_version   : 0
       compatible_brands: qt  
       creation_time   : 2012-07-24 23:16:08
       encoder         : Lavf54.6.100
       Stream #0:0(und): Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, mono, s16, 705 kb/s
       Metadata:
         creation_time   : 2012-07-24 23:16:08
         handler_name    : Core Media Data Handler
    Stream mapping:
     Stream #0:1 -> #0:0 (aac -> pcm_s16le)
    Press [q] to stop, [?] for help
    size=3482kB time=00:00:40.42 bitrate= 705.6kbits/s    
    video:0kB audio:3482kB global headers:0kB muxing overhead 0.001290%

    As you can see, the output file (00:00:40.42) is longer than the input file (00:00:40.38).

    Another example is audio (and video) file trimming:
    Let's assume I would like to use ffmpeg for audio file trimming. I used the following command:

    'ffmpeg -t 00:00:10.000 -i foo.wav trimmed_foo.wav -ss 00:00:25.000'

    Output:

    [wav @ 0x10180e800] max_analyze_duration 5000000 reached at 5015510
    Guessed Channel Layout for  Input Stream #0.0 : mono
    Input #0, wav, from 'foo.wav':
     Duration: 00:00:40.42, bitrate: 705 kb/s
       Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, mono, s16, 705 kb/s
    Output #0, wav, to 'trimmed_foo.wav':
     Metadata:
       encoder         : Lavf54.6.100
       Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, mono, s16, 705 kb/s
    Stream mapping:
     Stream #0:0 -> #0:0 (pcm_s16le -> pcm_s16le)
       Press [q] to stop, [?] for help
    size=864kB time=00:00:10.03 bitrate= 705.6kbits/s    
    video:0kB audio:864kB global headers:0kB muxing overhead 0.005199%

    Again, the output file is 30 milliseconds longer than I expected.

    I have tried for a long time to research the issue, without any success. When I use Audacity for the same task, it does it very accurately!

    Does anyone have any idea how to solve this problem?
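
    For reference, here is a minimal Node.js sketch of one way to make such a cut sample-accurate: instead of -ss/-t it uses the atrim filter's start_sample/end_sample options. The 44100 Hz rate and the file names come from the question above; treat this as an illustration, not a verified fix.

     // Sketch: trim foo.wav from 25 s to 35 s with sample precision (44100 Hz mono, as above).
     // Uses ffmpeg's atrim filter instead of -ss/-t; assumes ffmpeg is on the PATH.
     var execFile = require('child_process').execFile;

     var rate = 44100;
     var args = [
       '-i', 'foo.wav',
       '-af', 'atrim=start_sample=' + (25 * rate) + ':end_sample=' + (35 * rate),
       '-c:a', 'pcm_s16le',
       '-y', 'trimmed_foo.wav'
     ];

     execFile('ffmpeg', args, function (error, stdout, stderr) {
       if (error) throw error;
       console.log(stderr); // ffmpeg writes its progress and summary to stderr
     });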

  • libavfi/dnn: add LibTorch as one of DNN backend

    15 March 2024, by Wenbin Chen
    libavfi/dnn: add LibTorch as one of DNN backend
    

    PyTorch is an open source machine learning framework that accelerates
    the path from research prototyping to production deployment. Official
    website: https://pytorch.org/. Below, we refer to the C++ library of
    PyTorch as LibTorch.

    To build FFmpeg with LibTorch, take the following steps as a
    reference:
    1. Download the LibTorch C++ library from
    https://pytorch.org/get-started/locally/;
    select C++/Java as the language, and the other options as you need.
    Please download the cxx11 ABI version:
    (libtorch-cxx11-abi-shared-with-deps-*.zip).
    2. Unzip the file to a directory of your own, with the command
    unzip libtorch-shared-with-deps-latest.zip -d your_dir
    3. Export libtorch_root/libtorch/include and
    libtorch_root/libtorch/include/torch/csrc/api/include to $PATH;
    export libtorch_root/libtorch/lib/ to $LD_LIBRARY_PATH.
    4. Configure FFmpeg with ../configure --enable-libtorch \
    --extra-cflag=-I/libtorch_root/libtorch/include \
    --extra-cflag=-I/libtorch_root/libtorch/include/torch/csrc/api/include \
    --extra-ldflags=-L/libtorch_root/libtorch/lib/
    5. make

    To run FFmpeg DNN inference with the LibTorch backend:
    ./ffmpeg -i input.jpg -vf \
    dnn_processing=dnn_backend=torch:model=LibTorch_model.pt -y output.jpg

    The LibTorch_model.pt can be generated in Python with the
    torch.jit.script() API. See
    https://pytorch.org/tutorials/advanced/cpp_export.html, the official
    PyTorch guide on how to convert and load a TorchScript model.
    Please note that torch.jit.trace() is not recommended, since it does
    not support ambiguous input sizes.

    Signed-off-by: Ting Fu <ting.fu@intel.com>
    Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
    Reviewed-by: Guo Yejun <yejun.guo@intel.com>

    • [DH] configure
    • [DH] libavfilter/dnn/Makefile
    • [DH] libavfilter/dnn/dnn_backend_torch.cpp
    • [DH] libavfilter/dnn/dnn_interface.c
    • [DH] libavfilter/dnn_filter_common.c
    • [DH] libavfilter/dnn_interface.h
    • [DH] libavfilter/vf_dnn_processing.c
  • Splitting more than once with an HTML5 video splitter

    12 May 2017, by Fearhunter

    I am doing research on splitting videos into four files, for example. I was looking at this repository on GitHub.

    https://github.com/vdaubry/html5-video-splitter

    It's a very nice repo showing how to split a video. My question is: how can I make more than one cut? When I split a second time, it overwrites the previous output. Here is the open-source code for the splitting:

    var childProcess = require("child_process");
    childProcess.spawn = require('cross-spawn');

    var http = require('http');
    var path = require("path");
    var fs = require("fs");
    var exec = require('child_process').exec;

    http.createServer(function (req, res) {
     if (req.method === 'OPTIONS') {
         console.log('!OPTIONS');
         var headers = {};
         headers["Access-Control-Allow-Origin"] = "*";
         headers["Access-Control-Allow-Methods"] = "POST, GET, PUT, DELETE, OPTIONS";
         headers["Access-Control-Allow-Credentials"] = false;
         headers["Access-Control-Max-Age"] = '86400';
         headers["Access-Control-Allow-Headers"] = "X-reqed-With, X-HTTP-Method-Override, Content-Type, Accept";
         res.writeHead(200, headers);
         res.end();
     }
     else if (req.method == 'POST') {
       var body = '';
       req.on('data', function (data) {
           body += data;
       });
       req.on('end', function () {
           var data = JSON.parse(body);
           var localPath = __dirname;
           var inputFilePath = localPath+"/videos/"+data.inputFilePath;
           var outputFilePath = localPath+"/videos/output-"+data.inputFilePath
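           // note: the output name depends only on the input name, so every new cut overwrites the same output file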
           var start = data.begin;
           var end = data.end;

           var command = "ffmpeg -y -ss "+start+" -t "+(end-start)+" -i "+inputFilePath+" -vcodec copy -acodec copy "+outputFilePath;
           exec(command, function(error, stdout, stderr) {
             var msg = ""
             if(error) {
               console.log(error);
               msg = error.toString();
               res.writeHead(500, {'Content-Type': 'text/plain'});
             }
             else {
               console.log(stdout);
               res.writeHead(200, {'Content-Type': 'text/plain'});
             }
             res.end(msg);
           });
       });
     }
     else if (req.method == 'GET') {
       var filename = "index.html";
       if(req.url != "/") {
         filename = req.url
       }

       var ext = path.extname(filename);
       var localPath = __dirname;
       var validExtensions = {
         ".html" : "text/html",      
         ".js": "application/javascript",
         ".css": "text/css",
         ".txt": "text/plain",
         ".jpg": "image/jpeg",
         ".gif": "image/gif",
         ".png": "image/png",
         ".ico": "image/x-icon"
       };
       var mimeType = validExtensions[ext];

       if (mimeType) {
         localPath += "/interface/"+filename;
         fs.exists(localPath, function(exists) {
           if(exists) {
             console.log("Serving file: " + localPath);
             getFile(localPath, res, mimeType);
           } else {
             console.log("File not found: " + localPath);
             res.writeHead(404);
             res.end();
           }
         });

       } else {
         console.log("Invalid file extension detected: " + ext)
       }
     }
    }).listen(1337, '127.0.0.1');
    console.log('Server running at http://127.0.0.1:1337/');

    function getFile(localPath, res, mimeType) {
     fs.readFile(localPath, function(err, contents) {
       if(!err) {
         res.setHeader("Content-Length", contents.length);
         res.setHeader("Content-Type", mimeType);
         res.statusCode = 200;
         res.end(contents);
       } else {
         res.writeHead(500);
         res.end();
       }
     });
    }

    I have also installed FFmpeg to do this.

    Kind regards
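
    One way to avoid the overwrite described in the question is to give each cut its own output name, for instance derived from the requested start and end times. Below is a minimal sketch of that idea against the exec call in the quoted server; the naming scheme is an assumption for illustration, not part of the repository.

     // Sketch: name each cut after its start/end times so repeated splits don't overwrite each other.
     // "start" and "end" are the same data.begin / data.end values the original handler already reads.
     var path = require('path');

     function outputNameFor(inputFilePath, start, end) {
       var ext = path.extname(inputFilePath);        // e.g. ".mp4"
       var base = path.basename(inputFilePath, ext); // e.g. "clip"
       return base + '-' + start + '-' + end + ext;  // e.g. "clip-10-25.mp4"
     }

     // usage inside the POST handler, replacing the fixed "output-" prefix:
     // var outputFilePath = localPath + "/videos/" + outputNameFor(data.inputFilePath, start, end);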