Media (0)

No media matching your criteria is available on this site.

Other articles (68)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own via the form at the bottom of the page.

  • Libraries and binaries specific to video and audio processing

    31 January 2010

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries FFMpeg: the main encoder, which can transcode almost any type of video or audio file into formats playable on the Internet. See this tutorial for its installation; Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
    Optional additional binaries flvtool2: (...)

  • Changing your graphic theme

    22 February 2011

    The graphic theme does not touch the actual layout of elements on the page. It only changes the appearance of those elements.
    Placement can in fact be modified, but such changes are purely visual and do not affect the semantic representation of the page.
    Changing the graphic theme in use
    To change the graphic theme in use, the zen-garden plugin must be enabled on the site.
    Then simply go to the configuration area of (...)

On other sites (5805)

  • Jitsi and ffplay

    15 June 2014, by Kotkot

    I’m playing with jitsi. I got the examples from the source code and modified them a bit.
    Here is what I’ve got.
    I am trying to play the transmitted stream in VLC or ffplay or any other player,
    but I cannot.

    I use these application parameters to run the code:

    --local-port-base=5000 --remote-host=localhost --remote-port-base=10000

    What am I doing wrong?

    package com.company;


    /*
     * Jitsi, the OpenSource Java VoIP and Instant Messaging client.
     *
     * Distributable under LGPL license.
     * See terms of license at gnu.org.
     */

    import org.jitsi.service.libjitsi.LibJitsi;
    import org.jitsi.service.neomedia.*;
    import org.jitsi.service.neomedia.device.MediaDevice;
    import org.jitsi.service.neomedia.format.MediaFormat;
    import org.jitsi.service.neomedia.format.MediaFormatFactory;

    import java.io.PrintStream;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.net.InetSocketAddress;
    import java.util.HashMap;
    import java.util.Map;

    /**
    * Implements an example application in the fashion of JMF's AVTransmit2 example
    * which demonstrates the use of the <tt>libjitsi</tt> library for the purposes
    * of transmitting audio and video via RTP means.
    *
    * @author Lyubomir Marinov
    */
    public class VideoTransmitter {
       /**
        * The port which is the source of the transmission i.e. from which the
        * media is to be transmitted.
        *
        * @see #LOCAL_PORT_BASE_ARG_NAME
        */
       private int localPortBase;

       /**
        * The <tt>MediaStream</tt> instances initialized by this instance indexed
        * by their respective <tt>MediaType</tt> ordinal.
        */
       private MediaStream[] mediaStreams;

       /**
        * The <tt>InetAddress</tt> of the host which is the target of the
        * transmission i.e. to which the media is to be transmitted.
        *
        * @see #REMOTE_HOST_ARG_NAME
        */
       private InetAddress remoteAddr;

       /**
        * The port which is the target of the transmission i.e. to which the media
        * is to be transmitted.
        *
        * @see #REMOTE_PORT_BASE_ARG_NAME
        */
       private int remotePortBase;

       /**
        * Initializes a new <tt>VideoTransmitter</tt> instance which is to transmit
        * audio and video to a specific host and a specific port.
        *
        * @param localPortBase  the port which is the source of the transmission
        *                       i.e. from which the media is to be transmitted
        * @param remoteHost     the name of the host which is the target of the
        *                       transmission i.e. to which the media is to be transmitted
        * @param remotePortBase the port which is the target of the transmission
        *                       i.e. to which the media is to be transmitted
        * @throws Exception if any error arises during the parsing of the specified
        *                   <tt>localPortBase</tt>, <tt>remoteHost</tt> and <tt>remotePortBase</tt>
        */
       private VideoTransmitter(
               String localPortBase,
               String remoteHost, String remotePortBase)
               throws Exception {
           this.localPortBase
                   = (localPortBase == null)
                   ? -1
                   : Integer.valueOf(localPortBase).intValue();
           this.remoteAddr = InetAddress.getByName(remoteHost);
           this.remotePortBase = Integer.valueOf(remotePortBase).intValue();
       }

       /**
        * Starts the transmission. Returns null if transmission started ok.
        * Otherwise it returns a string with the reason why the setup failed.
        */
       private String start()
               throws Exception {
           /*
            * Prepare for the start of the transmission i.e. initialize the
            * MediaStream instances.
            */
           MediaType[] mediaTypes = MediaType.values();
           MediaService mediaService = LibJitsi.getMediaService();
           int localPort = localPortBase;
           int remotePort = remotePortBase;

           mediaStreams = new MediaStream[mediaTypes.length];
           for (MediaType mediaType : mediaTypes) {
               if(mediaType != MediaType.VIDEO) continue;
               /*
                * The default MediaDevice (for a specific MediaType) is configured
                * (by the user of the application via some sort of UI) into the
                * ConfigurationService. If there is no ConfigurationService
                * instance known to LibJitsi, the first available MediaDevice of
                * the specified MediaType will be chosen by MediaService.
                */
               MediaDevice device
                       = mediaService.getMediaDeviceForPartialDesktopStreaming(100,100,100,100);
               if (device == null) {
                   continue;
               }
               MediaStream mediaStream = mediaService.createMediaStream(device);

               // direction
               /*
                * The AVTransmit2 example sends only and the AVReceive2 receives
                * only. In a call, the MediaStream's direction will most commonly
                * be set to SENDRECV.
                */
               mediaStream.setDirection(MediaDirection.SENDONLY);

               // format
               String encoding;
               double clockRate;
               /*
                * The AVTransmit2 and AVReceive2 examples use the H.264 video
                * codec. Its RTP transmission has no static RTP payload type number
                * assigned.
                */
               byte dynamicRTPPayloadType;

               switch (device.getMediaType()) {
                   case AUDIO:
                       encoding = "PCMU";
                       clockRate = 8000;
                   /* PCMU has a static RTP payload type number assigned. */
                       dynamicRTPPayloadType = -1;
                       break;
                   case VIDEO:
                       encoding = "H264";
                       clockRate = MediaFormatFactory.CLOCK_RATE_NOT_SPECIFIED;
                   /*
                     * The dynamic RTP payload type numbers are usually negotiated
                    * in the signaling functionality.
                    */
                       dynamicRTPPayloadType = 99;
                       break;
                   default:
                       encoding = null;
                       clockRate = MediaFormatFactory.CLOCK_RATE_NOT_SPECIFIED;
                       dynamicRTPPayloadType = -1;
               }

               if (encoding != null) {
                   MediaFormat format
                           = mediaService.getFormatFactory().createMediaFormat(
                           encoding,
                           clockRate);

                   /*
                    * The MediaFormat instances which do not have a static RTP
                    * payload type number association must be explicitly assigned
                    * a dynamic RTP payload type number.
                    */
                   if (dynamicRTPPayloadType != -1) {
                       mediaStream.addDynamicRTPPayloadType(
                               dynamicRTPPayloadType,
                               format);
                   }

                   mediaStream.setFormat(format);
               }

               // connector
               StreamConnector connector;

               if (localPortBase == -1) {
                   connector = new DefaultStreamConnector();
               } else {
                   int localRTPPort = localPort++;
                   int localRTCPPort = localPort++;

                   connector
                           = new DefaultStreamConnector(
                           new DatagramSocket(localRTPPort),
                           new DatagramSocket(localRTCPPort));
               }
               mediaStream.setConnector(connector);

               // target
               /*
                * The AVTransmit2 and AVReceive2 examples follow the common
                * practice that the RTCP port is right after the RTP port.
                */
               int remoteRTPPort = remotePort++;
               int remoteRTCPPort = remotePort++;

               mediaStream.setTarget(
                       new MediaStreamTarget(
                               new InetSocketAddress(remoteAddr, remoteRTPPort),
                               new InetSocketAddress(remoteAddr, remoteRTCPPort)));

               // name
               /*
                * The name is completely optional and it is not being used by the
                * MediaStream implementation at this time, it is just remembered so
                * that it can be retrieved via MediaStream#getName(). It may be
                * integrated with the signaling functionality if necessary.
                */
               mediaStream.setName(mediaType.toString());

               mediaStreams[mediaType.ordinal()] = mediaStream;
           }

           /*
            * Do start the transmission i.e. start the initialized MediaStream
            * instances.
            */
           for (MediaStream mediaStream : mediaStreams) {
               if (mediaStream != null) {

                   mediaStream.start();
               }
           }



           return null;
       }

       /**
        * Stops the transmission if already started
        */
       private void stop() {
           if (mediaStreams != null) {
                for (int i = 0; i < mediaStreams.length; i++) {
                   MediaStream mediaStream = mediaStreams[i];

                   if (mediaStream != null) {
                       try {
                           mediaStream.stop();
                       } finally {
                           mediaStream.close();
                           mediaStreams[i] = null;
                       }
                   }
               }

               mediaStreams = null;
           }
       }

       /**
        * The name of the command-line argument which specifies the port from which
        * the media is to be transmitted. The command-line argument value will be
        * used as the port to transmit the audio RTP from, the next port after it
        * will be to transmit the audio RTCP from. Respectively, the subsequent
        * ports will be used to transmit the video RTP and RTCP from.
        */
       private static final String LOCAL_PORT_BASE_ARG_NAME
               = "--local-port-base=";

       /**
        * The name of the command-line argument which specifies the name of the
        * host to which the media is to be transmitted.
        */
       private static final String REMOTE_HOST_ARG_NAME = "--remote-host=";

       /**
        * The name of the command-line argument which specifies the port to which
        * the media is to be transmitted. The command-line argument value will be
        * used as the port to transmit the audio RTP to, the next port after it
        * will be to transmit the audio RTCP to. Respectively, the subsequent ports
        * will be used to transmit the video RTP and RTCP to.
        */
       private static final String REMOTE_PORT_BASE_ARG_NAME
               = "--remote-port-base=";

       /**
        * The list of command-line arguments accepted as valid by the
        * <tt>AVTransmit2</tt> application along with their human-readable usage
        * descriptions.
        */
       private static final String[][] ARGS
               = {
               {
                       LOCAL_PORT_BASE_ARG_NAME,
                       "The port which is the source of the transmission i.e. from"
                               + " which the media is to be transmitted. The specified"
                               + " value will be used as the port to transmit the audio"
                               + " RTP from, the next port after it will be used to"
                               + " transmit the audio RTCP from. Respectively, the"
                               + " subsequent ports will be used to transmit the video RTP"
                               + " and RTCP from."
               },
               {
                       REMOTE_HOST_ARG_NAME,
                       "The name of the host which is the target of the transmission"
                               + " i.e. to which the media is to be transmitted"
               },
               {
                       REMOTE_PORT_BASE_ARG_NAME,
                       "The port which is the target of the transmission i.e. to which"
                               + " the media is to be transmitted. The specified value"
                                + " will be used as the port to transmit the audio RTP to,"
                               + " the next port after it will be used to transmit the"
                               + " audio RTCP to. Respectively, the subsequent ports will"
                               + " be used to transmit the video RTP and RTCP to."
               }
       };

       public static void main(String[] args)
               throws Exception {
           // We need two parameters to do the transmission. For example,
           // ant run-example -Drun.example.name=AVTransmit2 -Drun.example.arg.line="--remote-host=127.0.0.1 --remote-port-base=10000"
            if (args.length < 2) {
               prUsage();
           } else {
                Map<String, String> argMap = parseCommandLineArgs(args);

               LibJitsi.start();
               try {
                    // Create a video transmit object with the specified params.
                   VideoTransmitter at
                           = new VideoTransmitter(
                           argMap.get(LOCAL_PORT_BASE_ARG_NAME),
                           argMap.get(REMOTE_HOST_ARG_NAME),
                           argMap.get(REMOTE_PORT_BASE_ARG_NAME));
                   // Start the transmission
                   String result = at.start();

                   // result will be non-null if there was an error. The return
                   // value is a String describing the possible error. Print it.
                   if (result == null) {
                       System.err.println("Start transmission for 600 seconds...");

                        // Transmit for 600 seconds and then close the processor
                       // This is a safeguard when using a capture data source
                       // so that the capture device will be properly released
                       // before quitting.
                       // The right thing to do would be to have a GUI with a
                       // "Stop" button that would call stop on AVTransmit2
                       try {
                           Thread.sleep(600_000);
                       } catch (InterruptedException ie) {
                       }

                       // Stop the transmission
                       at.stop();

                       System.err.println("...transmission ended.");
                   } else {
                       System.err.println("Error : " + result);
                   }
               } finally {
                   LibJitsi.stop();
               }
           }
       }

       /**
        * Parses the arguments specified to the <tt>AVTransmit2</tt> application on
        * the command line.
        *
        * @param args the arguments specified to the <tt>AVTransmit2</tt>
        *             application on the command line
        * @return a <tt>Map</tt> containing the arguments specified to the
        * <tt>AVTransmit2</tt> application on the command line in the form of
        * name-value associations
        */
       static Map<String, String> parseCommandLineArgs(String[] args) {
           Map<String, String> argMap = new HashMap<String, String>();

           for (String arg : args) {
               int keyEndIndex = arg.indexOf('=');
               String key;
               String value;

               if (keyEndIndex == -1) {
                   key = arg;
                   value = null;
               } else {
                   key = arg.substring(0, keyEndIndex + 1);
                   value = arg.substring(keyEndIndex + 1);
               }
               argMap.put(key, value);
           }
           return argMap;
       }

       /**
        * Outputs human-readable description about the usage of the
        * <tt>AVTransmit2</tt> application and the command-line arguments it
        * accepts as valid.
        */
       private static void prUsage() {
           PrintStream err = System.err;

           err.println("Usage: " + VideoTransmitter.class.getName() + " <args>");
           err.println("Valid args:");
           for (String[] arg : ARGS)
               err.println("  " + arg[0] + " " + arg[1]);
       }
       }
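For what it's worth, one likely culprit: ffplay and VLC cannot interpret a bare RTP stream without a session description (SDP) telling them the codec, clock rate, and payload type. A sketch of generating one, using the values from the question (H.264, dynamic payload type 99, `--remote-host=localhost --remote-port-base=10000`); the helper name is hypothetical:

```python
# Hypothetical helper: build a minimal SDP file describing the RTP
# stream the transmitter above sends, so a player can decode it.
# Host, port, and payload type mirror the question's parameters.

def make_sdp(host="127.0.0.1", port=10000, payload_type=99):
    """Return an SDP description of a unicast H.264-over-RTP stream."""
    lines = [
        "v=0",
        "o=- 0 0 IN IP4 %s" % host,
        "s=libjitsi H264 stream",
        "c=IN IP4 %s" % host,
        "t=0 0",
        "m=video %d RTP/AVP %d" % (port, payload_type),
        "a=rtpmap:%d H264/90000" % payload_type,
    ]
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    with open("stream.sdp", "w") as f:
        f.write(make_sdp())
    # then try:  ffplay stream.sdp   (or open stream.sdp in VLC)
```

Whether this fixes playback also depends on the H.264 packetization libjitsi negotiates, so treat it as a starting point rather than a guaranteed fix.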
  • Reading in pydub AudioSegment from url. BytesIO returning "OSError [Errno 2] No such file or directory" on heroku only ; fine on localhost

    24 October 2014, by Mark

    EDIT 1, for anyone with the same error: installing ffmpeg did indeed solve that BytesIO error

    EDIT 1, for anyone still willing to help: my problem is now that when I AudioSegment.export("filename.mp3", format="mp3"), the file is made, but has size 0 bytes (details below, as "EDIT 1")


    EDIT 2: All problems now solved.

    • Files can be read in as AudioSegment using BytesIO
    • I found buildpacks to ensure ffmpeg was installed correctly on my app, with lame support for exporting proper mp3 files

    Answer below


    Original question

    I have pydub working nicely locally to crop a particular mp3 file based on parameters in the url.
    (?start_time=3.8&end_time=5.1)

    When I run foreman start, it all looks good on localhost. The HTML renders nicely.
    The key lines from views.py include reading in a file from a URL using

    url = "https://s3.amazonaws.com/shareducate02/The_giving_tree__by_Alex_Blumberg__sponsored_by_mailchimp-short.mp3"
    mp3 = urllib.urlopen(url).read() # inspired by http://nbviewer.ipython.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter11_image/06_speech.ipynb
    original=AudioSegment.from_mp3(BytesIO(mp3))  # AudioSegment.from_mp3 is a pydub command, see http://pydub.com
    section = original[start_time_ms:end_time_ms]

    That all works great... until I push to heroku (django app) and run it online.
    Then when I load the same page on herokuapp.com, I get this error:

    OSError at /path/to/page
    [Errno 2] No such file or directory
    Request Method: GET
    Request URL:    http://my.website.com/path/to/page?start_time=3.8&end_time=5
    Django Version: 1.6.5
    Exception Type: OSError
    Exception Value:    
    [Errno 2] No such file or directory
    Exception Location: /app/.heroku/python/lib/python2.7/subprocess.py in _execute_child, line 1327
    Python Executable:  /app/.heroku/python/bin/python
    Python Version: 2.7.8
    Python Path:    
    ['/app',
    '/app/.heroku/python/bin',
    '/app/.heroku/python/lib/python2.7/site-packages/setuptools-5.4.1-py2.7.egg',
    '/app/.heroku/python/lib/python2.7/site-packages/distribute-0.6.36-py2.7.egg',
    '/app/.heroku/python/lib/python2.7/site-packages/pip-1.3.1-py2.7.egg',
    '/app',
    '/app/.heroku/python/lib/python27.zip',
    '/app/.heroku/python/lib/python2.7',
    '/app/.heroku/python/lib/python2.7/plat-linux2',
    '/app/.heroku/python/lib/python2.7/lib-tk',
    '/app/.heroku/python/lib/python2.7/lib-old',
    '/app/.heroku/python/lib/python2.7/lib-dynload',
    '/app/.heroku/python/lib/python2.7/site-packages',
    '/app/.heroku/python/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg-info']


    Traceback:
    File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
     112.                     response = wrapped_callback(request, *callback_args, **callback_kwargs)
    File "/app/evernote/views.py" in finalize
     105.       original=AudioSegment.from_mp3(BytesIO(mp3))
    File "/app/.heroku/python/lib/python2.7/site-packages/pydub/audio_segment.py" in from_mp3
     318.         return cls.from_file(file, 'mp3')
    File "/app/.heroku/python/lib/python2.7/site-packages/pydub/audio_segment.py" in from_file
     302.         retcode = subprocess.call(convertion_command, stderr=open(os.devnull))
    File "/app/.heroku/python/lib/python2.7/subprocess.py" in call
     522.     return Popen(*popenargs, **kwargs).wait()
    File "/app/.heroku/python/lib/python2.7/subprocess.py" in __init__
     710.                                 errread, errwrite)
    File "/app/.heroku/python/lib/python2.7/subprocess.py" in _execute_child
     1327.                 raise child_exception

    I have commented out parts of the original code to convince myself that, sure enough, the single line original=AudioSegment.from_mp3(BytesIO(mp3)) is where the problem kicks in... but this is not a problem locally.
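A note on reading that traceback: an OSError `[Errno 2]` raised from `subprocess.py` in `_execute_child` means the *executable* being spawned (here, the ffmpeg/avconv converter pydub shells out to) was not found on PATH; it is not about the audio file. A small reproduction of the distinction (the helper name is made up for illustration):

```python
import errno
import subprocess

def converter_available(binary):
    """Return True if `binary` can be spawned, False if it is missing
    from PATH -- the latter is exactly the OSError [Errno 2] above."""
    try:
        proc = subprocess.Popen([binary, "-version"],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        proc.communicate()
        return True
    except OSError as exc:
        if exc.errno == errno.ENOENT:  # [Errno 2] No such file or directory
            return False
        raise

if __name__ == "__main__":
    # On a dyno without ffmpeg installed this prints False,
    # matching the traceback above; locally it prints True.
    print(converter_available("ffmpeg"))
```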

    The full function in views.py starts like this:

    from django.shortcuts import render, get_object_or_404
    from django.http import HttpResponseRedirect #, Http404, HttpResponse
    from django.core.urlresolvers import reverse
    from django.views import generic
    import pydub
    # Maybe only need:
    from pydub import AudioSegment # == see below
    from time import gmtime, strftime

    import boto
    from boto.s3.connection import S3Connection
    from boto.s3.key import Key

    # http://nbviewer.ipython.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter11_image/06_speech.ipynb
    import urllib
    from io import BytesIO
    # import numpy as np
    # import scipy.signal as sg
    # import pydub # mentioned above already
    # import matplotlib.pyplot as plt
    # from IPython.display import Audio, display
    # import matplotlib as mpl
    # %matplotlib inline

    import os
    # from settings import AWS_ACCESS_KEY, AWS_SECRET_KEY, AWS_BUCKET_NAME
    AWS_ACCESS_KEY = os.environ.get('AWS_ACCESS_KEY') # there must be a better way?
    AWS_SECRET_KEY = os.environ.get('AWS_SECRET_KEY')
    AWS_BUCKET_NAME = os.environ.get('S3_BUCKET_NAME')

    # http://stackoverflow.com/questions/415511/how-to-get-current-time-in-python

    boto_conn = S3Connection(AWS_ACCESS_KEY, AWS_SECRET_KEY)
    bucket = boto_conn.get_bucket(AWS_BUCKET_NAME)
    s3_url_format = 'https://s3.amazonaws.com/shareducate02/{end_path}'

    and specifically the view in views.py that’s called when I visit the page:

    def finalize(request):

       start_time = request.GET.get('start_time')

       end_time = request.GET.get('end_time')

       original_file = "https://s3.amazonaws.com/shareducate02/The_giving_tree__by_Alex_Blumberg__sponsored_by_mailchimp-short.mp3"


       if start_time:

         # original=AudioSegment.from_mp3(original_file)  #...that didn't work
         # but this works below:

         # next three uncommented lines from http://nbviewer.ipython.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter11_image/06_speech.ipynb
         # python 2.x
         url = original_file
         # req = urllib.Request(url, headers={'User-Agent': ''}) # Note: I commented out this because I got error that "Request" did not exist
         mp3 = urllib.urlopen(url).read()
         # That's for my 2.7

         # If I ever upgrade to python 3.x, would need to change it to:
         # req = urllib.request.Request(url, headers={'User-Agent': ''})
         # mp3 = urllib.request.urlopen(req).read()
         # as per instructions on http://nbviewer.ipython.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter11_image/06_speech.ipynb

         original=AudioSegment.from_mp3(BytesIO(mp3))
         # original=AudioSegment.from_mp3("static/givingtree.mp3") # alternative that works locally (on laptop) but no use for heroku

         start_time_ms = int(float(start_time) * 1000)
         if end_time:
           end_time_ms = int(float(end_time) * 1000)
         else:
           end_time_ms = int(float(original.duration_seconds) * 1000)
         duration_ms = end_time_ms - start_time_ms
         # duration = end_time - start_time
         duration = duration_ms/1000

      #   section = original[start_time_ms:end_time_ms]
      #   section_with_fading = section.fade_in(100).fade_out(100)

         clip = "demo-"
         number = strftime("%Y-%m-%d_%H-%M-%S", gmtime())
         clip += number
         clip += ".mp3"

         # DON'T BOTHER writing locally:
         # clip_with_path = "evernote/static/"+clip
         # section_with_fading.export(clip_with_path, format = "mp3")

      #   tempclip = section_with_fading.export(format = "mp3")

         # commented out while de-bugging, but was working earlier if run on localhost
         # c = boto.connect_s3()
         # b = c.get_bucket(S3_BUCKET_NAME)  # as defined above
         # k = Key(b)
         # k.key=clip
         # # k.set_contents_from_filename(clip_with_path)
         # k.set_contents_from_file(tempclip)
         # k.set_acl('public-read')
         clip_made = True
       else:
         duration = 0.0
         clip_made = False
         clip = ""
       context = {'original_file':original_file, 'new_file':clip, 'start_time': start_time, 'end_time':end_time, 'duration':duration, 'clip_made':clip_made}
       return render(request, 'finalize.html' , context)
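One unrelated caveat in the view above, noted while editing: under Python 2, `duration = duration_ms/1000` with two int operands is integer division, so a 1.3-second clip would report `duration = 1`. Dividing by a float keeps the fraction:

```python
def clip_duration_seconds(start_ms, end_ms):
    """Length of the [start_ms, end_ms) slice in fractional seconds.

    Dividing by 1000.0 (a float) avoids Python 2 integer truncation;
    `from __future__ import division` would work as well.
    """
    return (end_ms - start_ms) / 1000.0

# e.g. the question's ?start_time=3.8&end_time=5.1 gives
# clip_duration_seconds(3800, 5100) -> 1.3, not 1
```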

    Any suggestions?

    Potentially related:
    I have ffmpeg installed locally,

    but have been unable to install it on heroku, due to not understanding buildpacks. I tried just a moment ago (http://stackoverflow.com/questions/14407388/how-to-install-ffmpeg-for-a-django-app-on-heroku and https://github.com/shunjikonishi/heroku-buildpack-ffmpeg) but so far ffmpeg is not working on heroku (ffmpeg is not recognised when I do "heroku run ffmpeg --version").
    ...do you think this is the reason?

    An answer like any of these would be much appreciated as I’m going round in circles here:

    1. "I think ffmpeg is indeed your problem. Try harder to sort that out, to get it installed on heroku"
    2. "Actually, I think this is why BytesIO is not working for you : ..."
    3. "Your approach is terrible anyway... if you want to read in an audio file to process using pydub, you should just do this instead : ..." (since I’m just hacking my way through pydub for my first time... my approach may be poor)

    EDIT 1

    ffmpeg is now installed (e.g., I can output wav files)

    However, I can’t create mp3 files, still... or more correctly, I can, but the filesize is zero

    (venv-app)moriartymacbookair13:getstartapp macuser$ heroku config:add BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git
    Setting config vars and restarting awe01... done, v93
    BUILDPACK_URL: https://github.com/ddollar/heroku-buildpack-multi.git
    (venv-app)moriartymacbookair13:getstartapp macuser$ vim .buildpacks
    (venv-app)moriartymacbookair13:getstartapp macuser$ cat .buildpacks
    https://github.com/shunjikonishi/heroku-buildpack-ffmpeg.git
    https://github.com/heroku/heroku-buildpack-python.git
    (venv-app)moriartymacbookair13:getstartapp macuser$ git add --all
    (venv-app)moriartymacbookair13:getstartapp macuser$ git commit -m "need multi, not just ffmpeg, so adding back in multi + shun + heroku, with trailing .git in .buildpacks file"
    [master cd99fef] need multi, not just ffmpeg, so adding back in multi + shun + heroku, with trailing .git in .buildpacks file
    1 file changed, 2 insertions(+), 2 deletions(-)
    (venv-app)moriartymacbookair13:getstartapp macuser$ git push heroku master
    Fetching repository, done.
    Counting objects: 5, done.
    Delta compression using up to 4 threads.
    Compressing objects: 100% (3/3), done.
    Writing objects: 100% (3/3), 372 bytes | 0 bytes/s, done.
    Total 3 (delta 2), reused 0 (delta 0)

    -----> Fetching custom git buildpack... done
    -----> Multipack app detected
    =====> Downloading Buildpack: https://github.com/shunjikonishi/heroku-buildpack-ffmpeg.git
    =====> Detected Framework: ffmpeg
    -----> Install ffmpeg
          DOWNLOAD_URL =  http://flect.github.io/heroku-binaries/libs/ffmpeg.tar.gz
          exporting PATH and LIBRARY_PATH
    =====> Downloading Buildpack: https://github.com/heroku/heroku-buildpack-python.git
    =====> Detected Framework: Python
    -----> Installing dependencies with pip
          Cleaning up...

    -----> Preparing static assets
          Collectstatic configuration error. To debug, run:
          $ heroku run python ./example/manage.py collectstatic --noinput

    Using release configuration from last framework (Python).
    -----> Discovering process types
          Procfile declares types -> web

    -----> Compressing... done, 198.1MB
    -----> Launching... done, v94
          http://[redacted].herokuapp.com/ deployed to Heroku

    To git@heroku.com:awe01.git
      78d6b68..cd99fef  master -> master
    (venv-app)moriartymacbookair13:getstartapp macuser$ heroku run ffmpeg
    Running `ffmpeg` attached to terminal... up, run.6408
    ffmpeg version git-2013-06-02-5711e4f Copyright (c) 2000-2013 the FFmpeg developers
     built on Jun  2 2013 07:38:40 with gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5.1)
     configuration: --enable-shared --disable-asm --prefix=/app/vendor/ffmpeg
     libavutil      52. 34.100 / 52. 34.100
     libavcodec     55. 13.100 / 55. 13.100
     libavformat    55.  8.102 / 55.  8.102
     libavdevice    55.  2.100 / 55.  2.100
     libavfilter     3. 74.101 /  3. 74.101
     libswscale      2.  3.100 /  2.  3.100
     libswresample   0. 17.102 /  0. 17.102
    Hyper fast Audio and Video encoder
    usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...

    Use -h to get full help or, even better, run 'man ffmpeg'
    (venv-app)moriartymacbookair13:getstartapp macuser$ heroku run bash
    Running `bash` attached to terminal... up, run.9660
    ~ $ python
    Python 2.7.8 (default, Jul  9 2014, 20:47:08)
    [GCC 4.4.3] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import pydub
    >>> from pydub import AudioSegment
    >>> exit()
    ~ $ which ffmpeg
    /app/vendor/ffmpeg/bin/ffmpeg
    ~ $ python

    Python 2.7.8 (default, Jul  9 2014, 20:47:08)
    [GCC 4.4.3] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import pydub
    >>> from pydub import AudioSegment
    >>> AudioSegment.silent(5000).export("/tmp/asdf.mp3", "mp3")
    <open file '/tmp/asdf.mp3', mode 'wb+' at 0x7f9a37d44780>
    >>> exit ()
    ~ $ cd /tmp/
    /tmp $ ls
    asdf.mp3
    /tmp $ open asdf.mp3
    bash: open: command not found
    /tmp $ ls -lah
    total 8.0K
    drwx------  2 u36483 36483 4.0K 2014-10-22 04:14 .
    drwxr-xr-x 14 root   root  4.0K 2014-09-26 07:08 ..
    -rw-------  1 u36483 36483    0 2014-10-22 04:14 asdf.mp3

    Note the file size of 0 above for the mp3 file... when I do the same thing on my macbook, the file size is never zero
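
    A zero-byte output file means the export silently failed. A tiny guard like this (a hypothetical helper, not part of pydub) would catch that case instead of letting the bad file go unnoticed:

    ```python
    import os

    def exported_ok(path, min_bytes=1):
        """Treat an export as failed if the file is missing or empty --
        pydub/ffmpeg can leave a zero-byte file behind when the encoder fails."""
        return os.path.exists(path) and os.path.getsize(path) >= min_bytes
    ```

    Calling `exported_ok("/tmp/asdf.mp3")` right after `AudioSegment.export(...)` would have returned `False` here and flagged the problem immediately.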

    Back to the heroku shell :

    /tmp $ python
    Python 2.7.8 (default, Jul  9 2014, 20:47:08)
    [GCC 4.4.3] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import pydub
    >>> from pydub import AudioSegment
    >>> pydub.AudioSegment.ffmpeg = "/app/vendor/ffmpeg/bin/ffmpeg"
    >>> AudioSegment.silence(1200).export("/tmp/herokuSilence.mp3", format="mp3")
    Traceback (most recent call last):
     File "<stdin>", line 1, in <module>
    AttributeError: type object 'AudioSegment' has no attribute 'silence'
    >>> AudioSegment.silent(1200).export("/tmp/herokuSilence.mp3", format="mp3")
    <open file '/tmp/herokuSilence.mp3', mode 'wb+' at 0x7fcc2017c780>
    >>> exit()
    /tmp $ ls
    asdf.mp3  herokuSilence.mp3
    /tmp $ ls -lah
    total 8.0K
    drwx------  2 u36483 36483 4.0K 2014-10-22 04:29 .
    drwxr-xr-x 14 root   root  4.0K 2014-09-26 07:08 ..
    -rw-------  1 u36483 36483    0 2014-10-22 04:14 asdf.mp3
    -rw-------  1 u36483 36483    0 2014-10-22 04:29 herokuSilence.mp3

    I realised the first time that I had forgotten the pydub.AudioSegment.ffmpeg = "/app/vendor/ffmpeg/bin/ffmpeg" command, but as you can see above, the file is still zero size

    Out of desperation, I even tried adding the ".heroku" into the path to be as verbatim as your example, but that didn’t fix it :

    /tmp $ python
    Python 2.7.8 (default, Jul  9 2014, 20:47:08)
    [GCC 4.4.3] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import pydub
    >>> from pydub import AudioSegment
    >>> pydub.AudioSegment.ffmpeg = "/app/.heroku/vendor/ffmpeg/bin/ffmpeg"
    >>> AudioSegment.silent(1200).export("/tmp/herokuSilence03.mp3", format="mp3")
    <open file '/tmp/herokuSilence03.mp3', mode 'wb+' at 0x7fc92aca7780>
    >>> exit()
    /tmp $ ls -lah
    total 8.0K
    drwx------  2 u36483 36483 4.0K 2014-10-22 04:31 .
    drwxr-xr-x 14 root   root  4.0K 2014-09-26 07:08 ..
    -rw-------  1 u36483 36483    0 2014-10-22 04:14 asdf.mp3
    -rw-------  1 u36483 36483    0 2014-10-22 04:31 herokuSilence03.mp3
    -rw-------  1 u36483 36483    0 2014-10-22 04:29 herokuSilence.mp3

    Finally, I tried exporting a .wav file to check pydub was at least working correctly

    /tmp $ python
    Python 2.7.8 (default, Jul  9 2014, 20:47:08)
    [GCC 4.4.3] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import pydub
    >>> from pydub import AudioSegment
    >>> pydub.AudioSegment.ffmpeg = "/app/vendor/ffmpeg/bin/ffmpeg"
    >>> AudioSegment.silent(1300).export("/tmp/heroku_wav_silence01.wav", format="wav")
    <open file '/tmp/heroku_wav_silence01.wav', mode 'wb+' at 0x7fa33cbf3780>
    >>> exit()
    /tmp $ ls
    asdf.mp3  herokuSilence03.mp3  herokuSilence.mp3  heroku_wav_silence01.wav
    /tmp $ ls -lah
    total 40K
    drwx------  2 u36483 36483 4.0K 2014-10-22 04:42 .
    drwxr-xr-x 14 root   root  4.0K 2014-09-26 07:08 ..
    -rw-------  1 u36483 36483    0 2014-10-22 04:14 asdf.mp3
    -rw-------  1 u36483 36483    0 2014-10-22 04:31 herokuSilence03.mp3
    -rw-------  1 u36483 36483    0 2014-10-22 04:29 herokuSilence.mp3
    -rw-------  1 u36483 36483  29K 2014-10-22 04:42 heroku_wav_silence01.wav
    /tmp $

    At least that filesize for .wav is non-zero, so pydub is working

    My current theory is that either I’m still not using ffmpeg correctly, or it’s insufficient... maybe I need an additional mp3 install on top of basic ffmpeg.

    Several sites mention "libavcodec-extra-53" but I’m not sure how to install that on heroku, or how to check whether I have it ? https://github.com/jiaaro/pydub/issues/36
    Similarly, tutorials on libmp3lame seem geared towards laptop installation rather than installation on heroku, so I’m at a loss http://superuser.com/questions/196857/how-to-install-libmp3lame-for-ffmpeg
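
    One way to test the missing-encoder theory is to run `ffmpeg -encoders` on the dyno and look for `libmp3lame` in its output. A small sketch that scans that output for a named encoder (the `has_encoder` helper and the sample line are illustrative, not part of ffmpeg):

    ```python
    def has_encoder(encoders_text, name):
        """Scan the text printed by `ffmpeg -encoders` for an encoder
        whose name matches `name` exactly (the second whitespace field)."""
        for line in encoders_text.splitlines():
            fields = line.split()
            if len(fields) >= 2 and fields[1] == name:
                return True
        return False

    # a sample line in the shape `ffmpeg -encoders` prints:
    sample = " A..... libmp3lame          MP3 (MPEG audio layer 3)"
    print(has_encoder(sample, "libmp3lame"))  # True
    ```

    In practice you would feed it the real output, e.g. via `subprocess.check_output(["ffmpeg", "-encoders"])`; if `libmp3lame` is absent, the zero-byte mp3 exports are explained.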

    In case relevant, I also have youtube-dl in my requirements.txt... this also works locally on my macbook, but fails when I run it in the heroku shell :

    ~/ytdl $ youtube-dl --restrict-filenames -x --audio-format mp3 n2anDgdUHic
    [youtube] Setting language
    [youtube] Confirming age
    [youtube] n2anDgdUHic: Downloading webpage
    [youtube] n2anDgdUHic: Downloading video info webpage
    [youtube] n2anDgdUHic: Extracting video information
    [download] Destination: Boyce_Avenue_feat._Megan_Nicole_-_Skyscraper_Patrick_Ebert_Edit-n2anDgdUHic.m4a
    [download] 100% of 5.92MiB in 00:00
    [ffmpeg] Destination: Boyce_Avenue_feat._Megan_Nicole_-_Skyscraper_Patrick_Ebert_Edit-n2anDgdUHic.mp3
    ERROR: audio conversion failed: Unknown encoder 'libmp3lame'
    ~/ytdl $

    The informative part is that it too specifies an mp3 failure, so perhaps the two issues are related.


    EDIT 2

    See answer, all problems solved

  • FFMPEG : Offsetting & merging audios [migrated]

    5 November 2014, by user1064504

    I am trying to mix multiple audio files into one, each with a different offset.

    <code>ffmpeg -i a.ogg -i 1.ogg -filter_complex "amix=inputs=2[op];[op]adelay=5000|15000" out.ogg</code>

    Can someone help me understand how to correctly use adelay with amix for multiple files ? I am trying to achieve something like this.

    <code>
      <-1st audio->    <---2nd-audio--->

    <----------------------------------------------></code>
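
    As far as I understand, adelay has to be applied to each input stream *before* amix combines them (amix itself takes no delay arguments), and adelay wants one delay per channel, separated by `|`. A hypothetical helper that builds such a `-filter_complex` string for any number of inputs:

    ```python
    def build_filtergraph(delays_ms, channels=2):
        """Build an ffmpeg -filter_complex string that delays each input
        with adelay (one value per channel, '|'-separated), then mixes
        the delayed streams with amix."""
        chains = []
        labels = []
        for i, delay in enumerate(delays_ms):
            per_channel = "|".join([str(delay)] * channels)
            chains.append("[%d:a]adelay=%s[d%d]" % (i, per_channel, i))
            labels.append("[d%d]" % i)
        chains.append("%samix=inputs=%d" % ("".join(labels), len(delays_ms)))
        return ";".join(chains)

    # first input unshifted, second delayed by 5 s:
    print(build_filtergraph([0, 5000]))
    # [0:a]adelay=0|0[d0];[1:a]adelay=5000|5000[d1];[d0][d1]amix=inputs=2
    ```

    The resulting string would then be passed on the command line, e.g. `ffmpeg -i a.ogg -i 1.ogg -filter_complex "[0:a]adelay=0|0[d0];[1:a]adelay=5000|5000[d1];[d0][d1]amix=inputs=2" out.ogg`.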