
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011
Updated: October 2011
Language: English
Type: Text
Other articles (19)
-
Helping translate it
10 April 2011. You can help us improve the wording used in the software, or translate it into any new language so it can reach new linguistic communities.
To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to sign up to the translators' mailing list to request more information.
At present MediaSPIP is only available in French and (...) -
HTML5 audio and video support
10 April 2011. MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...) -
Custom menus
14 November 2010. MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
This lets channel administrators fine-tune the configuration of these menus.
Menus created when the site is initialised
By default, three menus are created automatically when the site is initialised: the main menu; identifier: barrenav; this menu is usually inserted at the top of the page after the header block, and its identifier makes it compatible with skeletons based on Zpip; (...)
On other sites (3837)
-
Revision 382f86f945 : Improve the performance by caching the left_mi and right_mi in macroblockd. Thi
5 December 2014, by hkuang
Changed Paths:
Modify /vp9/common/vp9_blockd.h
Modify /vp9/common/vp9_onyxc_int.h
Modify /vp9/common/vp9_pred_common.c
Modify /vp9/common/vp9_pred_common.h
Modify /vp9/encoder/vp9_bitstream.c
Modify /vp9/encoder/vp9_rdopt.c
Improve the performance by caching the left_mi and right_mi in macroblockd.
This improves the decode performance by 2% on Nexus 7 (2013).
Change-Id: Ie9c4ba0371a149eb7fddc687a6a291c17298d6c3
-
WebVTT as a W3C Recommendation
1 January 2014, by silvia
Three weeks ago I attended TPAC, the annual meeting of W3C Working Groups. One of the meetings was of the Timed Text Working Group (TT-WG), which has been specifying TTML, the Timed Text Markup Language. It is now proposed that WebVTT also be standardised through the same Working Group.
How did that happen, you may ask, in particular since WebVTT and TTML have in the past been portrayed as rival caption formats? How will the WebVTT spec that is currently under development in the Text Track Community Group (TT-CG) move through a Working Group process?
I’ll explain first why there is a need for WebVTT to become a W3C Recommendation, and then how this is proposed to be part of the Timed Text Working Group deliverables, and finally how I can see this working between the TT-CG and the TT-WG.
Advantages of a W3C Recommendation
TTML is an XML-based markup format for captions developed during the time that XML was all the hotness. It has become a W3C standard (a so-called “Recommendation”) despite not having been implemented in any browsers (if you ask me, that's actually a flaw of the W3C standardisation process: it requires only two interoperable implementations of any kind – and that could be anyone's JavaScript library or Flash demonstrator – it doesn't actually require browser implementations. But I digress…). To be fair, a subpart of TTML is by now implemented in Internet Explorer, but all the other major browsers have thus far rejected proposals of implementation.
Because of its Recommendation status, TTML has become the basis for several other caption standards that other SDOs have picked up: the SMPTE's SMPTE-TT format, the EBU's EBU-TT format, and the DASH Industry Forum's use of SMPTE-TT. SMPTE-TT has also become the “safe harbour” format for the US legislation on captioning as decided by the FCC. (Note that the FCC requirements for captions on the Web are actually based on a list of features rather than requiring a specific format. But that will be the topic of a different blog post…)
WebVTT is much younger than TTML. TTML was developed as an interchange format among caption authoring systems. WebVTT was built for rendering in Web browsers and with HTML5 in mind. It meets the requirements of the <track> element and supports more than just captions/subtitles. WebVTT is popular with browser developers and has already been implemented in all major browsers (Firefox Nightly is the last to implement it – all others have support already released).
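For readers who have not seen the format: a WebVTT file is just a plain-text header plus timed cues. The sketch below generates a minimal one with nothing but the standard library; the cue text and file names are invented for illustration, not taken from the post.

```python
# Build a minimal WebVTT file by hand. The cue text and timings are
# invented for illustration; see the WebVTT spec for the full grammar.
def make_vtt(cues):
    """cues: list of (start, end, text) with 'hh:mm:ss.mmm' timestamps."""
    lines = ["WEBVTT", ""]  # mandatory header, then a blank line
    for start, end, text in cues:
        lines.append("%s --> %s" % (start, end))  # cue timing line
        lines.append(text)                        # cue payload
        lines.append("")                          # blank line ends the cue
    return "\n".join(lines)

vtt = make_vtt([("00:00:01.000", "00:00:04.000", "Hello, captions!")])
print(vtt)

# In HTML5 this file would be referenced from a <track> element, e.g.:
# <video src="clip.mp4">
#   <track kind="captions" src="clip.vtt" srclang="en" default>
# </video>
```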
As we can see and as has been proven by the HTML spec and multiple other specs : browsers don’t wait for specifications to have W3C Recommendation status before they implement them. Nor do they really care about the status of a spec – what they care about is whether a spec makes sense for the Web developer and user communities and whether it fits in the Web platform. WebVTT has obviously achieved this status, even with an evolving spec. (Note that the spec tries very hard not to break backwards compatibility, thus all past implementations will at least be compatible with the more basic features of the spec.)
Given that Web browsers don't need WebVTT to become a W3C standard, why then should we spend effort in moving the spec through the W3C process to become a W3C Recommendation?
The modern Web is now much bigger than just Web browsers. Web specifications are being used in all kinds of devices including TV set-top boxes, phone and tablet apps, and even unexpected devices such as white goods. Videos are increasingly omnipresent, thus exposing deaf and hard-of-hearing users to ever-growing challenges in interacting with content on diverse devices. Some of these devices will not use auto-updating software but fixed versions, so they can't easily adapt to new features. Thus, caption producers (both commercial and community) need to be able to author captions (and other video accessibility content as defined by the HTML5 <track> element) towards a feature set that is clearly defined to be supported by such non-updating devices.
Understandably, device vendors in this space have a need to build their technology on standardised specifications. SDOs for such device technologies like to reference fixed specifications so the feature set is not continually updating. To reference WebVTT, they could use a snapshot of the specification at any time and reference that, but that’s not how SDOs work. They prefer referencing an officially sanctioned and tested version of a specification – for a W3C specification that means creating a W3C Recommendation of the WebVTT spec.
Taking WebVTT on a W3C recommendation track is actually advantageous for browsers, too, because a test suite will have to be developed that proves that features are implemented in an interoperable manner. In summary, I can see the advantages and personally support the effort to take WebVTT through to a W3C Recommendation.
Choice of Working Group
AFAIK this is the first time that a specification developed in a Community Group is being moved onto the Recommendation track. This is something that was expected when the W3C created CGs, but not something that has an established process yet.
The first question of course is which WG would take it through to Recommendation? Would we create a new Working Group or find an existing one to move the specification through? Since WGs involve a lot of overhead, the preference was to add WebVTT to the charter of an existing WG. The two obvious candidates were the HTML WG and the TT-WG – the first because it's where WebVTT originated and the latter because it's the closest thematically.
Adding a deliverable to a WG is a major undertaking. The TT-WG is currently in the process of re-chartering and thus a suggestion was made to add WebVTT to the milestones of this WG. TBH that was not my first choice. Since I’m already an editor in the HTML WG and WebVTT is very closely related to HTML and can be tested extensively as part of HTML, I preferred the HTML WG. However, adding WebVTT to the TT-WG has some advantages, too.
Since TTML is an exchange format, lots of captions that will be created (at least professionally) will be in TTML and TTML-related formats. It makes sense to create a mapping from TTML to WebVTT for rendering in browsers. The expertise of both TTML and WebVTT experts is required to develop a good mapping – as was shown when we developed the mapping from CEA608/708 to WebVTT. Also, captioning experts are already in the TT-WG, so it helps to get a second set of eyes onto WebVTT.
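As a toy illustration of such a mapping (and only that: real TTML uses namespaces, styling, and region semantics that this ignores, and the element names below are a deliberately simplified fragment), converting a TTML `<p begin end>` paragraph into a WebVTT cue can be sketched like this:

```python
import xml.etree.ElementTree as ET

def ttml_seconds_to_vtt(value):
    # Handle only the simple "1.5s" clock-value form; real TTML allows
    # several other time expressions that a real mapping must cover.
    secs = float(value.rstrip("s"))
    h, rem = divmod(secs, 3600)
    m, s = divmod(rem, 60)
    return "%02d:%02d:%06.3f" % (h, m, s)

def ttml_p_to_vtt_cue(p):
    # One TTML <p> with begin/end attributes becomes one WebVTT cue.
    start = ttml_seconds_to_vtt(p.get("begin"))
    end = ttml_seconds_to_vtt(p.get("end"))
    return "%s --> %s\n%s" % (start, end, p.text)

# A deliberately namespace-free TTML fragment, for illustration only.
sample = '<body><div><p begin="1s" end="3.5s">Hello world</p></div></body>'
for p in ET.fromstring(sample).iter("p"):
    print(ttml_p_to_vtt_cue(p))
```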
A disadvantage of moving a specification out of a CG into a WG is, however, that you potentially lose a lot of the expertise that is already involved in the development of the spec. People don’t easily re-subscribe to additional mailing lists or want the additional complexity of involving another community (see e.g. this email).
So, a good process needs to be developed to allow everyone to contribute to the spec in the best way possible without requiring duplicate work. How can we do that?
The forthcoming process
At TPAC the TT-WG discussed for several hours what the next steps are in taking WebVTT through the TT-WG to recommendation status (agenda with slides). I won’t bore you with the different views – if you are keen, you can read the minutes.
What I came away with is the following process:
- Fix a few more bugs in the CG until we’re happy with the feature set in the CG. This should match the feature set that we realistically expect devices to implement for a first version of the WebVTT spec.
- Make a FSA (Final Specification Agreement) in the CG to create a stable reference and a clean IPR position.
- Assuming that the TT-WG's charter has been approved with WebVTT as a milestone, we would next bring the FSA specification into the TT-WG as FPWD (First Public Working Draft) and immediately do a Last Call, which effectively freezes the feature set (this is possible because there has already been wide community review of the WebVTT spec); in parallel, the CG can continue to develop the next version of the WebVTT spec with new features (just like is happening with the HTML5 and HTML5.1 specifications).
- Develop a test suite and address any issues in the Last Call document (of course, also fix these issues in the CG version of the spec).
- As per W3C process, substantive and minor changes to Last Call documents have to be reported and raised issues addressed before the spec can progress to the next level: Candidate Recommendation status.
- For the next step – Proposed Recommendation status – an implementation report is necessary, and thus the test suite needs to be finalized for the given feature set. The feature set may also be reduced at this stage to just the ones implemented interoperably, leaving any other features for the next version of the spec.
- The final step is Recommendation status, which simply requires sufficient support and endorsement by W3C members.
The first version of the WebVTT spec naturally has a focus on captioning (and subtitling), since this has been the dominant use case that we have focused on thus far and it's the part of WebVTT that is most compatibly implemented across browsers. It's my expectation that the next version of WebVTT will have a lot more features related to audio descriptions, chapters and metadata. Thus, this seems a good time for a first-version feature freeze.
There are still several obstacles to progressing WebVTT as a milestone of the TT-WG. Apart from the need to get buy-in from the TT-WG, the TT-CG, and the AC (Advisory Committee, who have to approve the new charter), we're also looking at the license of the specification document.
The CG specification has an open license that allows creating derivative work as long as there is attribution, while the W3C document license for documents on the recommendation track does not allow the creation of derivative work unless given explicit exceptions. This is an issue that is currently being discussed in the W3C with a proposal for a CC-BY license on the Recommendation track. However, my view is that it's probably ok to use the different document licenses: the TT-WG will work on WebVTT 1.0 and give it a W3C document license, while the CG starts working on the next WebVTT version under the open CG license. It probably actually makes sense to have a less open license on a frozen spec.
Making the best of a complicated world
WebVTT is now proposed as part of the recharter of the TT-WG. I have no idea how complicated the process will become to achieve a W3C WebVTT 1.0 Recommendation, but I am hoping that what is outlined above will be workable in such a way that all of us get to focus on progressing the technology.
At TPAC I got the impression that the TT-WG is committed to progressing WebVTT to Recommendation status. I know that the TT-CG is committed to continue developing WebVTT to its full potential for all kinds of media-time aligned content with new kinds already discussed at FOMS. Let’s enable both groups to achieve their goals. As a consequence, we will allow the two formats to excel where they do : TTML as an interchange format and WebVTT as a browser rendering format.
-
Reading in pydub AudioSegment from url. BytesIO returning "OSError [Errno 2] No such file or directory" on heroku only; fine on localhost
24 October 2014, by Mark
EDIT 1 for anyone with the same error: installing ffmpeg did indeed solve that BytesIO error
EDIT 1 for anyone still willing to help: my problem is now that when I AudioSegment.export("filename.mp3", format="mp3"), the file is made, but has size 0 bytes — details below (as "EDIT 1")
EDIT 2: All problems now solved.
- Files can be read in as AudioSegment using BytesIO
- I found buildpacks to ensure ffmpeg was installed correctly on my app, with lame support for exporting proper mp3 files
Answer below
Original question
I have pydub working nicely locally to crop a particular mp3 file based on parameters in the url (?start_time=3.8&end_time=5.1). When I run
foreman start
it all looks good on localhost. The html renders nicely.
The key lines from views.py include reading in a file from a url using
url = "https://s3.amazonaws.com/shareducate02/The_giving_tree__by_Alex_Blumberg__sponsored_by_mailchimp-short.mp3"
mp3 = urllib.urlopen(url).read() # inspired by http://nbviewer.ipython.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter11_image/06_speech.ipynb
original = AudioSegment.from_mp3(BytesIO(mp3)) # AudioSegment.from_mp3 is a pydub command, see http://pydub.com
section = original[start_time_ms:end_time_ms]
That all works great... until I push to heroku (django app) and run it online.
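The crop-window arithmetic driving that slice (query-string seconds in, pydub-style millisecond offsets out) can be sketched with just the standard library; the function name here is invented for illustration, and this uses Python 3's `urllib.parse` where the question's Python 2.7 would use `urlparse`:

```python
from urllib.parse import parse_qs  # Python 3; on 2.7 this is urlparse.parse_qs

def clip_window_ms(query, duration_s=None):
    """Turn '?start_time=3.8&end_time=5.1' into millisecond offsets,
    mirroring the int(float(x) * 1000) conversion used in the view."""
    params = parse_qs(query.lstrip("?"))
    start_ms = int(float(params["start_time"][0]) * 1000)
    if "end_time" in params:
        end_ms = int(float(params["end_time"][0]) * 1000)
    else:
        # no end_time given: fall back to the clip's full duration
        end_ms = int(float(duration_s) * 1000)
    return start_ms, end_ms

print(clip_window_ms("?start_time=3.8&end_time=5.1"))
# pydub would then slice with: section = original[start_ms:end_ms]
```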
Then when I load the same page on herokuapp.com, I get this error:
OSError at /path/to/page
[Errno 2] No such file or directory
Request Method: GET
Request URL: http://my.website.com/path/to/page?start_time=3.8&end_time=5
Django Version: 1.6.5
Exception Type: OSError
Exception Value:
[Errno 2] No such file or directory
Exception Location: /app/.heroku/python/lib/python2.7/subprocess.py in _execute_child, line 1327
Python Executable: /app/.heroku/python/bin/python
Python Version: 2.7.8
Python Path:
['/app',
'/app/.heroku/python/bin',
'/app/.heroku/python/lib/python2.7/site-packages/setuptools-5.4.1-py2.7.egg',
'/app/.heroku/python/lib/python2.7/site-packages/distribute-0.6.36-py2.7.egg',
'/app/.heroku/python/lib/python2.7/site-packages/pip-1.3.1-py2.7.egg',
'/app',
'/app/.heroku/python/lib/python27.zip',
'/app/.heroku/python/lib/python2.7',
'/app/.heroku/python/lib/python2.7/plat-linux2',
'/app/.heroku/python/lib/python2.7/lib-tk',
'/app/.heroku/python/lib/python2.7/lib-old',
'/app/.heroku/python/lib/python2.7/lib-dynload',
'/app/.heroku/python/lib/python2.7/site-packages',
'/app/.heroku/python/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg-info']
Traceback:
File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
112. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/app/evernote/views.py" in finalize
105. original=AudioSegment.from_mp3(BytesIO(mp3))
File "/app/.heroku/python/lib/python2.7/site-packages/pydub/audio_segment.py" in from_mp3
318. return cls.from_file(file, 'mp3')
File "/app/.heroku/python/lib/python2.7/site-packages/pydub/audio_segment.py" in from_file
302. retcode = subprocess.call(convertion_command, stderr=open(os.devnull))
File "/app/.heroku/python/lib/python2.7/subprocess.py" in call
522. return Popen(*popenargs, **kwargs).wait()
File "/app/.heroku/python/lib/python2.7/subprocess.py" in __init__
710. errread, errwrite)
File "/app/.heroku/python/lib/python2.7/subprocess.py" in _execute_child
1327. raise child_exception

I have commented out some of the original to convince myself that sure enough the single line
original=AudioSegment.from_mp3(BytesIO(mp3))
is where the problem kicks in... but this is not a problem locally.

The full function in views.py starts like this:
from django.shortcuts import render, get_object_or_404
from django.http import HttpResponseRedirect #, Http404, HttpResponse
from django.core.urlresolvers import reverse
from django.views import generic
import pydub
# Maybe only need:
from pydub import AudioSegment # == see below
from time import gmtime, strftime
import boto
from boto.s3.connection import S3Connection
from boto.s3.key import Key
# http://nbviewer.ipython.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter11_image/06_speech.ipynb
import urllib
from io import BytesIO
# import numpy as np
# import scipy.signal as sg
# import pydub # mentioned above already
# import matplotlib.pyplot as plt
# from IPython.display import Audio, display
# import matplotlib as mpl
# %matplotlib inline
import os
# from settings import AWS_ACCESS_KEY, AWS_SECRET_KEY, AWS_BUCKET_NAME
AWS_ACCESS_KEY = os.environ.get('AWS_ACCESS_KEY') # there must be a better way?
AWS_SECRET_KEY = os.environ.get('AWS_SECRET_KEY')
AWS_BUCKET_NAME = os.environ.get('S3_BUCKET_NAME')
# http://stackoverflow.com/questions/415511/how-to-get-current-time-in-python
boto_conn = S3Connection(AWS_ACCESS_KEY, AWS_SECRET_KEY)
bucket = boto_conn.get_bucket(AWS_BUCKET_NAME)
s3_url_format = 'https://s3.amazonaws.com/shareducate02/{end_path}'

and specifically the view in views.py that's called when I visit the page:
def finalize(request):
    start_time = request.GET.get('start_time')
    end_time = request.GET.get('end_time')
    original_file = "https://s3.amazonaws.com/shareducate02/The_giving_tree__by_Alex_Blumberg__sponsored_by_mailchimp-short.mp3"
    if start_time:
        # original=AudioSegment.from_mp3(original_file) #...that didn't work
        # but this works below:
        # next three uncommented lines from http://nbviewer.ipython.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter11_image/06_speech.ipynb
        # python 2.x
        url = original_file
        # req = urllib.Request(url, headers={'User-Agent': ''})  # Note: I commented out this because I got error that "Request" did not exist
        mp3 = urllib.urlopen(url).read()
        # That's for my 2.7
        # If I ever upgrade to python 3.x, would need to change it to:
        # req = urllib.request.Request(url, headers={'User-Agent': ''})
        # mp3 = urllib.request.urlopen(req).read()
        # as per instructions on http://nbviewer.ipython.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter11_image/06_speech.ipynb
        original = AudioSegment.from_mp3(BytesIO(mp3))
        # original=AudioSegment.from_mp3("static/givingtree.mp3")  # alternative that works locally (on laptop) but no use for heroku
        start_time_ms = int(float(start_time) * 1000)
        if end_time:
            end_time_ms = int(float(end_time) * 1000)
        else:
            end_time_ms = int(float(original.duration_seconds) * 1000)
        duration_ms = end_time_ms - start_time_ms
        # duration = end_time - start_time
        duration = duration_ms/1000
        # section = original[start_time_ms:end_time_ms]
        # section_with_fading = section.fade_in(100).fade_out(100)
        clip = "demo-"
        number = strftime("%Y-%m-%d_%H-%M-%S", gmtime())
        clip += number
        clip += ".mp3"
        # DON'T BOTHER writing locally:
        # clip_with_path = "evernote/static/"+clip
        # section_with_fading.export(clip_with_path, format = "mp3")
        # tempclip = section_with_fading.export(format = "mp3")
        # commented out while de-bugging, but was working earlier if run on localhost
        # c = boto.connect_s3()
        # b = c.get_bucket(S3_BUCKET_NAME)  # as defined above
        # k = Key(b)
        # k.key = clip
        # # k.set_contents_from_filename(clip_with_path)
        # k.set_contents_from_file(tempclip)
        # k.set_acl('public-read')
        clip_made = True
    else:
        duration = 0.0
        clip_made = False
        clip = ""
    context = {'original_file': original_file, 'new_file': clip, 'start_time': start_time, 'end_time': end_time, 'duration': duration, 'clip_made': clip_made}
    return render(request, 'finalize.html', context)

Any suggestions?
Potentially related :
I have ffmpeg installed locally, but have been unable to install it onto heroku, due to not understanding buildpacks. I tried just a moment ago (
http://stackoverflow.com/questions/14407388/how-to-install-ffmpeg-for-a-django-app-on-heroku
and
https://github.com/shunjikonishi/heroku-buildpack-ffmpeg
) but so far ffmpeg is not working on heroku (ffmpeg is not recognised when I do "heroku run ffmpeg --version")
...do you think this is the reason?
An answer like any of these would be much appreciated as I'm going round in circles here:
- "I think ffmpeg is indeed your problem. Try harder to sort that out, to get it installed on heroku"
- "Actually, I think this is why BytesIO is not working for you: ..."
- "Your approach is terrible anyway... if you want to read in an audio file to process using pydub, you should just do this instead: ..." (since I'm just hacking my way through pydub for the first time... my approach may be poor)
EDIT 1
ffmpeg is now installed (e.g., I can output wav files)
However, I still can't create mp3 files... or more correctly, I can, but the filesize is zero:
(venv-app)moriartymacbookair13:getstartapp macuser$ heroku config:add BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git
Setting config vars and restarting awe01... done, v93
BUILDPACK_URL: https://github.com/ddollar/heroku-buildpack-multi.git
(venv-app)moriartymacbookair13:getstartapp macuser$ vim .buildpacks
(venv-app)moriartymacbookair13:getstartapp macuser$ cat .buildpacks
https://github.com/shunjikonishi/heroku-buildpack-ffmpeg.git
https://github.com/heroku/heroku-buildpack-python.git
(venv-app)moriartymacbookair13:getstartapp macuser$ git add --all
(venv-app)moriartymacbookair13:getstartapp macuser$ git commit -m "need multi, not just ffmpeg, so adding back in multi + shun + heroku, with trailing .git in .buildpacks file"
[master cd99fef] need multi, not just ffmpeg, so adding back in multi + shun + heroku, with trailing .git in .buildpacks file
1 file changed, 2 insertions(+), 2 deletions(-)
(venv-app)moriartymacbookair13:getstartapp macuser$ git push heroku master
Fetching repository, done.
Counting objects: 5, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 372 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)
-----> Fetching custom git buildpack... done
-----> Multipack app detected
=====> Downloading Buildpack: https://github.com/shunjikonishi/heroku-buildpack-ffmpeg.git
=====> Detected Framework: ffmpeg
-----> Install ffmpeg
DOWNLOAD_URL = http://flect.github.io/heroku-binaries/libs/ffmpeg.tar.gz
exporting PATH and LIBRARY_PATH
=====> Downloading Buildpack: https://github.com/heroku/heroku-buildpack-python.git
=====> Detected Framework: Python
-----> Installing dependencies with pip
Cleaning up...
-----> Preparing static assets
Collectstatic configuration error. To debug, run:
$ heroku run python ./example/manage.py collectstatic --noinput
Using release configuration from last framework (Python).
-----> Discovering process types
Procfile declares types -> web
-----> Compressing... done, 198.1MB
-----> Launching... done, v94
http://[redacted].herokuapp.com/ deployed to Heroku
To git@heroku.com:awe01.git
78d6b68..cd99fef master -> master
(venv-app)moriartymacbookair13:getstartapp macuser$ heroku run ffmpeg
Running `ffmpeg` attached to terminal... up, run.6408
ffmpeg version git-2013-06-02-5711e4f Copyright (c) 2000-2013 the FFmpeg developers
built on Jun 2 2013 07:38:40 with gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5.1)
configuration: --enable-shared --disable-asm --prefix=/app/vendor/ffmpeg
libavutil 52. 34.100 / 52. 34.100
libavcodec 55. 13.100 / 55. 13.100
libavformat 55. 8.102 / 55. 8.102
libavdevice 55. 2.100 / 55. 2.100
libavfilter 3. 74.101 / 3. 74.101
libswscale 2. 3.100 / 2. 3.100
libswresample 0. 17.102 / 0. 17.102
Hyper fast Audio and Video encoder
usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...
Use -h to get full help or, even better, run 'man ffmpeg'
(venv-app)moriartymacbookair13:getstartapp macuser$ heroku run bash
Running `bash` attached to terminal... up, run.9660
~ $ python
Python 2.7.8 (default, Jul 9 2014, 20:47:08)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pydub
>>> from pydub import AudioSegment
>>> exit()
~ $ which ffmpeg
/app/vendor/ffmpeg/bin/ffmpeg
~ $ python
Python 2.7.8 (default, Jul 9 2014, 20:47:08)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pydub
>>> from pydub import AudioSegment
>>> AudioSegment.silent(5000).export("/tmp/asdf.mp3", "mp3")
<open file '/tmp/asdf.mp3', mode 'wb+' at 0x7f9a37d44780>
>>> exit ()
~ $ cd /tmp/
/tmp $ ls
asdf.mp3
/tmp $ open asdf.mp3
bash: open: command not found
/tmp $ ls -lah
total 8.0K
drwx------ 2 u36483 36483 4.0K 2014-10-22 04:14 .
drwxr-xr-x 14 root root 4.0K 2014-09-26 07:08 ..
-rw------- 1 u36483 36483 0 2014-10-22 04:14 asdf.mp3

Note the file size of 0 above for the mp3 file... when I do the same thing on my macbook, the file size is never zero
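A zero-byte export like this is usually the underlying ffmpeg call failing: pydub opens the output file itself and then shells out to the converter, and (as the traceback earlier shows) it sends stderr to os.devnull, so a converter error such as a missing encoder can leave an empty file behind with no message. One way to surface such a failure, sketched with only the standard library (the function name and paths are illustrative, not pydub API):

```python
import os
import subprocess

def convert_checked(ffmpeg, src, dst):
    """Run an ffmpeg conversion ourselves and refuse to accept a
    silent failure that would leave a zero-byte output file."""
    try:
        ret = subprocess.call([ffmpeg, "-y", "-i", src, dst])
    except OSError as exc:
        # the binary itself could not be started (e.g. not on PATH)
        raise RuntimeError("could not run %s: %s" % (ffmpeg, exc))
    if ret != 0 or not os.path.exists(dst) or os.path.getsize(dst) == 0:
        raise RuntimeError("conversion failed (exit status %s)" % ret)

# e.g. convert_checked("/app/vendor/ffmpeg/bin/ffmpeg", "in.wav", "out.mp3")
```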
Back to the heroku shell :
/tmp $ python
Python 2.7.8 (default, Jul 9 2014, 20:47:08)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pydub
>>> from pydub import AudioSegment
>>> pydub.AudioSegment.ffmpeg = "/app/vendor/ffmpeg/bin/ffmpeg"
>>> AudioSegment.silence(1200).export("/tmp/herokuSilence.mp3", format="mp3")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: type object 'AudioSegment' has no attribute 'silence'
>>> AudioSegment.silent(1200).export("/tmp/herokuSilence.mp3", format="mp3")
<open file '/tmp/herokuSilence.mp3', mode 'wb+' at 0x7fcc2017c780>
>>> exit()
/tmp $ ls
asdf.mp3 herokuSilence.mp3
/tmp $ ls -lah
total 8.0K
drwx------ 2 u36483 36483 4.0K 2014-10-22 04:29 .
drwxr-xr-x 14 root root 4.0K 2014-09-26 07:08 ..
-rw------- 1 u36483 36483 0 2014-10-22 04:14 asdf.mp3
-rw------- 1 u36483 36483 0 2014-10-22 04:29 herokuSilence.mp3
I realised the first time that I had forgotten the
pydub.AudioSegment.ffmpeg = "/app/vendor/ffmpeg/bin/ffmpeg"
command, but as you can see above, the file is still zero size.

Out of desperation, I even tried adding ".heroku" into the path to be as verbatim as your example, but that didn't fix it:
/tmp $ python
Python 2.7.8 (default, Jul 9 2014, 20:47:08)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pydub
>>> from pydub import AudioSegment
>>> pydub.AudioSegment.ffmpeg = "/app/.heroku/vendor/ffmpeg/bin/ffmpeg"
>>> AudioSegment.silent(1200).export("/tmp/herokuSilence03.mp3", format="mp3")
<open file '/tmp/herokuSilence03.mp3', mode 'wb+' at 0x7fc92aca7780>
>>> exit()
/tmp $ ls -lah
total 8.0K
drwx------ 2 u36483 36483 4.0K 2014-10-22 04:31 .
drwxr-xr-x 14 root root 4.0K 2014-09-26 07:08 ..
-rw------- 1 u36483 36483 0 2014-10-22 04:14 asdf.mp3
-rw------- 1 u36483 36483 0 2014-10-22 04:31 herokuSilence03.mp3
-rw------- 1 u36483 36483 0 2014-10-22 04:29 herokuSilence.mp3

Finally, I tried exporting a .wav file to check pydub was at least working correctly:
/tmp $ python
Python 2.7.8 (default, Jul 9 2014, 20:47:08)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pydub
>>> from pydub import AudioSegment
>>> pydub.AudioSegment.ffmpeg = "/app/vendor/ffmpeg/bin/ffmpeg"
>>> AudioSegment.silent(1300).export("/tmp/heroku_wav_silence01.wav", format="wav")
<open file '/tmp/heroku_wav_silence01.wav', mode 'wb+' at 0x7fa33cbf3780>
>>> exit()
/tmp $ ls
asdf.mp3 herokuSilence03.mp3 herokuSilence.mp3 heroku_wav_silence01.wav
/tmp $ ls -lah
total 40K
drwx------ 2 u36483 36483 4.0K 2014-10-22 04:42 .
drwxr-xr-x 14 root root 4.0K 2014-09-26 07:08 ..
-rw------- 1 u36483 36483 0 2014-10-22 04:14 asdf.mp3
-rw------- 1 u36483 36483 0 2014-10-22 04:31 herokuSilence03.mp3
-rw------- 1 u36483 36483 0 2014-10-22 04:29 herokuSilence.mp3
-rw------- 1 u36483 36483 29K 2014-10-22 04:42 heroku_wav_silence01.wav
/tmp $

At least that filesize for .wav is non-zero, so pydub is working.
My current theory is that either I'm still not using ffmpeg correctly, or it's insufficient... maybe I need an additional mp3 install on top of basic ffmpeg.
Several sites mention "libavcodec-extra-53" but I'm not sure how to install that on heroku, or how to check whether I have it:
https://github.com/jiaaro/pydub/issues/36
Similarly, tutorials on libmp3lame seem to be geared towards laptop installation rather than installation on heroku, so I'm at a loss:
http://superuser.com/questions/196857/how-to-install-libmp3lame-for-ffmpeg
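Whether a given ffmpeg build can actually encode mp3 can be checked directly, without pydub, by asking the binary for its encoder list. A sketch (the dyno path below is the one from this deploy and may differ on other stacks):

```python
import subprocess

def has_encoder(name, ffmpeg="ffmpeg"):
    """Return True if `ffmpeg -encoders` lists the given encoder."""
    try:
        out = subprocess.check_output([ffmpeg, "-encoders"],
                                      stderr=subprocess.STDOUT)
    except (OSError, subprocess.CalledProcessError):
        # binary missing, or a build too old to support -encoders
        return False
    return name in out.decode("utf-8", "replace")

# e.g. on the dyno: has_encoder("libmp3lame", "/app/vendor/ffmpeg/bin/ffmpeg")
print(has_encoder("libmp3lame"))
```

If this prints False, the build lacks lame support and every mp3 export will fail regardless of how pydub is configured.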
In case relevant, I also have youtube-dl in my requirements.txt... this also works locally on my macbook, but fails when I run it in the heroku shell :
~/ytdl $ youtube-dl --restrict-filenames -x --audio-format mp3 n2anDgdUHic
[youtube] Setting language
[youtube] Confirming age
[youtube] n2anDgdUHic: Downloading webpage
[youtube] n2anDgdUHic: Downloading video info webpage
[youtube] n2anDgdUHic: Extracting video information
[download] Destination: Boyce_Avenue_feat._Megan_Nicole_-_Skyscraper_Patrick_Ebert_Edit-n2anDgdUHic.m4a
[download] 100% of 5.92MiB in 00:00
[ffmpeg] Destination: Boyce_Avenue_feat._Megan_Nicole_-_Skyscraper_Patrick_Ebert_Edit-n2anDgdUHic.mp3
ERROR: audio conversion failed: Unknown encoder 'libmp3lame'
~/ytdl $The informative link is that it too specificies an mp3 failure, so perhaps they two issues are related.
EDIT 2
See answer, all problems solved