
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
Other articles (87)
-
MediaSPIP v0.2
21 June 2013, by
MediaSPIP 0.2 is the first stable release of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources in standalone form.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further changes (...) -
MediaSPIP version 0.1 Beta
16 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources in standalone form.
For a working installation, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further changes (...) -
Retrieving information from the master site when installing an instance
26 November 2010, by
Purpose
On the main site, a shared-hosting instance is defined by several things: the data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table, corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to finalise the creation of the shared-hosting instance;
It can therefore make good sense to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)
On other sites (5487)
-
ffmpeg seeking to I-frame
7 October 2014, by user3398748
Is it possible to seek to an I-frame using the av_seek_frame() function?
The problem I am facing is that if I seek in an AVC file I get a lot of noise if I don't flush the buffer. And if I flush the buffer, the decoder does not return a frame until it comes across an I-frame, which causes problems in the calculation of total frames at the end of the file if I am seeking.
Thank you
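For illustration only, the same pattern (ask the demuxer for the keyframe at or before the target, then decode forward from there) can be sketched in Python with PyAV, which wraps the same libavformat/libavcodec machinery; this is not the C-level av_seek_frame() call the question is about, and the file name is a placeholder:
import av  # PyAV: Python bindings over libavformat/libavcodec
container = av.open("input.mp4")          # placeholder file name
stream = container.streams.video[0]
# Seek to the keyframe at or before ~10 s; with stream= given, the offset
# is expressed in stream.time_base units.
target = int(10 / stream.time_base)
container.seek(target, stream=stream, backward=True, any_frame=False)
# Decoding restarts from that keyframe, so the first frame comes out clean
# instead of being predicted from data the decoder never saw.
for frame in container.decode(stream):
    print(frame.pts, frame.pict_type)
    break
With backward=True and any_frame=False the demuxer lands on a keyframe, which is the behaviour AVSEEK_FLAG_BACKWARD (without AVSEEK_FLAG_ANY) gives av_seek_frame().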
-
Reading in pydub AudioSegment from url. BytesIO returning "OSError [Errno 2] No such file or directory" on heroku only; fine on localhost
24 October 2014, by Mark
EDIT 1 for anyone with the same error: installing ffmpeg did indeed solve that BytesIO error.
EDIT 1 for anyone still willing to help: my problem is now that when I AudioSegment.export("filename.mp3", format="mp3"), the file is made, but has size 0 bytes; details below (as "EDIT 1")
EDIT 2: All problems now solved.
- Files can be read in as AudioSegment using BytesIO (a minimal sketch of this flow follows below)
- I found buildpacks to ensure ffmpeg was installed correctly on my app, with lame support for exporting proper mp3 files
Answer below
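For reference, a minimal sketch of the flow those edits describe (fetch the mp3 bytes over HTTP, load them through BytesIO, crop, export), assuming Python 2.7 with pydub installed and an ffmpeg build that includes libmp3lame; the URL and the 3.8 to 5.1 second crop come from the question, and the output path is a placeholder:
# Python 2.7, matching the question's environment
import urllib
from io import BytesIO
from pydub import AudioSegment
url = "https://s3.amazonaws.com/shareducate02/The_giving_tree__by_Alex_Blumberg__sponsored_by_mailchimp-short.mp3"
mp3_bytes = urllib.urlopen(url).read()                 # whole file into memory
original = AudioSegment.from_mp3(BytesIO(mp3_bytes))   # needs ffmpeg on the PATH
section = original[3800:5100]                          # pydub slices in milliseconds
# Exporting to mp3 additionally needs an mp3 encoder (libmp3lame) in that ffmpeg build.
section.export("/tmp/clip.mp3", format="mp3")          # placeholder output path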
Original question
I have pydub working nicely locally to crop a particular mp3 file based on parameters in the url.
(?start_time=3.8&end_time=5.1). When I run
foreman start
it all looks good on localhost. The html renders nicely.
The key lines from views.py include reading in a file from a url using
url = "https://s3.amazonaws.com/shareducate02/The_giving_tree__by_Alex_Blumberg__sponsored_by_mailchimp-short.mp3"
mp3 = urllib.urlopen(url).read() # inspired by http://nbviewer.ipython.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter11_image/06_speech.ipynb
original=AudioSegment.from_mp3(BytesIO(mp3)) # AudioSegment.from_mp3 is a pydub command, see http://pydub.com
section = original[start_time_ms:end_time_ms]
That all works great... until I push to heroku (django app) and run it online.
Then when I load the same page on herokuapp.com, I get this error:
OSError at /path/to/page
[Errno 2] No such file or directory
Request Method: GET
Request URL: http://my.website.com/path/to/page?start_time=3.8&end_time=5
Django Version: 1.6.5
Exception Type: OSError
Exception Value:
[Errno 2] No such file or directory
Exception Location: /app/.heroku/python/lib/python2.7/subprocess.py in _execute_child, line 1327
Python Executable: /app/.heroku/python/bin/python
Python Version: 2.7.8
Python Path:
['/app',
'/app/.heroku/python/bin',
'/app/.heroku/python/lib/python2.7/site-packages/setuptools-5.4.1-py2.7.egg',
'/app/.heroku/python/lib/python2.7/site-packages/distribute-0.6.36-py2.7.egg',
'/app/.heroku/python/lib/python2.7/site-packages/pip-1.3.1-py2.7.egg',
'/app',
'/app/.heroku/python/lib/python27.zip',
'/app/.heroku/python/lib/python2.7',
'/app/.heroku/python/lib/python2.7/plat-linux2',
'/app/.heroku/python/lib/python2.7/lib-tk',
'/app/.heroku/python/lib/python2.7/lib-old',
'/app/.heroku/python/lib/python2.7/lib-dynload',
'/app/.heroku/python/lib/python2.7/site-packages',
'/app/.heroku/python/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg-info']
Traceback:
File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
112. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/app/evernote/views.py" in finalize
105. original=AudioSegment.from_mp3(BytesIO(mp3))
File "/app/.heroku/python/lib/python2.7/site-packages/pydub/audio_segment.py" in from_mp3
318. return cls.from_file(file, 'mp3')
File "/app/.heroku/python/lib/python2.7/site-packages/pydub/audio_segment.py" in from_file
302. retcode = subprocess.call(convertion_command, stderr=open(os.devnull))
File "/app/.heroku/python/lib/python2.7/subprocess.py" in call
522. return Popen(*popenargs, **kwargs).wait()
File "/app/.heroku/python/lib/python2.7/subprocess.py" in __init__
710. errread, errwrite)
File "/app/.heroku/python/lib/python2.7/subprocess.py" in _execute_child
1327. raise child_exception
I have commented out some of the original to convince myself that, sure enough, the single line
original=AudioSegment.from_mp3(BytesIO(mp3))
is where the problem kicks in... but this is not a problem locally.
The full function in views.py starts like this:
from django.shortcuts import render, get_object_or_404
from django.http import HttpResponseRedirect #, Http404, HttpResponse
from django.core.urlresolvers import reverse
from django.views import generic
import pydub
# Maybe only need:
from pydub import AudioSegment # == see below
from time import gmtime, strftime
import boto
from boto.s3.connection import S3Connection
from boto.s3.key import Key
# http://nbviewer.ipython.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter11_image/06_speech.ipynb
import urllib
from io import BytesIO
# import numpy as np
# import scipy.signal as sg
# import pydub # mentioned above already
# import matplotlib.pyplot as plt
# from IPython.display import Audio, display
# import matplotlib as mpl
# %matplotlib inline
import os
# from settings import AWS_ACCESS_KEY, AWS_SECRET_KEY, AWS_BUCKET_NAME
AWS_ACCESS_KEY = os.environ.get('AWS_ACCESS_KEY') # there must be a better way?
AWS_SECRET_KEY = os.environ.get('AWS_SECRET_KEY')
AWS_BUCKET_NAME = os.environ.get('S3_BUCKET_NAME')
# http://stackoverflow.com/questions/415511/how-to-get-current-time-in-python
boto_conn = S3Connection(AWS_ACCESS_KEY, AWS_SECRET_KEY)
bucket = boto_conn.get_bucket(AWS_BUCKET_NAME)
s3_url_format = 'https://s3.amazonaws.com/shareducate02/{end_path}'
and specifically, the view in views.py that's called when I visit the page:
def finalize(request):
    start_time = request.GET.get('start_time')
    end_time = request.GET.get('end_time')
    original_file = "https://s3.amazonaws.com/shareducate02/The_giving_tree__by_Alex_Blumberg__sponsored_by_mailchimp-short.mp3"
    if start_time:
        # original=AudioSegment.from_mp3(original_file) #...that didn't work
        # but this works below:
        # next three uncommented lines from http://nbviewer.ipython.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter11_image/06_speech.ipynb
        # python 2.x
        url = original_file
        # req = urllib.Request(url, headers={'User-Agent': ''}) # Note: I commented out this because I got error that "Request" did not exist
        mp3 = urllib.urlopen(url).read()
        # That's for my 2.7
        # If I ever upgrade to python 3.x, would need to change it to:
        # req = urllib.request.Request(url, headers={'User-Agent': ''})
        # mp3 = urllib.request.urlopen(req).read()
        # as per instructions on http://nbviewer.ipython.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter11_image/06_speech.ipynb
        original = AudioSegment.from_mp3(BytesIO(mp3))
        # original=AudioSegment.from_mp3("static/givingtree.mp3") # alternative that works locally (on laptop) but no use for heroku
        start_time_ms = int(float(start_time) * 1000)
        if end_time:
            end_time_ms = int(float(end_time) * 1000)
        else:
            end_time_ms = int(float(original.duration_seconds) * 1000)
        duration_ms = end_time_ms - start_time_ms
        # duration = end_time - start_time
        duration = duration_ms / 1000
        # section = original[start_time_ms:end_time_ms]
        # section_with_fading = section.fade_in(100).fade_out(100)
        clip = "demo-"
        number = strftime("%Y-%m-%d_%H-%M-%S", gmtime())
        clip += number
        clip += ".mp3"
        # DON'T BOTHER writing locally:
        # clip_with_path = "evernote/static/"+clip
        # section_with_fading.export(clip_with_path, format = "mp3")
        # tempclip = section_with_fading.export(format = "mp3")
        # commented out while de-bugging, but was working earlier if run on localhost
        # c = boto.connect_s3()
        # b = c.get_bucket(S3_BUCKET_NAME) # as defined above
        # k = Key(b)
        # k.key=clip
        # # k.set_contents_from_filename(clip_with_path)
        # k.set_contents_from_file(tempclip)
        # k.set_acl('public-read')
        clip_made = True
    else:
        duration = 0.0
        clip_made = False
        clip = ""
    context = {'original_file': original_file, 'new_file': clip, 'start_time': start_time, 'end_time': end_time, 'duration': duration, 'clip_made': clip_made}
    return render(request, 'finalize.html', context)
Any suggestions?
Potentially related :
I have ffmpeg installed locally, but have been unable to install it onto heroku, due to not understanding buildpacks. I tried just a moment ago (
http://stackoverflow.com/questions/14407388/how-to-install-ffmpeg-for-a-django-app-on-heroku
and https://github.com/shunjikonishi/heroku-buildpack-ffmpeg
) but so far ffmpeg is not working on heroku (ffmpeg is not recognised when I do "heroku run ffmpeg --version").
...do you think this is the reason?
An answer like any of these would be much appreciated, as I'm going round in circles here:
- "I think ffmpeg is indeed your problem. Try harder to sort that out, to get it installed on heroku"
- "Actually, I think this is why BytesIO is not working for you : ..."
- "Your approach is terrible anyway... if you want to read in an audio file to process using pydub, you should just do this instead : ..." (since I’m just hacking my way through pydub for my first time... my approach may be poor)
EDIT 1
ffmpeg is now installed (e.g., I can output wav files)
However, I can’t create mp3 files, still... or more correctly, I can, but the filesize is zero
(venv-app)moriartymacbookair13:getstartapp macuser$ heroku config:add BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git
Setting config vars and restarting awe01... done, v93
BUILDPACK_URL: https://github.com/ddollar/heroku-buildpack-multi.git
(venv-app)moriartymacbookair13:getstartapp macuser$ vim .buildpacks
(venv-app)moriartymacbookair13:getstartapp macuser$ cat .buildpacks
https://github.com/shunjikonishi/heroku-buildpack-ffmpeg.git
https://github.com/heroku/heroku-buildpack-python.git
(venv-app)moriartymacbookair13:getstartapp macuser$ git add --all
(venv-app)moriartymacbookair13:getstartapp macuser$ git commit -m "need multi, not just ffmpeg, so adding back in multi + shun + heroku, with trailing .git in .buildpacks file"
[master cd99fef] need multi, not just ffmpeg, so adding back in multi + shun + heroku, with trailing .git in .buildpacks file
1 file changed, 2 insertions(+), 2 deletions(-)
(venv-app)moriartymacbookair13:getstartapp macuser$ git push heroku master
Fetching repository, done.
Counting objects: 5, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 372 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)
-----> Fetching custom git buildpack... done
-----> Multipack app detected
=====> Downloading Buildpack: https://github.com/shunjikonishi/heroku-buildpack-ffmpeg.git
=====> Detected Framework: ffmpeg
-----> Install ffmpeg
DOWNLOAD_URL = http://flect.github.io/heroku-binaries/libs/ffmpeg.tar.gz
exporting PATH and LIBRARY_PATH
=====> Downloading Buildpack: https://github.com/heroku/heroku-buildpack-python.git
=====> Detected Framework: Python
-----> Installing dependencies with pip
Cleaning up...
-----> Preparing static assets
Collectstatic configuration error. To debug, run:
$ heroku run python ./example/manage.py collectstatic --noinput
Using release configuration from last framework (Python).
-----> Discovering process types
Procfile declares types -> web
-----> Compressing... done, 198.1MB
-----> Launching... done, v94
http://[redacted].herokuapp.com/ deployed to Heroku
To git@heroku.com:awe01.git
78d6b68..cd99fef master -> master
(venv-app)moriartymacbookair13:getstartapp macuser$ heroku run ffmpeg
Running `ffmpeg` attached to terminal... up, run.6408
ffmpeg version git-2013-06-02-5711e4f Copyright (c) 2000-2013 the FFmpeg developers
built on Jun 2 2013 07:38:40 with gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5.1)
configuration: --enable-shared --disable-asm --prefix=/app/vendor/ffmpeg
libavutil 52. 34.100 / 52. 34.100
libavcodec 55. 13.100 / 55. 13.100
libavformat 55. 8.102 / 55. 8.102
libavdevice 55. 2.100 / 55. 2.100
libavfilter 3. 74.101 / 3. 74.101
libswscale 2. 3.100 / 2. 3.100
libswresample 0. 17.102 / 0. 17.102
Hyper fast Audio and Video encoder
usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...
Use -h to get full help or, even better, run 'man ffmpeg'
(venv-app)moriartymacbookair13:getstartapp macuser$ heroku run bash
Running `bash` attached to terminal... up, run.9660
~ $ python
Python 2.7.8 (default, Jul 9 2014, 20:47:08)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pydub
>>> from pydub import AudioSegment
>>> exit()
~ $ which ffmpeg
/app/vendor/ffmpeg/bin/ffmpeg
~ $ python
Python 2.7.8 (default, Jul 9 2014, 20:47:08)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pydub
>>> from pydub import AudioSegment
>>> AudioSegment.silent(5000).export("/tmp/asdf.mp3", "mp3")
<open file="file"></open>tmp/asdf.mp3', mode 'wb+' at 0x7f9a37d44780>
>>> exit ()
~ $ cd /tmp/
/tmp $ ls
asdf.mp3
/tmp $ open asdf.mp3
bash: open: command not found
/tmp $ ls -lah
total 8.0K
drwx------ 2 u36483 36483 4.0K 2014-10-22 04:14 .
drwxr-xr-x 14 root root 4.0K 2014-09-26 07:08 ..
-rw------- 1 u36483 36483 0 2014-10-22 04:14 asdf.mp3
Note the file size of 0 above for the mp3 file... when I do the same thing on my macbook, the file size is never zero.
Back to the heroku shell :
/tmp $ python
Python 2.7.8 (default, Jul 9 2014, 20:47:08)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pydub
>>> from pydub import AudioSegment
>>> pydub.AudioSegment.ffmpeg = "/app/vendor/ffmpeg/bin/ffmpeg"
>>> AudioSegment.silence(1200).export("/tmp/herokuSilence.mp3", format="mp3")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: type object 'AudioSegment' has no attribute 'silence'
>>> AudioSegment.silent(1200).export("/tmp/herokuSilence.mp3", format="mp3")
<open file="file"></open>tmp/herokuSilence.mp3', mode 'wb+' at 0x7fcc2017c780>
>>> exit()
/tmp $ ls
asdf.mp3 herokuSilence.mp3
/tmp $ ls -lah
total 8.0K
drwx------ 2 u36483 36483 4.0K 2014-10-22 04:29 .
drwxr-xr-x 14 root root 4.0K 2014-09-26 07:08 ..
-rw------- 1 u36483 36483 0 2014-10-22 04:14 asdf.mp3
-rw------- 1 u36483 36483 0 2014-10-22 04:29 herokuSilence.mp3
</module></stdin>I realised the first time that I had forgotten the
pydub.AudioSegment.ffmpeg = "/app/vendor/ffmpeg/bin/ffmpeg"
command, but as you can see above, the file is still zero size.
Out of desperation, I even tried adding the ".heroku" into the path to be as verbatim as your example, but that didn't fix it:
/tmp $ python
Python 2.7.8 (default, Jul 9 2014, 20:47:08)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pydub
>>> from pydub import AudioSegment
>>> pydub.AudioSegment.ffmpeg = "/app/.heroku/vendor/ffmpeg/bin/ffmpeg"
>>> AudioSegment.silent(1200).export("/tmp/herokuSilence03.mp3", format="mp3")
<open file="file"></open>tmp/herokuSilence03.mp3', mode 'wb+' at 0x7fc92aca7780>
>>> exit()
/tmp $ ls -lah
total 8.0K
drwx------ 2 u36483 36483 4.0K 2014-10-22 04:31 .
drwxr-xr-x 14 root root 4.0K 2014-09-26 07:08 ..
-rw------- 1 u36483 36483 0 2014-10-22 04:14 asdf.mp3
-rw------- 1 u36483 36483 0 2014-10-22 04:31 herokuSilence03.mp3
-rw------- 1 u36483 36483 0 2014-10-22 04:29 herokuSilence.mp3
Finally, I tried exporting a .wav file to check pydub was at least working correctly:
/tmp $ python
Python 2.7.8 (default, Jul 9 2014, 20:47:08)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pydub
>>> from pydub import AudioSegment
>>> pydub.AudioSegment.ffmpeg = "/app/vendor/ffmpeg/bin/ffmpeg"
>>> AudioSegment.silent(1300).export("/tmp/heroku_wav_silence01.wav", format="wav")
<open file="file"></open>tmp/heroku_wav_silence01.wav', mode 'wb+' at 0x7fa33cbf3780>
>>> exit()
/tmp $ ls
asdf.mp3 herokuSilence03.mp3 herokuSilence.mp3 heroku_wav_silence01.wav
/tmp $ ls -lah
total 40K
drwx------ 2 u36483 36483 4.0K 2014-10-22 04:42 .
drwxr-xr-x 14 root root 4.0K 2014-09-26 07:08 ..
-rw------- 1 u36483 36483 0 2014-10-22 04:14 asdf.mp3
-rw------- 1 u36483 36483 0 2014-10-22 04:31 herokuSilence03.mp3
-rw------- 1 u36483 36483 0 2014-10-22 04:29 herokuSilence.mp3
-rw------- 1 u36483 36483 29K 2014-10-22 04:42 heroku_wav_silence01.wav
/tmp $
At least that filesize for .wav is non-zero, so pydub is working.
My current theory is that either I'm still not using ffmpeg correctly, or it's insufficient... maybe I need an additional mp3 install on top of basic ffmpeg.
Several sites mention "libavcodec-extra-53", but I'm not sure how to install that on heroku, or how to check whether I have it?
https://github.com/jiaaro/pydub/issues/36
Similarly, tutorials on libmp3lame seem to be geared towards laptop installation rather than installation on heroku, so I'm at a loss:
http://superuser.com/questions/196857/how-to-install-libmp3lame-for-ffmpeg
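One way to check that from the heroku shell is to ask the bundled ffmpeg which encoders it was actually built with; a small sketch, assuming ffmpeg is on the PATH as the "which ffmpeg" output above suggests:
import subprocess
# "ffmpeg -encoders" lists every encoder compiled into this build; mp3 export
# can only work if an mp3 encoder (normally libmp3lame) shows up in that list.
encoders = subprocess.check_output(["ffmpeg", "-encoders"])
print("libmp3lame" in encoders)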
In case relevant, I also have youtube-dl in my requirements.txt... this also works locally on my macbook, but fails when I run it in the heroku shell:
~/ytdl $ youtube-dl --restrict-filenames -x --audio-format mp3 n2anDgdUHic
[youtube] Setting language
[youtube] Confirming age
[youtube] n2anDgdUHic: Downloading webpage
[youtube] n2anDgdUHic: Downloading video info webpage
[youtube] n2anDgdUHic: Extracting video information
[download] Destination: Boyce_Avenue_feat._Megan_Nicole_-_Skyscraper_Patrick_Ebert_Edit-n2anDgdUHic.m4a
[download] 100% of 5.92MiB in 00:00
[ffmpeg] Destination: Boyce_Avenue_feat._Megan_Nicole_-_Skyscraper_Patrick_Ebert_Edit-n2anDgdUHic.mp3
ERROR: audio conversion failed: Unknown encoder 'libmp3lame'
~/ytdl $
The informative line is that it too specifies an mp3 failure, so perhaps the two issues are related.
EDIT 2
See answer, all problems solved
-
How to play raw h264 produced by MediaCodec encoder?
1 November 2014, by jackos2500
I'm a bit new when it comes to MediaCodec (and video encoding/decoding in general), so correct me if anything I say here is wrong.
I want to play the raw h264 output of MediaCodec with VLC/ffplay. I need this to play because my end goal is to stream some live video to a computer, and MediaMuxer only produces a file on disk rather than something I can stream with (very) low latency to a desktop. (I'm open to other solutions, but I have not found anything else that fits the latency requirement.)
Here is the code I'm using to encode the video and write it to a file (it's based off the MediaCodec example found here, only with the MediaMuxer part removed):
package com.jackos2500.droidtop;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLExt;
import android.opengl.EGLSurface;
import android.opengl.GLES20;
import android.os.Environment;
import android.util.Log;
import android.view.Surface;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
public class StreamH264 {
private static final String TAG = "StreamH264";
private static final boolean VERBOSE = true; // lots of logging
// where to put the output file (note: /sdcard requires WRITE_EXTERNAL_STORAGE permission)
private static final File OUTPUT_DIR = Environment.getExternalStorageDirectory();
public static int MEGABIT = 1000 * 1000;
private static final int IFRAME_INTERVAL = 10;
private static final int TEST_R0 = 0;
private static final int TEST_G0 = 136;
private static final int TEST_B0 = 0;
private static final int TEST_R1 = 236;
private static final int TEST_G1 = 50;
private static final int TEST_B1 = 186;
private MediaCodec codec;
private CodecInputSurface inputSurface;
private BufferedOutputStream out;
private MediaCodec.BufferInfo bufferInfo;
public StreamH264() {
}
private void prepareEncoder() throws IOException {
bufferInfo = new MediaCodec.BufferInfo();
MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
format.setInteger(MediaFormat.KEY_BIT_RATE, 2 * MEGABIT);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, IFRAME_INTERVAL);
codec = MediaCodec.createEncoderByType("video/avc");
codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
inputSurface = new CodecInputSurface(codec.createInputSurface());
codec.start();
File dst = new File(OUTPUT_DIR, "test.264");
out = new BufferedOutputStream(new FileOutputStream(dst));
}
private void releaseEncoder() throws IOException {
if (VERBOSE) Log.d(TAG, "releasing encoder objects");
if (codec != null) {
codec.stop();
codec.release();
codec = null;
}
if (inputSurface != null) {
inputSurface.release();
inputSurface = null;
}
if (out != null) {
out.flush();
out.close();
out = null;
}
}
public void stream() throws IOException {
try {
prepareEncoder();
inputSurface.makeCurrent();
for (int i = 0; i < (30 * 5); i++) {
// Feed any pending encoder output into the file.
drainEncoder(false);
// Generate a new frame of input.
generateSurfaceFrame(i);
inputSurface.setPresentationTime(computePresentationTimeNsec(i, 30));
// Submit it to the encoder. The eglSwapBuffers call will block if the input
// is full, which would be bad if it stayed full until we dequeued an output
// buffer (which we can't do, since we're stuck here). So long as we fully drain
// the encoder before supplying additional input, the system guarantees that we
// can supply another frame without blocking.
if (VERBOSE) Log.d(TAG, "sending frame " + i + " to encoder");
inputSurface.swapBuffers();
}
// send end-of-stream to encoder, and drain remaining output
drainEncoder(true);
} finally {
// release encoder, muxer, and input Surface
releaseEncoder();
}
}
private void drainEncoder(boolean endOfStream) throws IOException {
final int TIMEOUT_USEC = 10000;
if (VERBOSE) Log.d(TAG, "drainEncoder(" + endOfStream + ")");
if (endOfStream) {
if (VERBOSE) Log.d(TAG, "sending EOS to encoder");
codec.signalEndOfInputStream();
}
ByteBuffer[] outputBuffers = codec.getOutputBuffers();
while (true) {
int encoderStatus = codec.dequeueOutputBuffer(bufferInfo, TIMEOUT_USEC);
if (encoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
// no output available yet
if (!endOfStream) {
break; // out of while
} else {
if (VERBOSE) Log.d(TAG, "no output available, spinning to await EOS");
}
} else if (encoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
// not expected for an encoder
outputBuffers = codec.getOutputBuffers();
} else if (encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
// should happen before receiving buffers, and should only happen once
MediaFormat newFormat = codec.getOutputFormat();
Log.d(TAG, "encoder output format changed: " + newFormat);
} else if (encoderStatus < 0) {
Log.w(TAG, "unexpected result from encoder.dequeueOutputBuffer: " + encoderStatus);
// let's ignore it
} else {
ByteBuffer encodedData = outputBuffers[encoderStatus];
if (encodedData == null) {
throw new RuntimeException("encoderOutputBuffer " + encoderStatus + " was null");
}
if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
// The codec config data was pulled out and fed to the muxer when we got
// the INFO_OUTPUT_FORMAT_CHANGED status. Ignore it.
if (VERBOSE) Log.d(TAG, "ignoring BUFFER_FLAG_CODEC_CONFIG");
bufferInfo.size = 0;
}
if (bufferInfo.size != 0) {
// adjust the ByteBuffer values to match BufferInfo (not needed?)
encodedData.position(bufferInfo.offset);
encodedData.limit(bufferInfo.offset + bufferInfo.size);
byte[] data = new byte[bufferInfo.size];
encodedData.get(data);
out.write(data);
if (VERBOSE) Log.d(TAG, "sent " + bufferInfo.size + " bytes to file");
}
codec.releaseOutputBuffer(encoderStatus, false);
if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
if (!endOfStream) {
Log.w(TAG, "reached end of stream unexpectedly");
} else {
if (VERBOSE) Log.d(TAG, "end of stream reached");
}
break; // out of while
}
}
}
}
private void generateSurfaceFrame(int frameIndex) {
frameIndex %= 8;
int startX, startY;
if (frameIndex < 4) {
// (0,0) is bottom-left in GL
startX = frameIndex * (1280 / 4);
startY = 720 / 2;
} else {
startX = (7 - frameIndex) * (1280 / 4);
startY = 0;
}
GLES20.glClearColor(TEST_R0 / 255.0f, TEST_G0 / 255.0f, TEST_B0 / 255.0f, 1.0f);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glEnable(GLES20.GL_SCISSOR_TEST);
GLES20.glScissor(startX, startY, 1280 / 4, 720 / 2);
GLES20.glClearColor(TEST_R1 / 255.0f, TEST_G1 / 255.0f, TEST_B1 / 255.0f, 1.0f);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glDisable(GLES20.GL_SCISSOR_TEST);
}
private static long computePresentationTimeNsec(int frameIndex, int frameRate) {
final long ONE_BILLION = 1000000000;
return frameIndex * ONE_BILLION / frameRate;
}
/**
* Holds state associated with a Surface used for MediaCodec encoder input.
* <p>
* The constructor takes a Surface obtained from MediaCodec.createInputSurface(), and uses that
* to create an EGL window surface. Calls to eglSwapBuffers() cause a frame of data to be sent
* to the video encoder.
* </p><p>
* This object owns the Surface -- releasing this will release the Surface too.
*/
private static class CodecInputSurface {
private static final int EGL_RECORDABLE_ANDROID = 0x3142;
private EGLDisplay mEGLDisplay = EGL14.EGL_NO_DISPLAY;
private EGLContext mEGLContext = EGL14.EGL_NO_CONTEXT;
private EGLSurface mEGLSurface = EGL14.EGL_NO_SURFACE;
private Surface mSurface;
/**
* Creates a CodecInputSurface from a Surface.
*/
public CodecInputSurface(Surface surface) {
if (surface == null) {
throw new NullPointerException();
}
mSurface = surface;
eglSetup();
}
/**
* Prepares EGL. We want a GLES 2.0 context and a surface that supports recording.
*/
private void eglSetup() {
mEGLDisplay = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
if (mEGLDisplay == EGL14.EGL_NO_DISPLAY) {
throw new RuntimeException("unable to get EGL14 display");
}
int[] version = new int[2];
if (!EGL14.eglInitialize(mEGLDisplay, version, 0, version, 1)) {
throw new RuntimeException("unable to initialize EGL14");
}
// Configure EGL for recording and OpenGL ES 2.0.
int[] attribList = {
EGL14.EGL_RED_SIZE, 8,
EGL14.EGL_GREEN_SIZE, 8,
EGL14.EGL_BLUE_SIZE, 8,
EGL14.EGL_ALPHA_SIZE, 8,
EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
EGL_RECORDABLE_ANDROID, 1,
EGL14.EGL_NONE
};
EGLConfig[] configs = new EGLConfig[1];
int[] numConfigs = new int[1];
EGL14.eglChooseConfig(mEGLDisplay, attribList, 0, configs, 0, configs.length,
numConfigs, 0);
checkEglError("eglCreateContext RGB888+recordable ES2");
// Configure context for OpenGL ES 2.0.
int[] attrib_list = {
EGL14.EGL_CONTEXT_CLIENT_VERSION, 2,
EGL14.EGL_NONE
};
mEGLContext = EGL14.eglCreateContext(mEGLDisplay, configs[0], EGL14.EGL_NO_CONTEXT,
attrib_list, 0);
checkEglError("eglCreateContext");
// Create a window surface, and attach it to the Surface we received.
int[] surfaceAttribs = {
EGL14.EGL_NONE
};
mEGLSurface = EGL14.eglCreateWindowSurface(mEGLDisplay, configs[0], mSurface,
surfaceAttribs, 0);
checkEglError("eglCreateWindowSurface");
}
/**
* Discards all resources held by this class, notably the EGL context. Also releases the
* Surface that was passed to our constructor.
*/
public void release() {
if (mEGLDisplay != EGL14.EGL_NO_DISPLAY) {
EGL14.eglMakeCurrent(mEGLDisplay, EGL14.EGL_NO_SURFACE, EGL14.EGL_NO_SURFACE,
EGL14.EGL_NO_CONTEXT);
EGL14.eglDestroySurface(mEGLDisplay, mEGLSurface);
EGL14.eglDestroyContext(mEGLDisplay, mEGLContext);
EGL14.eglReleaseThread();
EGL14.eglTerminate(mEGLDisplay);
}
mSurface.release();
mEGLDisplay = EGL14.EGL_NO_DISPLAY;
mEGLContext = EGL14.EGL_NO_CONTEXT;
mEGLSurface = EGL14.EGL_NO_SURFACE;
mSurface = null;
}
/**
* Makes our EGL context and surface current.
*/
public void makeCurrent() {
EGL14.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext);
checkEglError("eglMakeCurrent");
}
/**
* Calls eglSwapBuffers. Use this to "publish" the current frame.
*/
public boolean swapBuffers() {
boolean result = EGL14.eglSwapBuffers(mEGLDisplay, mEGLSurface);
checkEglError("eglSwapBuffers");
return result;
}
/**
* Sends the presentation time stamp to EGL. Time is expressed in nanoseconds.
*/
public void setPresentationTime(long nsecs) {
EGLExt.eglPresentationTimeANDROID(mEGLDisplay, mEGLSurface, nsecs);
checkEglError("eglPresentationTimeANDROID");
}
/**
* Checks for EGL errors. Throws an exception if one is found.
*/
private void checkEglError(String msg) {
int error;
if ((error = EGL14.eglGetError()) != EGL14.EGL_SUCCESS) {
throw new RuntimeException(msg + ": EGL error: 0x" + Integer.toHexString(error));
}
}
}
}
</p>However, the file produced from this code does not play with VLC or ffplay. Can anyone tell me what I’m doing wrong ? I believe it is due to an incorrect format (or total lack) of headers required for the playing of raw h264, as I have had success playing .264 files downloaded from the internet with ffplay. Also, I’m not sure exactly how I’m going to stream this video to a computer, so if somebody could give me some suggestions as to how I might do that, I would be very grateful ! Thanks !