
Media (91)
-
Head down (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Echoplex (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Discipline (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Letting you (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
1 000 000 (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
999 999 (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
Other articles (48)
-
MediaSPIP v0.2
21 June 2013, by
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources, in standalone form.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)
-
Making the files available
14 April 2011, by
By default, when it is first set up, MediaSPIP does not let visitors download the files, whether they are originals or the result of their transformation or encoding. It only lets them be viewed.
However, it is possible, and easy, to allow visitors to access these documents, in several different forms.
All of this happens in the skeleton's configuration page. You need to go to the channel's administration area and choose in the navigation (...)
-
MediaSPIP version 0.1 Beta
16 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources, in standalone form.
For a working installation, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)
On other sites (8707)
-
Using pydub on AWS Lambda
11 June 2024, by Bailey
Ok, so I have the same problem as here: Using pydub and AWS Lambda, but thought it would be cleaner to re-ask here as I have made some progress (I think?).


I followed the instructions here: https://medium.com/faun/how-to-use-aws-lambda-layers-f4fe6624aff1 like this:


- I created a lambda_function.py file:

from python.pydub import AudioSegment

print('Loading function')


def lambda_handler(event, context):
 print(event)

 sound = AudioSegment.from_mp3("https://s3-eu-west-1.amazonaws.com/audio.mp3")

 etc...



Using Docker, I created the dependencies with this script:


#!/bin/bash

export PKG_DIR="python"

rm -rf ${PKG_DIR} && mkdir -p ${PKG_DIR}

docker run --rm -v $(pwd):/foo -w /foo lambci/lambda:build-python3.6 \
 pip install -r requirements.txt --no-deps -t ${PKG_DIR}



and the requirements.txt file:


pydub
ffmpeg



Docker downloaded 4 folders:


ffmpeg
ffmpeg-1.4.dist-info
pydub
pydub-0.24.1.dist-info



and I uploaded the whole project to Lambda as a zip file.


When I run the code I get an error:


/var/task/pydub/utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work



I'm guessing that I need to upload an ffmpeg binary to Lambda, but I'm not sure, and if I do, I'm not sure how to do it.


What steps am I missing?
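
(As a minimal sketch, not from the original post: pydub can be pointed at a bundled binary explicitly instead of relying on PATH. The /var/task location is an assumption matching the zip layout described above.)

import os
from pydub import AudioSegment

# Assumed layout: the ffmpeg/ffprobe binaries sit at the root of the deployment zip,
# which Lambda extracts to /var/task.
os.environ["PATH"] = "/var/task" + os.pathsep + os.environ.get("PATH", "")  # lets pydub locate ffprobe
AudioSegment.converter = "/var/task/ffmpeg"  # explicit path to the ffmpeg binary pydub shells out to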


UPDATE:


So I've modified my code and also run it using PyCharm. Although ffmpeg and ffprobe are uploaded to my Lambda function, and I have programmatically confirmed they are present, they fail to run.


My code is now like this:


import logging
import os
from subprocess import check_output

from pydub import AudioSegment  # or: from python.pydub import AudioSegment, as in the first snippet

def lambda_handler(event, context):
 
 l = logging.getLogger("pydub.converter")
 l.setLevel(logging.DEBUG)
 l.addHandler(logging.StreamHandler())

 AudioSegment.converter='/var/task/ffmpeg'
 
 print(event)

 print("00000")
 print (os.getcwd())
 print("00000")
 
 print ("1111")
 path = "/var/task"
 dir_list = os.listdir(path)
 print (dir_list)
 print ("1111")

 print ("11111111111111111")
 out = check_output(['/var/task/ffprobe', 'test.mp3'])
 print (out)
 print ("11111111111111111")



and this is the CloudWatch error log:


00000
/var/task
00000
1111
['.DS_Store', 'ffmpeg', 'get_layer_packages.sh', 'lambda_function.py', 'lambda_function.py.orig.py', 'python', 'requirements.txt', 'test.mp3']
1111
11111111111111111

[Errno 8] Exec format error: '/var/task/ffmpeg': OSError
Traceback (most recent call last):
 File "/var/task/lambda_function.py", line 32, in lambda_handler
 out = check_output(['/var/task/ffmpeg', 'test.mp3'])
 File "/var/lang/lib/python3.6/subprocess.py", line 356, in check_output
 **kwargs).stdout
 File "/var/lang/lib/python3.6/subprocess.py", line 423, in run
 with Popen(*popenargs, **kwargs) as process:
 File "/var/lang/lib/python3.6/subprocess.py", line 729, in __init__
 restore_signals, start_new_session)
 File "/var/lang/lib/python3.6/subprocess.py", line 1364, in _execute_child
 raise child_exception_type(errno_num, err_msg, err_filename)
OSError: [Errno 8] Exec format error: '/var/task/ffmpeg'



Any ideas? I have made ffmpeg and ffprobe executable using chmod +x.


Could this be a Lambda permissions issue somehow?
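
(A minimal sketch, not from the original post, of narrowing down an "Exec format error": the Lambda python3.6 runtime is Linux x86_64, so the bundled binary has to be a Linux ELF executable; a macOS or Windows build fails exactly this way even when it is present and chmod +x'd. The /var/task/ffmpeg path follows the directory listing above.)

import os
import stat

def describe_binary(path="/var/task/ffmpeg"):
    # Linux executables start with the ELF magic number 0x7f 'E' 'L' 'F'.
    with open(path, "rb") as f:
        magic = f.read(4)
    is_elf = magic == b"\x7fELF"
    # Check that the execute bit survived zipping and uploading.
    is_executable = bool(os.stat(path).st_mode & stat.S_IXUSR)
    print(f"{path}: ELF={is_elf}, executable={is_executable}")
    return is_elf and is_executable

If ELF comes back False, the bundled build was made for the wrong platform and needs to be swapped for a static Linux x86_64 ffmpeg build; if the execute bit is missing, the chmod +x did not survive packaging.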


-
Encoding webm with ffmpeg and libvorbis does not work
24 January 2014, by AnthonyM
I am attempting to run a modified version of the ffmpeg muxing example which outputs Vorbis-encoded audio to a WebM container.
The code works fine if I specify mp3 as the format, just not when I use vorbis.
The code is similar to http://www.ffmpeg.org/doxygen/2.0/doc_2examples_2muxing_8c-example.html but with the video portions stripped out. I tested with video enabled and the example video was encoded properly, but with no audio.
ffmpeg is compiled with libvorbis and libvpx support.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <libavutil/opt.h>
#include <libavutil/mathematics.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libswresample/swresample.h>
#define STREAM_DURATION 200.0
extern AVCodec ff_libvorbis_encoder;
static AVFrame *frame;
static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec,
enum AVCodecID codec_id)
{
AVCodecContext *c;
AVStream *st;
/* find the encoder */
//*codec = &ff_libvorbis_encoder;
*codec = avcodec_find_encoder(codec_id);
if (!(*codec)) {
fprintf(stderr, "Could not find encoder for '%s'\n",
avcodec_get_name(codec_id));
exit(1);
}
st = avformat_new_stream(oc, *codec);
if (!st) {
fprintf(stderr, "Could not allocate stream\n");
exit(1);
}
st->id = oc->nb_streams-1;
c = st->codec;
switch ((*codec)->type) {
case AVMEDIA_TYPE_AUDIO:
c->sample_fmt = AV_SAMPLE_FMT_FLTP;
c->bit_rate = 64000;
c->sample_rate = 44100;
c->channels = 2;
break;
default:
break;
}
/* Some formats want stream headers to be separate. */
if (oc->oformat->flags & AVFMT_GLOBALHEADER)
c->flags |= CODEC_FLAG_GLOBAL_HEADER;
return st;
}
static float t, tincr, tincr2;
static uint8_t **src_samples_data;
static int src_samples_linesize;
static int src_nb_samples;
static int max_dst_nb_samples;
uint8_t **dst_samples_data;
int dst_samples_linesize;
int dst_samples_size;
struct SwrContext *swr_ctx = NULL;
static void open_audio(AVFormatContext *oc, AVCodec *codec, AVStream *st) {
AVCodecContext *c;
int ret;
c = st->codec;
/* open it */
ret = avcodec_open2(c, codec, NULL);
if (ret < 0) {
fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret));
exit(1);
}
/* init signal generator */
t = 0;
tincr = 2 * M_PI * 110.0 / c->sample_rate;
/* increment frequency by 110 Hz per second */
tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;
src_nb_samples = c->codec->capabilities & CODEC_CAP_VARIABLE_FRAME_SIZE ?
10000 : c->frame_size;
ret = av_samples_alloc_array_and_samples(&src_samples_data, &src_samples_linesize, c->channels,
src_nb_samples, c->sample_fmt, 0);
if (ret < 0) {
fprintf(stderr, "Could not allocate source samples\n");
exit(1);
}
/* create resampler context */
if (c->sample_fmt != AV_SAMPLE_FMT_S16) {
swr_ctx = swr_alloc();
if (!swr_ctx) {
fprintf(stderr, "Could not allocate resampler context\n");
exit(1);
}
/* set options */
av_opt_set_int (swr_ctx, "in_channel_count", c->channels, 0);
av_opt_set_int (swr_ctx, "in_sample_rate", c->sample_rate, 0);
av_opt_set_sample_fmt(swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_S16, 0);
av_opt_set_int (swr_ctx, "out_channel_count", c->channels, 0);
av_opt_set_int (swr_ctx, "out_sample_rate", c->sample_rate, 0);
av_opt_set_sample_fmt(swr_ctx, "out_sample_fmt", AV_SAMPLE_FMT_FLTP, 0);
/* initialize the resampling context */
if ((ret = swr_init(swr_ctx)) < 0) {
fprintf(stderr, "Failed to initialize the resampling context\n");
exit(1);
}
}
/* compute the number of converted samples: buffering is avoided
* ensuring that the output buffer will contain at least all the
* converted input samples */
max_dst_nb_samples = src_nb_samples;
ret = av_samples_alloc_array_and_samples(&dst_samples_data, &dst_samples_linesize, c->channels,
max_dst_nb_samples, c->sample_fmt, 0);
if (ret < 0) {
fprintf(stderr, "Could not allocate destination samples\n");
exit(1);
}
dst_samples_size = av_samples_get_buffer_size(NULL, c->channels, max_dst_nb_samples,
c->sample_fmt, 0);
}
static void get_audio_frame(int16_t *samples, int frame_size, int nb_channels)
{
int j, i, v;
int16_t *q;
q = samples;
for (j = 0; j < frame_size; j++) {
v = (int)(sin(t) * 10000);
for (i = 0; i < nb_channels; i++)
*q++ = v;
t += tincr;
tincr += tincr2;
}
}
static void write_audio_frame(AVFormatContext *oc, AVStream *st)
{
AVCodecContext *c;
AVPacket pkt = { 0 }; // data and size must be 0
int got_packet, ret, dst_nb_samples;
av_init_packet(&pkt);
frame = avcodec_alloc_frame();
c = st->codec;
get_audio_frame((int16_t *)src_samples_data[0], src_nb_samples, c->channels);
/* convert samples from native format to destination codec format, using the resampler */
if (swr_ctx) {
/* compute destination number of samples */
dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, c->sample_rate) + src_nb_samples,
c->sample_rate, c->sample_rate, AV_ROUND_UP);
if (dst_nb_samples > max_dst_nb_samples) {
av_free(dst_samples_data[0]);
ret = av_samples_alloc(dst_samples_data, &dst_samples_linesize, c->channels,
dst_nb_samples, c->sample_fmt, 0);
if (ret < 0)
exit(1);
max_dst_nb_samples = dst_nb_samples;
dst_samples_size = av_samples_get_buffer_size(NULL, c->channels, dst_nb_samples,
c->sample_fmt, 0);
}
/* convert to destination format */
ret = swr_convert(swr_ctx,
dst_samples_data, dst_nb_samples,
(const uint8_t **)src_samples_data, src_nb_samples);
if (ret < 0) {
fprintf(stderr, "Error while converting\n");
exit(1);
}
} else {
dst_samples_data[0] = src_samples_data[0];
dst_nb_samples = src_nb_samples;
}
frame->nb_samples = dst_nb_samples;
avcodec_fill_audio_frame(frame, c->channels, c->sample_fmt,
dst_samples_data[0], dst_samples_size, 0);
ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
if (ret < 0) {
fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));
exit(1);
}
if (!got_packet)
return;
pkt.stream_index = st->index;
/* Write the compressed frame to the media file. */
ret = av_interleaved_write_frame(oc, &pkt);
if (ret != 0) {
fprintf(stderr, "Error while writing audio frame: %s\n",
av_err2str(ret));
exit(1);
}
avcodec_free_frame(&frame);
}
static void close_audio(AVFormatContext *oc, AVStream *st)
{
avcodec_close(st->codec);
av_free(src_samples_data[0]);
av_free(dst_samples_data[0]);
}
int main(int argc, char *argv[]) {
AVOutputFormat *fmt;
AVFormatContext *oc;
AVStream *audio_st;
AVCodec *audio_codec;
double audio_time, video_time;
int ret = 0;
const char *input = argv[1];
const char *output = argv[2];
av_register_all();
avformat_alloc_output_context2(&oc, NULL, NULL, output);
if(!oc) {
printf("Could not alloc the output context");
return 1;
}
fmt = oc->oformat;
audio_st = NULL;
if(fmt->audio_codec != AV_CODEC_ID_NONE) {
audio_st = add_stream(oc, &audio_codec, fmt->audio_codec);
printf("Started audio stream with codec %s\n", audio_codec->name);
}
if(audio_st) {
open_audio(oc, audio_codec, audio_st);
}
av_dump_format(oc, 0, output, 1);
if (!(fmt->flags & AVFMT_NOFILE)) {
ret = avio_open(&oc->pb, output, AVIO_FLAG_WRITE);
if (ret < 0) {
fprintf(stderr, "Could not open '%s': %s\n", output, av_err2str(ret));
return 1;
}
}
/* Write the stream header, if any. */
ret = avformat_write_header(oc, NULL);
if (ret < 0) {
fprintf(stderr, "Error occurred when opening output file: %s\n", av_err2str(ret));
return 1;
}
if (frame)
frame->pts = 0;
for (;;) {
audio_time = audio_st ? audio_st->pts.val * av_q2d(audio_st->time_base) : 0.0;
if ((!audio_st || audio_time >= STREAM_DURATION))
break;
write_audio_frame(oc, audio_st);
}
av_write_trailer(oc);
if(audio_st)
close_audio(oc, audio_st);
if(!(fmt->flags & AVFMT_NOFILE))
avio_close(oc->pb);
avformat_free_context(oc);
return 0;
}

Compiled with:
clang -o converter -lavcodec -lavformat -lavutil -lswresample -lvorbis converter.c
Output:
~/v/converter> ./converter test.wav test.webm
Started audio stream with codec libvorbis
Output #0, webm, to 'test.webm':
Stream #0:0: Audio: vorbis (libvorbis), 44100 Hz, 2 channels, fltp, 64 kb/s
[libvorbis @ 0x7fdafb800600] 33 frames left in the queue on closing
-
JavaCV 1.0 webcam video capture: Frame cannot be converted to IplImage
4 December 2015, by user17795
I am not able to record webcam video (i.e. capture and save an .avi or .mp4 file) using JavaCV / OpenCV / FFmpeg. What am I doing wrong?
Versions used (all 64-bit):
Win 7, NetBeans 8.0.2, jdk1.7.0_10, JavaCV 1.0, OpenCV 3.0.0, ffmpeg-2.1.1-win64-shared.
My system variables are set to:
C:\Program Files\Java\jdk1.7.0_10;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0~;C:\Program Files\Intel\WiFi\bin~;C:\Program Files\Common Files\Intel\WirelessCommon~;C:\Program Files (x86)\Intel\OpenCL SDK\2.0\bin\x86;C:\Program Files (x86)\Intel\OpenCL SDK\2.0\bin\x64;C:\Program Files (x86)\MySQL\MySQL Fabric 1.5.4 & MySQL Utilities 1.5.4 1.5~;C:\Program Files (x86)\MySQL\MySQL Fabric 1.5.4 & MySQL Utilities 1.5.4 1.5\Doctrine extensions for PHP~;C:\opencv\build\x64\vc11\bin;C:\ffmpeg\bin
After downloading and setting the path variables, I added these jar files to the NetBeans project:
C:\opencv\build\java\opencv-300.jar
C:\javacv-1.0-bin\javacv-bin\videoinput.jar
C:\javacv-1.0-bin\javacv-bin\videoinput-windows-x86_64.jar
C:\javacv-1.0-bin\javacv-bin\videoinput-windows-x86.jar
C:\javacv-1.0-bin\javacv-bin\opencv.jar
C:\javacv-1.0-bin\javacv-bin\opencv-windows-x86_64.jar
C:\javacv-1.0-bin\javacv-bin\opencv-windows-x86.jar
C:\javacv-1.0-bin\javacv-bin\libfreenect.jar
C:\javacv-1.0-bin\javacv-bin\libfreenect-windows-x86_64.jar
C:\javacv-1.0-bin\javacv-bin\libfreenect-windows-x86.jar
C:\javacv-1.0-bin\javacv-bin\libdc1394.jar
C:\javacv-1.0-bin\javacv-bin\junit.jar
C:\javacv-1.0-bin\javacv-bin\javacv.jar
C:\javacv-1.0-bin\javacv-bin\javacpp.jar
C:\javacv-1.0-bin\javacv-bin\hamcrest-core.jar
C:\javacv-1.0-bin\javacv-bin\flycapture.jar
C:\javacv-1.0-bin\javacv-bin\flycapture-windows-x86_64.jar
C:\javacv-1.0-bin\javacv-bin\flycapture-windows-x86.jar
C:\javacv-1.0-bin\javacv-bin\flandmark.jar
C:\javacv-1.0-bin\javacv-bin\flandmark-windows-x86_64.jar
C:\javacv-1.0-bin\javacv-bin\flandmark-windows-x86.jar
C:\javacv-1.0-bin\javacv-bin\ffmpeg.jar
C:\javacv-1.0-bin\javacv-bin\ffmpeg-windows-x86_64.jar
C:\javacv-1.0-bin\javacv-bin\ffmpeg-windows-x86.jar
C:\javacv-1.0-bin\javacv-bin\artoolkitplus.jar
C:\javacv-1.0-bin\javacv-bin\artoolkitplus-windows-x86_64.jar
C:\javacv-1.0-bin\javacv-bin\artoolkitplus-windows-x86.jar

Problem 1:
The first program to capture webcam video (display and save to an output.avi file) is given below. It displays the webcam and creates output.avi, but after terminating the program, when I open output.avi in Media Player it doesn't display anything.
It doesn't work:
import java.io.File;
import java.net.URL;
import org.bytedeco.javacv.*;
import org.bytedeco.javacpp.*;
import org.bytedeco.javacpp.indexer.*;
import static org.bytedeco.javacpp.opencv_core.*;
import static org.bytedeco.javacpp.opencv_imgproc.*;
import static org.bytedeco.javacpp.opencv_calib3d.*;
import static org.bytedeco.javacpp.opencv_objdetect.*;
public class JCVdemo3 {
public static void main(String[] args) throws Exception {
// Preload the opencv_objdetect module to work around a known bug.
Loader.load(opencv_objdetect.class);
// The available FrameGrabber classes include OpenCVFrameGrabber (opencv_videoio),
// DC1394FrameGrabber, FlyCaptureFrameGrabber, OpenKinectFrameGrabber,
// PS3EyeFrameGrabber, VideoInputFrameGrabber, and FFmpegFrameGrabber.
FrameGrabber grabber = FrameGrabber.createDefault(0);
grabber.start();
// CanvasFrame, FrameGrabber, and FrameRecorder use Frame objects to communicate image data.
// We need a FrameConverter to interface with other APIs (Android, Java 2D, or OpenCV).
OpenCVFrameConverter.ToIplImage converter = new OpenCVFrameConverter.ToIplImage();
IplImage grabbedImage = converter.convert(grabber.grab());
int width = grabbedImage.width();
int height = grabbedImage.height();
FrameRecorder recorder = FrameRecorder.createDefault("output.avi", width, height);
recorder.start();
CanvasFrame frame = new CanvasFrame("Some Title");
while (frame.isVisible() && (grabbedImage = converter.convert(grabber.grab())) != null) {
// cvWarpPerspective(grabbedImage, rotatedImage, randomR);
Frame rotatedFrame = converter.convert(grabbedImage);
//opencv_core.IplImage grabbedImage = grabber.grab();
frame.showImage(rotatedFrame);
recorder.record(rotatedFrame);
}
frame.dispose();
recorder.stop();
grabber.stop();
}
}

Problem 2: When I run the following code
opencv_core.IplImage grabbedImage = grabber.grab();
the message "incompatible types: Frame cannot be converted to IplImage" appears
import java.io.File;
import java.net.URL;
import java.util.logging.Level;
import java.util.logging.Logger;
import org.bytedeco.javacv.*;
import org.bytedeco.javacpp.*;
import org.bytedeco.javacpp.indexer.*;
import static org.bytedeco.javacpp.opencv_core.*;
import static org.bytedeco.javacpp.opencv_imgproc.*;
import static org.bytedeco.javacpp.opencv_calib3d.*;
import static org.bytedeco.javacpp.opencv_objdetect.*;
public class Demo {
public static void main(String[] args) {
try {
OpenCVFrameGrabber grabber = new OpenCVFrameGrabber(0);
grabber.start();
opencv_core.IplImage grabbedImage = grabber.grab();
CanvasFrame canvasFrame = new CanvasFrame("Video with JavaCV");
canvasFrame.setCanvasSize(grabbedImage.width(), grabbedImage.height());
grabber.setFrameRate(grabber.getFrameRate());
FFmpegFrameRecorder recorder = new FFmpegFrameRecorder("mytestvideo.mp4", grabber.getImageWidth(), grabber.getImageHeight());
recorder.setFormat("mp4");
recorder.setFrameRate(30);
recorder.setVideoBitrate(10 * 1024 * 1024);
recorder.start();
while (canvasFrame.isVisible() && (grabbedImage = grabber.grab()) != null) {
canvasFrame.showImage(grabbedImage);
recorder.record(grabbedImage);
}
recorder.stop();
grabber.stop();
canvasFrame.dispose();
} catch (FrameGrabber.Exception ex) {
Logger.getLogger(JCVdemo.class.getName()).log(Level.SEVERE, null, ex);
} catch (FrameRecorder.Exception ex) {
Logger.getLogger(JCVdemo.class.getName()).log(Level.SEVERE, null, ex);
}
}
}

Question is: what am I doing wrong?
I am not able to record any sort of video, no matter what version of JavaCV/OpenCV I use.
Please point me to a working example that records video from a webcam, and tell me which JavaCV/OpenCV/FFmpeg versions are compatible and known to work.