
-
What permissions does ffmpeg-static need in AWS Lambda?
17 February 2023, by János

I have this code. It downloads an image, makes a video from it, and uploads it to S3. It runs on Lambda. I added the packages, installed, zipped, and uploaded:


npm install --production
zip -r my-lambda-function.zip ./
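
One thing worth checking at this stage (my assumption about the cause, not something from the logs below): whether the execute bit on the bundled binary survived zipping. zipinfo prints the permission bits stored in the archive:

$ zipinfo my-lambda-function.zip | grep ffmpeg-static

The entry for node_modules/ffmpeg-static/ffmpeg should show -rwxr-xr-x; if the x bits are missing, Lambda cannot execute the binary.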



But I get an error code 126:


2023-02-17T09:27:55.236Z 5c845bb6-02c1-41b0-8759-4459591b57b0 INFO Error: ffmpeg exited with code 126
 at ChildProcess.<anonymous> (/var/task/node_modules/fluent-ffmpeg/lib/processor.js:182:22)
 at ChildProcess.emit (node:events:513:28)
 at ChildProcess._handle.onexit (node:internal/child_process:291:12)


Do I need to set a specific permission for ffmpeg?


import { PutObjectCommand, S3Client } from '@aws-sdk/client-s3'
import { fromNodeProviderChain } from '@aws-sdk/credential-providers'
import axios from 'axios'
import pathToFfmpeg from 'ffmpeg-static'
import ffmpeg from 'fluent-ffmpeg'
import fs from 'fs'

// Point fluent-ffmpeg at the binary bundled by ffmpeg-static
ffmpeg.setFfmpegPath(pathToFfmpeg)

const credentials = fromNodeProviderChain({
  clientConfig: {
    region: 'eu-central-1',
  },
})
const client = new S3Client({ credentials })

export const handler = async (event, context) => {
  try {
    let body
    let statusCode = 200
    const query = event?.queryStringParameters
    if (!query?.imgId && !query?.video1Id && !query?.video2Id) {
      return
    }

    const imgId = query?.imgId
    const video1Id = query?.video1Id
    const video2Id = query?.video2Id
    console.log(
      `Parameters received, imgId: ${imgId}, video1Id: ${video1Id}, video2Id: ${video2Id}`
    )
    const imgURL = getFileURL(imgId)
    const video1URL = getFileURL(`${video1Id}.mp4`)
    const video2URL = getFileURL(`${video2Id}.mp4`)
    // /tmp is the only writable path in a Lambda container
    const imagePath = `/tmp/${imgId}`
    const video1Path = `/tmp/${video1Id}.mp4`
    const video2Path = `/tmp/${video2Id}.mp4`
    const outputPath = `/tmp/${imgId}.mp4`
    await Promise.all([
      downloadFile(imgURL, imagePath),
      downloadFile(video1URL, video1Path),
      downloadFile(video2URL, video2Path),
    ])
    // Turn the still image into a 1080x1080 H.264 video
    await new Promise((resolve, reject) => {
      console.log('Input files downloaded')
      ffmpeg()
        .input(imagePath)
        .inputFormat('image2')
        .inputFPS(30)
        .loop(1)
        .size('1080x1080')
        .videoCodec('libx264')
        .format('mp4')
        .outputOptions([
          '-tune animation',
          '-pix_fmt yuv420p',
          '-profile:v baseline',
          '-level 3.0',
          '-preset medium',
          '-crf 23',
          '-movflags +faststart',
          '-y',
        ])
        .output(outputPath)
        .on('end', () => {
          console.log('Output file generated')
          resolve()
        })
        .on('error', (e) => {
          console.log(e)
          reject(e)
        })
        .run()
    })
    await uploadFile(outputPath, imgId + '.mp4')
      .then((url) => {
        body = JSON.stringify({
          url,
        })
      })
      .catch((error) => {
        console.error(error)
        statusCode = 400
        body = error?.message ?? error
      })
    console.log(`File uploaded to S3`)
    const headers = {
      'Content-Type': 'application/json',
      'Access-Control-Allow-Headers': 'Content-Type',
      'Access-Control-Allow-Origin': 'https://tikex.com, https://borespiac.hu',
      'Access-Control-Allow-Methods': 'GET',
    }
    return {
      statusCode,
      body,
      headers,
    }
  } catch (error) {
    console.error(error)
    return {
      statusCode: 500,
      body: JSON.stringify('Error fetching data'),
    }
  }
}

const downloadFile = async (url, path) => {
  try {
    console.log(`Download will start: ${url}`)
    const response = await axios(url, {
      responseType: 'stream',
    })
    if (response.status !== 200) {
      throw new Error(
        `Failed to download file, status code: ${response.status}`
      )
    }
    response.data
      .pipe(fs.createWriteStream(path))
      .on('finish', () => console.log(`File downloaded to ${path}`))
      .on('error', (e) => {
        throw new Error(`Failed to save file: ${e}`)
      })
  } catch (e) {
    console.error(`Error downloading file: ${e}`)
  }
}

const uploadFile = async (path, id) => {
  const buffer = fs.readFileSync(path)
  const params = {
    Bucket: 't44-post-cover',
    ACL: 'public-read',
    Key: id,
    ContentType: 'video/mp4',
    Body: buffer,
  }
  await client.send(new PutObjectCommand(params))
  return getFileURL(id)
}

const getFileURL = (id) => {
  const bucket = 't44-post-cover'
  const url = `https://${bucket}.s3.eu-central-1.amazonaws.com/${id}`
  return url
}



I added the AWSLambdaBasicExecutionRole-16e770c8-05fa-4c42-9819-12c468cb5b49 permission, with this policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:CreateLogGroup",
      "Resource": "arn:aws:logs:eu-central-1:634617701827:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:eu-central-1:634617701827:log-group:/aws/lambda/promo-video-composer-2:*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:*:*:*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeNetworkInterfaces"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "sns:*"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:*"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}



What am I missing?


janoskukoda@Janoss-MacBook-Pro promo-video-composer-2 % ls -l $(which ffmpeg)
lrwxr-xr-x 1 janoskukoda admin 35 Feb 10 12:50 /opt/homebrew/bin/ffmpeg -> ../Cellar/ffmpeg/5.1.2_4/bin/ffmpeg
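
Note that this ls -l inspects the Homebrew ffmpeg on the Mac, which is not the binary Lambda runs; the invocation uses node_modules/ffmpeg-static/ffmpeg from the zip. Exit code 126 is the shell's "found but cannot execute" status, so this looks like a file-mode or binary-format problem rather than an IAM one: nothing in the policy above governs whether the Lambda filesystem may execute a file. Two plausible causes, both assumptions on my part rather than anything shown in the logs: the execute bit on node_modules/ffmpeg-static/ffmpeg was lost when the function was zipped, or npm install was run on macOS, so ffmpeg-static bundled a darwin binary that Amazon Linux cannot run (the package downloads a binary for the installing machine's platform at install time). A minimal sketch of a workaround for the first cause, to run before any ffmpeg call (the /tmp/ffmpeg path and variable name are illustrative):

import fs from 'fs'
import pathToFfmpeg from 'ffmpeg-static'
import ffmpeg from 'fluent-ffmpeg'

// /tmp is writable in Lambda: copy the bundled binary there and restore the execute bit
const executableFfmpeg = '/tmp/ffmpeg'
if (!fs.existsSync(executableFfmpeg)) {
  fs.copyFileSync(pathToFfmpeg, executableFfmpeg)
  fs.chmodSync(executableFfmpeg, 0o755)
}
ffmpeg.setFfmpegPath(executableFfmpeg)

For the second cause, reinstalling the dependencies on Linux (for example in a Docker container matching the Lambda runtime, or via a prebuilt ffmpeg Lambda layer) would be the fix.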



-
ffmpeg: crop video into two grayscale sub-videos; guarantee monotonic frames; and get timestamps
13 March 2021, by lurix66

The need


Hello, I need to extract two regions of a .h264 video file via the crop filter into two files. The output videos need to be monochrome, with extension .mp4. The encoding (or format?) should guarantee that video frames are organized monotonically. Finally, I need to get the timestamps for both files (which, I'd bet, are the same timestamps that I would get from the input file; see below).

In the end I will be happy to do everything in one command via an elegant one-liner (via a complex filter, I guess), but I am starting in multiple steps to break it down into simpler problems.


Along the way I run into many difficulties, and despite having searched in many places I don't seem to find solutions that work. Unfortunately I'm no expert in ffmpeg or video conversion, so the more I search, the more details I discover and the fewer problems I solve.


Below you find some of my attempts to work with the following options:

- -filter:v "crop=400:ih:260:0,format=gray" to do the crop and the monochrome conversion
- -vf showinfo, possibly combined with -vsync 0 or -copyts, to get the timestamps via stderr redirection (&> filename)
- -c:v mjpeg to force monotony of frames (are there other ways?)


1. cropping each region and obtaining monochrome videos


$ ffmpeg -y -hide_banner -i inVideo.h264 -filter:v "crop=400:ih:260:0,format=gray" outL.mp4
$ ffmpeg -y -hide_banner -i inVideo.h264 -filter:v "crop=400:ih:1280:0,format=gray" outR.mp4



The issue here is that in the output files the frames are not organized monotonically (I don't understand why; how would that make sense in any video format? I can't say whether it comes from the input file).


EDIT. Maybe it is not frames but packets, as returned by av's demux() method, that are not monotonic (see "instructions how to reproduce this" below).

I was advised to run ffmpeg -i outL.mp4 outL.mjpeg afterwards, but this produces two videos that look very pixelated (at least when playing them with ffplay), despite being, surprisingly, 4x bigger than the input. Needless to say, I need both monotonic frames and lossless conversion.

EDIT. I acknowledge the advice to specify -q:v 1; this fixes the pixelation effect but produces a file even bigger, 12x in size. Is it necessary? (see "instructions how to reproduce this" below)
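
As an aside (my suggestion, not part of the advice quoted above): mjpeg at -q:v 1 is still lossy, just lightly so. If genuinely lossless, all-intra output is the goal, H.264 in lossless mode is one candidate worth trying:

$ ffmpeg -i inVideo.h264 -filter:v "crop=400:ih:260:0,format=gray" -c:v libx264 -qp 0 outL.mp4

-qp 0 selects lossless H.264; whether players handle a gray-format lossless stream well would need checking.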

2. getting the timestamps


I found this piece of advice, but I don't want to generate hundreds of image files, so I tried the following:


$ ffmpeg -y -hide_banner -i outL.mp4 -vf showinfo -vsync 0 &>tsL.txt
$ ffmpeg -y -hide_banner -i outR.mp4 -vf showinfo -vsync 0 &>tsR.txt



The issue here is that I don't get any output because ffmpeg claims it needs an output file.
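
A standard ffmpeg idiom would sidestep this (offered here as a suggestion, not something from the original post): the null muxer consumes the output without writing a file, so showinfo still runs:

$ ffmpeg -y -hide_banner -i outL.mp4 -vf showinfo -vsync 0 -f null - 2> tsL.txt

The pts and pts_time values can then be grepped out of the showinfo lines in tsL.txt.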


The need to produce an output file, and the doubt that the timestamps could have been lost in the previous conversions, leads me back to a first attempt at a one-liner, where I am also testing the -copyts option, and forcing the encoding with the -c:v mjpeg option as per the advice mentioned above (I don't know if it's in the right position, though):

ffmpeg -y -hide_banner -i testTex2.h264 -copyts -filter:v "crop=400:ih:1280:0,format=gray" -vf showinfo -c:v mjpeg eyeL.mp4 &>tsL.txt



This does not work, because surprisingly the output .mp4 I get is the same as the input. If instead I put the -vf showinfo option just before the stderr redirection, I get no redirected output:

ffmpeg -y -hide_banner -i testTex2.h264 -copyts -filter:v "crop=400:ih:260:0,format=gray" -c:v mjpeg outR.mp4 -vf showinfo dummy.mp4 &>tsR.txt



In this case I get the desired timestamps output (too much of it: I will need some way to grab only the pts and pts_time data out of it), but I have to produce a big dummy file. The worst thing, anyway, is that the mjpeg encoding again produces a low-resolution, very pixelated video.


I admit that the logic of how to place the options and the output files on the command line is obscure to me. The possible combinations are many, and the more options I try, the more complicated it gets, and I am not getting much closer to the solution.


3. [EDIT] instructions on how to reproduce this


- get a .h264 video
- turn it into .mp4 with the ffmpeg command $ ffmpeg -i inVideo.h264 out.mp4
- run the following python cell in a jupyter-notebook
- see that the packet timestamps have diffs greater and less than zero


%matplotlib inline
import av
import numpy as np
import matplotlib.pyplot as mpl

fname, ext = "outL.direct", "mp4"

# packet timestamps, straight from the demuxer
cont = av.open(f"{fname}.{ext}")
pk_pts = np.array([p.pts for p in cont.demux(video=0) if p.pts is not None])

# frame timestamps, after decoding (reopen: the first pass consumed the container)
cont = av.open(f"{fname}.{ext}")
fm_pts = np.array([f.pts for f in cont.decode(video=0) if f.pts is not None])

print(pk_pts.shape, fm_pts.shape)

# negative diffs reveal non-monotonic timestamps
mpl.subplot(211)
mpl.plot(np.diff(pk_pts))

mpl.subplot(212)
mpl.plot(np.diff(fm_pts))
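
As a quick numeric complement to the plots (my addition, not part of the original cell), the non-increasing steps can be counted directly; zero means monotonic:

print(int((np.diff(pk_pts) <= 0).sum()), int((np.diff(fm_pts) <= 0).sum()))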



- finally, also create the mjpeg-encoded files in various ways, and check packet monotony with the same script (see also the file sizes):



$ ffmpeg -i inVideo.h264 out.mjpeg
$ ffmpeg -i inVideo.h264 -c:v mjpeg out.c_mjpeg.mp4
$ ffmpeg -i inVideo.h264 -c:v mjpeg -q:v 1 out.c_mjpeg_q1.mp4



Finally, the question


What is a working way / the right way to do it?


Any hints, even about single steps and how to combine them correctly, will be appreciated. Also, I am not limited to the command line, and I would be able to try a more programmatic solution in python (jupyter notebook) instead, if someone points me in that direction.
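
For what it's worth, here is an untested sketch of the one-liner asked for at the top, under the assumption that lossless all-intra H.264 (libx264 with -qp 0, one candidate among others) satisfies the monotonicity requirement; the crop geometries and filenames are taken from the post:

$ ffmpeg -y -hide_banner -i inVideo.h264 -copyts -filter_complex "[0:v]split[l][r];[l]crop=400:ih:260:0,format=gray,showinfo[outl];[r]crop=400:ih:1280:0,format=gray,showinfo[outr]" -map "[outl]" -c:v libx264 -qp 0 outL.mp4 -map "[outr]" -c:v libx264 -qp 0 outR.mp4 2> ts.txt

split duplicates the decoded stream so each copy can be cropped independently, each showinfo instance logs pts/pts_time to stderr (the two instances are distinguishable by their labels in the log), and the two -map options route the filter outputs to separate files.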


-
How do you use Node.js to stream an MP4 file with ffmpeg?
2 November 2016, by LaserJesus

I've been trying to solve this problem for several days now and would really appreciate any help on the subject.
I'm able to successfully stream an mp4 audio file stored on a Node.js server using fluent-ffmpeg, by passing the location of the file as a string and transcoding it to mp3. If I create a file stream from the same file and pass that to fluent-ffmpeg instead, it works for an mp3 input file but not an mp4 file. In the case of the mp4 file no error is thrown, and it claims the stream completed successfully, but nothing plays in the browser. I'm guessing this has to do with the metadata being stored at the end of an mp4 file, but I don't know how to code around it. This is the exact same file that works correctly when its location is passed to ffmpeg rather than the stream. When I try to pass a stream to the mp4 file on s3, again no error is thrown, but nothing streams to the browser. This isn't surprising, as ffmpeg won't work with the local file as a stream either, so expecting it to handle the stream from s3 is wishful thinking.
How can I stream the mp4 file from s3 without storing it locally as a file first? And how do I get ffmpeg to do this without transcoding the file? The following is the code I have at the moment, which isn't working. Note that it attempts to pass the s3 file as a stream to ffmpeg, and it's also transcoding it into an mp3, which I'd prefer not to do.
.get(function(req, res) {
  aws.s3(s3Bucket).getFile(s3Path, function (err, result) {
    if (err) {
      return next(err);
    }
    var proc = new ffmpeg(result)
      .withAudioCodec('libmp3lame')
      .format('mp3')
      .on('error', function (err, stdout, stderr) {
        console.log('an error happened: ' + err.message);
        console.log('ffmpeg stdout: ' + stdout);
        console.log('ffmpeg stderr: ' + stderr);
      })
      .on('end', function () {
        console.log('Processing finished !');
      })
      .on('progress', function (progress) {
        console.log('Processing: ' + progress.percent + '% done');
      })
      .pipe(res, {end: true});
  });
});

This is using the knox library when it calls aws.s3... I've also tried writing it using the standard AWS SDK for Node.js, as shown below, but I get the same outcome as above:
var AWS = require('aws-sdk');
var s3 = new AWS.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_KEY,
  region: process.env.AWS_REGION_ID
});
var fileStream = s3.getObject({
  Bucket: s3Bucket,
  Key: s3Key
}).createReadStream();
var proc = new ffmpeg(fileStream)
  .withAudioCodec('libmp3lame')
  .format('mp3')
  .on('error', function (err, stdout, stderr) {
    console.log('an error happened: ' + err.message);
    console.log('ffmpeg stdout: ' + stdout);
    console.log('ffmpeg stderr: ' + stderr);
  })
  .on('end', function () {
    console.log('Processing finished !');
  })
  .on('progress', function (progress) {
    console.log('Processing: ' + progress.percent + '% done');
  })
  .pipe(res, {end: true});

=====================================
Updated
I placed an mp3 file in the same s3 bucket, and the code I have here worked: it streamed the file through to the browser without storing a local copy. So the streaming issues I face have something to do with the mp4/aac container/encoder format.

I'm still interested in a way to bring the m4a file down from s3 to the Node.js server in its entirety, then pass it to ffmpeg for streaming without actually storing the file in the local file system.
=====================================
Updated Again
I've managed to get the server to stream the file, as mp4, straight to the browser. This half answers my original question. My only issue now is that I have to download the file to a local store first, before I can stream it. I'd still like to find a way to stream from s3 without needing the temporary file.
aws.s3(s3Bucket).getFile(s3Path, function(err, result) {
  result.pipe(fs.createWriteStream(file_location));
  result.on('end', function() {
    console.log('File Downloaded!');
    var proc = new ffmpeg(file_location)
      .outputOptions(['-movflags isml+frag_keyframe'])
      .toFormat('mp4')
      .withAudioCodec('copy')
      .seekInput(offset)
      .on('error', function(err, stdout, stderr) {
        console.log('an error happened: ' + err.message);
        console.log('ffmpeg stdout: ' + stdout);
        console.log('ffmpeg stderr: ' + stderr);
      })
      .on('end', function() {
        console.log('Processing finished !');
      })
      .on('progress', function(progress) {
        console.log('Processing: ' + progress.percent + '% done');
      })
      .pipe(res, {end: true});
  });
});

On the receiving side I just have the following javascript in an empty html page:
window.AudioContext = window.AudioContext || window.webkitAudioContext;
context = new AudioContext();

function process(Data) {
  source = context.createBufferSource(); // Create Sound Source
  context.decodeAudioData(Data, function(buffer) {
    source.buffer = buffer;
    source.connect(context.destination);
    source.start(context.currentTime);
  });
};

function loadSound() {
  var request = new XMLHttpRequest();
  request.open("GET", "/stream/", true);
  request.responseType = "arraybuffer";
  request.onload = function() {
    var Data = request.response;
    process(Data);
  };
  request.send();
};

loadSound()

=====================================
The Answer
The code above under the title 'updated again' will stream an mp4 file, from s3, via a Node.js server to a browser without using flash. It does require that the file be stored temporarily on the Node.js server, so that the metadata in the file can be moved from the end of the file to the front.

In order to stream without storing the temporary file, you need to actually modify the file on S3 first and make this metadata change. If you have changed the file in this way on S3, then you can modify the code under the title 'updated again' so that the result from S3 is piped straight into the ffmpeg constructor, rather than into a file stream on the Node.js server with that file location then provided to ffmpeg, as the code does now. You can change the final 'pipe' command to 'save(location)' to get a version of the mp4 file locally with the metadata moved to the front. You can then upload that new version of the file to S3 and try out the end-to-end streaming.

Personally, I'm now going to create a task that modifies the files in this way as they are uploaded to s3 in the first place. This allows me to record and stream in mp4 without transcoding or storing a temp file on the Node.js server.
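
For reference, the metadata change described above (relocating the moov atom to the front of the mp4) can be done without transcoding; a minimal sketch using the same fluent-ffmpeg library (the filenames here are illustrative):

var ffmpeg = require('fluent-ffmpeg');

// Rewrite the container only: stream-copy both codecs, and +faststart moves
// the moov atom to the front so the file can be played as it arrives.
ffmpeg('original.mp4')
  .videoCodec('copy')
  .audioCodec('copy')
  .outputOptions('-movflags +faststart')
  .save('faststart.mp4');

The plain ffmpeg equivalent would be ffmpeg -i original.mp4 -c copy -movflags +faststart faststart.mp4.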