
Other articles (17)
-
Publishing on MediaSPIP
13 June 2013: Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
-
APPENDIX: Plugins used specifically for the farm
5 March 2010: The central/master site of the farm needs several additional plugins, beyond those of the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualisation instance when users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
-
MediaSPIP Player: potential problems
22 February 2011: The player does not work on Internet Explorer
On Internet Explorer (at least versions 7 and 8), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the problem may come from the configuration of Apache's mod_deflate module.
If the configuration of that Apache module contains a line similar to the following, try removing it or commenting it out to see whether the player then works correctly. (...)
On other sites (4003)
-
Node 18 or Node 20 break ffmpeg (in google cloud functions -> ffprobe was killed with signal SIGSEGV)
10 January 2024, by user20206929
Please see below; the code works on Node.js 16, but not after upgrading to Node 18 or 20.


const ffmpeg = require("fluent-ffmpeg");

// Following is inside a .https.onRequest Google Cloud function with enough memory

try {
  const duration = new Promise((resolve, reject) => {
    ffmpeg.ffprobe(videoUrl, (err, metadata) => {
      if (err) {
        if (res.headersSent) {
          console.error("Response already sent");
        } else {
          console.log("Metadata:", metadata);
          console.log("err: " + err);
          res.status(400).send("Error getting video metadata");
        }
        // Reject so the outer await does not hang when ffprobe fails
        reject(err);
        return;
      }
      const duration = metadata.format.duration;
      console.log("video duration in seconds: " + duration);
      resolve(duration);
    });
  });
  videoDuration = await duration;
} catch (err) {
  console.log(err);
  throw err;
}



When upgrading to Node 18/20 (with no change other than upgrading Node), the error "ffprobe not found" appears.


But setting the path manually using ffmpeg.setFfprobePath(ffprobePath);
triggers the error: Error: ffprobe was killed with signal SIGSEGV


So it seems it is a permissions issue.


However, I tried a lot of different solutions, and none of them made this work.
For instance, I tried downloading ffprobe manually from the official website https://ffbinaries.com/downloads and then adding it to the code by hand.


I tried using https://www.npmjs.com/package/@ffprobe-installer/ffprobe and other packages like https://www.npmjs.com/package/ffprobe-static
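
For reference, the kind of workaround I was attempting with those packages looks roughly like this (a minimal sketch assuming the ffprobe-static package, which bundles a prebuilt ffprobe binary and exposes its path; in my case this still ended with the same error):

const ffmpeg = require("fluent-ffmpeg");
// ffprobe-static ships a prebuilt ffprobe binary and exposes its filesystem path
const ffprobeStatic = require("ffprobe-static");

// Point fluent-ffmpeg at the bundled binary instead of relying on one
// preinstalled in the Cloud Functions runtime image.
ffmpeg.setFfprobePath(ffprobeStatic.path);

function getDuration(videoUrl) {
  return new Promise((resolve, reject) => {
    ffmpeg.ffprobe(videoUrl, (err, metadata) => {
      if (err) return reject(err);
      resolve(metadata.format.duration);
    });
  });
}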


I also tried downloading the ffprobe file to the Google Cloud temporary folder and changing the permissions of that folder.


All of those produced the same error.


None of what I could think of made any difference.


Please help, because I need to upgrade from Node 16 to 18 or 20 before Google removes Node 16 on January 31, 2024, and for now I don't see a solution.


I also looked for other solutions to get this duration from a video file URL, but using ffmpeg seems to be the only one that should work out of the box, since it works on Node 16.


Thank you,


UPDATE - 11/26/2023


The GCP Functions Node.js 16 runtime uses Ubuntu 18.04 with FFmpeg installed.
Node.js 18/20 use Ubuntu 22.04, and Google decided not to include FFmpeg.


https://cloud.google.com/functions/docs/runtime-support#node.js
https://cloud.google.com/functions/docs/reference/system-packages


No workaround or solution found as of now.


UPDATE - 01/10/2024


Google added ffmpeg back to the latest runtime version; this now works as before.


-
FFMPEG Dash with tiles of thumbnail images
31 July 2020, by martyn Gilbert
As of DASH-IF IOP version 4.2, section 6.2.6 defines the notion of image-based tracks in DASH:
https://dashif.org/docs/DASH-IF-IOP-v4.3.pdf.



This is the ability to have an adaptation set made up of image MIME types that are themselves a strip of low-resolution thumbnails.
A player will use these thumbnails when the user hovers the mouse over the video timeline, to show a
preview of the frame at that approximate timecode.



The THEOplayer website has a page dedicated to this feature for playback:
https://www.theoplayer.com/blog/in-stream-thumbnail-support-dvr-dash-streams



I need to generate a DASH stream (not live) using ffmpeg that also contains these thumbnails.
I already have an ffmpeg command that generates the filmstrip of JPEGs; it outputs a thumbnail every 5 seconds of input video and joins 5 of these into a single JPEG:



ffmpeg -i INPUT -q:v 20 -vf "select=not(mod(n\,125)),scale=480:270,tile=5x1" -vsync vfr output%d.jpg



and the MPEG-DASH output itself:



ffmpeg -i INPUT -y -map 0 -acodec aac -ac 2 -ar 48000 -s 960x540 -vcodec libx264 -f dash -preset veryfast -b:v:2 1500k -seg_duration 2 output.mpd



But I cannot find a way in ffmpeg to include the thumbnails in the DASH MPD file.


-
How to upload object to a bucket in Google Cloud Platform from Python script
7 July 2016, by Bryan
The goal of this script is to extract audio from a video file using ffmpeg and upload it into a bucket on Google Cloud Platform each time it is called. Eventually I will have to extract audio from a large list of videos, so ideally I want my script to extract each file and then upload it into the cloud.
My confusion is how to use the GCP API to upload my object into a bucket. Any advice would be greatly appreciated! (See also the sketch after the sample code below.)
Link for reference: https://cloud.google.com/storage/docs/json_api/v1/json-api-python-samples#setup-code
import subprocess
import sys
import re

from googleapiclient import http  # needed for http.MediaIoBaseUpload below

fullVideo = sys.argv[1]
title = re.findall('^([^.]*).*', fullVideo)
title = str(title[0])
subprocess.call('ffmpeg -i ' + fullVideo + ' -vn -ab 128k ' + title + '.flac', shell=True)


def upload_object(bucket, filename, readers, owners):
    # create_service() is defined in the Google sample linked above
    service = create_service()
    # This is the request body as specified:
    # http://g.co/cloud/storage/docs/json_api/v1/objects/insert#request
    body = {
        'name': filename,
    }
    # If specified, create the access control objects and add them to the
    # request body
    if readers or owners:
        body['acl'] = []
    for r in readers:
        body['acl'].append({
            'entity': 'user-%s' % r,
            'role': 'READER',
            'email': r
        })
    for o in owners:
        body['acl'].append({
            'entity': 'user-%s' % o,
            'role': 'OWNER',
            'email': o
        })
    # Now insert them into the specified bucket as a media insertion.
    # http://g.co/dev/resources/api-libraries/documentation/storage/v1/python/latest/storage_v1.objects.html#insert
    with open(filename, 'rb') as f:
        req = service.objects().insert(
            bucket=bucket, body=body,
            # You can also just set media_body=filename, but for the sake of
            # demonstration, pass in the more generic file handle, which could
            # very well be a StringIO or similar.
            media_body=http.MediaIoBaseUpload(f, 'application/octet-stream'))
        resp = req.execute()
    return resp
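
For comparison, here is roughly what the same upload would look like using the google-cloud-storage client library instead of the raw JSON API (a minimal sketch; the bucket and object names are placeholders, and the library must be installed separately with pip install google-cloud-storage):

from google.cloud import storage


def upload_flac(bucket_name, local_path, object_name):
    # Uses application default credentials, e.g. on a configured GCP machine
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(object_name)
    blob.upload_from_filename(local_path)
    return blob


# Example: upload the audio extracted above to a placeholder bucket
# upload_flac('my-audio-bucket', title + '.flac', title + '.flac')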