
Media (2)
-
Granite de l’Aber Ildut
9 September 2011
Updated: September 2011
Language: French
Type: Text
-
Géodiversité
9 September 2011
Updated: August 2018
Language: French
Type: Text
Other articles (21)
-
MediaSPIP Core: Configuration
9 November 2010
By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin to work): a page for the general configuration of the skeleton; a page for the configuration of the site's home page; a page for the configuration of the sections;
It also provides an additional page, which only appears when certain plugins are activated, for controlling their display and specific features (...) -
Use, discuss, criticize
13 April 2011
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users. -
Emballe médias: what is it for?
4 February 2011
This plugin is designed to manage sites that publish documents of all types.
It creates "media": a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a so-called "media" article;
On other sites (2866)
-
What's the best FFMPEG method for frequent, automated compilation of timelapse videos?
5 August 2020, by GoOutside
I have a web application running on a not-particularly-beefy Ubuntu Amazon Lightsail instance that uses FFMPEG to build a timelapse video from downloaded .jpg webcam photos taken every 2 minutes throughout the day (720 images in total each day, a set that grows throughout the day as new images are downloaded).


The code I'm running every 20 minutes is this:


ffmpeg -y -r 24 -pattern_type glob -i 'picturefolder/*.jpg' -s 1024x576 -vcodec libx264 picturefolder/timelapse.mp4


This mostly works, but it is often quite slow, taking 30-60 seconds to run and getting slower as the day goes on, of course.


Recently, I tried to use concat instead of globbing the entire folder over and over. I did not see a noticeable performance improvement, as it appears concat processes the entire video in order to add even just a few frames to the end of it.
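
For illustration, here is roughly what an append-only workflow based on the concat demuxer could look like; the folder, segment and list-file names are just placeholders, not my actual setup. The idea is to encode only the newest images into a short segment, then stitch it onto the existing video with stream copy so earlier frames are not re-encoded:

# Encode only the images that arrived since the last run (placeholder folder name)
ffmpeg -y -r 24 -pattern_type glob -i 'newpictures/*.jpg' -s 1024x576 -vcodec libx264 newsegment.mp4

# List the existing video and the new segment, then concatenate them without re-encoding
printf "file 'timelapse.mp4'\nfile 'newsegment.mp4'\n" > list.txt
ffmpeg -y -f concat -safe 0 -i list.txt -c copy timelapse_updated.mp4

Since both files would be encoded with the same codec, frame rate and resolution, the stream-copy concatenation should be cheap; whether tracking "only the new images" is practical depends on how the downloads are organized.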

My question for any FFMPEG experts out there: what is the most efficient way to handle this kind of automated timelapse creation, given my setup? Is there a flag I'm missing? Perhaps a different, more efficient method? Or maybe a way to have the FFMPEG process just crawl through this at a more 'slow and steady' pace instead of big bursts of CPU usage.


Or am I stuck with this and should just deal with it? My ultimate goal would be to continue using my current tier (2 GB RAM, 1 vCPU) without the expense of upgrading. Thank you very kindly for your help!


-
InvalidArgumentException: Missing required client configuration options (Laravel + FFmpeg + Ubuntu 20 + LAMP)
9 August 2020, by Ahmed Al-Rayan
I have an upload site on a Linux VPS (Ubuntu, the latest version), built with Laravel and the laravel-ffmpeg package to handle uploads.


The problem is this: when I upload a video, the upload completes and the file is supposed to move on to processing, but at that point it stops and a 500 error appears. The error, attached below, concerns the "region" setting, which does not match my configuration. Thank you for your help.


InvalidArgumentException: Missing required client configuration options: region: (string) 

A "region" configuration value is required for the "s3" service (e.g., "us-west-2"). A list of available public regions and endpoints can be found at http://docs.aws.amazon.com/general/latest/gr/rande.html.
in /var/www/html/vendor/aws/aws-sdk-php/src/ClientResolver.php:399 


Stack trace:
#0 /var/www/html/vendor/aws/aws-sdk-php/src/ClientResolver.php(295): Aws\ClientResolver->throwRequired()
#1 /var/www/html/vendor/aws/aws-sdk-php/src/AwsClient.php(195): Aws\ClientResolver->resolve()
#2 /var/www/html/vendor/aws/aws-sdk-php/src/S3/S3Client.php(327): Aws\AwsClient->__construct()
#3 /var/www/html/vendor/laravel/framework/src/Illuminate/Filesystem/FilesystemManager.php(212): Aws\S3\S3Client->__construct()
#4 /var/www/html/vendor/laravel/framework/src/Illuminate/Filesystem/FilesystemManager.php(129): Illuminate\Filesystem\FilesystemManager->createS3Driver()
#5 /var/www/html/vendor/laravel/framework/src/Illuminate/Filesystem/FilesystemManager.php(101): Illuminate\Filesystem\FilesystemManager->resolve()
#6 /var/www/html/vendor/laravel/framework/src/Illuminate/Filesystem/FilesystemManager.php(78): Illuminate\Filesystem\FilesystemManager->get()
#7 /var/www/html/vendor/pbmedia/laravel-ffmpeg/src/FFMpeg.php(66): Illuminate\Filesystem\FilesystemManager->disk()
#8 /var/www/html/vendor/laravel/framework/src/Illuminate/Support/Facades/Facade.php(261): Pbmedia\LaravelFFMpeg\FFMpeg->fromDisk()
#9 /var/www/html/app/Jobs/StreamMovie.php(53): Illuminate\Support\Facades\Facade::__callStatic()
#10 [internal function]: App\Jobs\StreamMovie->handle()
#11 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(32): call_user_func_array()
#12 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/Util.php(36): Illuminate\Container\BoundMethod::Illuminate\Container\{closure}()
#13 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(90): Illuminate\Container\Util::unwrapIfClosure()
#14
InvalidArgumentException: Missing required client configuration options
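
For context, in a recent stock Laravel installation the s3 disk defined in config/filesystems.php reads its region from the AWS_DEFAULT_REGION environment variable, so this exception usually means that value (or the cached configuration) is missing. A minimal sketch of the relevant .env entries, with placeholder values:

# Placeholder values for the s3 disk; the region shown is only an example
AWS_ACCESS_KEY_ID=placeholder-key-id
AWS_SECRET_ACCESS_KEY=placeholder-secret
AWS_DEFAULT_REGION=us-west-2
AWS_BUCKET=placeholder-bucket

If the configuration has been cached, running php artisan config:clear (or rebuilding it with php artisan config:cache) is typically needed before the new value is picked up.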



-
Merging input Streams with nodejs/ffmpeg
14 September 2020, by jAndy
I'm creating a very basic and rudimentary video web chat. On the client side, I'm going to use a simple getUserMedia API call to capture the webcam data and send the video data as a data blob to my server.

From there, I'm planning to either use the fluent-ffmpeg library or just spawn ffmpeg myself and pipe that raw data to ffmpeg, which in turn does some magic and pushes it out as an HLS stream to an Amazon AWS service (for instance), which then actually gets displayed in a web browser for all the people participating in the video chat.

So far, I think all of this should be fairly easy to implement, but I keep turning over the question of how I can create a "combined" or "merged" frame and stream, so that the output HLS data sent from my server to the distributing cloud service is only one combined data stream.


If there are 3 people in that video chat, my server receives 3 data streams from those clients and combines these data streams (from the individual web-cam data sources) into one output stream.


How could that be accomplished?
Can I "create" a new frame with ffmpeg, so to speak? I would be very thankful if anybody could give me a heads-up here; maybe I'm thinking in a completely wrong direction.
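
As a rough illustration of the kind of thing I have in mind (the input names, layout and HLS options are placeholders): ffmpeg's filter_complex can tile several video inputs into one frame, for example with hstack or xstack.

# Tile three inputs side by side into a single video frame and package it as HLS
# (assumes the inputs have equal heights; audio would still need to be mixed separately)
ffmpeg -i cam0.webm -i cam1.webm -i cam2.webm \
  -filter_complex "[0:v][1:v][2:v]hstack=inputs=3[v]" \
  -map "[v]" -c:v libx264 -f hls -hls_time 4 out.m3u8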

Another question that arises for me is whether I can really just "dump" any data I'm receiving as a binary blob created from getUserMedia or MultiStreamRecorder into ffmpeg, or whether I have to specify, somewhere and somehow, the exact codecs being used, etc.?
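
For what it's worth, a minimal sketch of what feeding such blobs into ffmpeg might look like, assuming the browser side records WebM/Matroska data (the codec settings and output name are placeholders):

# Read Matroska/WebM data from standard input and repackage it as an HLS stream
# (this would be the command spawned from Node, with the incoming blobs written to its stdin)
ffmpeg -f matroska -i pipe:0 -c:v libx264 -preset veryfast -g 48 -c:a aac -f hls -hls_time 4 stream.m3u8

Whether the explicit -f matroska is required, or whether ffmpeg can probe the piped data on its own, is something I would also like to confirm.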