
Other articles (79)
-
Customize by adding your logo, banner or background image
5 September 2013
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013
Present changes to your MédiaSPIP, or news about your projects, on your MédiaSPIP site using the news section.
In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News item creation form: in the case of a document of type news item, the fields offered by default are: publication date (customize the publication date) (...)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.
On other sites (5116)
-
FFMPEG H264 with custom overlay per frame
4 October 2020, by La bla bla
We have a stream that is stored in the cloud (Amazon S3) as individual H264 frames. The frames are stored as framexxxxxx.264; the numbering doesn't start from 0 but rather from some larger number, say 1000 (so frame001000.264).

The goal is to create an MP4 clip that is either a timelapse or simply much faster to watch for inspection and other checks (much faster, compressing around 3 hours of video down to under 20 minutes). This also requires overlaying the frame number (the filename) on the frame itself.


At first I was creating a timelapse by pulling only the keyframes (I-frames? I'm still rather new to codecs and such) from S3, overlaying the filename on them, and saving them as PNG (which probably isn't needed, but that's what I did), using the following command inside a Python script:


ffmpeg -y -i {h264_name} -vf \"scale=1920:-1, drawtext=fontfile=/usr/share/fonts/truetype/ubuntu-font-family/Ubuntu-B.ttf:fontsize=34:text={txt}:fontcolor=white:x=50:y=50:bordercolor=black:borderw=2\" -c:a copy -pix_fmt yuv420p {basename}.png
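For context, a minimal sketch of how such a per-file call might be driven from Python; the overlay_filename wrapper and the paths are my own illustration, not the poster's actual script:

import subprocess
from pathlib import Path

FONT = "/usr/share/fonts/truetype/ubuntu-font-family/Ubuntu-B.ttf"

def overlay_filename(h264_path: Path, out_dir: Path) -> None:
    # Burn the source filename into the frame and write the result as a PNG.
    txt = h264_path.name  # e.g. "frame001000.264"
    vf = (
        "scale=1920:-1,"
        f"drawtext=fontfile={FONT}:fontsize=34:text={txt}:"
        "fontcolor=white:x=50:y=50:bordercolor=black:borderw=2"
    )
    out_png = out_dir / (h264_path.stem + ".png")
    subprocess.run(["ffmpeg", "-y", "-i", str(h264_path), "-vf", vf, str(out_png)], check=True)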



After this, I combined all the frames by using Python to rename the lowest-numbered frame to 0.png and incrementing from there (so the sequence would be continuous; because I only used keyframes, the original numbers weren't sequential), and then running:

ffmpeg -y -f image2 -i %d.png -r {self.params.fps} -vcodec libx264 -crf {self.params.crf} -pix_fmt yuv420p {out_file}
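A minimal sketch of that renumbering step, assuming the overlaid frames all sit in one directory as frame*.png (my naming, not necessarily the poster's):

from pathlib import Path
import shutil

def renumber(png_dir: Path) -> None:
    # Copy frameNNNNNN.png files to 0.png, 1.png, ... in ascending filename order.
    for i, src in enumerate(sorted(png_dir.glob("frame*.png"))):
        shutil.copy(src, png_dir / f"{i}.png")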



This worked great, but the gap between keyframes was too long to allow for proper inspection.


So now for the question(s):


Since I know that frames that are not keyframes (P-frames?) can't be decoded by ffmpeg on their own, the method of overlaying the filename and converting to PNG (or keeping it as H264, same thing) won't work, or at least I couldn't find a way to make it work. Maybe there's a way to specify a frame's keyframe? How can one overlay the filename (and not the frame number, as shown here for example)?
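One direction that might sidestep per-frame decoding entirely: if the .264 files are raw Annex-B data, they can often simply be concatenated in filename order into a single stream that ffmpeg can decode as a whole, and drawtext's %{eif} expansion can then rebuild the original filename from the frame counter. This is only a sketch; it assumes the numbering is contiguous, starts at 1000, and that 30 fps is the right rate:

cat frame*.264 > all.264
ffmpeg -framerate 30 -i all.264 \
  -vf "drawtext=fontfile=/usr/share/fonts/truetype/ubuntu-font-family/Ubuntu-B.ttf:fontsize=34:fontcolor=white:x=50:y=50:bordercolor=black:borderw=2:text='frame%{eif\:n+1000\:d\:6}.264'" \
  -c:v libx264 -crf 23 -pix_fmt yuv420p overlayed.mp4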


Also, is it possible to skip some P-frames between the keyframes? (So if there is a keyframe every 30 frames, we would take a keyframe, then the frame 15 frames later, then the next keyframe, and so on.)
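If the whole stream can be decoded as above, dropping frames becomes a job for the select filter rather than a question of which files to download; a sketch that keeps every 15th decoded frame (the interval is an assumption):

ffmpeg -framerate 30 -i all.264 \
  -vf "select='not(mod(n\,15))',setpts=N/FRAME_RATE/TB" \
  -c:v libx264 -crf 23 -pix_fmt yuv420p inspection.mp4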


I thought about using ffmpeg's pipe option to feed it the files as they're being downloaded, but I'm not sure whether I can specify drawtext that way.
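drawtext doesn't care how the input arrives, so piping should work in principle. A rough Python sketch; download_frames_in_order() is a hypothetical generator yielding each .264 file's raw bytes as it is fetched, and -f h264 / -framerate 30 are assumptions about the stream:

import subprocess

proc = subprocess.Popen(
    [
        "ffmpeg", "-y", "-f", "h264", "-framerate", "30", "-i", "pipe:0",
        "-vf",
        "drawtext=fontfile=/usr/share/fonts/truetype/ubuntu-font-family/Ubuntu-B.ttf:"
        "fontsize=34:fontcolor=white:x=50:y=50:text='frame%{eif\\:n+1000\\:d\\:6}.264'",
        "-c:v", "libx264", "-crf", "23", "-pix_fmt", "yuv420p", "overlayed.mp4",
    ],
    stdin=subprocess.PIPE,
)

for chunk in download_frames_in_order():  # hypothetical: yields each file's bytes, in order
    proc.stdin.write(chunk)

proc.stdin.close()
proc.wait()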


Also, is there another alternative that can achieve this? (At first I was converting to PNG, using Python and OpenCV to add the filename, and then merging the PNGs into an MP4, but then I found that drawtext can do it in a single command, so I used that.)


-
Merging input Streams with nodejs/ffmpeg
14 September 2020, by jAndy
I'm creating a very basic and rudimentary video web chat. On the client side, I'm going to use a simple getUserMedia API call to capture the webcam data and send the video data as a data blob to my server.

From there, I'm planning either to use the fluent-ffmpeg library or to spawn ffmpeg myself and pipe that raw data to ffmpeg, which in turn does some magic and pushes it out as an HLS stream to an Amazon AWS service (for instance), which then gets displayed in a web browser for all the people participating in the video chat.

So far, I think all of this should be fairly easy to implement, but I keep going around in circles on the question of how I can create a "combined" or "merged" frame and stream, so that the HLS output from my server to the distributing cloud service is just one combined data stream.


If there are 3 people in that video chat, my server receives 3 data streams from those clients and combines these data streams (from the individual web-cam data sources) into one output stream.


How could that be accomplished? Can I "create" a new frame with ffmpeg, so to speak? I would be very thankful if anybody could give me a heads-up here; maybe I'm thinking in a completely wrong direction.
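For what it's worth, ffmpeg can composite several decoded inputs into a single frame with filters such as hstack, vstack or xstack, and then encode the result as one HLS output. A rough sketch with three inputs side by side; the input names, the 640x480 tile size and the HLS settings are placeholders, not something taken from the question:

ffmpeg -i client1.webm -i client2.webm -i client3.webm \
  -filter_complex "[0:v]scale=640:480[a];[1:v]scale=640:480[b];[2:v]scale=640:480[c];[a][b][c]hstack=inputs=3[v]" \
  -map "[v]" -c:v libx264 -preset veryfast \
  -f hls -hls_time 4 -hls_list_size 6 -hls_flags delete_segments chat.m3u8

Live inputs (pipes, RTMP and so on) slot into the same filter graph; the harder part in practice is keeping the three sources roughly in sync.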

Another question that arises for me is whether I can really just "dump" any data I receive as a binary blob created from getUserMedia or MultiStreamRecorder into ffmpeg, or whether I have to specify somewhere, somehow, the exact codecs being used, etc.
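On the codec question, a hedged note: browsers' MediaRecorder typically emits WebM chunks (VP8/VP9 video, Opus audio), and WebM carries enough header information that ffmpeg can usually identify the streams itself when the chunks are piped to it in order, so an invocation along these lines is often enough (the explicit -f webm is an assumption about what the client actually sends):

ffmpeg -f webm -i pipe:0 -c:v libx264 -c:a aac -f hls -hls_time 4 stream.m3u8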

-
InvalidArgumentException: Missing required client configuration options (Laravel + FFmpeg + Ubuntu 20 + LAMP)
9 August 2020, by Ahmed Al-Rayan
I have an upload site on a Linux VPS (Ubuntu, the latest version) built with Laravel and an FFmpeg package for handling the uploaded videos.


A problem appears that I will attach below, but let me explain it first. When I upload a video, the upload itself is supposed to succeed and the file then goes to processing, but sometimes it stops there with a 500 error. The error I'm attaching appears at the processing step; it seems to be a problem with the region configuration not matching my settings. Please help, and thank you.


InvalidArgumentException: Missing required client configuration options: region: (string) 

A "region" configuration value is required for the "s3" service (e.g., "us-west-2"). A list of available public regions and endpoints can be found at http://docs.aws.amazon.com/general/latest/gr/rande.html.
in /var/www/html/vendor/aws/aws-sdk-php/src/ClientResolver.php:399 


Stack trace:
#0 /var/www/html/vendor/aws/aws-sdk-php/src/ClientResolver.php(295): Aws\ClientResolver->throwRequired()
#1 /var/www/html/vendor/aws/aws-sdk-php/src/AwsClient.php(195): Aws\ClientResolver->resolve()
#2 /var/www/html/vendor/aws/aws-sdk-php/src/S3/S3Client.php(327): Aws\AwsClient->__construct()
#3 /var/www/html/vendor/laravel/framework/src/Illuminate/Filesystem/FilesystemManager.php(212): Aws\S3\S3Client->__construct()
#4 /var/www/html/vendor/laravel/framework/src/Illuminate/Filesystem/FilesystemManager.php(129): Illuminate\Filesystem\FilesystemManager->createS3Driver()
#5 /var/www/html/vendor/laravel/framework/src/Illuminate/Filesystem/FilesystemManager.php(101): Illuminate\Filesystem\FilesystemManager->resolve()
#6 /var/www/html/vendor/laravel/framework/src/Illuminate/Filesystem/FilesystemManager.php(78): Illuminate\Filesystem\FilesystemManager->get()
#7 /var/www/html/vendor/pbmedia/laravel-ffmpeg/src/FFMpeg.php(66): Illuminate\Filesystem\FilesystemManager->disk()
#8 /var/www/html/vendor/laravel/framework/src/Illuminate/Support/Facades/Facade.php(261): Pbmedia\LaravelFFMpeg\FFMpeg->fromDisk()
#9 /var/www/html/app/Jobs/StreamMovie.php(53): Illuminate\Support\Facades\Facade::__callStatic()
#10 [internal function]: App\Jobs\StreamMovie->handle()
#11 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(32): call_user_func_array()
#12 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/Util.php(36): Illuminate\Container\BoundMethod::Illuminate\Container\{closure}()
#13 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(90): Illuminate\Container\Util::unwrapIfClosure()
#14
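For reference, this particular exception is the AWS SDK complaining that the S3 filesystem disk has no region configured. In a stock Laravel install the s3 disk in config/filesystems.php reads its region from the AWS_DEFAULT_REGION environment variable, so the usual fix is to make sure .env defines it (the values below are placeholders) and then clear the cached configuration:

AWS_ACCESS_KEY_ID=your-key-id
AWS_SECRET_ACCESS_KEY=your-secret-key
AWS_DEFAULT_REGION=us-west-2
AWS_BUCKET=your-bucket-name

php artisan config:clear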