
Media (1)
-
The Pirate Bay from Belgium
1 April 2013
Updated: April 2013
Language: French
Type: Image
Other articles (84)
-
MediaSPIP v0.2
21 June 2013. MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources in standalone form.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...) -
Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects or individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...) -
Making files available
14 April 2011. By default, when it is initialised, MediaSPIP does not let visitors download files, whether they are originals or the result of transformation or encoding. It only allows them to be viewed.
However, it is possible and easy to give visitors access to these documents, in various forms.
All of this happens on the skeleton configuration page. You need to go to the channel's administration area and choose, in the navigation, (...)
On other sites (11637)
-
Sending per frame metadata with H264 encoded frames
21 September 2013, by user2459280. We're looking for a way to send per-frame metadata (for example an ID) with H264-encoded frames from a server to a client.
We're currently developing a remote rendering application, where both client and server side are actively involved.
The server renders a high quality image with all effects, lighting etc.
The client also has model information and renders a diffuse image that is used when the bandwidth is too low or the images have to be warped in order to avoid stuttering. So far we're encoding the frames on the server side with ffmpeg and streaming them with live555 to the client, which receives an RTSP stream and decodes the frames again using ffmpeg.
For our application, we now need to send per frame metadata.
We want the client to tell the server where the camera is right now.
Ideally we'd be able to send the client's view matrix to the server, render the corresponding frame and send it back to the client together with its view matrix. So when the client receives a frame, we need to know exactly at what camera position the frame was rendered. Alternatively, we could tag each view matrix with an ID, send it to the server, render the frame, tag it with the same ID and send it back. In that case we'd have to match the right matrix to the frame again on the client side.
After several attempts to realize the above with ffmpeg, we came to the conclusion that ffmpeg does not provide the required functionality. It only offers a fixed, predefined set of metadata fields, which either cannot store a matrix or can only be set per key frame, which is not frequent enough for our purpose.
Now we're considering using live555. So far we have an on-demand server which gets a VideoSubsession with an H264VideoStreamDiscreteFramer containing our own FramedSource class. In this class we load the encoded AVPacket (from ffmpeg) and send its data buffer over the network. Now we need a way to send some kind of metadata with every frame to the client.
Do you have any ideas how to solve this metadata problem with live555 or another library?
Thanks for your help!
-
Save FFMpeg conversion to PHP variable vs. File System for use with Whisper API?
13 April 2023, by SScotti. I just started working on a little demo to translate audio captured from the front end as audio/webm using JS and then sent to the back end of a Laravel app. I guess there are JS libraries that can handle the conversion, but I'd rather use a server-side solution with FFMPEG, which I am doing.


The backend code is below. It seems to be working after playing around with the PHP Composer package that I'm using versus the Laravel-specific one that is also available. I'd rather use the plain PHP one because I have other PHP apps that are not Laravel.


Questions:



-
With the FFMpeg library, is there a way to capture the converted .mp3 file to a PHP variable in the script, rather than saving it to the file system and then reading it back in later?


-
For the OpenAI call, I'd like to catch exceptions there also. I just sort of have a placeholder there for now.


protected function whisper(Request $request) {

    $yourApiKey = getenv('OPENAI_API_KEY');
    $client = OpenAI::client($yourApiKey);

    $file = $request->file('file');
    $mimeType = $request->file('file')->getMimeType();
    $audioContents = $file->getContent();

    try {
        FFMpeg::open($file)
            ->export()
            ->toDisk('public')
            ->inFormat(new \FFMpeg\Format\Audio\Mp3)
            ->save('song_converted.mp3');
    } catch (EncodingException $exception) {
        $command = $exception->getCommand();
        $errorLog = $exception->getErrorOutput();
    }

    $mp3 = Storage::disk('public')->path('song_converted.mp3');

    try {
        $response = $client->audio()->transcribe([
            'model' => 'whisper-1',
            'file' => fopen($mp3, 'r'),
            'response_format' => 'verbose_json',
        ]);
    } catch (EncodingException $exception) {
        $command = $exception->getCommand();
        $errorLog = $exception->getErrorOutput();
    }

    echo json_encode($response);
}
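
On the first question (capturing the converted .mp3 in a PHP variable rather than leaving it on the public disk), the sketch below is one low-risk option: it keeps the export chain and Storage facade already used in the code above, writes to a throwaway path, and immediately reads the bytes back into a string before deleting the intermediate file. The path 'tmp/song_converted.mp3' and the variable names are illustrative, and whether the OpenAI client will accept an in-memory stream in place of a real file handle should be verified, since the Whisper endpoint infers the audio format from the file name.

    // Sketch: export as above, then pull the bytes into a PHP variable and
    // delete the intermediate file (assumes the same FFMpeg facade and
    // 'public' disk as the code above; the path is illustrative).
    $relativePath = 'tmp/song_converted.mp3';

    FFMpeg::open($file)
        ->export()
        ->toDisk('public')
        ->inFormat(new \FFMpeg\Format\Audio\Mp3)
        ->save($relativePath);

    // The encoded MP3 now lives in an ordinary PHP string.
    $mp3Bytes = Storage::disk('public')->get($relativePath);
    Storage::disk('public')->delete($relativePath);

    // If the transcription call still needs a stream, the string can be
    // wrapped in an in-memory resource (verify the client accepts this).
    $stream = fopen('php://temp', 'r+');
    fwrite($stream, $mp3Bytes);
    rewind($stream);

A fully in-memory conversion with no intermediate file at all would mean piping ffmpeg's output to stdout, which the high-level export chain shown here does not obviously expose, so the temp-file round trip is a pragmatic middle ground.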









-
FFmpeg process killed while converting .mov file
10 November 2020, by Vala Khosravi. I'm using FFmpeg to reduce my video file sizes. When I give a .mov file as input with this command:


ffmpeg -i in.mov -c:a copy -crf 20 out.mov



The program starts working and, after a while, it gets killed. Here are the last lines of the log that I get:


Output #0, mov, to '/home/ubuntu/test.mov':
 Metadata:
 major_brand : qt
 minor_version : 0
 compatible_brands: qt
 com.apple.quicktime.creationdate: 2020-08-29T15:03:17+0430
 com.apple.quicktime.make: Apple
 com.apple.quicktime.model: MacBookPro14,1
 com.apple.quicktime.software: Mac OS X 10.15.1 (19B88)
 encoder : Lavf57.83.100
 Stream #0:0(und): Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 2866x1716 [SAR 1:1 DAR 1433:858], q=-1--1, 60 fps, 15360 tbn, 60 tbc (default)
 Metadata:
 creation_time : 2020-11-10T09:04:43.000000Z
 handler_name : Core Media Data Handler
 encoder : Lavc57.107.100 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 207 kb/s (default)
 Metadata:
 creation_time : 2020-11-10T09:04:43.000000Z
 handler_name : Core Media Data Handler
Killed 23 fps=3.6 q=0.0 size= 0kB time=00:00:01.34 bitrate= 0.0kbits/s dup=1 drop=0 speed=0.21x



I have tried many different flags for FFmpeg but I'm still getting the same error.


What's the solution?
Here is my input video file.