
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (56)
-
The user profile
12 April 2011, by
Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
The user can reach the profile editor from their author page; a "Modifier votre profil" link in the navigation is (...) -
Configuring language support
15 November 2010, by
Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administrer" section of the site.
From there, in the navigation menu, you can reach a "Gestion des langues" section that lets you enable support for additional languages.
Each newly added language can still be deactivated as long as no object has been created in that language; once that happens, it is greyed out in the configuration and (...) -
A selection of projects using MediaSPIP
29 April 2011, by
The examples below are representative of specific uses of MediaSPIP in certain projects.
Do you think you have built a "remarkable" site with MediaSPIP? Let us know about it here.
Ferme MediaSPIP @ Infini
The Infini association runs reception activities, a public internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique role in this area (...)
On other sites (7573)
-
NodeJS - efficiently and correctly convert from raw PCM to WAV at scale (without FFMPEG?)
13 July 2024, by Royi Bernthal
I have a stream of raw PCM buffers that I need to convert to playable WAV buffers.


@ffmpeg.wasm can convert an individual buffer in the stream well, but it is limited to executing one command at a time, so it won't work in a real-life streaming scenario (streams x concurrent users). We can use it as a reference for a conversion that outputs a good, playable WAV.

import { FFmpeg, createFFmpeg, fetchFile } from '@ffmpeg.wasm/main';

async pcmToWavFFMPEG(buffer: Buffer) {
  // bitDepth - PCM signed 16-bit little-endian
  const options = { sampleRate: '24k', channels: '1', bitDepth: 's16le' };

  this.ffmpeg.FS('writeFile', 'input.pcm', await fetchFile(buffer));

  await this.ffmpeg.run(
    '-f', options.bitDepth,
    '-ar', options.sampleRate,
    '-ac', options.channels,
    '-i', 'input.pcm',
    'output.wav',
  );

  const wavBuffer = this.ffmpeg.FS('readFile', 'output.wav');

  this.ffmpeg.FS('unlink', 'input.pcm');
  this.ffmpeg.FS('unlink', 'output.wav');

  return Buffer.from(wavBuffer);
}
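The snippet assumes a this.ffmpeg instance that has already been created and loaded elsewhere. A minimal sketch of that setup is below; the method and option names follow the 0.11-style ffmpeg.wasm API that the run/FS calls above imply, so treat them as assumptions to verify against the package's README.

import { createFFmpeg } from '@ffmpeg.wasm/main';

// Sketch only: create and load a single shared ffmpeg.wasm instance.
// Because one instance runs one command at a time, concurrent callers
// would still need to queue their pcmToWavFFMPEG calls behind it.
const ffmpeg = createFFmpeg({ log: false });

async function initFFmpeg() {
  if (!ffmpeg.isLoaded()) {
    await ffmpeg.load(); // downloads and instantiates the wasm core
  }
  return ffmpeg;
}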



To get around the one-command-at-a-time limit, I've tried fluent-ffmpeg. I couldn't find a way to convert a single buffer, so I'm passing the whole readable stream so that ffmpeg can convert all of its buffers to WAV. The buffers I'm getting in on('data') aren't playable WAV. The same is true for concatenating the accumulated buffers in on('complete') - the result is not a playable WAV.

import ffmpeg from 'fluent-ffmpeg';
import internal from 'stream';

async pcmToWavFluentFFMPEG(
  readable: internal.Readable,
  callback: (chunk: Buffer) => void,
) {
  const options = { sampleRate: 24000, channels: 1, bitDepth: 's16le' };

  ffmpeg(readable)
    .inputFormat(options.bitDepth)
    .audioFrequency(options.sampleRate)
    .audioChannels(options.channels)
    .outputFormat('wav')
    .pipe()
    .on('data', callback);
}
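It may be worth noting that ffmpeg's wav muxer writes a single RIFF header at the start of the output stream, so the individual on('data') chunks are fragments of one file rather than standalone WAVs. A single buffer can be fed to fluent-ffmpeg by wrapping it in a Readable and collecting the whole piped output; the following is only a sketch and does not address the per-conversion process cost:

import ffmpeg from 'fluent-ffmpeg';
import { Readable } from 'stream';

// Sketch: convert one raw s16le PCM buffer to a complete WAV buffer by
// wrapping it in a Readable and accumulating the entire piped output.
function pcmBufferToWav(buffer: Buffer): Promise<Buffer> {
  return new Promise((resolve, reject) => {
    const chunks: Buffer[] = [];
    ffmpeg(Readable.from([buffer]))
      .inputFormat('s16le')
      .audioFrequency(24000)
      .audioChannels(1)
      .outputFormat('wav')
      .on('error', reject)
      .pipe()
      .on('data', (chunk: Buffer) => chunks.push(chunk))
      .on('end', () => resolve(Buffer.concat(chunks)));
  });
}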



I've also tried using node-wav to convert each buffer individually. It manages to convert everything to playable WAVs that sound close to the desired result; however, for some reason they're extremely loud and sound a bit weird.

import wav from 'node-wav';

pcmToWavBad(buffer: Buffer) {
  const pcmData = new Int16Array(
    buffer.buffer,
    buffer.byteOffset,
    buffer.byteLength / Int16Array.BYTES_PER_ELEMENT,
  );

  const channelData = [pcmData]; // assuming mono channel

  return wav.encode(channelData, { sampleRate: 24000, bitDepth: 16 });
}
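The loudness is likely a scaling issue: if node-wav's encode() expects floating-point samples normalized to [-1, 1] (as its float-oriented API suggests), raw Int16 values will be treated as wildly out-of-range floats and clip. A hedged sketch of normalizing first, assuming that is indeed what the encoder expects:

import wav from 'node-wav';

// Sketch, assuming node-wav expects channel data normalized to [-1, 1]:
// convert the raw Int16 PCM to Float32 before encoding.
function pcmToWavNormalized(buffer: Buffer) {
  const pcmData = new Int16Array(
    buffer.buffer,
    buffer.byteOffset,
    buffer.byteLength / Int16Array.BYTES_PER_ELEMENT,
  );

  const floatData = new Float32Array(pcmData.length);
  for (let i = 0; i < pcmData.length; i++) {
    floatData[i] = pcmData[i] / 32768; // scale signed 16-bit to [-1, 1)
  }

  return wav.encode([floatData], { sampleRate: 24000, bitDepth: 16 });
}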



I've also tried wrapping the PCM as a WAV with wavefile without any actual conversion (which should be redundant anyway, since WAV contains the PCM data as-is), but it results in white noise:

import { WaveFile } from 'wavefile';

pcmToWav(buffer: Buffer) {
  const wav = new WaveFile();

  wav.fromScratch(1, 24000, '16', buffer); // 's16le' is invalid

  return Buffer.from(wav.toBuffer());
}
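Since a plain 16-bit PCM WAV really is just the raw samples behind a 44-byte RIFF header, one dependency-free approach is to build that header by hand and prepend it to each buffer. The sketch below assumes 24 kHz, mono, signed 16-bit little-endian PCM; the white noise from wavefile may simply be that fromScratch expects an array of sample values (e.g. an Int16Array) rather than a byte Buffer, though that is an assumption worth checking against its docs.

// Sketch: wrap raw s16le PCM in a minimal 44-byte WAV (RIFF) header.
// Assumes 24 kHz, mono, 16-bit little-endian PCM throughout.
function pcmToWavManual(pcm: Buffer, sampleRate = 24000, channels = 1, bitsPerSample = 16): Buffer {
  const byteRate = (sampleRate * channels * bitsPerSample) / 8;
  const blockAlign = (channels * bitsPerSample) / 8;
  const header = Buffer.alloc(44);

  header.write('RIFF', 0);                      // ChunkID
  header.writeUInt32LE(36 + pcm.length, 4);     // ChunkSize
  header.write('WAVE', 8);                      // Format
  header.write('fmt ', 12);                     // Subchunk1ID
  header.writeUInt32LE(16, 16);                 // Subchunk1Size (PCM)
  header.writeUInt16LE(1, 20);                  // AudioFormat = PCM
  header.writeUInt16LE(channels, 22);           // NumChannels
  header.writeUInt32LE(sampleRate, 24);         // SampleRate
  header.writeUInt32LE(byteRate, 28);           // ByteRate
  header.writeUInt16LE(blockAlign, 32);         // BlockAlign
  header.writeUInt16LE(bitsPerSample, 34);      // BitsPerSample
  header.write('data', 36);                     // Subchunk2ID
  header.writeUInt32LE(pcm.length, 40);         // Subchunk2Size

  return Buffer.concat([header, pcm]);
}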



-
FFMPEG SAR mismatch when concatenating videos
6 September 2020, by marcman
I'm trying to concatenate videos with different codecs using the concat filter. I'm also doing a bunch of processing in this filtergraph to scale and pad the individual clips. However, I keep getting an error of this form:

[Parsed_concat_14 @ 0x556b918ed580] Input link in1:v0 parameters (size 960x1280, SAR 0:1) do not match the corresponding output link in0:v0 parameters (1280x720, SAR 1:1)



Here is my command:


ffmpeg \
 -y \
 -loglevel warning \
 -stats \
 -i videos/vid0.mp4 \
 -i videos/vid1.mp4 \
 -i videos/vid2.mov \
 -filter_complex \
"[0:v]setpts=PTS-STARTPTS, scale=1920:1080, setsar=1, drawtext=expansion=strftime: basetime=$(date +%s -d'2020-07-10 16:04:44')000000 : fontcolor=white : text='%^b %d, %Y%n%l\\:%M%p' : fontsize=36 : y=1080-4*lh : x=1920-text_w-2*max_glyph_w; \
[1:v]setpts=PTS-STARTPTS, scale=810:1080, pad=width=1920:height=1080:x=555:y=0:color=black, setsar=1, drawtext=expansion=strftime: basetime=$(date +%s -d'2020-08-20 21:12:27')000000 : fontcolor=white : text='%^b %d, %Y%n%l\\:%M%p' : fontsize=36 : y=1080-4*lh : x=1365-text_w-2*max_glyph_w; \
[2:v]setpts=PTS-STARTPTS, scale=607:1080, pad=width=1920:height=1080:x=656:y=0:color=black, setsar=1, drawtext=expansion=strftime: basetime=$(date +%s -d'2020-08-27 16:42:26')000000 : fontcolor=white : text='%^b %d, %Y%n%l\\:%M%p' : fontsize=36 : y=1080-4*lh : x=1263-text_w-2*max_glyph_w; \
[0:v][0:a][1:v][1:a][2:v][2:a]concat=n=3:v=1:a=1[v][a]" \
 -map "[v]" \
 -map "[a]" \
 videos.mp4



This gives the following errors:


[Parsed_concat_14 @ 0x563ab9d840c0] Input link in1:v0 parameters (size 960x1280, SAR 0:1) do not match the corresponding output link in0:v0 parameters (1280x720, SAR 1:1)
[Parsed_concat_14 @ 0x563ab9d840c0] Input link in2:v0 parameters (size 1080x1920, SAR 0:1) do not match the corresponding output link in0:v0 parameters (1280x720, SAR 1:1)



The input videos have these resolutions:

- vid0.mp4: 1280x720
- vid1.mp4: 960x1280
- vid2.mp4: 1080x1920

The output resolution of the concatenated video is meant to be 1920x1080.


I've added setsar=1 after all my scaling and padding operations, as per this question and answer. I've also tried setdar=16/9 as in this answer, but it made no difference.

What am I missing here?
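For what it's worth, the error is consistent with concat receiving the original, unscaled input streams: the three processing chains leave their outputs unlabeled, while the concat line refers back to [0:v], [1:v] and [2:v]. Below is a sketch of labeling each chain's output and feeding those labels to concat instead (drawtext omitted for brevity, and not verified against these exact files):

ffmpeg \
 -y \
 -i videos/vid0.mp4 \
 -i videos/vid1.mp4 \
 -i videos/vid2.mov \
 -filter_complex \
"[0:v]setpts=PTS-STARTPTS, scale=1920:1080, setsar=1[v0]; \
[1:v]setpts=PTS-STARTPTS, scale=810:1080, pad=width=1920:height=1080:x=555:y=0:color=black, setsar=1[v1]; \
[2:v]setpts=PTS-STARTPTS, scale=607:1080, pad=width=1920:height=1080:x=656:y=0:color=black, setsar=1[v2]; \
[v0][0:a][v1][1:a][v2][2:a]concat=n=3:v=1:a=1[v][a]" \
 -map "[v]" \
 -map "[a]" \
 videos.mp4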


-
FATE Under New Management
2 August 2010, by Multimedia Mike — FATE Server
At any given time, I have between 20 and 30 blog posts in some phase of development. Half of them seem to be contemplations regarding the design and future of my original FATE system and are thus ready for the recycle bin at this point. Mans is a man of considerably fewer words, so I thought I would use a few words to describe the new FATE system that he put together.
Overview
Here are the distinguishing features that Mans mentioned in his announcement message:
- Test specs are part of the FFmpeg repo. They are thus properly versioned, and any developer can update them as needed.
- Support for inexact tests.
- Parallel testing on multi-core systems.
- Anyone registered with FATE can add systems.
- Client side entirely in POSIX shell script and GNU make.
- Open source backend and web interface.
- Client and backend entirely decoupled.
- Anyone can contribute patches.
Client
The FATE build/test client source code is contained in tests/fate.sh in the FFmpeg source tree. The script — as the extension implies — is a shell script. It takes a text file full of shell variables, updates the source code, configures, builds, and tests. It's a considerably smaller amount of code than my original Python client, partly because most of the testing logic has shifted into FFmpeg itself. The build system knows about all the FATE tests, and all of the specs are now maintained in the codebase (thanks to all who spearheaded that effort — I think it was Vitor and Mans).
The client creates a report file which contains a series of lines to be transported to the server. The first line has some information about the configuration and compiler, plus the overall status of the build/test iteration. The second line contains './configure' information. Each of the remaining lines contains information about an individual FATE test, mostly in Base64 format.
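As a purely illustrative example of what "a text file full of shell variables" looks like, the variable names below follow the fate_config.sh template later shipped with FFmpeg; the exact set in 2010 may have differed, so treat this as an assumption rather than a description of the system from this post.

# Hypothetical fate_config.sh; the client would presumably be run as: tests/fate.sh fate_config.sh
slot=x86_64-linux-gcc                      # unique name for this machine/configuration
repo=git://source.ffmpeg.org/ffmpeg.git    # source repository to clone and update
workdir=$HOME/fate                         # where the source is checked out and built
samples=$workdir/fate-suite                # local copy of the FATE sample files
comment="nightly run"                      # free-form note shown with the results
#makeopts=-j4                              # optional extra arguments passed to make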
Server
The server source code lives at http://git.mansr.com/?p=fateweb. It is written in Perl and plugs into a CGI-capable HTTP server. Authentication between the client and the server operates via SSH/SSL. In stark contrast to the original FATE server, there is no database component on the backend. The new system maintains information in a series of flat files.