
Media (2)
-
GetID3 - File information block
9 April 2013
Updated: May 2013
Language: French
Type: Image
-
GetID3 - Additional buttons
9 April 2013
Updated: April 2013
Language: French
Type: Image
Other articles (96)
-
Publishing on MediaSPIP
13 June 2013 - Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If in doubt, contact your MediaSPIP administrator to find out. -
Adding notes and captions to images
7 February 2011 - To add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to change the rights to create, modify, and delete notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...) -
Support for all media types
10 April 2011 - Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether images (png, gif, jpg, bmp, and others...); audio (MP3, Ogg, Wav, and others...); video (Avi, MP4, Ogv, mpg, mov, wmv, and others...); or textual content, code, and other formats (Open Office, Microsoft Office (spreadsheets, presentations), web (html, css), LaTeX, Google Earth) (...)
On other sites (7087)
-
ffmpeg doesn't use all the pictures when creating a video
9 September 2022, by Mikhael Karabas - I have 75 pictures of the same size for an animation, named 0.png ... 74.png.
When running ffmpeg to create a video out of them at 24 fps (command and log below), the resulting video, instead of the expected 75/24 = 3.125 s, is 2.667 s long and consists of only the first 64 frames (pictures), although ffmpeg reports that it has processed 75 frames.
I have checked with

ffmpeg -i output.webm out%%d.png

on the resulting video: it indeed exports the first 64 frames and not the remaining 11.
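
A lighter-weight cross-check, as a sketch, is to let ffprobe count the frames stored in the container instead of exporting them:

ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 output.webm

If this prints 64, the frames are missing from the file itself rather than being dropped by the second decode.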


I can't understand what I am doing wrong; please advise.


Brief output below.


Complete log: https://drive.google.com/file/d/1_J7wLPU9PJZ7jztpiJ8g_bZKPZfiK02L/view?usp=sharing


D:\ffmpeg\ffmpeg-64.exe -report -framerate 24 -f image2 -i %01d.png -c:v libvpx-vp9 -pix_fmt yuva420p -crf 10 -b:v 0 output.webm
ffmpeg started on 2022-09-09 at 19:03:15
Report written to "ffmpeg-20220909-190315.log"
Log level: 48
ffmpeg version 2021-12-17-git-b780b6db64-essentials_build-www.gyan.dev Copyright (c) 2000-2021 the FFmpeg developers
 built with gcc 11.2.0 (Rev2, Built by MSYS2 project)
 configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-lzma --enable-zlib --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-sdl2 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-libass --enable-libfreetype --enable-libfribidi --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-libgme --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libtheora --enable-libvo-amrwbenc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-librubberband
 libavutil 57. 11.100 / 57. 11.100
 libavcodec 59. 14.100 / 59. 14.100
 libavformat 59. 10.100 / 59. 10.100
 libavdevice 59. 0.101 / 59. 0.101
 libavfilter 8. 20.100 / 8. 20.100
 libswscale 6. 1.101 / 6. 1.101
 libswresample 4. 0.100 / 4. 0.100
 libpostproc 56. 0.100 / 56. 0.100
Input #0, image2, from '%01d.png':
 Duration: 00:00:03.13, start: 0.000000, bitrate: N/A
 Stream #0:0: Video: png, rgba(pc), 300x400, 24 fps, 24 tbr, 24 tbn
File 'output.webm' already exists. Overwrite? [y/N] y
Stream mapping:
 Stream #0:0 -> #0:0 (png (native) -> vp9 (libvpx-vp9))
Press [q] to stop, [?] for help
[libvpx-vp9 @ 000002dad505c8c0] v1.11.0-62-g7f45e94d9
Output #0, webm, to 'output.webm':
 Metadata:
 encoder : Lavf59.10.100
 Stream #0:0: Video: vp9, yuva420p(tv, progressive), 300x400, q=2-31, 24 fps, 1k tbn
 Metadata:
 encoder : Lavc59.14.100 libvpx-vp9
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
frame= 75 fps=9.9 q=0.0 Lsize= 1056kB time=00:00:02.58 bitrate=3347.2kbits/s speed=0.342x
video:1036kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.942318%
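
For what it is worth, the numbers above are self-consistent: the demuxer sees Duration: 00:00:03.13 (75/24 s) and the encoder counter reaches frame= 75, while 64/24 ≈ 2.667 s matches the duration of the resulting file, so exactly the trailing 11 frames disappear between the encoder and the output. As a hedged diagnostic only: re-running the same command with -pix_fmt yuv420p (dropping the alpha plane) and re-counting the frames would show whether the yuva420p alpha path is responsible.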



-
dnn: change dnn interface to replace DNNData* with AVFrame*
28 August 2020, by Guo, Yejun - dnn: change dnn interface to replace DNNData* with AVFrame*
Currently, every filter needs to provide code to transfer data from
AVFrame* to the model input (DNNData*), and also from the model output
(DNNData*) back to AVFrame*. Such transfers can instead be implemented
within the DNN module, so each filter can focus on its own business logic.
The DNN module also exports the function pointers pre_proc and post_proc
in struct DNNModel, in case a filter has special logic for transferring
data between AVFrame* and DNNData*. The default implementation within the
DNN module is used if the filter does not set pre_proc/post_proc; a sketch
of this pattern follows the file list below.
- [DH] configure
- [DH] libavfilter/dnn/Makefile
- [DH] libavfilter/dnn/dnn_backend_native.c
- [DH] libavfilter/dnn/dnn_backend_native.h
- [DH] libavfilter/dnn/dnn_backend_openvino.c
- [DH] libavfilter/dnn/dnn_backend_openvino.h
- [DH] libavfilter/dnn/dnn_backend_tf.c
- [DH] libavfilter/dnn/dnn_backend_tf.h
- [DH] libavfilter/dnn/dnn_io_proc.c
- [DH] libavfilter/dnn/dnn_io_proc.h
- [DH] libavfilter/dnn_interface.h
- [DH] libavfilter/vf_derain.c
- [DH] libavfilter/vf_dnn_processing.c
- [DH] libavfilter/vf_sr.c
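A minimal C sketch of the default-with-override pattern this commit describes. It is hypothetical and simplified: the real struct DNNModel lives in libavfilter/dnn_interface.h with different fields and signatures, and DNNData/AVFrame are stand-ins here, not the real FFmpeg types.

#include <stdio.h>

typedef struct DNNData { void *data; } DNNData; /* stand-in for the real type */
typedef struct AVFrame { void *data; } AVFrame; /* stand-in for FFmpeg's AVFrame */

/* Illustrative signature for a pre/post-processing hook. */
typedef int (*DNNProcFunc)(AVFrame *frame, DNNData *model_io, void *log_ctx);

typedef struct DNNModel {
    void *model;           /* backend-specific handle (native / OpenVINO / TF) */
    DNNProcFunc pre_proc;  /* AVFrame* -> DNNData*; NULL selects the default */
    DNNProcFunc post_proc; /* DNNData* -> AVFrame*; NULL selects the default */
} DNNModel;

/* Default conversion owned by the DNN module (dnn_io_proc.c in the real
   tree); a trivial stub here. post_proc mirrors this on the output side. */
static int default_pre_proc(AVFrame *frame, DNNData *input, void *log_ctx)
{
    (void)frame; (void)input; (void)log_ctx;
    return 0;
}

/* The module dispatches to the filter's hook when set, else to its own
   default, so most filters never touch DNNData at all. */
static int run_pre_proc(DNNModel *model, AVFrame *frame, DNNData *input,
                        void *log_ctx)
{
    DNNProcFunc fn = model->pre_proc ? model->pre_proc : default_pre_proc;
    return fn(frame, input, log_ctx);
}

int main(void)
{
    DNNModel model = { 0 };   /* pre_proc unset -> the default is used */
    AVFrame frame = { 0 };
    DNNData input = { 0 };
    printf("pre_proc returned %d\n", run_pre_proc(&model, &frame, &input, NULL));
    return 0;
}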
-
Bluemix Node.js buildpack to support webm audio conversion
31 August 2018, by Nimmy Mohandas - Even after adding ffmpeg to the Bluemix Node.js buildpack (I tried https://github.com/BlueChasm/nodejs-buildpack-ffmpeg), it doesn't support webm audio format conversion. Could anyone please suggest alternate ways to solve this issue?
Description: I want to do speech-to-text conversion using Google speech recognition. I am using React Native for front-end development. I recorded the voice and generated a webm audio file. Since Google speech recognition doesn't support the webm file format, I converted it into wav format using fluent-ffmpeg. It works perfectly on my local system, but when I deploy it to IBM Cloud Foundry using the ffmpeg buildpack (https://github.com/BlueChasm/nodejs-buildpack-ffmpeg), it generates a file without an extension. My code follows.
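
For reference, a rough command-line equivalent of the fluent-ffmpeg chain below (a sketch; <uploaded-file> is a placeholder for the uploaded file's path):

ffmpeg -r 32000 -ac 1 -i <uploaded-file> -f wav uploads/audio/file.wav

Note that inputOptions() places -r and -ac before -i; since -r sets a video frame rate, -ar (audio sample rate) on the output side may be what was intended.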
// Requires added for completeness; the module paths and the multer setup
// are assumptions based on the description above.
const express = require('express');
const path = require('path');
const fs = require('fs');
const ffmpeg = require('fluent-ffmpeg');
const multer = require('multer');
const st = require('./speech-to-text');

const router = express.Router();
const audioUpload = multer({ dest: 'uploads/audio/' });

router.post('/st', audioUpload.single('audio'), function (req, res, next) {
  if (!req.file) {
    return res.json({ status: false, error: 'No input given!' });
  }
  const wavPath = path.join(__dirname, '..', 'uploads/audio/file.wav');
  // Convert the upload to .wav; -r and -ac are applied to the input.
  ffmpeg(req.file.path)
    .inputOptions(['-r 32000', '-ac 1'])
    .toFormat('wav')
    .on('error', (err) => {
      console.log('An error occurred: ' + err.message);
    })
    .on('end', () => {
      st.detectAudioFile(wavPath)
        .then((data) => {
          // Clean up the temporary files once recognition has finished.
          fs.unlink(req.file.path, () => {});
          fs.unlink(wavPath, (err) => {
            if (err) return res.send(err);
            const response = data[0];
            const transcription = response.results
              .map((result) => result.alternatives[0].transcript)
              .join('\n');
            return res.json({ transcription });
          });
        })
        .catch((err) => {
          res.send({
            status: false,
            error_code: 400,
            err: err.error || err.message
          });
        });
    })
    .save(wavPath); // save() starts the conversion, so handlers are attached first
});

module.exports = router;

// speech-to-text.js
const fs = require('fs');
const speech = require('@google-cloud/speech');

var detectAudioFile = function (fileName) {
  const speechClient = new speech.SpeechClient({});
  // Reads a local audio file and converts it to base64.
  const file = fs.readFileSync(fileName);
  const audioBytes = file.toString('base64');
  const audio = {
    content: audioBytes
  };
  const config = {
    // encoding and sampleRateHertz may stay unset for WAV input,
    // since the service can read them from the file header.
    // encoding: "FLAC",
    // sampleRateHertz: 44100,
    languageCode: 'en-US'
  };
  const request = {
    audio: audio,
    config: config
  };
  return speechClient.recognize(request);
};

module.exports = {
  detectAudioFile
};