
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (38)
-
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all of the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to make other modifications (...) -
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all of the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...) -
Updating from version 0.1 to 0.2
24 June 2013
An explanation of the notable changes involved in moving from MediaSPIP version 0.1 to version 0.3. What's new?
Software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
On other sites (1723)
-
Inserting image to video as a watermark using FFMPEG on Android phone
4 October 2018, by Deepak Prasad
I was trying to add an image to a video as a watermark on an Android phone.
I am using the command below.
String[] complexCommand = {"ffmpeg", "-i", "/storage/emulated/0/Vd/sample.mp4", "-i", "/storage/emulated/0/Vd/icon.png", "-filter_complex", "overlay=10:main_h-overlay_h-10", "/storage/emulated/0/Vd/out.mp4"};
But it's giving me the error.
D/FFmpeg: Failed ffmpeg version n3.0.1 Copyright (c) 2000-2016 the FFmpeg developers
built with gcc 4.8 (GCC)
configuration: --target-os=linux --cross-prefix=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi- --arch=arm --cpu=cortex-a8 --enable-runtime-cpudetect --sysroot=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/sysroot --enable-pic --enable-libx264 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-fontconfig --enable-pthreads --disable-debug --disable-ffserver --enable-version3 --enable-hardcoded-tables --disable-ffplay --disable-ffprobe --enable-gpl --enable-yasm --disable-doc --disable-shared --enable-static --pkg-config=/home/vagrant/SourceCode/ffmpeg-android/ffmpeg-pkg-config --prefix=/home/vagrant/SourceCode/ffmpeg-android/build/armeabi-v7a --extra-cflags='-I/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all' --extra-ldflags='-L/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie' --extra-libs='-lpng -lexpat -lm' --extra-cxxflags=
libavutil 55. 17.103 / 55. 17.103
libavcodec 57. 24.102 / 57. 24.102
libavformat 57. 25.100 / 57. 25.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 31.100 / 6. 31.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/storage/emulated/0/Vd/sample.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
creation_time : 1970-01-01 00:00:00
encoder : Lavf53.24.2
Duration: 00:00:05.31, start: 0.000000, bitrate: 1589 kb/s
Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 1205 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
Metadata:
creation_time : 1970-01-01 00:00:00
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, 5.1, fltp, 384 kb/s (default)
Metadata:
creation_time : 1970-01-01 00:00:00
handler_name : SoundHandler
Input #1, png_pipe, from '/storage/emulated/0/Vd/icon.png':
Duration: N/A, bitrate: N/A
Stream #1:0: Video: png, rgba(pc), 96x96 [SAR 3779:3779 DAR 1:1], 25 tbr, 25 tbn, 25 tbc
[NULL @ 0x42303a60] Unable to find a suitable output format for 'ffmpeg'
ffmpeg: Invalid argument
Both the source video and the source image are stored on the Android phone under the paths shown, and I am using the complete paths.
Thanks.
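A likely cause, not stated in the original post: Android FFmpeg wrappers of this kind launch the ffmpeg binary themselves, so a leading "ffmpeg" element in the argument array is parsed by ffmpeg as an extra output file name, which is exactly what the "Unable to find a suitable output format for 'ffmpeg'" line reports. A minimal sketch of the corrected array, assuming a wrapper that takes the arguments without the program name (the execute call shown is hypothetical, check your library's API):
// Drop the leading "ffmpeg" token; the wrapper prepends the binary itself.
String[] complexCommand = {
        "-i", "/storage/emulated/0/Vd/sample.mp4",
        "-i", "/storage/emulated/0/Vd/icon.png",
        "-filter_complex", "overlay=10:main_h-overlay_h-10",
        "/storage/emulated/0/Vd/out.mp4"
};
// ffmpeg.execute(complexCommand, handler);  // hypothetical call, adjust to the wrapper in use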
-
FFmpeg video filters corrupt mp4 file
12 September 2018, by kataroty
I am trying to add visual filters to mp4 files using FFmpeg.
I have tried many filters, but only 4 of them worked, and of those only 1 worked perfectly. I am using them in an Android application, but I tried the same command lines in the Windows cmd using FFmpeg and the result is the same. With 3 of the filters the .mp4 is unplayable by many players, including Android's MediaPlayer. I can still play the "corrupt" files in VLC and other players that ship with many codecs, but I need them to work properly everywhere.
The only filter working as intended is Black and White, and the command is (I have tried both -vf and -filter_complex):
-i origin.mp4 -filter_complex hue=s=0 blackWhite.mp4
The other 3 are Sepia, Vintage and Negative:
"colorchannelmixer=.393:.769:.189:0:.349:.686:.168:0:.272:.534:.131", "curves=vintage", "curves=negative"
Since I cannot upload video files here, I have added a link to 3 files: the original video file, the black/white filtered file (that works) and the vintage-filtered file (that does not work).
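A plausible explanation, not part of the original question: colorchannelmixer and curves work in RGB, so FFmpeg can end up encoding the filtered output with a pixel format such as yuv444p or gbrp, which software decoders like VLC handle but Android's hardware decoder typically does not; hue=s=0 stays in YUV, which would explain why only the black-and-white file plays everywhere. Forcing the chain back to yuv420p usually restores compatibility, for example (a sketch reusing the file names from the question):
-i origin.mp4 -filter_complex "curves=vintage,format=yuv420p" vintage.mp4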
-
How to set up a virtual mic and pipe audio to it from node.js
28 October 2018, by Niellles
Summary of what I am trying to achieve:
I’m currently doing some work on a Discord bot. I’m trying to join a voice channel, which is the easy part, and then use the combined audio of the speakers in that voice channel as input for a webpage in a web browser. It doesn’t really matter which browser it is as long as it can be controlled with Selenium.
What I’ve tried/looked into so far
My bot is so far written in Python using the discord.py API wrapper. Unfortunately, listening to audio, as opposed to sending it, isn't implemented (let alone documented) particularly well in discord.py. This made me decide to switch to node.js (i.e. discord.js) for the voice channel part of my bot.
After switching to discord.js it was pretty easy to determine who's talking and create an audio stream (PCM stream) for that user. For the next part I thought I'd just pipe the audio stream to a virtual microphone and select that as the audio input in the browser. You can even use FFMPEG from within node.js to get something that looks like this:
const Discord = require("discord.js");
const ffmpeg = require("fluent-ffmpeg"); // assumed: the fluent calls below match the fluent-ffmpeg API
const client = new Discord.Client();

client.on('ready', () => {
  voiceChannel = client.channels.get('SOME_CHANNEL_ID');
  voiceChannel.join()
    .then(conn => {
      console.log('Connected');
      const receiver = conn.createReceiver();
      conn.on('speaking', (user, speaking) => {
        if (speaking) {
          const audioStream = receiver.createPCMStream(user);
          ffmpeg(audioStream) // was ffmpeg(stream); the stream created above is audioStream
            .inputFormat('s32le')
            .audioFrequency(16000)
            .audioChannels(1)
            .audioCodec('pcm_s16le')
            .format('s16le')
            .pipe(someVirtualMic); // someVirtualMic is the missing piece: a writable stream for the virtual microphone
        }
      });
    })
    .catch(console.log);
});

client.login('SOME_TOKEN');

This last part, creating and streaming to a virtual microphone, has proven to be rather complicated. I've read a ton of SO posts and documentation on both the Advanced Linux Sound Architecture (ALSA) and the JACK Audio Connection Kit, but I simply can't figure out how to set up a virtual microphone that will show up as a mic in my browser, or how to pipe audio to it.
Any help or pointers to a solution would be greatly appreciated!
Addendum
For the past couple of days I've kept on looking into this issue. I've now learned about ALSA loopback devices and feel that the solution must be there.
I've pretty much followed a post that talks about loopback devices and aims to achieve the following:
Simply imagine that you have a physical link between one OUT and one IN of the same device.
I've set up the devices as described in the post and now two new audio devices show up when selecting a microphone in Firefox. I'd expect one, but that may be because I don't entirely understand the loopback devices (yet).
The loopback devices are created and I think that they're linked (if I understood the aforementioned article correctly). Assuming that's the case, the only problem left to tackle is streaming the audio via FFMPEG from within node.js.
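Not an answer from the original thread, but one way this is commonly wired up, sketched under two assumptions: the snd-aloop module is loaded (sudo modprobe snd-aloop), and the Discord receiver emits 48 kHz stereo signed 16-bit PCM (adjust the format flags if it does not). Audio written to the loopback's playback side (hw:Loopback,0,0) reappears on its capture side (hw:Loopback,1,0), which is the device to select as the microphone in the browser. From node.js that could look like:
// Sketch: spawn ffmpeg, feed it raw PCM on stdin, and play it into the ALSA loopback device.
const { spawn } = require('child_process');

function pipeToLoopback(audioStream) {
  const ff = spawn('ffmpeg', [
    '-f', 's16le', '-ar', '48000', '-ac', '2', '-i', 'pipe:0', // raw PCM arriving on stdin (format assumed)
    '-f', 'alsa', 'hw:Loopback,0,0'                            // write to the loopback's playback side
  ]);
  audioStream.pipe(ff.stdin);                                  // e.g. the PCM stream from createPCMStream()
  ff.stderr.on('data', d => console.log(d.toString()));        // ffmpeg logs to stderr
  return ff;
}
The browser (or the Selenium-driven page) would then pick the matching capture device, hw:Loopback,1,0, as its microphone input.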