
Other articles (28)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is running version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.
-
Videos
21 April 2011
Like "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 video tag.
One drawback of this tag is that it is not correctly recognized by some browsers (Internet Explorer, to name one), and that each browser natively supports only certain video formats.
Its main advantage is that video playback is handled natively by the browser, which removes the need for Flash and (...)
-
Contributing to its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do so, we use the SPIP translation interface, where all of MediaSPIP's language modules are available. Simply subscribe to the translators' mailing list to ask for more information.
Currently, MediaSPIP is only available in French and (...)
On other sites (4370)
-
FFMpeg : audio resampling changes slightly the speed of the music [on hold]
24 October 2018, by thenaoh
In my Android app, I use the FFmpeg library to extract audio samples from an audio file on the fly, and also to resample them (since my file is encoded at 44100 Hz and my device expects 48000 Hz audio).
The problem is that even though the sound quality is perfect, the playback is slightly faster than the regular speed of the music.
The difference is very small (small enough that you won't notice it on its own), but you can hear it when you play the same song at the same time in a regular player (VLC, for example).
I think it comes from the resampling, but I don’t know how to fix it.
Here is my code (I get my samples through the getPcmFloat() function): https://gist.github.com/mregnauld/2538d98308ad57eb75cfcd36aab5099a
How can I correct the speed of the music?
Thanks for your help.
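Without seeing the gist, one plausible source of such a small, cumulative speed error is sample-count bookkeeping around the resampler: if each converted buffer is truncated to a whole number of output samples and the fractional remainder (the resampler's internal delay) is never flushed, a few samples are dropped per buffer and the track ends up marginally shorter, i.e. slightly fast. A rough back-of-the-envelope calculation in Python, assuming 1024-sample input buffers and a 3-minute track (both made-up figures), shows the order of magnitude:

IN_RATE = 44100      # source sample rate (from the question)
OUT_RATE = 48000     # device sample rate (from the question)
BUFFER = 1024        # assumed number of input samples per conversion call
TRACK_SECONDS = 180  # assumed track length

ideal_out = BUFFER * OUT_RATE / IN_RATE        # ~1114.56 output samples per buffer
truncated_out = int(ideal_out)                 # 1114 if the fraction is discarded
lost_per_buffer = ideal_out - truncated_out    # ~0.56 samples silently dropped

buffers = TRACK_SECONDS * IN_RATE / BUFFER     # ~7752 conversion calls
lost_samples = lost_per_buffer * buffers       # ~4300 samples over the whole track
print("seconds lost:", round(lost_samples / OUT_RATE, 3))  # ~0.09 s of drift

With libswresample, the usual way to avoid this is to size the output buffer using the converter's pending delay (swr_get_delay()) and to flush the resampler at the end of the stream, so that no fractional samples are ever discarded.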
-
Creating a sequence of images from lyrics to use in ffmpeg
19 September 2018, by SKS
I'm trying to make an MP3 + lyrics -> MP4 program in Python.
I have a lyrics file like this:
[00:00.60]Revelation, chapter 4
[00:02.34]After these things I looked,
[00:04.10]and behold a door was opened in heaven,
[00:06.41]and the first voice which I heard, as it were,
[00:08.78]of a trumpet speaking with me, said:
[00:11.09]Come up hither,
[00:12.16]and I will shew thee the things which must be done hereafter.
[00:15.78]And immediately I was in the spirit:
[00:18.03]and behold there was a throne set in heaven,
[00:20.72]and upon the throne one sitting.
[00:22.85]And he that sat,
[00:23.91]was to the sight like the jasper and the sardine stone;
[00:26.97]and there was a rainbow round about the throne,
[00:29.16]in sight like unto an emerald.
[00:31.35]And round about the throne were four and twenty seats;
[00:34.85]and upon the seats, four and twenty ancients sitting,
[00:38.03]clothed in white garments, and on their heads were crowns of gold.
[00:41.97]And from the throne proceeded lightnings, and voices, and thunders;
[00:46.03]and there were seven lamps burning before the throne,
[00:48.60]which are the seven spirits of God.
[00:51.23]And in the sight of the throne was, as it were,
[00:53.79]a sea of glass like to crystal;
[00:56.16]and in the midst of the throne, and round about the throne,
[00:59.29]were four living creatures, full of eyes before and behind.
[01:03.79]And the first living creature was like a lion:
I'm trying to create a sequence of images from the lyrics to use in ffmpeg.
os.system(ffmpeg_path + " -r 2 -i " + images_path + "image%1d.png -i " + audio_file + " -vcodec mpeg4 -y " + video_name)
I tried working out the number of images to make for each line by subtracting the timestamp of the next line from that of the current line. It works, but produces very inconsistent results.
import os
import datetime
import time
import math
from PIL import Image, ImageDraw

ffmpeg_path = os.getcwd() + "\\ffmpeg\\bin\\ffmpeg.exe"
images_path = os.getcwd() + "\\test_output\\"
audio_file = os.getcwd() + "\\audio.mp3"
lyric_file = os.getcwd() + "\\lyric.lrc"
video_name = "movie.mp4"

def save():
    lyric_to_images()
    os.system(ffmpeg_path + " -r 2 -i " + images_path + "image%1d.png -i " + audio_file + " -vcodec mpeg4 -y " + video_name)

def lyric_to_images():
    file = open(lyric_file, "r")
    data = file.readlines()
    startOfLyric = True
    lstTimestamp = []
    images_to_make = 0
    from_second = 0.0
    to_second = 0.0
    for line in data:
        vTime = line[1:9]  # e.g. 00:00.60
        temp = vTime.split(':')
        minute = float(temp[0])
        #a = float(temp[1].split('.'))
        #second = float((minute * 60) + int(a[0]))
        second = (minute * 60) + float(temp[1])
        lstTimestamp.append(second)
    counter = 1
    for i, second in enumerate(lstTimestamp):
        if startOfLyric is True:
            startOfLyric = False
            # first line is always 3 seconds (images to make = 3x2)
            for x in range(1, 7):
                writeImage(data[i][10:], 'image' + str(counter))
                counter += 1
        else:
            from_second = lstTimestamp[i-1]
            to_second = second
            difference = to_second - from_second
            images_to_make = int(difference * 2)
            for x in range(1, int(images_to_make+1)):
                writeImage(data[i-1][10:], 'image'+str(counter))
                counter += 1
    file.close()

def writeImage(v_text, filename):
    img = Image.new('RGB', (480, 320), color=(73, 109, 137))
    d = ImageDraw.Draw(img)
    d.text((10, 10), v_text, fill=(255, 255, 0))
    img.save(os.getcwd() + "\\test_output\\" + filename + ".png")
save()

Is there any efficient and accurate way to calculate how many images I need to create for each line?
Note: however many images I create will have to be multiplied by 2, because I'm using -r 2 for FFmpeg (2 FPS).
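One way to keep the image count accurate is sketched below in Python, under a few assumptions: every LRC line starts with a [MM:SS.xx] tag, the total audio duration is known from somewhere else (it is just a parameter here), and the frame rate matches the -r 2 above. The idea is to convert each timestamp into a cumulative frame index and take differences, so per-line rounding errors can never add up:

FPS = 2  # must match the -r value passed to ffmpeg

def parse_lrc(lines):
    # Turn '[MM:SS.xx]text' lines into (start_second, text) pairs.
    entries = []
    for line in lines:
        minutes, seconds = line[1:9].split(':')
        entries.append((int(minutes) * 60 + float(seconds), line[10:].strip()))
    return entries

def images_per_line(entries, total_duration):
    # total_duration: length of the audio in seconds (assumed to be known).
    # Each line gets end_frame - start_frame images, where both are rounded
    # cumulative frame positions, so the counts always sum to the total
    # frame span of the lyrics no matter how each individual line rounds.
    counts = []
    for i, (start, text) in enumerate(entries):
        end = entries[i + 1][0] if i + 1 < len(entries) else total_duration
        counts.append((text, round(end * FPS) - round(start * FPS)))
    return counts

An alternative that avoids writing duplicate images altogether is ffmpeg's concat demuxer, where each image can be given its own duration instead of relying on a fixed -r rate.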
-
Extract raw audio frames from OGG music file with Android NDK
31 October 2018, by thenaoh
In my Android app, I would like to be able to process audio on the fly from an OGG file, by extracting audio samples, processing them, and redirecting them to the audio output.
I know how to do the last two steps with the Android NDK, but I don't know how to extract the audio samples into an array of floats or shorts.
I tried to get this code working, which apparently can extract raw audio samples on the fly.
The problem is that I can't manage to add FFmpeg to my project. I tried many tutorials (like this one), but it seems pretty difficult since I work on Windows. After a while, I found Prebuild FFMpeg for Android, which seems interesting since it's available for the armeabi-v7a, arm64-v8a, x86 and x86_64 architectures, but again, I don't understand how to add it to my project.
I also took a look at libogg, libvorbis and vorbisfile, but I have no idea how to add them to my project either.
So, does anyone have a working example of how to extract audio samples from an OGG file on the fly?
Thanks for your help.