
Media (91)
-
Corona Radiata
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Lights in the Sky
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Head Down
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Echoplex
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Discipline
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Letting You
26 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (21)
-
Contributing to its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to sign up to the translators' mailing list to ask for more information.
At the moment, MediaSPIP is only available in French and (...)
-
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources, in the standalone version.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-type installation, you will also need to make other changes (...)
-
Making the files available
14 April 2011
By default, when it is initialized, MediaSPIP does not allow visitors to download the files, whether they are originals or the result of their transformation or encoding. It only allows them to be viewed.
However, it is possible and easy to give visitors access to these documents, in several different forms.
All of this happens on the skeleton configuration page. You need to go to the channel's administration area and choose, in the navigation, (...)
On other sites (4423)
-
How to Synchronize Audio with Video Frames [Python]
19 September 2023, by Ростислав
I want to stream video from a URL to a server via a socket, which then restreams it to all clients in the room.


This code streams the video frame by frame:


async def stream_video(room, url):
    cap = cv2.VideoCapture(url)
    fps = round(cap.get(cv2.CAP_PROP_FPS))

    while True:
        ret, frame = cap.read()
        if not ret:
            break
        _, img_bytes = cv2.imencode(".jpg", frame)
        img_base64 = base64.b64encode(img_bytes).decode('utf-8')
        img_data_url = f"data:image/jpeg;base64,{img_base64}"

        await socket.emit('segment', {'room': room, 'type': 'video', 'stream': img_data_url})
        await asyncio.sleep(1 / fps)

    cap.release()



And this is the code that streams the audio:


async def stream_audio(room, url):
    sample_size = 14000
    cmd_audio = [
        "ffmpeg",
        "-i", url,
        '-vn',
        '-f', 's16le',
        '-c:a', 'pcm_s16le',
        "-ac", "2",
        "-sample_rate", "48000",
        '-ar', '48000',
        "-acodec", "libmp3lame",
        "pipe:1"
    ]
    proc_audio = await asyncio.create_subprocess_exec(
        *cmd_audio, stdout=subprocess.PIPE, stderr=False
    )

    while True:
        audio_data = await proc_audio.stdout.read(sample_size)
        if audio_data:
            await socket.emit('segment', {'room': room, 'type': 'audio', 'stream': audio_data})
        await asyncio.sleep(1)




But the problem is: how do I synchronize them? How many bytes need to be read from ffmpeg every second so that the audio matches the frames?


I tried the following, but the problem with the chunk size still remained:


while True:
    audio_data = await proc_audio.stdout.read(sample_size)
    if audio_data:
        await socket.emit('segment', {'room': room, 'type': 'audio', 'stream': audio_data})

    for i in range(fps):
        ret, frame = cap.read()
        if not ret:
            break
        _, img_bytes = cv2.imencode(".jpg", frame)
        img_base64 = base64.b64encode(img_bytes).decode('utf-8')
        img_data_url = f"data:image/jpeg;base64,{img_base64}"

        await socket.emit('segment', {'room': room, 'type': 'video', 'stream': img_data_url})
        await asyncio.sleep(1 / fps)



I also tried loading a chunk of audio into pydub, but it shows that the duration of my 14000-byte chunk is 0.07 s, which is very short. And if I increase the chunk size to 192k (as ChatGPT suggested), the audio simply plays very, very fast.
The best chunk size I managed to find is approximately 14000 bytes, but the audio is still not in sync.
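For what it's worth, the byte rate of raw s16le PCM is fixed by the format, so if the pipe really carries raw PCM (note that -acodec libmp3lame appears after -c:a pcm_s16le and overrides it, so that may not currently be the case), the chunk that corresponds to one second, or to one video frame, can be computed directly. A minimal sketch of that arithmetic, assuming 48 kHz, stereo, 16-bit output; the names are illustrative:

# Raw s16le PCM: 2 bytes per sample, per channel.
SAMPLE_RATE = 48000
CHANNELS = 2
BYTES_PER_SAMPLE = 2

bytes_per_second = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE  # 192000
fps = 25  # whatever cap.get(cv2.CAP_PROP_FPS) returned
bytes_per_frame = bytes_per_second // fps  # 7680 bytes of audio per video frame

# Emitting one video frame plus bytes_per_frame of audio per iteration,
# then sleeping 1/fps, keeps both streams advancing at the same rate.

On that assumption, a 14000-byte chunk is 14000 / 192000 ≈ 0.073 s of audio, which matches the duration pydub reported, so sending only 14000 bytes per second of video would leave the audio far behind the frames.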


-
Split h.264 stream into multiple parts in python
31 January 2023, by BillPlayz
My objective is to split an h.264 stream into multiple parts: while reading the stream from a pipe, I would like to save it into segments that are each x seconds long (in my case 10).


I am using a libcamera-vid subprocess on my Raspberry Pi that outputs the h.264 stream to stdout.

This might be irrelevant, but libcamera-vid outputs a message for every frame, and I am able to detect it (see isFrameStopLine).
To convert the stream, I use an ffmpeg subprocess, as you can see in the code below.

Imagine it like this:

Stream is running...

- Start recording to a file

- Sleep x seconds

- Finish recording to file

- Start recording a new file

- Sleep x seconds

- Finish recording the new file

- and so on...

Here is my current code. When I run it, the first export succeeds, but after the second or third the ffmpeg subprocess terminates with this error:

pipe:: Invalid data found when processing input

And shortly after, the Python process crashes too, because of the ffmpeg termination I believe:

Traceback (most recent call last):
  File "/home/survpi-camera/main.py", line 56, in <module>
    processStreamLine(readData)
  File "/home/survpi-camera/main.py", line 16, in processStreamLine
    streamInfo["process"].stdin.write(data)
BrokenPipeError: [Errno 32] Broken pipe


import subprocess
import time

recentStreamProcesses = []
streamInfo = {
    "lastStreamStart": -1,
    "process": None
}

def processStreamLine(data):
    isInfoLine = ((data.startswith(b"[") and (b"INFO" in data)) or (data == b"Preview window unavailable"))
    isFrameStopLine = (data.startswith(b"#") and (b" fps) exp" in data))
    if ((not isInfoLine) and (not isFrameStopLine)):
        streamInfo["process"].stdin.write(data)

    if (isFrameStopLine):
        if (time.time() - streamInfo["lastStreamStart"] >= 10):
            print("10 seconds passed, exporting...")
            exportStream()
            createNewStream()

def createNewStream():
    streamInfo["lastStreamStart"] = time.time()
    streamInfo["process"] = subprocess.Popen([
        "ffmpeg",
        "-r", "30",
        "-i", "-",
        "-c", "copy", ("/home/survpi-camera/" + str(round(time.time())) + ".mp4")
    ], stdin=subprocess.PIPE, stderr=subprocess.STDOUT)
    print("Created new streamProcess.")

def exportStream():
    print("Exporting...")
    streamInfo["process"].stdin.close()
    recentStreamProcesses.append(streamInfo["process"])


cameraProcess = subprocess.Popen([
    "libcamera-vid",
    "-t", "0",
    "--width", "1920",
    "--height", "1080",
    "--codec", "h264",
    "--inline",
    "--listen",
    "-o", "-"
], stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT)

createNewStream()

while True:
    readData = cameraProcess.stdout.readline()

    processStreamLine(readData)



Thank you in advance!
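A note beyond the original post: "pipe:: Invalid data found when processing input" is what one would expect when a freshly started ffmpeg begins reading in the middle of the h.264 stream rather than at a clean access unit. One way to sidestep per-file subprocess management entirely is to keep a single ffmpeg instance alive and let its segment muxer do the splitting. A rough sketch under that assumption (file names and the output pattern are illustrative):

import subprocess

# One long-lived ffmpeg that stream-copies the incoming h.264 and starts a new
# ~10-second .mp4 on its own, instead of one ffmpeg per output file.
splitter = subprocess.Popen([
    "ffmpeg",
    "-r", "30",
    "-i", "-",                 # raw h.264 arrives on stdin
    "-c", "copy",
    "-f", "segment",           # the segment muxer handles the splitting
    "-segment_time", "10",     # target duration of each piece, in seconds
    "-reset_timestamps", "1",  # every file starts at t=0
    "-segment_format", "mp4",
    "/home/survpi-camera/part%03d.mp4",
], stdin=subprocess.PIPE)

camera = subprocess.Popen([
    "libcamera-vid",
    "-t", "0",
    "--width", "1920",
    "--height", "1080",
    "--codec", "h264",
    "--inline",                # repeat SPS/PPS so each segment is decodable on its own
    "-o", "-",
], stdout=splitter.stdin)      # pipe the camera straight into ffmpeg

camera.wait()

Because -c copy cannot cut inside a GOP, each piece ends at the first keyframe after the 10-second mark, so the segment lengths are approximate rather than exact.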


-
Send Canvas to Images and Combine with FFmpeg
19 December 2016, by user3783155
I am making a web app that is a simple video editor. It has a file input, which loads media into <video> or <img> elements, which are then drawn onto a canvas with the correct timing. The problem I am having is with exporting it to an .mp4. I save each frame into a .png file with AJAX and PHP. When all frames are saved, I call FFmpeg to merge them into a video. To prevent lag during export, I set each video's playbackRate to 0.01 and slow the frame loop down by the same factor.
If I set the clip's duration to 30 seconds (which can be done in my app), it exports 30 seconds of video footage, but the exported video is faster than the original video. In other words, the exported video fits more content into those 30 seconds.
I want to know what I did wrong, or another approach that gives the intended frame rate.

Javascript (frame loop)
var playLoop = setInterval(function() {
    if (paused) clearInterval(playLoop);
    playFrame(exp); // <video> exp --> exported (boolean)
    drawFrame(exp); // <canvas>
    var addend = pps / (1000 / frameRate);
    $(marker).css("left", "+=" + addend);
    var left = $(marker).css("left");
    var value = left.substring(0, left.length - 2);
    position = parseInt(value) / pps;
    if (stopButton.disabled) stopButton.disabled = false;
}, exp ? frameRate / 0.01 : frameRate);

Javascript
drawFrame()
...
// for export
if (exp) {
    var dataURL = screen.toDataURL("image/png");
    $.ajax({
        type: "POST",
        url: "server/save.php",
        data: {
            frameNumber: frameNumber,
            imageBase64: dataURL
        },
        success: function(resp) {
            $("#exportstatusfilename").html(resp);
            console.log(resp);
        }
    });
    frameNumber++;
    if (position > maxDuration) {
        stopMovie();
        forEachMedia(function(media) {
            if (media.tagName === "VIDEO") {
                media.muted = false;
                media.playbackRate = 1;
            }
        });
        $.ajax({
            type: "POST",
            url: "server/produce.php",
            data: {
                fps: Math.round(1000 / frameRate),
                width: screen.width,
                height: screen.height
            },
            success: function(resp) {
                alert("Video exported successfully");
                console.log(resp);
                // download file to client
                window.location.replace("server/download.php");
                // delete frames
                $.ajax({
                    type: "POST",
                    url: "server/clear.php"
                });
            }
        });
    }
}
PHP
produce.php
<?php
$fps = $_POST['fps'];
$width = $_POST['width'];
$height = $_POST['height'];
$cmd = "..\\ffmpeg\\bin\\ffmpeg.exe -r $fps -f image2 -s {$width}x{$height} -i ..\\media\\movie0\\canvas_frame%d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p ..\\media\\movie0.mp4";
$proc = popen($cmd, 'r');
...
?>