
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (43)
-
Publish on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If necessary, contact your MediaSPIP administrator to find out.
-
HTML5 audio and video support
13 April 2011. MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
From upload to the final video [standalone version]
31 January 2010. The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions beyond the normal behaviour are executed: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
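As an aside, the two actions described above map naturally onto the FFmpeg command-line tools. The following minimal Python sketch illustrates the general technique only; the file names are placeholders, and this is not SPIPMotion's actual code:

import json
import subprocess

SOURCE = "source.mp4"  # placeholder for the uploaded "source" document

# Step 1: retrieve the technical information of the audio/video streams.
probe = subprocess.run(
    ["ffprobe", "-v", "error", "-print_format", "json",
     "-show_streams", SOURCE],
    capture_output=True, text=True, check=True,
)
for stream in json.loads(probe.stdout)["streams"]:
    print(stream["codec_type"], stream.get("codec_name"))

# Step 2: generate a thumbnail by extracting a single frame.
subprocess.run(
    ["ffmpeg", "-y", "-i", SOURCE, "-ss", "1", "-frames:v", "1",
     "thumbnail.jpg"],
    check=True,
)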
On other sites (8015)
-
Shows improper/corrupted TS segments from Opencv webcam and FFmpeg
30 June 2020, by playmaker420. I'm experimenting with OpenCV and FFmpeg to create a live HLS stream from the webcam using some scripts.


The FFmpeg version I use is 3.4.


frame-detection.py


import numpy as np
import cv2
import sys

cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    framestring = frame.tostring()
    sys.stdout.write(str(framestring))

    # Display the resulting frame
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()



hlslive_generator.sh


#!/bin/bash

# Create folder from args
mkdir -p $1

# Get into the folder
cd $1

# Start running FFmpeg HLS Live streaming
python frame-detection.py | ffmpeg \
    -f rawvideo \
    -framerate 10 \
    -video_size 640x480 \
    -i - foo.mp4 \
    -vcodec libx264 \
    -acodec copy \
    -pix_fmt yuv420p \
    -color_range 2 \
    -hls_time 1 \
    -hls_list_size 5 \
    -hls_flags delete_segments \
    -use_localtime 1 \
    -hls_segment_filename '%Y%m%d-%s.ts' \
    ./playlist.m3u8



I used the following command to run the scripts; it creates a folder and generates TS segments in it:


./hlslive_generator.sh hlssegments



The issue I face here is with the created TS files: when these segments are played with a video player, they show improper/corrupted output.


Can someone help me identify the issue? Thanks in advance.
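For what it's worth, two details in the scripts above look suspect: under Python 3, sys.stdout.write(str(framestring)) writes the textual repr of the bytes ("b'...'") rather than the raw bytes themselves, and the pipe declares no pixel format even though OpenCV emits BGR frames, so ffmpeg's rawvideo demuxer falls back to its default. A minimal sketch of a writer that sends the raw bytes; the capture device and 640x480 @ 10 fps geometry are assumptions carried over from the question:

import sys
import cv2

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Write the raw pixel bytes to stdout. Under Python 3,
    # str(frame.tostring()) would write the textual repr ("b'...'")
    # instead of the bytes themselves, corrupting the stream.
    sys.stdout.buffer.write(frame.tobytes())

cap.release()

On the ffmpeg side, the input would then be declared to match what OpenCV actually emits, e.g. -f rawvideo -pixel_format bgr24 -video_size 640x480 -framerate 10 -i -. Note also that in the original command foo.mp4 sits immediately after -i -, so ffmpeg treats it as a second output encoded with default settings rather than as part of the input.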


-
Play just audio (without launching a window)
6 August 2017, by Paradise on E. Is there a way to hide the ffplay window on Windows? I want to play audio files (some of them have a video stream as well), but I don't want a window to appear. Is there any way to hide the window completely? I took a look at their documentation and didn't find a way to do it. I also searched the internet and didn't find anything useful. Everyone asks "Why in the world do you want to hide the window..." and similar questions instead of actually posting an answer on how to do it.
I believe ffplay doesn't have a native way to hide the window. So, what is the easiest alternative? One alternative would be to download the ffplay source code, modify it and recompile it, but that is definitely not an easy and quick way.
Is there any way I can launch a process without showing a window? It would be great if it could be achieved when ffplay is launched from Node.js (because I am using Node.js's child_process module to play audio files). So, how can I hide ffplay's window? My current code is the following:
var cp = require('child_process');
var proc = cp.spawn('ffplay', [
    '-no_banner',
    '-loglevel', 'panic',
    playlist.splice(Math.random() * playlist.length | 0, 1)[0]
]);

Question
How can I hide ffplay's window? If it is possible using ffplay's native arguments (which I didn't find in the documentation), what arguments should I use? If there is no native way, is it possible using Node.js's child_process module? If not, is there any other way to hide ffplay's window? Again, I must note that some of the files I play have a video stream as well. Thanks in advance.
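For what it's worth, ffplay does have a native option for this: -nodisp disables the graphical display entirely (audio still plays), and -autoexit quits when playback finishes. A minimal sketch follows, written in Python for brevity; from Node.js the same arguments can simply be appended to the spawn array, and child_process.spawn also accepts a windowsHide option that hides the console window on Windows:

import subprocess

# "track.mp3" is a placeholder path.
# -nodisp suppresses ffplay's video window entirely; only audio plays.
# -autoexit makes ffplay quit when the file ends instead of waiting.
subprocess.run(
    ["ffplay", "-nodisp", "-autoexit", "-loglevel", "quiet", "track.mp3"],
    check=True,
)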
-
Playback: audio-video synchronization algorithm
7 August 2016, by Rick77. I'm in the process of creating a very basic video player with the ffmpeg libraries, and I have all the decoding and re-encoding in place, but I'm stuck on audio-video synchronization.
My problem is that movies have audio and video streams muxed (interleaved) in a way that audio and video come in "bursts" (a number of audio packets, followed by juxtaposed video frames), like this, where each packet has its own timestamp:
A A A A A A A A V V V V A A A A A A A V V V V ...
A: decoded and re-encoded audio data chunk
V: decoded and re-encoded video frame
supposedly arranged this way to prevent too much audio from being processed without video, and vice versa.
Now I have to decode the "bursts" and send them to the audio/video playback components in a timely fashion, and I am a bit lost in the details.
- is there a "standard" strategy/paradigm/pattern for tackling this kind of problem?
- are there tutorials/documentation/books explaining the techniques to use?
- how far can the muxing go in a well-coded movie?
Because I don't expect anything like this:
AAAAAAAAAAA .... AAAAAAAAAAAAA x10000 VVVVVVVVVVVVVV x1000
(audio for the whole clip followed by the video)
or this:
VVVVVVVVVVVV x1000 AAAAAAAAAAA...AAAAAAAAA x1000
(all video frames followed by the audio)
to happen in a well-encoded video (after all, preventing such extremes is what muxing is all about...).
Thanks!
UPDATE: since my description might have been unclear, the issue is not with how the streams are laid out, nor with how to decode them: the whole audio/video demuxing, decoding, rescaling and re-encoding pipeline is in place and sound, and each chunk of data has its own timestamp.
My problem is what to do with the decoded data without incurring buffer overruns and underruns and, generally, without clogging my pipeline, so I guess it might be considered a "scheduling" problem.
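For what it's worth, the usual pattern (ffplay itself works this way) is to demux into one bounded queue per stream and let the timestamps, not the interleaving, drive playback: the audio device drains its queue at its own pace and acts as the master clock, while the video side schedules each decoded frame against that clock. A minimal sketch of the video-scheduling idea, with hypothetical queue/clock stand-ins for a real pipeline:

import time
from queue import Queue

video_queue = Queue(maxsize=64)  # (pts_seconds, frame) pairs from the demuxer
audio_clock = 0.0                # advanced by the (hypothetical) audio callback


def current_audio_clock():
    # In a real player this returns the PTS of the sample the audio
    # device is playing right now: audio acts as the master clock.
    return audio_clock


def video_loop(display):
    while True:
        pts, frame = video_queue.get()       # blocks until a frame arrives
        delay = pts - current_audio_clock()  # how early this frame is
        if delay > 0:
            time.sleep(delay)                # early: wait for the clock
        elif delay < -0.1:                   # hopelessly late (threshold
            continue                         # is arbitrary): drop the frame
        display(frame)

Bounded queues also answer the overrun/underrun worry: the demuxing thread simply blocks when one queue is full, so memory stays bounded no matter how lopsided the interleaving gets. The classic dranger ffmpeg-and-SDL tutorial ("How to Write a Video Player in Less than 1000 Lines") walks through exactly this design.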