
On other sites (7228)
-
How to pipe rawvideo to v4l2loopback using ffmpeg ?
3 August 2020, by sepehr78
I am trying to process a video with OpenCV in Python and then send each frame to a virtual camera (i.e., v4l2loopback). I have seen questions where OpenCV output is piped to ffmpeg and saved into a file, and other questions where a video file is piped to v4l2 using ffmpeg, but no question where these two are combined. I can do either of the above two things on its own, but not combined.

My Python code uses a subprocess to pipe each frame to ffmpeg. The ffmpeg command for piping the output of OpenCV to an .mp4 file is

ffmpeg -y -f rawvideo -vcodec rawvideo -s 1280x720 -pix_fmt bgr24 -i - -vcodec libx264 -crf 0 -preset fast output.mp4

This works and I have tested it.

The ffmpeg command to pipe a video file to the v4l2 virtual camera is

ffmpeg -re -i input.mp4 -map 0:v -f v4l2 /dev/video0

This also works and I have tested it.



I tried combining the above two commands and came up with

ffmpeg -y -f rawvideo -vcodec rawvideo -s 1280x720 -pix_fmt bgr24 -i - -vcodec libx264 -crf 0 -preset fast -map 0:v -f v4l2 /dev/video0

but I get the following error:

[NULL @ 0x55a12fcc60] Unable to find a suitable output format for ''
 : Invalid argument

I would be glad if anyone could help me figure this out.



Thanks.
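One plausible reading of the error, offered as an assumption rather than a confirmed answer: a v4l2 device expects raw frames, so the libx264/H.264 options do not belong in the output half of the command, and a mis-split argument list can also leave ffmpeg with an empty output name. A minimal sketch of piping OpenCV frames straight to a v4l2loopback device might look like the following, where `/dev/video0` and the camera index are placeholders:

```python
import subprocess

WIDTH, HEIGHT = 1280, 720
DEVICE = "/dev/video0"  # hypothetical v4l2loopback device

def build_v4l2_pipe_cmd(width, height, device):
    """Build an ffmpeg command that reads raw bgr24 frames on stdin
    and writes rawvideo (yuv420p) to a v4l2 device."""
    return [
        "ffmpeg",
        "-f", "rawvideo",
        "-pix_fmt", "bgr24",
        "-s", f"{width}x{height}",
        "-i", "-",              # read frames from stdin
        "-f", "v4l2",
        "-vcodec", "rawvideo",  # v4l2 wants raw frames, not libx264
        "-pix_fmt", "yuv420p",
        device,
    ]

if __name__ == "__main__":
    import cv2  # assumes opencv-python is installed

    proc = subprocess.Popen(build_v4l2_pipe_cmd(WIDTH, HEIGHT, DEVICE),
                            stdin=subprocess.PIPE)
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # process the frame with OpenCV here, then forward it
        proc.stdin.write(frame.tobytes())
    proc.stdin.close()
    proc.wait()
```

Passing the command as a list (rather than a single string) also avoids the empty-output-name failure mode, since subprocess does no shell word-splitting.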


-
How to add BGM when pipe opencv images to ffmpeg - python
23 March 2020, by karobben
Basically, I know how to stream by piping OpenCV to ffmpeg using Python. But my problem is that I can't add an audio file (BGM) to it. Does anyone know how to do this with Python? My Python code is:

import cv2 as cv
import subprocess as sp

# ffmpeg command
command = ['ffmpeg',
           #'-re', '-stream_loop', '-1',
           #'-i', '/home/pi/scrpt/Blive/StarBucks_BGN.mp3',
           '-y',
           '-f', 'rawvideo',
           '-vcodec', 'rawvideo',
           '-pix_fmt', 'bgr24',
           '-s', "{}x{}".format(width, height),
           '-r', str(fps),
           '-i', '-',
           '-c:v', 'libx264',
           '-pix_fmt', 'yuv420p',
           '-preset', 'ultrafast',
           '-f', 'flv',
           rtmpUrl]

I know that I can achieve this by:

sudo raspivid -o - -t 0 -w 1280 -h 720 -fps 24 -b 1000000 |
ffmpeg -re -stream_loop -1 -i "/home/pi/scrpt/Blive/StarBucks_BGN.mp3" \
-f h264 -i - -vcodec copy -r 30 -acodec aac -b:a 100k -preset ultrafast \
-tune zerolatency -f flv "rtmp://"

So, I tried to add

-re -stream_loop -1 -i "/home/pi/scrpt/Blive/StarBucks_BGN.mp3"

into the Python pipe, but it crashed with:

[libx264 @ 0x1952aa0] using cpu capabilities: ARMv6 NEON
[libx264 @ 0x1952aa0] profile Constrained Baseline, level 3.1
[libx264 @ 0x1952aa0] 264 - core 148 r2748 97eaef2 - H.264/MPEG-4 AVC codec - Copyleft 2003-2016 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=250 keyint_min=25 scenecut=0 intra_refresh=0 rc=crf mbtree=0 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0
[flv @ 0x1951d00] FLV does not support sample rate 48000, choose from (44100, 22050, 11025)
[flv @ 0x1951d00] Audio codec mp3 not compatible with flv
Could not write header for output file #0 (incorrect codec parameters ?): Function not implemented
Stream mapping:
Stream #1:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
Stream #0:0 -> #0:1 (mp3 (native) -> mp3 (libmp3lame))
Last message repeated 1 times
Traceback (most recent call last):
File "With_BG.py", line 57, in <module>
p.stdin.write(BG.tostring())
BrokenPipeError: [Errno 32] Broken pipe
-
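A plausible fix, offered as an assumption rather than a tested answer: the log shows the FLV muxer rejecting the MP3 stream's 48 kHz sample rate, so re-encoding the audio to AAC at 44100 Hz and mapping the two inputs explicitly should satisfy it. A sketch of the adjusted command list (the rtmp_url, width, height, fps, and bgm_path names are placeholders):

```python
def build_bgm_cmd(width, height, fps, bgm_path, rtmp_url):
    """ffmpeg command: looping MP3 as input 0, raw bgr24 video on stdin
    as input 1, audio re-encoded to AAC @ 44100 Hz for the FLV muxer."""
    return ['ffmpeg',
            '-re', '-stream_loop', '-1',
            '-i', bgm_path,            # input 0: background music
            '-y',
            '-f', 'rawvideo',
            '-vcodec', 'rawvideo',
            '-pix_fmt', 'bgr24',
            '-s', '{}x{}'.format(width, height),
            '-r', str(fps),
            '-i', '-',                 # input 1: frames from stdin
            '-map', '1:v',             # take video from the pipe
            '-map', '0:a',             # take audio from the MP3
            '-c:v', 'libx264',
            '-pix_fmt', 'yuv420p',
            '-preset', 'ultrafast',
            '-c:a', 'aac',             # FLV rejected the mp3 codec here
            '-ar', '44100',            # FLV allows 44100/22050/11025 only
            '-f', 'flv',
            rtmp_url]
```

Without the explicit `-map` options, ffmpeg's default stream selection can pair the streams differently than intended once a second input is added.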
C# Pipe images to FFmpeg as they are rendered
22 March 2020, by LiftPizzas
I've looked all over, and everything I've found shows how to use already-generated images. I think the code below, taken from this (Create video from a growing image sequence using FFMPEG), is really close to what I need, but I need to be able to send one image at a time as each one is rendered, which might take several seconds to minutes, and the rendering will take place in a different function instead of inline like this one does.
static void Main()
{
    // Async main method
    AsyncMain().GetAwaiter().GetResult();
}

static async Task AsyncMain()
{
    Console.WriteLine("Press any key to quit prematurely.");
    var maintask = RunFFMPEG();
    var readtask = Task.Run(() => Console.Read());
    await Task.WhenAny(maintask, readtask);
}

static async Task RunFFMPEG()
{
    await Task.Run(() =>
    {
        const int fps = 30;
        const string outfile = "out.mp4";
        const string args = "-y -framerate {0} -f image2pipe -i - -r {0} -c:v libx264 -movflags +faststart -pix_fmt yuv420p -crf 19 -preset veryslow {1}";
        const string dir = @"C:\testrender\";
        const string pattern = "{0}.png";
        const string path = dir + pattern;
        const int startNum = 0;
        const int endNum = 100;

        var pinf = new ProcessStartInfo("ffmpeg", string.Format(args, fps, outfile));
        pinf.UseShellExecute = false;
        pinf.RedirectStandardInput = true;
        pinf.WorkingDirectory = dir;

        Console.WriteLine("Starting ffmpeg...");
        var proc = Process.Start(pinf);

        using (var stream = new BinaryWriter(proc.StandardInput.BaseStream))
        {
            for (var i = startNum; i < endNum; i++)
            {
                // "D4" turns 5 to 0005 - change depending on pattern of input files
                var file = string.Format(path, i.ToString("D4"));
                System.Threading.SpinWait.SpinUntil(() => File.Exists(file) && CanReadFile(file));
                Console.WriteLine("Found file: " + file);
                stream.Write(File.ReadAllBytes(file));
                // I don't have input files, I will have bitmaps at the end of an OpenGL render function.
            }
        }
        proc.WaitForExit();
        Console.WriteLine("Closed ffmpeg.");
    });

    bool CanReadFile(string file)
    {
        // Needs to be able to open the file for reading
        FileStream fs = null;
        try
        {
            fs = File.OpenRead(file);
            return true;
        }
        catch (IOException)
        {
            return false;
        }
        finally
        {
            if (fs != null)
                fs.Close();
        }
    }
}
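The file-watching loop above can be dropped entirely: ffmpeg can read raw frame bytes from stdin, so each frame can be written to the pipe the moment the renderer produces it. The idea is sketched below in Python for brevity (the C# equivalent would write the bitmap bytes to proc.StandardInput.BaseStream); render_frame is a hypothetical stand-in for the OpenGL render step, not part of the original code:

```python
import subprocess

WIDTH, HEIGHT, FPS = 640, 480, 30

def build_pipe_cmd(width, height, fps, outfile):
    """ffmpeg command reading raw rgb24 frames from stdin, so frames can
    be sent one at a time as rendered - no intermediate files on disk."""
    return ["ffmpeg", "-y",
            "-f", "rawvideo",
            "-pix_fmt", "rgb24",
            "-s", f"{width}x{height}",
            "-r", str(fps),
            "-i", "-",               # frames arrive on stdin
            "-c:v", "libx264",
            "-pix_fmt", "yuv420p",
            "-movflags", "+faststart",
            outfile]

def render_frame(i, width, height):
    # hypothetical renderer: a flat gray frame whose brightness
    # varies with the frame index (3 bytes per rgb24 pixel)
    return bytes([i % 256]) * (width * height * 3)

if __name__ == "__main__":
    proc = subprocess.Popen(build_pipe_cmd(WIDTH, HEIGHT, FPS, "out.mp4"),
                            stdin=subprocess.PIPE)
    for i in range(100):
        proc.stdin.write(render_frame(i, WIDTH, HEIGHT))  # one frame per write
    proc.stdin.close()
    proc.wait()
```

Using rawvideo instead of image2pipe also skips the PNG encode/decode round-trip, which matters when renders are slow and frames are large.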