
Media (91)
-
Chuck D with Fine Arts Militia - No Meaning No
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Paul Westerberg - Looking Up in Heaven
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Le Tigre - Fake French
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Thievery Corporation - DC 3000
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Dan the Automator - Relaxation Spa Treatment
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Gilberto Gil - Oslodum
15 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (98)
-
Request to create a channel
12 March 2010
Depending on how the platform is configured, the user may have two different ways of requesting the creation of a channel at their disposal. The first is at the time of registration; the second, after registration, by filling in a request form.
Both methods ask for the same information and work in roughly the same way: the future user must fill in a series of form fields that first of all give the administrators information about (...) -
Support for all types of media
10 April 2011
Unlike many other software packages and modern document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether of type: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (open office, microsoft office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
-
Automated installation script of MediaSPIP
25 April 2011
To overcome the difficulties caused mainly by the installation of server-side software dependencies, an "all-in-one" installation script written in bash was created to make this step easier on a server running a compatible Linux distribution.
To use it, you must have SSH access to your server and a root account, which the script will use to install the dependencies. Contact your hosting provider if you do not have these.
Documentation on how to use this installation script is available here.
The code of this (...)
On other sites (6172)
-
Issues with video frame dropout using Accord.NET VideoFileWriter and FFMPEG
9 January 2018, by David
I am testing out writing video files using the Accord.Video library. I have a WPF project created in Visual Studio 2017, and I have installed Accord.Video.FFMPEG and Accord.Video.VFW via NuGet, along with their dependencies.
I have created a very simple video to test basic file output. However, I am running into some issues. My goal is to be able to output videos with a variable frame rate, because in the future I will be using this code to input images from a webcam device that will then be saved to a video file, and video from webcams typically has variable frame rates.
For now, in this example, I am not inputting video from a webcam, but rather I am generating a simple "moving box" image and outputting the frames to a video file. The box changes color every 20 frames: red, green, blue, yellow, and finally white. I also set the frame rate to 20 fps.
When I use Accord.Video.VFW, the frame rate is correctly set, and all the frames are correctly outputted to the video file. The resulting video looks like this (see the YouTube link): https://youtu.be/K8E9O7bJIbg
This is just a reference, however. I don't intend on using Accord.Video.VFW because it outputs uncompressed data to an AVI file, and it doesn't support variable frame rates. I would like to use Accord.Video.FFMPEG because it is supposed to support variable frame rates.
When I attempt to use the Accord.Video.FFMPEG library, however, the video does not turn out the way I would expect it to look. Here is a YouTube link: https://youtu.be/cW19yQFUsLI
As you can see, in that example, the box remains the first color for a longer amount of time than the other colors. It also never reaches the final color (white). When I inspect the video file, 100 frames were not outputted to the file. There are typically 69 or 73 frames. And the expected frame rate and duration obviously do not match up.
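For reference, a minimal sketch of one way to verify the frame count and duration of the written file, assuming ffprobe is installed and on the PATH (the file name matches the test.mp4 written by the code below); this helper is not part of the original project:

# Hypothetical helper: ask ffprobe to decode the stream and report how many
# frames it actually contains, plus the average frame rate and duration.
import subprocess

def probe(path):
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-count_frames", "-select_streams", "v:0",
         "-show_entries", "stream=nb_read_frames,avg_frame_rate,duration",
         "-of", "default=noprint_wrappers=1", path],
        capture_output=True, text=True, check=True)
    return out.stdout

print(probe("test.mp4"))  # expect nb_read_frames=100 at 20 fps if nothing was dropped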
Here is the code that generates both these videos:
public MainWindow()
{
    InitializeComponent();

    // Writer that produces the uncompressed AVI reference file
    Accord.Video.VFW.AVIWriter avi_writer = new Accord.Video.VFW.AVIWriter();
    avi_writer.FrameRate = 20;
    avi_writer.Open("test2.avi", 640, 480);

    // Writer that produces the FFMPEG-backed MP4 file
    Accord.Video.FFMPEG.VideoFileWriter k = new Accord.Video.FFMPEG.VideoFileWriter();
    k.FrameRate = 20;
    k.Width = 640;
    k.Height = 480;
    k.Open("test.mp4");

    for (int i = 0; i < 100; i++)
    {
        // Timestamp of this frame: 50 ms per frame at 20 fps
        TimeSpan t = new TimeSpan(0, 0, 0, 0, 50 * i);

        var b = new System.Drawing.Bitmap(640, 480);
        var g = Graphics.FromImage(b);

        // Pick a brush depending on elapsed time (the color changes every second, i.e. every 20 frames)
        var br = System.Drawing.Brushes.Blue;
        if (t.TotalMilliseconds < 1000)
            br = System.Drawing.Brushes.Red;
        else if (t.TotalMilliseconds < 2000)
            br = System.Drawing.Brushes.Green;
        else if (t.TotalMilliseconds < 3000)
            br = System.Drawing.Brushes.Blue;
        else if (t.TotalMilliseconds < 4000)
            br = System.Drawing.Brushes.Yellow;
        else
            br = System.Drawing.Brushes.White;

        g.FillRectangle(br, 50 + i, 50, 100, 100);
        System.Console.WriteLine("Frame: " + (i + 1).ToString() + ", Millis: " + t.TotalMilliseconds.ToString());

        #region This is the code in question
        k.WriteVideoFrame(b, t);
        avi_writer.AddFrame(b);
        #endregion
    }

    avi_writer.Close();
    k.Close();
    System.Console.WriteLine("Finished writing video");
}

I have tried changing a few things under the assumption that maybe the "WriteVideoFrame" function isn't able to finish in time, and so I need to slow down the program so it can complete itself. Under that assumption, I have replaced the "WriteVideoFrame" call with the following code:
Task taskA = new Task(() => k.WriteVideoFrame(b, t));
taskA.Start();
taskA.Wait();

And I have tried the following code:
Task.WaitAll(
    Task.Run(() =>
    {
        lock (syncObj)
        {
            k.WriteVideoFrame(b, t);
        }
    }
));

And even just a standard call where I don't specify a timestamp:
k.WriteVideoFrame(b);
None of these work. They all result in something similar.
Any suggestions on getting the WriteVideoFrame function of the Accord.Video.FFMPEG.VideoFileWriter class to work?
Thanks for any and all help!
[edits below]
I have done some more investigating. I still haven’t found a good solution, but here is what I have found so far. After declaring my VideoFileWriter object, I have tried setting up some options for the video.
When I use an H264 codec with the following options, it correctly saves 100 frames at a frame rate of 20 fps; however, any normal media player (both VLC and Windows Media Player) ends up playing a 10-second video instead of a 5-second video. Essentially, it seems like they play it at half speed. Here is the code that gives that result:
k.VideoCodec = Accord.Video.FFMPEG.VideoCodec.H264;
k.VideoOptions["crf"] = "18";
k.VideoOptions["preset"] = "veryfast";
k.VideoOptions["tune"] = "zerolatency";
k.VideoOptions["x264opts"] = "no-mbtree:sliced-threads:sync-lookahead=0";Additionally, if I use an Mpeg4 codec, I get the same "half-speed" result :
k.VideoCodec = Accord.Video.FFMPEG.VideoCodec.Mpeg4;
However, if I use a WMV codec, then it correctly results in 100 frames at 20 fps, and a 5-second video that is correctly played by both media players:
k.VideoCodec = Accord.Video.FFMPEG.VideoCodec.Wmv1;
Although this is good news, this still doesn’t solve the problem because WMV doesn’t support variable frame rates. Also, this still doesn’t answer the question as to why the problem is happening in the first place.
As always, any help would be appreciated!
-
Introducing Crash Analytics for Matomo
-
Combine Audio and Images in Stream
19 December 2017, by SenorContento
I would like to be able to create images and audio on the fly and combine them into an rtmp stream (for Twitch or YouTube). The goal is to accomplish this in Python 3, as that is the language my bot is written in. Bonus points for not having to save to disk.
So far, I have figured out how to stream to rtmp servers using ffmpeg by loading a PNG image and playing it on loop, as well as loading an mp3 and combining them together in the stream. The problem is that I have to load at least one of them from file.
I know I can use Moviepy to create videos, but I cannot figure out whether or not I can stream the video from Moviepy to ffmpeg or directly to rtmp. I think that I have to generate a lot of really short clips and send them, but I want to know if there’s an existing solution.
There's also OpenCV, which I hear can stream to rtmp but cannot handle audio.
A redacted version of an ffmpeg command I have successfully tested with is
ffmpeg -loop 1 -framerate 15 -i ScreenRover.png -i "Song-Stereo.mp3" -c:v libx264 -preset fast -pix_fmt yuv420p -threads 0 -f flv rtmp://SITE-SUCH-AS-TWITCH/.../STREAM-KEY
or
cat Song-Stereo.mp3 | ffmpeg -loop 1 -framerate 15 -i ScreenRover.png -i - -c:v libx264 -preset fast -pix_fmt yuv420p -threads 0 -f flv rtmp://SITE-SUCH-AS-TWITCH/.../STREAM-KEY
I know these commands are not set up properly for smooth streaming; the result manages to screw up both Twitch's and YouTube's players, and I will have to figure out how to fix that.
The problem with this is that I don't think I can stream both the image and the audio at once when creating them on the spot. I have to load one of them from the hard drive. This becomes a problem when trying to react to a command or user chat or anything else that requires live reactions. I also do not want to destroy my hard drive by constantly saving to it.
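For reference, here is a minimal sketch of one possible approach, assuming only that ffmpeg is installed: raw RGB frames are generated in memory and piped to an ffmpeg subprocess over stdin, while the audio is synthesized by ffmpeg's lavfi sine source so nothing has to be read from disk. The RTMP address reuses the redacted placeholder from the commands above, and the frame-drawing loop is just a stand-in for real content:

# Sketch only: pipe raw RGB frames from Python into ffmpeg and stream to RTMP.
# The rtmp URL below is the same placeholder as in the commands above; the
# audio comes from ffmpeg's lavfi sine source instead of an mp3 on disk.
import subprocess
import numpy

WIDTH, HEIGHT, FPS = 640, 480, 15
cmd = [
    "ffmpeg",
    "-f", "rawvideo", "-pix_fmt", "rgb24", "-s", "%dx%d" % (WIDTH, HEIGHT), "-r", str(FPS),
    "-i", "-",                                  # video frames arrive on stdin
    "-f", "lavfi", "-i", "sine=frequency=440",  # generated audio, no file needed
    "-c:v", "libx264", "-preset", "fast", "-pix_fmt", "yuv420p",
    "-g", str(FPS * 2),                         # keyframe every two seconds
    "-c:a", "aac", "-ar", "44100",
    "-f", "flv", "rtmp://SITE-SUCH-AS-TWITCH/.../STREAM-KEY",
]
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)

for i in range(FPS * 10):                       # ten seconds of dummy frames
    frame = numpy.zeros((HEIGHT, WIDTH, 3), dtype=numpy.uint8)
    x = (i * 4) % (WIDTH - 40)
    frame[:, x:x + 40] = 255                    # a moving white bar
    proc.stdin.write(frame.tobytes())

proc.stdin.close()
proc.wait()

Letting ffmpeg generate (or later, mux) the audio keeps the Python side down to a single pipe; feeding two live pipes (video and audio) into one ffmpeg process is considerably trickier.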
As for the Python code, what I have tried so far in order to create a video is the following. This still saves to the hard drive and is not responsive in real time, so it is not very useful to me. The video itself is okay, with the one exception that, as time passes, the clock shown in the QR code and the video's own clock drift farther and farther apart as the video gets closer to the end. I can work around that limitation if it shows up while live streaming.
def make_frame(t):
    img = qrcode.make("Hello! The second is %s!" % t)
    return numpy.array(img.convert("RGB"))

clip = mpy.VideoClip(make_frame, duration=120)
clip.write_gif("test.gif", fps=15)
gifclip = mpy.VideoFileClip("test.gif")
gifclip.set_duration(120).write_videofile("test.mp4", fps=15)

My goal is to be able to produce something along the lines of the pseudo-code below:
original_video = qrcode_generator("I don't know, a clock, pyotp, today's news sources, just anything that can be generated on the fly!")
original_video.overlay_text(0, 0, "This is some sample text, the left two are coordinates, the right three are font, size, and color", Times_New_Roman, 12, Blue)
original_video.add_audio(sine_wave_generator(0, 180, 2))  # frequency min-max, seconds
# NOTICE - I did not add any time measurements to the actual video itself. The whole point is that this is a live stream and not a video clip, so the time frame would be now. The 2 seconds listed above is for our pseudo sine wave generator to know how long the audio clip should be, not for the actual streaming library.
stream.send_to_rtmp_server(original_video)  # Doesn't matter if ffmpeg or some native library

The above example is what I am looking for in terms of video creation in Python and then streaming. I am not trying to create a clip and then stream it later; I am trying to have the program be able to respond to outside events and then update its stream to do whatever it wants. It is sort of like a chat bot, but with video instead of text.
def track_movement(...):
    ...
    return ...

original_video = user_submitted_clip(chat.lastVideoMessage)
original_video.overlay_text(0, 0, "The robot watches the user's movements and puts a blue square around it.", Times_New_Roman, 12, Blue)
original_video.add_audio(sine_wave_generator(0, 180, 2))  # frequency min-max, seconds
# It would be awesome if I could also figure out how to perform advanced actions such as tracking movements or pulling a face out of a clip and then applying effects to it on the fly. I know OpenCV can track movements and I hear that it can work with streams, but I cannot figure out how that works. Any help would be appreciated! Thanks!

Because I forgot to add the imports, here are some useful imports I have in my file!
import pyotp
import qrcode
from io import BytesIO
from moviepy import editor as mpy

The library pyotp is for generating one-time password authenticator codes, qrcode is for the QR codes, BytesIO is used for virtual files, and moviepy is what I used to generate the GIF and MP4. I believe BytesIO might be useful for piping data to the streaming service, but how that happens depends entirely on how data is sent to the service, whether it be ffmpeg over the command line (from subprocess import Popen, PIPE) or a native library.
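As a small illustration of the in-memory idea with the imports above, a QR code for the current one-time password can be built as a numpy frame, or as a PNG held in a BytesIO "virtual file", without touching disk; the base32 secret below is just a sample value:

# Sketch: build a QR frame for the current TOTP code entirely in memory.
from io import BytesIO

import numpy
import pyotp
import qrcode

totp = pyotp.TOTP("JBSWY3DPEHPK3PXP")        # sample secret, not a real credential

img = qrcode.make("Current code: %s" % totp.now())

# As a numpy array, ready to hand to a video-writing loop or an ffmpeg pipe:
frame = numpy.array(img.convert("RGB"))

# Or as a PNG held in a BytesIO "virtual file", still without saving to disk:
buf = BytesIO()
img.save(buf)
print(frame.shape, len(buf.getvalue()))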