
Media (91)
-
DJ Z-trip - Victory Lap : The Obama Mix Pt. 2
15 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Matmos - Action at a Distance
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
DJ Dolores - Oslodum 2004 (includes (cc) sample of “Oslodum” by Gilberto Gil)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Danger Mouse & Jemini - What U Sittin’ On? (starring Cee Lo and Tha Alkaholiks)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Cornelius - Wataridori 2
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
The Rapture - Sister Saviour (Blackstrobe Remix)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (97)
-
Request for the creation of a channel
12 March 2010
Depending on how the platform is configured, the user may have two different ways of requesting the creation of a channel: the first at the moment of registration, the second after registration by filling in a request form.
Both methods ask for the same information and work in much the same way: the prospective user must fill in a series of form fields that first of all give the administrators information about (...)
-
Accepted formats
28 January 2010
The following commands give information about the formats and codecs handled by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
Accepted input video formats
This list is not exhaustive; it highlights the main formats used: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
Possible output video formats
To begin with, we (...)
-
Automated installation script of MediaSPIP
25 April 2011
To overcome the difficulties, mainly due to the installation of server-side software dependencies, an "all-in-one" installation script written in bash was created to make this step easier on a server running a compatible Linux distribution.
To use it you must have SSH access to your server and a root account, which the script uses to install the dependencies. Contact your provider if you do not have these.
Documentation on using this installation script is available here.
The code of this (...)
On other sites (6791)
-
FFMPEG: While decoding video, is it possible to generate the result into a user-provided buffer?
26 October 2023, by cbel
In an ffmpeg video decoding scenario, H264 for example, we typically allocate an AVFrame, decode the compressed data, and then read the result from the data and linesize members of the AVFrame, as in the following code:


// input setting: data and size are a H264 data.
AVPacket avpkt;
av_init_packet(&avpkt);
avpkt.data = const_cast<uint8_t*>(data);
avpkt.size = size;

// decode video: H264 ---> YUV420
AVFrame *picture = avcodec_alloc_frame();
int len = avcodec_decode_video2(context, picture, &got_picture, &avpkt);

We may use the result for other tasks, for example rendering with DirectX9. That is, we prepare buffers (DirectX9 textures) and copy the decoding result into them.



D3DLOCKED_RECT lrY;
D3DLOCKED_RECT lrU;
D3DLOCKED_RECT lrV;
textureY->LockRect(0, &lrY, NULL, 0);
textureU->LockRect(0, &lrU, NULL, 0);
textureV->LockRect(0, &lrV, NULL, 0);

// copy YUV420: picture->data ---> lr.pBits.
my_copy_image_function(picture->data[0], picture->linesize[0], lrY.pBits, lrY.Pitch, width, height);
my_copy_image_function(picture->data[1], picture->linesize[1], lrU.pBits, lrU.Pitch, width / 2, height / 2);
my_copy_image_function(picture->data[2], picture->linesize[2], lrV.pBits, lrV.Pitch, width / 2, height / 2);

This process therefore involves two copies: ffmpeg copies the decoded result into picture->data, and then picture->data is copied into the DirectX9 textures.

My question is: is it possible to reduce this to a single copy? In other words, can we hand our own buffers (pBits, the buffers of the DirectX9 textures) to ffmpeg, so that the decode function writes its result directly into the DirectX9 textures rather than into the buffers of the AVFrame?
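
FFmpeg does expose a hook along these lines: the get_buffer2 callback on AVCodecContext lets the application supply the frame buffers itself, provided the codec advertises AV_CODEC_CAP_DR1 and the buffers satisfy the decoder's alignment and padding requirements. Below is a minimal sketch of such a callback; MyPlanes, my_texture_pool_acquire and my_release_plane are hypothetical stand-ins for the DirectX9 texture locking code, not part of FFmpeg.

#include <libavcodec/avcodec.h>
#include <libavutil/buffer.h>

/* Hypothetical description of three locked planes (e.g. Y/U/V textures). */
typedef struct MyPlanes {
    uint8_t *ptr[3];     /* locked plane pointers, e.g. D3DLOCKED_RECT.pBits */
    int      pitch[3];   /* plane pitches, e.g. D3DLOCKED_RECT.Pitch         */
    size_t   size[3];    /* plane sizes in bytes                             */
    void    *opaque[3];  /* per-plane handle handed back on release          */
} MyPlanes;

/* Hypothetical helpers wrapping the texture pool / LockRect code. */
int  my_texture_pool_acquire(void *pool, int w, int h, MyPlanes *out);
void my_release_plane(void *opaque, uint8_t *data);

static int my_get_buffer2(AVCodecContext *ctx, AVFrame *frame, int flags)
{
    /* Fall back to FFmpeg's own allocator when direct rendering is unsupported. */
    if (!(ctx->codec->capabilities & AV_CODEC_CAP_DR1))
        return avcodec_default_get_buffer2(ctx, frame, flags);

    MyPlanes p;
    if (my_texture_pool_acquire(ctx->opaque, frame->width, frame->height, &p) < 0)
        return AVERROR(ENOMEM);

    for (int i = 0; i < 3; i++) {
        frame->data[i]     = p.ptr[i];
        frame->linesize[i] = p.pitch[i];
        /* Wrap each plane in an AVBufferRef so the decoder's refcounting works;
         * my_release_plane unlocks the texture when the frame is unreferenced. */
        frame->buf[i] = av_buffer_create(p.ptr[i], p.size[i],
                                         my_release_plane, p.opaque[i], 0);
        if (!frame->buf[i])
            return AVERROR(ENOMEM);
    }
    return 0;
}

/* Installed once, before avcodec_open2():
 *   ctx->get_buffer2 = my_get_buffer2;
 *   ctx->opaque      = my_texture_pool;   (hypothetical pool object)
 */

In practice the buffers must also satisfy the decoder's alignment and padding expectations (see avcodec_align_dimensions2), and reference frames may stay in use across several decode calls, so a pool of textures rather than a single one is needed; whether this actually beats the extra memcpy depends on how expensive locking the textures is.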

-
Efficient real-time video stream processing and forwarding with RTMP servers
19 May 2023, by dumbQuestions
I have a scenario where I need to retrieve a video stream from an RTMP server, apply image processing (specifically, adding blur to frames), and then forward the processed stream to another RTMP server (in this case, Twitch).


Currently, I'm using ffmpeg in conjunction with cv2 to retrieve and process the stream. However, this approach introduces significant lag when applying the blur. I'm seeking an alternative method that can achieve the desired result more efficiently. I did attempt to solely rely on ffmpeg for the entire process, but I couldn't find a way to selectively process frames based on a given condition and subsequently transmit only those processed frames.


Is there a more efficient approach or alternative solution that can address this issue and enable real-time video stream processing with minimal lag?


Thanks in advance!


import subprocess
import cv2

def forward_stream(server_url, stream_key, twitch_stream_key):
    get_ffmpeg_command = [...]

    send_ffmpeg_command = [...]

    # Start the FFmpeg process that pulls the source stream
    read_process = subprocess.Popen(get_ffmpeg_command, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

    # Start the FFmpeg process that pushes the processed stream
    send_process = subprocess.Popen(send_ffmpeg_command, stdin=subprocess.PIPE, stderr=subprocess.DEVNULL)

    # Open video capture
    cap = cv2.VideoCapture(f'{server_url}')

    while True:
        # Read the next frame
        ret, frame = cap.read()
        if not ret:
            break

        # Decide whether this frame should be blurred
        should_blur = machine_learning_algorithm(frame)

        # Apply blur if necessary
        if should_blur:
            frame = cv2.blur(frame, (25, 25))

        # Write the raw frame to the sending FFmpeg process
        send_process.stdin.write(frame.tobytes())

    # Release resources
    cap.release()
    send_process.stdin.close()
    send_process.wait()
    read_process.stdout.close()
    read_process.wait()

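For reference, when raw BGR frames are piped between cv2 and FFmpeg in this kind of setup, the two command lists elided above often look roughly like the sketch below. The resolution, frame rate, encoder settings and Twitch ingest URL here are assumptions to adapt, not values taken from the question.

# Hypothetical example commands; width, height, fps and the ingest URL are placeholders.
width, height, fps = 1280, 720, 30

get_ffmpeg_command = [
    'ffmpeg', '-i', server_url,             # pull the source RTMP stream
    '-f', 'rawvideo', '-pix_fmt', 'bgr24',  # raw BGR frames, as cv2 uses
    '-an', 'pipe:1',                        # drop audio, write frames to stdout
]

send_ffmpeg_command = [
    'ffmpeg',
    '-f', 'rawvideo', '-pix_fmt', 'bgr24',  # raw BGR frames arrive on stdin
    '-s', f'{width}x{height}', '-r', str(fps),
    '-i', 'pipe:0',
    '-c:v', 'libx264', '-preset', 'veryfast', '-pix_fmt', 'yuv420p',
    '-f', 'flv', f'rtmp://live.twitch.tv/app/{twitch_stream_key}',
]

One common refinement is to read frames directly from read_process.stdout (fixed-size chunks of width*height*3 bytes) instead of opening a second connection through cv2.VideoCapture.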
-
avcodec/xbmenc: Allow for making UW images
19 January 2021, by Jose Da Silva
avcodec/xbmenc: Allow for making UW images
I've run into some bugs where I was downloading a bunch of data and began seeing weird hiccups. For example, JavaScript promises to allow you to push some very long lines of data, but the hiccups I saw were with data larger than 2k in length (Windows) pushed out of a child process stdout piped into the stdin of the calling parent program.
So much for smooth promises; this was broken and would run into similar problems on a Linux PC with 32k line limits.
The solution was to break the data into chunks smaller than 2k, and then these data hiccups disappeared (Windows PC). It would be expected to be similar for Linux PCs (32k, I think) and other OSes with different sizes.
If the ANSI required minimum needs to be 509 chars or larger (assuming 509+<CR>+<LF>+<0>=512), then 509 was chosen as the shortest worst-case scenario in this patch.
Most small pictures will output looking pretty much the same until you get to about 84 bytes per line (672 pixels wide), where output lines begin to be split. For example, a UW 4K image will exceed a 2k readln and a UW 10K picture approaches an 8k readln.
The purpose of this patch is to ensure that the data remains below the readline limits (of 509 chars), so that programs (like JavaScript) can push data in large chunks without breaking into hiccups because the data length is too long to be pushed cleanly in one go.
Subject: [PATCH 3/3] avcodec/xbmenc: Allow for making UW images
Worst-case ANSI must allow for 509 chars, while Windows allows for 2048 and Linux for 32K line length. This allows an OS with a small readline access limitation to fetch very wide images (created from ffmpeg).
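
The change described above boils down to wrapping the emitted byte list before any output line grows past that 509-character worst case. The C sketch below illustrates the idea only; it is not the actual xbmenc code, and write_xbm_bytes is a hypothetical helper name.

#include <stdio.h>

/* Keep every emitted line below the 509-character worst-case readline
 * limit discussed above; "0xff, " is 6 characters per byte. */
#define MAX_LINE_CHARS 509
#define CHARS_PER_BYTE 6

/* Write the body of an XBM bits[] array, starting a new line whenever
 * the next byte would push the current line past the limit. */
static void write_xbm_bytes(FILE *out, const unsigned char *bits, size_t n)
{
    size_t line_len = 0;

    for (size_t i = 0; i < n; i++) {
        if (line_len + CHARS_PER_BYTE > MAX_LINE_CHARS) {
            fputc('\n', out);            /* wrap before exceeding the limit */
            line_len = 0;
        }
        fprintf(out, "0x%02x%s", bits[i], i + 1 < n ? ", " : "");
        line_len += CHARS_PER_BYTE;
    }
    fputc('\n', out);
}

Emitted between the usual #define name_width / name_height header and the closing "};" of an XBM file, this keeps even ultra-wide images consumable by line-oriented readers on any OS.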