
Media (91)
-
Valkaama DVD Cover Outside
4 October 2011, by
Updated: October 2011
Language: English
Type: Image
-
Valkaama DVD Label
4 October 2011, by
Updated: February 2013
Language: English
Type: Image
-
Valkaama DVD Cover Inside
4 October 2011, by
Updated: October 2011
Language: English
Type: Image
-
1,000,000
27 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Demon Seed
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
The Four of Us are Dying
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
Other articles (78)
-
MediaSPIP v0.2
21 June 2013, by
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources in the standalone version.
As with the previous version, all software dependencies must be installed manually on the server.
If you wish to use this archive for an installation in farm mode, you will also need to make other modifications (...)
-
Making files available
14 April 2011, by
By default, upon initialisation, MediaSPIP does not allow visitors to download the files, whether they are originals or the result of their transformation or encoding. It only allows them to be viewed.
However, it is possible and easy to allow visitors to access these documents in various forms.
All of this happens in the skeleton configuration page. You need to go to the channel's administration area and choose in the navigation (...)
-
Writing a news item
21 June 2013, by
Present the changes to your MediaSPIP, or the news of your projects, on your MediaSPIP using the news section.
In the default spipeo theme of MediaSPIP, news items are displayed at the bottom of the main page, below the editorials.
You can customise the news item creation form.
News item creation form: in the case of a document of type news item, the fields offered by default are: Publication date (customise the publication date) (...)
On other sites (6512)
-
FFMPEG compiled binaries don't run using MinGW
9 June 2015, by Paul Knopf
I am trying to build Windows executables/DLLs for Windows XP, and they are not working. They are the correct architecture, and they run fine on my Windows 8 machine.
I used Dependency Walker to find missing DLLs, and all were present.
Here are the compiled executables I am trying to run.
I ran the Windows build script for ffmpeg.
Here is a dumpbin /headers ffmpeg.exe:
Microsoft (R) COFF/PE Dumper Version 10.00.30319.01
Copyright (C) Microsoft Corporation. All rights reserved.
Dump of file ffmpeg.exe
PE signature found
File Type: EXECUTABLE IMAGE
FILE HEADER VALUES
14C machine (x86)
7 number of sections
51A40 time date stamp Sun Jan 04 15:53:20 1970
0 file pointer to symbol table
0 number of symbols
E0 size of optional header
32F characteristics
Relocations stripped
Executable
Line numbers stripped
Symbols stripped
Application can handle large (>2GB) addresses
32 bit word machine
Debug information stripped
OPTIONAL HEADER VALUES
10B magic # (PE32)
2.25 linker version
41400 size of code
4FA00 size of initialized data
1200 size of uninitialized data
14E0 entry point (004014E0)
1000 base of code
43000 base of data
400000 image base (00400000 to 00456FFF)
1000 section alignment
200 file alignment
4.00 operating system version
1.00 image version
4.00 subsystem version
0 Win32 version
57000 size of image
400 size of headers
597A9 checksum
3 subsystem (Windows CUI)
140 DLL characteristics
Dynamic base
NX compatible
200000 size of stack reserve
1000 size of stack commit
100000 size of heap reserve
1000 size of heap commit
0 loader flags
10 number of directories
0 [ 0] RVA [size] of Export Directory
51000 [ 36F0] RVA [size] of Import Directory
0 [ 0] RVA [size] of Resource Directory
0 [ 0] RVA [size] of Exception Directory
0 [ 0] RVA [size] of Certificates Directory
0 [ 0] RVA [size] of Base Relocation Directory
0 [ 0] RVA [size] of Debug Directory
0 [ 0] RVA [size] of Architecture Directory
0 [ 0] RVA [size] of Global Pointer Directory
56004 [ 18] RVA [size] of Thread Storage Directory
0 [ 0] RVA [size] of Load Configuration Directory
0 [ 0] RVA [size] of Bound Import Directory
517F0 [ 6C4] RVA [size] of Import Address Table Directory
0 [ 0] RVA [size] of Delay Import Directory
0 [ 0] RVA [size] of COM Descriptor Directory
0 [ 0] RVA [size] of Reserved Directory
SECTION HEADER #1
.text name
412BC virtual size
1000 virtual address (00401000 to 004422BB)
41400 size of raw data
400 file pointer to raw data (00000400 to 000417FF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
60500060 flags
Code
Initialized Data
RESERVED - UNKNOWN
RESERVED - UNKNOWN
Execute Read
SECTION HEADER #2
.data name
19C virtual size
43000 virtual address (00443000 to 0044319B)
200 size of raw data
41800 file pointer to raw data (00041800 to 000419FF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
C0700040 flags
Initialized Data
RESERVED - UNKNOWN
RESERVED - UNKNOWN
RESERVED - UNKNOWN
Read Write
SECTION HEADER #3
.rdata name
A7D8 virtual size
44000 virtual address (00444000 to 0044E7D7)
A800 size of raw data
41A00 file pointer to raw data (00041A00 to 0004C1FF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
40700040 flags
Initialized Data
RESERVED - UNKNOWN
RESERVED - UNKNOWN
RESERVED - UNKNOWN
Read Only
SECTION HEADER #4
.bss name
1200 virtual size
4F000 virtual address (0044F000 to 004501FF)
0 size of raw data
0 file pointer to raw data
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
C0700080 flags
Uninitialized Data
RESERVED - UNKNOWN
RESERVED - UNKNOWN
RESERVED - UNKNOWN
Read Write
SECTION HEADER #5
.idata name
36F0 virtual size
51000 virtual address (00451000 to 004546EF)
3800 size of raw data
4C200 file pointer to raw data (0004C200 to 0004F9FF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
C0300040 flags
Initialized Data
RESERVED - UNKNOWN
RESERVED - UNKNOWN
Read Write
SECTION HEADER #6
.CRT name
3C virtual size
55000 virtual address (00455000 to 0045503B)
200 size of raw data
4FA00 file pointer to raw data (0004FA00 to 0004FBFF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
C0300040 flags
Initialized Data
RESERVED - UNKNOWN
RESERVED - UNKNOWN
Read Write
SECTION HEADER #7
.tls name
20 virtual size
56000 virtual address (00456000 to 0045601F)
200 size of raw data
4FC00 file pointer to raw data (0004FC00 to 0004FDFF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
C0300040 flags
Initialized Data
RESERVED - UNKNOWN
RESERVED - UNKNOWN
Read Write
Summary
1000 .CRT
2000 .bss
1000 .data
4000 .idata
B000 .rdata
42000 .text
1000 .tls
When I attempt to run the executable on XP, it just closes. There are no "missing DLL" messages, and there is nothing in the Event Viewer.
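A side note that is not part of the original question: the same header fields dumpbin prints can also be read programmatically, which makes it easier to diff a failing binary against one that does run on XP. The sketch below assumes the third-party pefile package is installed; the file name is only an example.

import pefile

# Parse the PE headers of the binary, roughly mirroring `dumpbin /headers`.
pe = pefile.PE("ffmpeg.exe")

print("Machine: 0x%X" % pe.FILE_HEADER.Machine)  # 0x14C means x86
print("OS version: %d.%d" % (
    pe.OPTIONAL_HEADER.MajorOperatingSystemVersion,
    pe.OPTIONAL_HEADER.MinorOperatingSystemVersion,
))
print("Subsystem version: %d.%d" % (
    pe.OPTIONAL_HEADER.MajorSubsystemVersion,
    pe.OPTIONAL_HEADER.MinorSubsystemVersion,
))

# List the DLLs the executable imports, similar to Dependency Walker's top level.
for entry in pe.DIRECTORY_ENTRY_IMPORT:
    print(entry.dll.decode())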
-
How to render a blend of two videos with alpha channel in Python in real time? [closed]
21 December 2024, by Francesco Calderone
I need to play two videos in real time in Python.
One video is a background video with no alpha channel. I am using H264, but it can be any codec.
The second video is an overlay video. This video, with an alpha channel, needs to be played in real time on top of the first video. I am using Quicktime 444 with an alpha channel but, again, it can be any codec.


In terms of libraries, I tried a combination of cv2 and numpy; I tried pymovie, pyAV, and ffmpeg... so far, all the results have been unsuccessful. When the videos render, the frame rate drops way below 30 FPS, and the resulting stream is glitchy.


I also tried rendering the video without an alpha channel and performing green-screen chroma keying in real time. Needless to say, it was even worse.


What solution can I use?


Here's my attempted code with ffmpeg:


import ffmpeg
import cv2
import numpy as np

def decode_video_stream(video_path, pix_fmt, width, height, fps):
    process = (
        ffmpeg
        .input(video_path)
        .output('pipe:', format='rawvideo', pix_fmt=pix_fmt, s=f'{width}x{height}', r=fps)
        .run_async(pipe_stdout=True, pipe_stderr=True)
    )
    return process

def read_frame(process, width, height, channels):
    frame_size = width * height * channels
    raw_frame = process.stdout.read(frame_size)
    if not raw_frame:
        return None
    frame = np.frombuffer(raw_frame, np.uint8).reshape((height, width, channels))
    return frame

def play_videos_with_alpha(base_video_path, alpha_video_path, resolution=(1280, 720), fps=30):
    width, height = resolution
    frame_time = int(1000 / fps)  # Frame time in milliseconds

    # Initialize FFmpeg decoding processes
    base_process = decode_video_stream(base_video_path, 'rgb24', width, height, fps)
    alpha_process = decode_video_stream(alpha_video_path, 'rgba', width, height, fps)

    cv2.namedWindow("Blended Video", cv2.WINDOW_NORMAL)

    try:
        while True:
            # Read frames
            base_frame = read_frame(base_process, width, height, channels=3)
            alpha_frame = read_frame(alpha_process, width, height, channels=4)

            # Restart processes if end of video is reached
            if base_frame is None:
                base_process.stdout.close()
                base_process = decode_video_stream(base_video_path, 'rgb24', width, height, fps)
                base_frame = read_frame(base_process, width, height, channels=3)

            if alpha_frame is None:
                alpha_process.stdout.close()
                alpha_process = decode_video_stream(alpha_video_path, 'rgba', width, height, fps)
                alpha_frame = read_frame(alpha_process, width, height, channels=4)

            # Separate RGB and alpha channels from alpha video
            rgb_image = cv2.cvtColor(alpha_frame[:, :, :3], cv2.COLOR_RGB2BGR)
            alpha_channel = alpha_frame[:, :, 3] / 255.0  # Normalize alpha

            # Convert base frame to BGR format for blending
            base_image = cv2.cvtColor(base_frame, cv2.COLOR_RGB2BGR)

            # Blend the images
            blended_image = (base_image * (1 - alpha_channel[:, :, None]) + rgb_image * alpha_channel[:, :, None]).astype(np.uint8)

            # Display the result
            cv2.imshow("Blended Video", blended_image)

            if cv2.waitKey(frame_time) & 0xFF == ord('q'):
                break

    except Exception as e:
        print("Error during playback:", e)

    finally:
        # Clean up
        base_process.stdout.close()
        alpha_process.stdout.close()
        cv2.destroyAllWindows()

base_video_path = "test.mp4"   # Background video
alpha_video_path = "test.mov"  # Overlay video
play_videos_with_alpha(base_video_path, alpha_video_path, resolution=(1280, 720), fps=30)



This is so far the version that drops the fewest frames. I've been thinking about threading, or using CUDA, but ideally I want something that runs on pretty much any machine. What would be the least computationally heavy approach, without reducing the frame size (1920 x 1080) and without pre-rendering the blend and exporting a pre-blended file? Is there a way? Maybe I'm getting at it all wrong. I feel lost. Please help. Thanks.
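One direction the question mentions, threading, can be sketched as follows. This is not part of the original post; the helper names and queue size are illustrative assumptions, and it reuses the read_frame function defined above. The idea is to let background threads pull raw frames off the FFmpeg pipes while the main loop only blends and displays, so decoding and display overlap instead of running serially.

import threading
import queue

def frame_reader(process, width, height, channels, out_queue):
    # Read raw frames from the FFmpeg pipe in a background thread and hand
    # them to the display loop through a bounded queue (None signals EOF).
    while True:
        frame = read_frame(process, width, height, channels)
        out_queue.put(frame)  # blocks when the queue is full, throttling the reader
        if frame is None:
            break

def start_reader(process, width, height, channels, maxsize=4):
    q = queue.Queue(maxsize=maxsize)
    t = threading.Thread(
        target=frame_reader,
        args=(process, width, height, channels, q),
        daemon=True,
    )
    t.start()
    return q

# Usage sketch inside play_videos_with_alpha:
#   base_queue = start_reader(base_process, width, height, 3)
#   alpha_queue = start_reader(alpha_process, width, height, 4)
#   base_frame = base_queue.get()
#   alpha_frame = alpha_queue.get()

Since the heavy decoding happens inside the FFmpeg subprocesses, the Python reader threads mostly block on pipe reads (which releases the GIL), so the blend loop is less likely to be starved while waiting for the next frame.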


-
Trim Overlay memory issue
4 July 2019, by Костянтин Тюртюбек
Greetings fellow FFmpeg users,
We are experiencing a strange memory leak issue (or maybe we are just doing something really wrong) and need some directions on how to debug it.
What we are trying to achieve:
Process a conference recording that includes multiple user streams, each in its own separate file (all files are mp4/opus).
- Make a dynamic scene from a set of recordings, based on their volume level at a given point in time.
- The scene must include two parts: a smaller grid of all the participants' videos and a bigger grid of the people currently talking. Something like Google Hangouts or Skype does in their applications.
What went wrong:
- The memory footprint unpredictably skyrockets for some reason during montage.
What we are using:
A first FFmpeg command that reads a filter_complex_script from a file and adds a drawbox as a talking indication on each video source file when its volume is over a set threshold.
A second FFmpeg command that reads a filter_complex_script from a file and:
- takes an input file (using 0:v),
- trims the part of it when the user was talking,
- scales it according to the number of concurrently talking users,
- pads it to that resolution (in case the user's video is smaller).
filter_complex command using SELECT:
[0]select='between(t, 1, 2)', scale=762:428:force_original_aspect_ratio=decrease,pad=763:429:(ow-iw)/2:(oh-ih)/2[stream-0-workspace-scale-1-1];
[block-2-grid][stream-0-workspace-scale-1-1]
overlay=repeatlast=1:shortest=0:x=10:y=316:enable='between(t, 1, 2)'
[block-2-workspace-1];

filter_complex command using TRIM:
[input-file-tag]
trim=start=#{start}:duration=#{duration},
setpts=PTS-STARTPTS,
scale=#{w-1}:#{h-1}:force_original_aspect_ratio=decrease,
pad=#{w}:#{h}:(ow-iw)/2:(oh-ih)/2
[input-file-trimmed];
[previous-block-tag]
overlay=repeatlast=1:shortest=0:x=#{x}:y=#{y}:enable='between(t, #{from}, #{to})'
[next-block-tag]

We have tried going the TRIM command way and the SELECT command way. The problem is, both take insane amounts of RAM during execution.
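As a rough illustration only (not part of the original post), a filter_complex_script of the TRIM variety could be generated along these lines; the function name, the interval fields, and the label scheme are all placeholder assumptions standing in for the #{...} template values above.

def build_trim_overlay_script(intervals, out_path="filter_complex_script.txt"):
    # `intervals` is a list of dicts with placeholder keys:
    #   input_index, start, duration, w, h, x, y, from_t, to_t
    # Each interval produces one trim/scale/pad chain plus one overlay step.
    if not intervals:
        raise ValueError("need at least one interval")
    lines = []
    prev_tag = "base"  # label of the block everything is overlaid onto
    for i, iv in enumerate(intervals):
        trimmed = f"trimmed{i}"
        next_tag = f"block{i}"
        lines.append(
            f"[{iv['input_index']}:v]"
            f"trim=start={iv['start']}:duration={iv['duration']},"
            f"setpts=PTS-STARTPTS,"
            f"scale={iv['w'] - 1}:{iv['h'] - 1}:force_original_aspect_ratio=decrease,"
            f"pad={iv['w']}:{iv['h']}:(ow-iw)/2:(oh-ih)/2"
            f"[{trimmed}];"
        )
        lines.append(
            f"[{prev_tag}][{trimmed}]"
            f"overlay=repeatlast=1:shortest=0:x={iv['x']}:y={iv['y']}:"
            f"enable='between(t,{iv['from_t']},{iv['to_t']})'"
            f"[{next_tag}];"
        )
        prev_tag = next_tag
    # Drop the trailing ';' so the last label is the final output of the graph.
    lines[-1] = lines[-1].rstrip(";")
    with open(out_path, "w") as f:
        f.write("\n".join(lines))
    return prev_tag  # label to select with -map "[...]" in the ffmpeg command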
Examples and more description:
- Let's assume that only two of the five inputs have a volume above a certain threshold from second two to five.
- We are trying to display only them, according to some overlay math.
- Cropped commands in human-readable form: https://pastebin.com/YwrnRgnA
- The full FFmpeg command is way too long to read through, which is the reason we started using filter_complex_script and loading it from a file.
- Sometimes one block of a video conference may have up to 300+ intermediate overlays, which leads to the memory issue described. We were expecting the memory footprint to be similar to the number of input files, or maybe two to three times higher. However, we reach 15 GB of RAM usage within the first two minutes of montage, while the input files are no bigger than 200 MB.
What have we done in terms of debugging:
- We had been using split at first, but quickly figured out that split does in fact copy each input and load it into memory, so we had to ditch that approach.
- As a matter of fact, we moved to using the input files themselves, so the problem lies elsewhere.
- To clarify, we have split our ffmpeg command into two separate ones. The first one overlays the talking-box animation using drawbox, as well as the user avatar and name. It outputs new video files which we then use in the command described above as direct input file tags, like 0:v, 1:v etc.
Thank you for taking the time to read through our issue.
We sure hope that you can help us narrow it down.
Please feel free to ask for any additional information or descriptions if needed.
Have a good day!