
Media (1)
-
The Pirate Bay from Belgium
1 April 2013
Updated: April 2013
Language: French
Type: Image
Other articles (107)
-
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources, as a standalone version.
For a working installation, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, further modifications will also be needed (...)
-
Sounds
15 May 2013
-
The plugin: Mutualisation management
2 March 2010
The mutualisation management plugin makes it possible to manage the various MediaSPIP channels from a master site. Its goal is to provide a pure SPIP solution to replace the old one.
Basic installation
Install the SPIP files on the server.
Then add the "mutualisation" plugin at the root of the site, as described here.
Customise the central mes_options.php file as you wish. As an example, here is the one used by the mediaspip.net platform:
<?php (...)
On other sites (8818)
-
Can I use the file buffer or stream as input for fluent-ffmpeg? I am trying to avoid saving the video locally just to get its path before removing it
22 April 2023, by Moath Thawahreh
I receive the file through an API. I tried to use file.buffer as the FFmpeg input, but it did not work, so I had to save the video locally first, process that path, and remove the saved video afterwards.
I find it hard to believe there is no other way to solve this. I have been looking for solutions and workarounds, but everything I found treats the FFmpeg input as a path.


I would love a solution that keeps using fluent-ffmpeg, because it has some other great features, but I won't mind suggestions for compressing the video with a different approach if it is more efficient.


Again, my code below works fine, but I have to save the video and then remove it; I am hoping for a more efficient solution:


fs.writeFileSync('temp.mp4', file.buffer);

// Resize the temporary file using ffmpeg
ffmpeg('temp.mp4') // I tried passing file.buffer as a readable stream here, but it only accepted a path
  .format('mp4')
  .size('50%')
  .save('resized.mp4')
  .on('end', async () => {
    // Upload the resized file to Firebase
    const resizedFileStream = bucket.file(`video/${uniqueId}`).createWriteStream();
    fs.createReadStream('resized.mp4').pipe(resizedFileStream);

    await new Promise<void>((resolve, reject) => {
      resizedFileStream
        .on('finish', () => {
          // Remove the local files after they have been uploaded
          fs.unlinkSync('temp.mp4');
          fs.unlinkSync('resized.mp4');
          resolve();
        })
        .on('error', reject);
    });

    // Get the URL of the uploaded resized version
    const resizedFile = bucket.file(`video/${uniqueId}`);
    const url = await resizedFile.getSignedUrl({
      action: 'read',
      expires: '03-17-2025', // Change this to a reasonable expiration date
    });

    console.log('Resized file uploaded successfully.');
  })
  .on('error', (err) => {
    console.log('An error occurred: ' + err.message);
  });
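For reference, fluent-ffmpeg does accept a Readable stream as its input (ffmpeg(stream) or .input(stream)), so one possible workaround is to wrap file.buffer in a stream and pipe the resized output straight into the Firebase write stream, with no temporary files at all. The sketch below is only a rough outline, reusing the file, bucket and uniqueId from the snippet above inside the same async handler: MP4 written to a pipe needs fragmented-MP4 flags, and MP4 read from a pipe can fail when the source's moov atom sits at the end of the file, so it may not work for every upload.

import ffmpeg from 'fluent-ffmpeg';
import { Readable } from 'stream';

// Rough sketch: feed the uploaded buffer to ffmpeg as a stream and pipe the
// resized result directly into Firebase Storage, avoiding temp files.
const inputStream = Readable.from(file.buffer); // file, bucket, uniqueId as in the snippet above
const uploadStream = bucket.file(`video/${uniqueId}`).createWriteStream();

await new Promise<void>((resolve, reject) => {
  ffmpeg(inputStream)                 // fluent-ffmpeg accepts a Readable here
    .format('mp4')
    .size('50%')
    // MP4 sent to a pipe cannot be seeked, so request a fragmented MP4
    .outputOptions('-movflags frag_keyframe+empty_moov')
    .on('error', reject)
    .pipe(uploadStream);              // stream the output straight to the bucket

  uploadStream.on('finish', resolve).on('error', reject);
});

If the uploaded container turns out not to be pipe-friendly, keeping the temporary input file but still piping the resized output to the bucket would at least remove the resized.mp4 intermediate.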


-
FFmpeg get frame rate
22 September 2021, by zhin dins
I have several images and I am displaying each of them for 78.7 ms, to recreate an 80s video effect. However, I cannot work out the correct number of milliseconds, and the images are out of sync with the original video.


I dumped the video to images with this command: ffmpeg -i *.mp4 the80effect/img-%d.jpg. I now have 48622 frames, and the video's frame rate is 24 fps.


So 48622 / 24 ≈ 2025. I cannot use 2025 ms, since the images would load very slowly, and the approximate value I ended up with is 78.7 ms per frame/image.


How can I find the correct value? The video duration is 2026 seconds. I have tried all the math I can think of, but I'm failing. How many images (one frame) per millisecond? Could you help me? Thank you.
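For what it's worth, here is a quick check of the arithmetic, a sketch based only on the numbers quoted above: at a constant frame rate the display time per frame is 1000 ms divided by the frame rate, and the frame count divided by the duration should give back roughly that frame rate.

// Sketch: derive the per-frame display time from the figures in the question.
const totalFrames = 48622;      // frames produced by the ffmpeg dump
const durationSeconds = 2026;   // reported duration of the source video
const nominalFps = 24;          // frame rate of the video

const impliedFps = totalFrames / durationSeconds; // ≈ 24.0, consistent with the nominal rate
const msPerFrame = 1000 / nominalFps;             // ≈ 41.7 ms per image

console.log(impliedFps.toFixed(2), msPerFrame.toFixed(1));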


-
NumPy array of a video changes from the original after writing into the same video
29 March 2021, by Rashiq
I have a video (test.mkv) that I have converted into a 4D NumPy array of shape (frame, height, width, color_channel). I have even managed to convert that array back into the same video (test_2.mkv) without altering anything. However, after reading this new test_2.mkv back into a new NumPy array, the first video's array differs from the second video's: their hashes don't match and numpy.array_equal() returns False. I have tried both python-ffmpeg and scikit-video but cannot get the arrays to match.

Python-ffmpeg attempt:


import ffmpeg
import numpy as np
import hashlib

file_name = 'test.mkv'

# Get video dimensions and framerate
probe = ffmpeg.probe(file_name)
video_stream = next((stream for stream in probe['streams'] if stream['codec_type'] == 'video'), None)
width = int(video_stream['width'])
height = int(video_stream['height'])
frame_rate = video_stream['avg_frame_rate']

# Read video into buffer
out, error = (
 ffmpeg
 .input(file_name, threads=120)
 .output("pipe:", format='rawvideo', pix_fmt='rgb24')
 .run(capture_stdout=True)
)

# Convert video buffer to array
video = (
 np
 .frombuffer(out, np.uint8)
 .reshape([-1, height, width, 3])
)

# Convert array to buffer
video_buffer = (
 np.ndarray
 .flatten(video)
 .tobytes()
)

# Write buffer back into a video
process = (
 ffmpeg
 .input('pipe:', format='rawvideo', s='{}x{}'.format(width, height))
 .output("test_2.mkv", r=frame_rate)
 .overwrite_output()
 .run_async(pipe_stdin=True)
)
process.communicate(input=video_buffer)

# Read the newly written video
out_2, error = (
 ffmpeg
 .input("test_2.mkv", threads=40)
 .output("pipe:", format='rawvideo', pix_fmt='rgb24')
 .run(capture_stdout=True)
)

# Convert new video into array
video_2 = (
 np
 .frombuffer(out_2, np.uint8)
 .reshape([-1, height, width, 3])
)

# Video dimensions change
print(f'{video.shape} vs {video_2.shape}') # (844, 1080, 608, 3) vs (2025, 1080, 608, 3)
print(f'{np.array_equal(video, video_2)}') # False

# Hashes don't match
print(hashlib.sha256(bytes(video_2)).digest()) # b'\x88\x00\xc8\x0ed\x84!\x01\x9e\x08 \xd0U\x9a(\x02\x0b-\xeeA\xecU\xf7\xad0xa\x9e\\\xbck\xc3'
print(hashlib.sha256(bytes(video)).digest()) # b'\x9d\xc1\x07xh\x1b\x04I\xed\x906\xe57\xba\xf3\xf1k\x08\xfa\xf1\xfaM\x9a\xcf\xa9\t8\xf0\xc9\t\xa9\xb7'



Scikit-video attempt:


import skvideo.io as sk
import numpy as np
import hashlib

video_data = sk.vread('test.mkv')

sk.vwrite('test_2_ski.mkv', video_data)

video_data_2 = sk.vread('test_2_ski.mkv')

# Dimensions match but...
print(video_data.shape) # (844, 1080, 608, 3)
print(video_data_2.shape) # (844, 1080, 608, 3)

# ...array elements don't
print(np.array_equal(video_data, video_data_2)) # False

# Hashes don't match either
print(hashlib.sha256(bytes(video_data_2)).digest()) # b'\x8b?]\x8epD:\xd9B\x14\xc7\xba\xect\x15G\xfaRP\xde\xad&EC\x15\xc3\x07\n{a[\x80'
print(hashlib.sha256(bytes(video_data)).digest()) # b'\x9d\xc1\x07xh\x1b\x04I\xed\x906\xe57\xba\xf3\xf1k\x08\xfa\xf1\xfaM\x9a\xcf\xa9\t8\xf0\xc9\t\xa9\xb7'



I don't understand where I'm going wrong, and neither library's documentation covers this particular task. Any help is appreciated. Thank you.