
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011
Updated: July 2013
Language: English
Type: Text
Other articles (46)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is version 0.2 or later. If needed, contact your MediaSPIP administrator to find out.
-
MediaSPIP Core: Configuration
9 November 2010
MediaSPIP Core provides three configuration pages by default (these pages rely on the CFG configuration plugin): a page for the general configuration of the template; a page for the configuration of the site's home page; and a page for the configuration of sections.
It also provides an additional page that only appears when certain plugins are enabled, allowing you to control their display and their specific features (...) -
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded as MP4, OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded as MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed to extract the data needed for recognition by search engines, and is then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
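A conversion step of the kind described above can be sketched as a set of ffmpeg invocations. The file names, codecs and the absence of any bitrate tuning below are illustrative assumptions, not MediaSPIP's actual settings:

```python
# Sketch of the transcoding step described above: one uploaded video is
# converted into the three web-friendly formats. All names and encoder
# choices here are illustrative assumptions.

def conversion_commands(source, basename):
    """Build ffmpeg command lines for MP4, WebM and OGV outputs."""
    return [
        ["ffmpeg", "-i", source, "-c:v", "libx264", "-c:a", "aac", f"{basename}.mp4"],
        ["ffmpeg", "-i", source, "-c:v", "libvpx", "-c:a", "libvorbis", f"{basename}.webm"],
        ["ffmpeg", "-i", source, "-c:v", "libtheora", "-c:a", "libvorbis", f"{basename}.ogv"],
    ]

for cmd in conversion_commands("upload.mov", "out"):
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually encode
```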
On other sites (7141)
-
Seeking CLI Tool for Creating Text Animations with Easing Curves [closed]
15 November 2023, by anonymous-dev
I'm working on a video project where I need to animate text using various easing curves for smooth, dynamic transitions, driven from the terminal. Specifically, I'm looking to apply the following easing curves to text animations:


bounceIn
bounceInOut
bounceOut
decelerate
ease
easeIn
easeInBack
easeInCirc
easeInCubic
easeInExpo
easeInOut
easeInOutBack
easeInOutCirc
easeInOutCubic
easeInOutCubicEmphasized
easeInOutExpo
easeInOutQuad
easeInOutQuart
easeInOutQuint
easeInOutSine
easeInQuad
easeInQuart
easeInQuint
easeInSine
easeInToLinear
easeOut
easeOutBack
easeOutCirc
easeOutCubic
easeOutExpo
easeOutQuad
easeOutQuart
easeOutQuint
easeOutSine
elasticIn
elasticInOut
elasticOut
fastEaseInToSlowEaseOut
fastLinearToSlowEaseIn
fastOutSlowIn
linearToEaseOut
slowMiddle



My initial thought was to use ffmpeg for this task; however, it appears that ffmpeg may not support these advanced easing curves for text animation.


I am seeking recommendations for a command-line interface (CLI) tool that can handle these types of animations.


Key requirements include:


- Easing curve support: the tool should support a wide range of easing curves, as listed above.
- Efficiency: the ability to render animations quickly, preferably with performance close to what I can achieve with ffmpeg filters.
- Direct rendering: ideally, the tool should render animations in one pass, without writing each individual frame to disk.
- Transformations: it should work with transformations such as translate, scale and rotate; for example, text translating from a to b with an easing curve applied to the transition.


I looked into ImageMagick, but it seems more suited for frame-by-frame image processing, which is not efficient for my needs.


Could anyone suggest a CLI tool that fits these criteria? Or is there a way to extend ffmpeg's capabilities to achieve these animations?
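For context, most of the curves listed above have simple closed-form definitions, so their values can be precomputed and baked into whatever tool does the rendering. A short Python sketch of two of them (standard formulas; the 0-to-500 pixel translation is just an assumed example transform):

```python
import math

# Closed-form definitions of two of the easing curves listed above.
# t runs from 0.0 (start of the transition) to 1.0 (end).

def ease_out_cubic(t):
    return 1 - (1 - t) ** 3

def ease_in_out_sine(t):
    return -(math.cos(math.pi * t) - 1) / 2

# Example: horizontal text position moving from x=0 to x=500.
def x_position(t, start=0, end=500, curve=ease_out_cubic):
    return start + (end - start) * curve(t)

print(x_position(0.0))  # 0.0
print(x_position(1.0))  # 500.0
```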


-
TypeError : must be real number, not NoneType using spyder anaconda
13 August 2023, by faisal2k

import moviepy.editor as mp
import tkinter as tk
from tkinter import filedialog
import math
from PIL import Image
import numpy


def zoom_in_effect(clip, zoom_ratio=0.02):
    def effect(get_frame, t):
        img = Image.fromarray(get_frame(t))
        base_size = img.size

        new_size = [
            math.ceil(img.size[0] * (1 + (zoom_ratio * t))),
            math.ceil(img.size[1] * (1 + (zoom_ratio * t)))
        ]

        # The new dimensions must be even.
        new_size[0] = new_size[0] + (new_size[0] % 2)
        new_size[1] = new_size[1] + (new_size[1] % 2)

        img = img.resize(new_size, Image.LANCZOS)

        x = math.ceil((new_size[0] - base_size[0]) / 2)
        y = math.ceil((new_size[1] - base_size[1]) / 2)

        img = img.crop([
            x, y, new_size[0] - x, new_size[1] - y
        ]).resize(base_size, Image.LANCZOS)

        # result = numpy.array(img)
        result = numpy.array(img, dtype=numpy.uint8)

        img.close()

        return result

    return clip.fl(effect)


def make_center_video():
    size = (1080, 1080)

    audio_file = '/home/faisal/pythonfiles/audio/tts_voice.wav'
    audio = mp.AudioFileClip(audio_file)

    root = tk.Tk()
    root.withdraw()

    print("waiting for Image Selection....")

    img = filedialog.askopenfilename()

    slide = mp.ImageClip(img).set_fps(29).set_duration(audio.duration).resize(size)
    slide = zoom_in_effect(slide, 0.02)
    slide.write_videofile('/home/faisal/pythonfiles/videos/zoom-short.mp4', codec='libx264', fps=29)

    size = (600, 600)

    slide = mp.ImageClip(img).set_fps(29).set_duration(audio.duration).resize(size)
    slide = zoom_in_effect(slide, 0.02)
    slide.write_videofile('/home/faisal/pythonfiles/videos/zoom-wide.mp4', codec='libx264', fps=29)


import traceback

try:
    make_center_video()
except Exception as e:
    traceback.print_exc()
    print(f"An error occurred: {e}")



I'm trying to make a zoom video from an image, but I'm getting:


TypeError: must be real number, not NoneType



It used to run, but I don't remember whether I updated numpy, ffmpeg, or anything else that is now causing the error. I have tried the code on Python 3.10 and 3.11 and get the same error in both. I was previously running it on Python 3.10.


An error occurred: must be real number, not NoneType
Traceback (most recent call last):
  File "/home/faisal/pythonfiles/code/zoom_video.py", line 76, in <module>
    make_center_video()
  File "/home/faisal/pythonfiles/code/zoom_video.py", line 61, in make_center_video
    slide.write_videofile('/home/faisal/pythonfiles/videos/zoom-short.mp4',codec='libx264', fps=29)
  File "/home/faisal/anaconda3/envs/py-310/lib/python3.10/site-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/home/faisal/anaconda3/envs/py-310/lib/python3.10/site-packages/moviepy/decorators.py", line 54, in requires_duration
    return f(clip, *a, **k)
  File "/home/faisal/anaconda3/envs/py-310/lib/python3.10/site-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/home/faisal/anaconda3/envs/py-310/lib/python3.10/site-packages/moviepy/decorators.py", line 135, in use_clip_fps_by_default
    return f(clip, *new_a, **new_kw)
  File "/home/faisal/anaconda3/envs/py-310/lib/python3.10/site-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/home/faisal/anaconda3/envs/py-310/lib/python3.10/site-packages/moviepy/decorators.py", line 22, in convert_masks_to_RGB
    return f(clip, *a, **k)
  File "/home/faisal/anaconda3/envs/py-310/lib/python3.10/site-packages/moviepy/video/VideoClip.py", line 300, in write_videofile
    ffmpeg_write_video(self, filename, fps, codec,
  File "/home/faisal/anaconda3/envs/py-310/lib/python3.10/site-packages/moviepy/video/io/ffmpeg_writer.py", line 213, in ffmpeg_write_video
    with FFMPEG_VideoWriter(filename, clip.size, fps, codec = codec,
  File "/home/faisal/anaconda3/envs/py-310/lib/python3.10/site-packages/moviepy/video/io/ffmpeg_writer.py", line 88, in __init__
    '-r', '%.02f' % fps,
TypeError: must be real number, not NoneType

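For reference, the statement the traceback ends on ('-r', '%.02f' % fps in moviepy's ffmpeg_writer.py) raises exactly this error whenever the fps value it receives is None. The snippet below only demonstrates that failure mode in isolation; it is not a fix for moviepy:

```python
# The failing line in moviepy's ffmpeg_writer is essentially:
#     '-r', '%.02f' % fps
# which raises this TypeError whenever fps ends up as None.

def format_rate(fps):
    """Mimic the '-r' argument formatting done by the ffmpeg writer."""
    return '%.02f' % fps

print(format_rate(29))  # '29.00'

try:
    format_rate(None)
except TypeError as e:
    print(e)  # must be real number, not NoneType
```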

-
JavaScript MediaSource && ffmpeg chunks
17 May 2023, by OmriHalifa
I have written the following code for a player that receives chunks sent by ffmpeg through stdout and displays them using MediaSource:


index.js (the server handling this request)


const express = require('express')
const app = express()
const port = 4545
const cp = require('child_process')
const cors = require('cors')
const { Readable } = require('stream');


app.use(cors())

app.get('/startRecording', (req, res) => {
  const ffmpeg = cp.spawn('ffmpeg', ['-f', 'dshow', '-i', 'video=HP Wide Vision HD Camera', '-profile:v', 'high', '-pix_fmt', 'yuvj420p', '-level:v', '4.1', '-preset', 'ultrafast', '-tune', 'zerolatency', '-vcodec', 'libx264', '-r', '10', '-b:v', '512k', '-s', '640x360', '-acodec', 'aac', '-ac', '2', '-ab', '32k', '-ar', '44100', '-f', 'mpegts', '-flush_packets', '0', '-' /*'udp://235.235.235.235:12345?pkt_size=1316'*/ ]);

  ffmpeg.stdout.on('data', (data) => {
    //console.log(`stdout: ${data}`);
    res.write(data)
  });

  ffmpeg.stderr.on('data', (data) => {
    const byteData = Buffer.from(data, 'utf8'); // Replace with your actual byte data
    const byteStream = new Readable();
    byteStream.push(byteData);
    byteStream.push(null);
    const encoding = 'utf8';
    let text = '';
    byteStream.on('data', (chunk) => {
      text += chunk.toString(encoding);
    });

    byteStream.on('end', () => {
      console.log(text); // Output the converted text
    });

    //console.log({data})
    //res.write(data)
  });

  ffmpeg.on('close', (code) => {
    console.log(`child process exited with code ${code}`);
  });
})

app.listen(port, () => {
  console.log(`Video's Server listening on port ${port}`);
});



App.js (the React player side):


import { useEffect } from 'react';

function App() {
  async function transcode() {
    const mediaSource = new MediaSource();
    const videoElement = document.getElementById('videoElement');
    videoElement.src = URL.createObjectURL(mediaSource);

    mediaSource.addEventListener('sourceopen', async () => {
      console.log('MediaSource open');
      const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42c01e"');
      try {
        const response = await fetch('http://localhost:4545/startRecording');
        const reader = response.body.getReader();

        reader.read().then(async function processText({ done, value }) {
          if (done) {
            console.log('Stream complete');
            return;
          }

          console.log("B4 append", videoElement);
          await sourceBuffer.appendBuffer(value);
          console.log("after append", value);
          // Display the contents of the sourceBuffer
          sourceBuffer.addEventListener('updateend', function (e) {
            if (!sourceBuffer.updating && mediaSource.readyState === 'open') {
              mediaSource.endOfStream();
            }
          });

          // Call next read and repeat the process
          return reader.read().then(processText);
        });
      } catch (error) {
        console.error(error);
      }
    });

    console.log("B4 play");
    await videoElement.play();
    console.log("after play");
  }

  useEffect(() => {}, []);

  return (
    <div className="App">
      <div>
        <video id="videoElement"></video>
      </div>
      <button onClick={transcode}>start streaming</button>
    </div>
  );
}

export default App;




This is what I get:


The chunks are being received and passed into the Uint8Array correctly, but the video is not displayed. What could be causing this, and how can I correct it?
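One relevant detail: a SourceBuffer created with 'video/mp4; codecs="avc1.42c01e"' expects fragmented MP4 (the ISO BMFF byte-stream format), while the server above tells ffmpeg to emit MPEG-TS ('-f', 'mpegts'), which MediaSource will not accept for that MIME type. A sketch of the kind of output arguments that produce a fragmented MP4 stream instead, written here as a Python argument list for readability (illustrative and untested against this exact setup):

```python
# Output-side ffmpeg arguments producing fragmented MP4 on stdout,
# which matches what an MSE 'video/mp4' SourceBuffer expects.
# (Illustrative sketch; input-side arguments omitted.)

def fragmented_mp4_args():
    return [
        '-f', 'mp4',
        '-movflags', 'frag_keyframe+empty_moov+default_base_moof',
        '-',  # write to stdout
    ]

print(' '.join(fragmented_mp4_args()))
```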