
Other articles (82)
-
Videos
21 April 2011
Like "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 <video> tag.
One drawback of this tag is that it is not recognised correctly by some browsers (Internet Explorer, to name one), and that each browser natively supports only certain video formats.
Its main advantage is that video playback is handled natively by the browser, which removes the need for Flash and (...)
-
Use, discuss, criticize
13 April 2011
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.
On other sites (4967)
-
avcodec/mpegvideo_dec: Don't use MotionEstContext as scratch space
30 October 2022, by Andreas Rheinhardt
avcodec/mpegvideo_dec: Don't use MotionEstContext as scratch space
Decoders that might use quarter pixel motion estimation
(namely MPEG-4 as well as the VC-1 family) currently
use MpegEncContext.me.qpel_(put|avg) as scratch space
for pointers to arrays of function pointers.
(MotionEstContext contains such pointers as it supports
quarter pixel motion estimation.) The MotionEstContext
is unused apart from this for the decoding part of
mpegvideo.
Using the context at all for decoding is actually
unnecessary and easily avoided: All codecs with
quarter pixels set me.qpel_avg to qdsp.avg_qpel_pixels_tab,
so one can just unconditionally use this in ff_mpv_reconstruct_mb().
MPEG-4 sets qpel_put to qdsp.put_qpel_pixels_tab
or to qdsp.put_no_rnd_qpel_pixels_tab based upon
whether the current frame is a b-frame with no_rounding
or not, while the VC-1-based decoders set it to
qdsp.put_qpel_pixels_tab unconditionally. Given
that no_rounding is always zero for VC-1, using
the same check for VC-1 as well as for MPEG-4 would work.
Since ff_mpv_reconstruct_mb() already has exactly
the right check (for hpeldsp), it can simply be reused.
(This change will result in ff_mpv_motion() receiving
a pointer to an array of NULL function pointers instead
of a NULL pointer for codecs without qpeldsp (like MPEG-1/2).
It doesn't matter.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
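To make the table selection above concrete, here is a toy Python model of the check the message describes (a sketch only: the names mirror the commit text, and the real implementation is C inside FFmpeg's ff_mpv_reconstruct_mb()):

def select_qpel_tables(qdsp, is_b_frame, no_rounding):
    # Toy model of the selection described above; names mirror the
    # commit message, not FFmpeg's actual C code.
    # avg is the same for every qpel codec, so it needs no condition.
    avg = qdsp.avg_qpel_pixels_tab
    # MPEG-4 uses the no-rounding put table for b-frames with
    # no_rounding set; VC-1 always has no_rounding == 0, so the same
    # check works unchanged for both codec families.
    if is_b_frame and no_rounding:
        put = qdsp.put_no_rnd_qpel_pixels_tab
    else:
        put = qdsp.put_qpel_pixels_tab
    return put, avg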
-
Stuck installing a voice cloner via Python (module not found)
25 November 2023, by Wimmah
I use Python 3.11.5.


As a great Python n00b I'm entering this forum because I'm stuck installing a voice cloner (for personal use, to do a funny trick for X-mas with my family). It's this tool that I'm trying to install: https://github.com/CorentinJ/Real-Time-Voice-Cloning


With a little help from ChatGPT I came quite far, but for some reason the downloaded datasets can't be found. The instructions of the tool state:


[Image: install instructions from the GitHub README]
So my tree looks like this:


(base) willem@willems-air Voice cloner % tree
.
├── demo_cli.py
├── demo_toolbox.py
├── encoder_preprocess.py
├── encoder_train.py
├── saved_models
│   └── default
│       ├── encoder.pt
│       ├── synthesizer.pt
│       └── vocoder.pt
├── synthesizer_preprocess_audio.py
├── synthesizer_preprocess_embeds.py
├── synthesizer_train.py
└── vocoder_train.py

3 directories, 11 files



However, when I give the command to execute the demo, I get the message that a needed module can't be found:


(base) willem@willems-air Voice cloner % python demo_cli.py
Traceback (most recent call last):
 File "/Users/willem/Desktop/Voice cloner/demo_cli.py", line 10, in <module>
 from encoder import inference as encoder
ModuleNotFoundError: No module named 'encoder'
</module>


I built a tree that (for me) looks in line with the installation instructions... (and of course I downloaded the modules without any errors).
Here are also the first lines of demo_cli.py, where you can also see the path:


import argparse
import os
from pathlib import Path

import librosa
import numpy as np
import soundfile as sf
import torch

from encoder import inference as encoder
from encoder.params_model import model_embedding_size as speaker_embedding_size
from synthesizer.inference import Synthesizer
from utils.argutils import print_args
from utils.default_models import ensure_default_models
from vocoder import inference as vocoder


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        formatter_class=argparse.ArgumentDefaultsHelpFormatter
    )
    parser.add_argument("-e", "--enc_model_fpath", type=Path,
                        default="saved_models/default/encoder.pt",



I think I missed out on quite a basic step here, but by now ChatGPT is looping and can't help any more, so I need a human tip I guess ;)


Thx in advance!
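For reference, the tree above lists only 3 directories, while the imports in demo_cli.py expect encoder/, synthesizer/, vocoder/ and utils/ package directories to sit next to the script. A minimal diagnostic sketch, with the package names taken from the imports above (expecting an __init__.py in each is an assumption about this repo's layout):

from pathlib import Path

# Check that the packages demo_cli.py imports actually exist next to it.
# Package names come from the imports shown above; expecting an
# __init__.py in each is an assumption about this repo's layout.
repo = Path.cwd()  # run from the "Voice cloner" directory
for pkg in ("encoder", "synthesizer", "vocoder", "utils"):
    found = (repo / pkg / "__init__.py").exists()
    print(pkg, "->", "ok" if found else "MISSING (clone the full repo)")

If any of these print MISSING, cloning the full repository rather than downloading individual scripts would be the first thing to try.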


-
Python code mutes the whole video instead of sliding in a song. What shall I do?
16 July 2023, by Armed Nun
I am trying to split a song into 4 parts and slide the parts into random parts of a video. The problem with my code is that the final output video is muted. I want to play parts of the song at random intervals, and while the song is playing the original video should be muted. Thanks to everyone who helps.


import random
from moviepy.editor import *

def split_audio_into_parts(mp3_path, num_parts):
    audio = AudioFileClip(mp3_path)
    duration = audio.duration
    part_duration = duration / num_parts

    parts = []
    for i in range(num_parts):
        start_time = i * part_duration
        end_time = start_time + part_duration if i < num_parts - 1 else duration
        part = audio.subclip(start_time, end_time)
        parts.append(part)

    return parts

def split_video_into_segments(video_path, num_segments):
    video = VideoFileClip(video_path)
    duration = video.duration
    segment_duration = duration / num_segments

    segments = []
    for i in range(num_segments):
        start_time = i * segment_duration
        end_time = start_time + segment_duration if i < num_segments - 1 else duration
        segment = video.subclip(start_time, end_time)
        segments.append(segment)

    return segments

def insert_audio_into_segments(segments, audio_parts):
    modified_segments = []
    for segment, audio_part in zip(segments, audio_parts):
        audio_part = audio_part.volumex(0)  # Mute the audio part
        modified_segment = segment.set_audio(audio_part)
        modified_segments.append(modified_segment)

    return modified_segments

def combine_segments(segments):
    final_video = concatenate_videoclips(segments)
    return final_video

# Example usage
mp3_file_path = "C:/Users/Kris/PycharmProjects/videoeditingscript124234/DENKATA - Podvodnica Demo (1).mp3"
video_file_path = "C:/Users/Kris/PycharmProjects/videoeditingscript124234/family.guy.s21e13.1080p.web.h264-cakes[eztv.re].mkv"
num_parts = 4

audio_parts = split_audio_into_parts(mp3_file_path, num_parts)
segments = split_video_into_segments(video_file_path, num_parts)
segments = insert_audio_into_segments(segments, audio_parts)
final_video = combine_segments(segments)
final_video.write_videofile("output.mp4", codec="libx264", audio_codec="aac")



I tried entering most of this into ChatGPT and asking questions around forums, but without success, so let's hope I can see my solution here.
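A likely culprit, reading the code above: volumex(0) silences the song part itself, and set_audio() then replaces each segment's original soundtrack with that silent clip, so the whole output ends up mute. A minimal sketch of one possible fix for insert_audio_into_segments (trimming the song part to the segment length is an added assumption, not part of the original script):

from moviepy.editor import *

def insert_audio_into_segments(segments, audio_parts):
    # Keep the song part audible: set_audio() already replaces the
    # segment's original soundtrack, which mutes the video under the song.
    modified_segments = []
    for segment, audio_part in zip(segments, audio_parts):
        # Assumption: trim the song part so it never outlasts the segment.
        song = audio_part.subclip(0, min(audio_part.duration, segment.duration))
        modified_segments.append(segment.set_audio(song))
    return modified_segments

If the song should only cover some randomly chosen segments, the remaining segments can simply be left untouched so they keep their own audio.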