Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (82)

  • Videos

    21 April 2011

    Like "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 video tag.
    One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name no names) and that each browser natively handles only certain video formats.
    Its main advantage, on the other hand, is native video playback in the browser, which makes it possible to do without Flash and (...)

  • Use, discuss, criticize

    13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or later. If needed, contact the administrator of your MediaSPIP to find out.

On other sites (4967)

  • avcodec/mpegvideo_dec: Don't use MotionEstContext as scratch space

    30 October 2022, by Andreas Rheinhardt
    avcodec/mpegvideo_dec: Don't use MotionEstContext as scratch space
    

    Decoders that might use quarter pixel motion estimation
    (namely MPEG-4 as well as the VC-1 family) currently
    use MpegEncContext.me.qpel_(put|avg) as scratch space
    for pointers to arrays of function pointers.
    (MotionEstContext contains such pointers as it supports
    quarter pixel motion estimation.) The MotionEstContext
    is unused apart from this for the decoding part of
    mpegvideo.

    Using the context at all for decoding is actually
    unnecessary and easily avoided: All codecs with
    quarter pixels set me.qpel_avg to qdsp.avg_qpel_pixels_tab,
    so one can just unconditionally use this in ff_mpv_reconstruct_mb().
    MPEG-4 sets qpel_put to qdsp.put_qpel_pixels_tab
    or to qdsp.put_no_rnd_qpel_pixels_tab based upon
    whether the current frame is a b-frame with no_rounding
    or not, while the VC-1-based decoders set it to
    qdsp.put_qpel_pixels_tab unconditionally. Given
    that no_rounding is always zero for VC-1, using
    the same check for VC-1 as well as for MPEG-4 would work.
    Since ff_mpv_reconstruct_mb() already has exactly
    the right check (for hpeldsp), it can simply be reused.

    (This change will result in ff_mpv_motion() receiving
    a pointer to an array of NULL function pointers instead
    of a NULL pointer for codecs without qpeldsp (like MPEG-1/2).
    It doesn't matter.)

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

    • [DH] libavcodec/h263dec.c
    • [DH] libavcodec/mpv_reconstruct_mb_template.c
    • [DH] libavcodec/mss2.c
    • [DH] libavcodec/vc1dec.c
  • Stuck installing a voice cloner via Python (module not found)

    25 November 2023, by Wimmah

    I use Python 3.11.5


    As a total Python n00b I'm turning to this forum because I'm stuck installing a voice cloner (for personal use, to do a funny trick for X-mas with my family). It's this tool that I'm trying to install: https://github.com/CorentinJ/Real-Time-Voice-Cloning


    With a little help from ChatGPT I came quite far, but for some reason the downloaded datasets can't be found. The tool's instructions state:


    Install instructions from GitHub
    So my tree looks like this:


    (base) willem@willems-air Voice cloner % tree
    .
    ├── demo_cli.py
    ├── demo_toolbox.py
    ├── encoder_preprocess.py
    ├── encoder_train.py
    ├── saved_models
    │   └── default
    │       ├── encoder.pt
    │       ├── synthesizer.pt
    │       └── vocoder.pt
    ├── synthesizer_preprocess_audio.py
    ├── synthesizer_preprocess_embeds.py
    ├── synthesizer_train.py
    └── vocoder_train.py

    3 directories, 11 files


    However, when I run the command to execute the demo, I get the message that a needed module can't be found:


    (base) willem@willems-air Voice cloner % python demo_cli.py
    Traceback (most recent call last):
      File "/Users/willem/Desktop/Voice cloner/demo_cli.py", line 10, in <module>
        from encoder import inference as encoder
    ModuleNotFoundError: No module named 'encoder'


    I built a tree that (to me) looks in line with the installation instructions... (and of course I downloaded the modules without any errors).
    Here are also the first lines of the script demo_cli.py, where you can also see the path:


    import argparse
    import os
    from pathlib import Path

    import librosa
    import numpy as np
    import soundfile as sf
    import torch

    from encoder import inference as encoder
    from encoder.params_model import model_embedding_size as speaker_embedding_size
    from synthesizer.inference import Synthesizer
    from utils.argutils import print_args
    from utils.default_models import ensure_default_models
    from vocoder import inference as vocoder


    if __name__ == '__main__':
        parser = argparse.ArgumentParser(
            formatter_class=argparse.ArgumentDefaultsHelpFormatter
        )
        parser.add_argument("-e", "--enc_model_fpath", type=Path,
                            default="saved_models/default/encoder.pt",


    I think I missed out on a quite basic step here, but by now ChatGPT is looping and can't help any more, so I need a human tip, I guess ;) (a minimal layout check is sketched below).


    Thanks in advance!

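    A minimal, hypothetical layout check, offered as a sketch rather than a fix: judging by the imports in demo_cli.py, "from encoder import inference" can only succeed if an encoder directory (and likewise synthesizer, vocoder and utils from the linked repository) sits next to demo_cli.py, and the tree above does not show those directories. Only the package names are taken from the question; the checking script itself is an assumption.

    import sys
    from pathlib import Path

    # Run this from the "Voice cloner" folder that contains demo_cli.py.
    # It only reports whether the package directories referenced by the
    # imports in demo_cli.py are present next to the script.
    root = Path.cwd()
    for pkg in ("encoder", "synthesizer", "vocoder", "utils"):
        status = "found" if (root / pkg).is_dir() else "MISSING"
        print(f"{pkg:<12} {status}")

    # When demo_cli.py is launched with "python demo_cli.py", its own folder
    # is the first entry on the import search path, so packages placed there
    # are the ones Python will find.
    print("import search path starts at:", sys.path[0])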

  • Python code mutes whole video instead of sliding a song. What shall I do?

    16 July 2023, by Armed Nun

    I am trying to split a song into 4 parts and slide the parts into random parts of a video. The problem with my code is that the final output video is muted. I want to play parts of the song at random intervals, and while the song is playing the original video should be muted. Thanks to everyone who helps.


    import random
    from moviepy.editor import *

    def split_audio_into_parts(mp3_path, num_parts):
        audio = AudioFileClip(mp3_path)
        duration = audio.duration
        part_duration = duration / num_parts

        parts = []
        for i in range(num_parts):
            start_time = i * part_duration
            end_time = start_time + part_duration if i < num_parts - 1 else duration
            part = audio.subclip(start_time, end_time)
            parts.append(part)

        return parts

    def split_video_into_segments(video_path, num_segments):
        video = VideoFileClip(video_path)
        duration = video.duration
        segment_duration = duration / num_segments

        segments = []
        for i in range(num_segments):
            start_time = i * segment_duration
            end_time = start_time + segment_duration if i < num_segments - 1 else duration
            segment = video.subclip(start_time, end_time)
            segments.append(segment)

        return segments

    def insert_audio_into_segments(segments, audio_parts):
        modified_segments = []
        for segment, audio_part in zip(segments, audio_parts):
            audio_part = audio_part.volumex(0)  # Mute the audio part
            modified_segment = segment.set_audio(audio_part)
            modified_segments.append(modified_segment)

        return modified_segments

    def combine_segments(segments):
        final_video = concatenate_videoclips(segments)
        return final_video

    # Example usage
    mp3_file_path = "C:/Users/Kris/PycharmProjects/videoeditingscript124234/DENKATA - Podvodnica Demo (1).mp3"
    video_file_path = "C:/Users/Kris/PycharmProjects/videoeditingscript124234/family.guy.s21e13.1080p.web.h264-cakes[eztv.re].mkv"
    num_parts = 4

    audio_parts = split_audio_into_parts(mp3_file_path, num_parts)
    segments = split_video_into_segments(video_file_path, num_parts)
    segments = insert_audio_into_segments(segments, audio_parts)
    final_video = combine_segments(segments)
    final_video.write_videofile("output.mp4", codec="libx264", audio_codec="aac")


    I tried entering most of this into ChatGPT and asking questions around forums, but without success, so let's hope I can find my solution here (one possible adjustment is sketched below).

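    For what it's worth, a minimal sketch of one possible adjustment, assuming the moviepy 1.x API used above: volumex(0) silences the song part itself just before it replaces the segment's audio, so every segment ends up with a silent soundtrack. Handing the unmuted song part to set_audio() already drops the segment's original track, which keeps the source video silent while the song plays.

    def insert_audio_into_segments(segments, audio_parts):
        # Sketch: give each video segment the corresponding song part as its
        # only soundtrack, without muting the song part first.
        modified_segments = []
        for segment, audio_part in zip(segments, audio_parts):
            # set_audio() replaces the segment's original audio track, so the
            # original video stays muted while the song part plays.
            modified_segments.append(segment.set_audio(audio_part))
        return modified_segments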