Other articles (43)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Possibility of deployment as a farm

    12 April 2011

    MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
    This makes it possible, for example: to share the set-up costs between several projects/individuals; to deploy a multitude of unique sites quickly; to avoid having to dump every creation into a digital catch-all, as is the case with the big general-public platforms scattered across the (...)

  • Adding user-specific information and other changes to author-related behaviour

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to modify certain user-related behaviours (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the "champs extras 2" and "Interface pour champs extras" plugins.

On other sites (6683)

  • playing an mp3 into a voice channel with discord.py

    12 December 2020, by Tanuj KS

    I'm trying to make a text-to-speech bot for people to use in no-mic chats to speak into voice channels. I found other articles explaining this, but most of them show the Windows ffmpeg version and I am on a Mac. This is my code so far:

@bot.command()
async def speak(ctx, message):
    # Turn the message into speech and save it as an mp3
    tts = gtts.gTTS(message, lang="en")
    tts.save("text.mp3")
    # Reuse an existing voice connection, otherwise join "Voice Lounge"
    if ctx.guild.voice_client:
        vc = ctx.guild.voice_client
    else:
        voice_channel = get(ctx.guild.voice_channels, name="Voice Lounge")
        vc = await voice_channel.connect()
    # play() expects an AudioSource such as discord.FFmpegPCMAudio, not a plain path
    vc.play(discord.FFmpegPCMAudio("text.mp3"))

    This gives me the error:
discord.ext.commands.errors.CommandInvokeError: Command raised an exception: ClientException: ffmpeg was not found.

    Some people said to specify the ffmpeg.exe file, but I do not see that. I downloaded the FFmpeg source with git clone https://git.ffmpeg.org/ffmpeg.git ffmpeg

    but I still get the error. Other articles said to specify the executable, but I can't find any ffmpeg.exe to set the path to (one possible approach is sketched below).

    Thanks in advance
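
    One way this is commonly handled on macOS is to install ffmpeg (for example with Homebrew) and point discord.py at the binary explicitly. The sketch below is an illustration under assumptions, not code from the post: the install path and the use of FFmpegPCMAudio's executable argument are the assumptions.

# Hypothetical sketch: tell discord.py exactly where ffmpeg lives instead of
# relying on PATH. The path below is where "brew install ffmpeg" usually puts
# it on Intel Macs; Apple Silicon machines typically use /opt/homebrew/bin/ffmpeg.
import discord

FFMPEG_PATH = "/usr/local/bin/ffmpeg"

def tts_source(path: str) -> discord.AudioSource:
    # FFmpegPCMAudio accepts an explicit "executable" argument, so playback
    # works even when ffmpeg is not on the shell PATH.
    return discord.FFmpegPCMAudio(path, executable=FFMPEG_PATH)

# Inside the speak() command above: vc.play(tts_source("text.mp3"))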

  • FFmpeg - HLS muxer producing 0-byte chunks as well as 0-byte manifest [closed]

    30 April 2024, by Mohideen Irfan

    My ffmpeg code produces 0-byte chunks as well as a 0-byte manifest. Can someone tell me what may be causing that? The AVPacket contains data too; whether the data is valid or not is off-topic here, right? It should write whatever data it gets into the muxer, right? So why am I receiving an ffmpeg log like this:

    [hls @ 0x617000084280] write_packet_common size:128 dts:192060 pts:192060
[hls @ 0x617000084280] compute_muxer_pkt_fields: pts:192060 dts:192060 cur_dts:186030 b:0 size:128 st:1
[hls @ 0x617000084280] av_write_frame: pts2:192060 dts2:192060
[hls @ 0x617000084280] Opening '/media/RT_RT_1_12345_1698653290847_8749823174342/hls/wmslive_media_video_0.ts' for writing
[file @ 0x611000351f00] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 0x6130003b0400] Statistics: 0 bytes written, 0 seeks, 1 writeouts
[hls @ 0x617000084280] Opening '/media/RT_RT_1_12345_1698653290847_8749823174342/hls/temporarymanifest_hls_wmslive_video.m3u8.tmp' for writing
[file @ 0x612000941d40] Setting default whitelist 'file,crypto,data'
EXT-X-MEDIA-SEQUENCE:0
[AVIOContext @ 0x6130003b05c0] Statistics: 0 bytes written, 0 seeks, 1 writeouts


    I have been working on this issue for almost a week. There are no errors: when I call av_write_frame it returns success. Chunks are getting generated, but nothing is present inside the chunks or the manifest. How do I resolve this?
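
    For reference, here is a hedged sketch of the overall muxing sequence in Python with PyAV rather than the C libavformat API the question uses: open the muxer, feed it packets, then write the trailer. The file names, codec, and HLS options are illustrative assumptions, not taken from the post.

# Illustrative PyAV sketch (assumed file names and options), showing the
# open -> mux packets -> write trailer sequence the hls muxer relies on.
import av

output = av.open("hls/wmslive.m3u8", mode="w", format="hls",
                 options={"hls_time": "4", "hls_list_size": "0"})
out_stream = output.add_stream("h264", rate=30)
out_stream.width = 1280
out_stream.height = 720
out_stream.pix_fmt = "yuv420p"

with av.open("input.mp4") as src:
    for frame in src.decode(video=0):
        for packet in out_stream.encode(frame):
            output.mux(packet)          # analogous to av_interleaved_write_frame()
    for packet in out_stream.encode():  # flush any packets buffered in the encoder
        output.mux(packet)

output.close()  # writes the trailer, which finalises the segments and the .m3u8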

  • How to make only words (not sentences) in an SRT file with AssemblyAI

    19 April 2024, by flip

    Okay... so I'm trying to create a video-making program and I'm basically finished, but I want to change one key thing and don't know how to...
Here's the code:

import assemblyai as aai
import time
from colorama import Fore

aai.settings.api_key = "(cant share)"

print('')
print('')
print(Fore.GREEN + 'Process 3: Creating subtitles...')
print('')
time.sleep(1)
print(Fore.YELLOW + '>> Creating subtitles...')

# Transcribe the TTS audio and export sentence-level subtitles
transcript = aai.Transcriber().transcribe("output/output-tts.mp3")
subtitles = transcript.export_subtitles_srt()
print('>> Created subtitles!')

# Append the SRT text to subtitles.srt
f = open("subtitles.srt", "a")
f.write(subtitles)
print('')
print('Program >> You are going to have to manually run the last python file [addsubtitles.py] because \n this program needs to close to write down the subtitles')
print('')
time.sleep(7)
f.close()

    When it exports the SRT, it comes out something like this:

    1
00:00:00,160 --> 00:00:04,238
Put a finger down if you have ever kissed a

2
00:00:04,286 --> 00:00:05,374
goddamn dog.

    But I want it to go word by word, still synced with the timing in the audio, and then generate an SRT file that contains one word per entry instead of sentences (one possible approach is sketched at the end of this post).
How could I do this? I have no idea how to implement that in the code.

    I tried searching the internet for ways to do this because I can't figure it out, but there are still no results. I would appreciate it if anyone could help.
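
    A rough sketch of one possible approach, assuming the assemblyai SDK exposes per-word timings on transcript.words (each word carrying its text plus start/end times in milliseconds); the audio and output file names follow the code above:

# Hedged sketch: build a word-by-word SRT from transcript.words instead of
# export_subtitles_srt(). transcript.words and its millisecond start/end
# fields are assumptions about the assemblyai SDK, so double-check them.
import assemblyai as aai

def ms_to_srt_time(ms: int) -> str:
    # Convert milliseconds to the SRT timestamp format HH:MM:SS,mmm
    hours, ms = divmod(ms, 3_600_000)
    minutes, ms = divmod(ms, 60_000)
    seconds, ms = divmod(ms, 1_000)
    return f"{hours:02}:{minutes:02}:{seconds:02},{ms:03}"

def words_to_srt(transcript: aai.Transcript) -> str:
    lines = []
    for i, word in enumerate(transcript.words, start=1):
        lines.append(str(i))
        lines.append(f"{ms_to_srt_time(word.start)} --> {ms_to_srt_time(word.end)}")
        lines.append(word.text)
        lines.append("")  # blank line separates SRT entries
    return "\n".join(lines)

transcript = aai.Transcriber().transcribe("output/output-tts.mp3")
with open("subtitles.srt", "w") as f:
    f.write(words_to_srt(transcript))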