Other articles (97)

  • Requesting the creation of a channel

    12 March 2010, by

    Depending on the platform's configuration, the user may have two different ways of requesting the creation of a channel. The first is at the time of registration; the second is after registration, by filling in a request form.
    Both methods ask for the same information and work in much the same way: the future user must fill in a series of form fields that first of all give the administrators information about (...)

  • Farm management

    2 March 2010, by

    The farm as a whole is managed by "super admins".
    Some settings can be adjusted to regulate the needs of the different channels.
    Initially, it uses the "Gestion de mutualisation" plugin

  • MediaSPIP Core: Configuration

    9 November 2010, by

    MediaSPIP Core provides three configuration pages by default (these pages rely on the CFG configuration plugin to work): a page dedicated to the general configuration of the skeleton; a page dedicated to the configuration of the site's home page; a page dedicated to the configuration of the sections;
    It also provides an additional page that only appears when certain plugins are enabled, allowing you to control their display and specific features (...)

On other sites (5287)

  • How to play a video file with audio with DearPyGUI (Python)?

    1 March 2023, by Vi Tiet

    I'm using DearPyGUI to make a simple media player that can play a video file (mp4, etc.) together with its audio. The prerequisite is that DearPyGUI is a must; however, a video feature will not exist until v2.0, which is still far in the future.

    Currently, I can only render the frames using the OpenCV library for Python. The problem is: how can I play the audio as well, and keep it in sync with the output video frames?

    For context, I'm quite new to Python, and I don't know much about video and audio streaming, but I've thought of some approaches to this problem by looking through help posts online (however, I still have no idea how I could implement any of these seamlessly):

    1. OpenCV for the video frames, and for the audio some library like ffmpeg-python or miniaudio to play the sound... (How...?) (see the miniaudio sketch further below)
    2. Extract the video frames and the audio here and then use the raw data to play them (How...?) (see the ffmpeg-python sketch after the code below)
    3. This example here is pretty close to what I want, excluding the part that actually plays the video and audio, but I have no idea where to go from there. The video stream and the audio stream are instances of ffmpeg.nodes.FilterableStream, and they appear to hold addresses to somewhere. (No idea...)
    4. Another very close idea is using ffpyplayer, with which I was able to get the video frames. However, the code below gives the video a blueish purple color tint, and the frame rate is very slow compared to the original (So close...)

import time

import cv2 as cv
import dearpygui.dearpygui as dpg  # needed for dpg.set_value() below
import numpy as np
from ffpyplayer.player import MediaPlayer

# VIDEO_WIDTH, VIDEO_HEIGHT, VIDEO_CANVAS_TAG and DEFAULT_VIDEO_TEXTURE are
# constants defined elsewhere in the script, next to the texture setup.


# https://github.com/Kazuhito00/Image-Processing-Node-Editor/blob/main/node_editor/util.py
def cv2dpg(frame):
    # Resize the frame to the texture size, reverse the channel order, then
    # flatten and normalize to the float32 buffer a DearPyGUI raw texture expects.
    data = cv.resize(frame, (VIDEO_WIDTH, VIDEO_HEIGHT))
    data = np.flip(data, 2)  # was np.flip(frame, 2), which discarded the resize
    data = data.ravel()
    data = np.asfarray(data, dtype=np.float32)

    return np.true_divide(data, 255.0)


# https://stackoverflow.com/questions/59611075/how-would-i-go-about-playing-a-video-stream-with-ffpyplayer
# https://matham.github.io/ffpyplayer/examples.html#examples
def play_video(loaded_file_path):

    global player, is_playing
    player = MediaPlayer(loaded_file_path)

    while is_playing:

        frame, val = player.get_frame()

        if val == 'eof':
            # End of file: leave the playback loop.
            is_playing = False
            break

        elif not frame:
            # No frame ready yet, poll again shortly.
            time.sleep(0.01)

        elif val != 'eof' and frame is not None:
            # Rebuild a numpy image from the raw frame bytes and push it to
            # the DearPyGUI texture used as the video canvas.
            img, t = frame
            w = img.get_size()[0]
            h = img.get_size()[1]
            cv_mat = np.uint8(np.asarray(list(img.to_bytearray()[0])).reshape((h, w, 3)))
            texture_data = cv2dpg(cv_mat)
            dpg.set_value(VIDEO_CANVAS_TAG, texture_data)

    # Restore the placeholder texture once playback stops.
    dpg.set_value(VIDEO_CANVAS_TAG, DEFAULT_VIDEO_TEXTURE)
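
    To make idea 2 from the list above a bit more concrete, this is the kind of raw-data extraction I had in mind with ffmpeg-python; it is only a rough, untested sketch (the file path is a placeholder), and it still leaves open the part I'm actually asking about, namely playing the two streams in sync:


import ffmpeg
import numpy as np

VIDEO_PATH = "my_video.mp4"  # placeholder path

# Probe the file to get the size of the video stream.
probe = ffmpeg.probe(VIDEO_PATH)
video_info = next(s for s in probe["streams"] if s["codec_type"] == "video")
width, height = int(video_info["width"]), int(video_info["height"])

# Decode every video frame to raw RGB24 bytes in memory.
video_bytes, _ = (
    ffmpeg.input(VIDEO_PATH)
    .output("pipe:", format="rawvideo", pix_fmt="rgb24")
    .run(capture_stdout=True, quiet=True)
)
frames = np.frombuffer(video_bytes, np.uint8).reshape([-1, height, width, 3])

# Decode the audio track to raw 16-bit stereo PCM at 44.1 kHz.
audio_bytes, _ = (
    ffmpeg.input(VIDEO_PATH)
    .output("pipe:", format="s16le", acodec="pcm_s16le", ac=2, ar="44100")
    .run(capture_stdout=True, quiet=True)
)

print(frames.shape, len(audio_bytes))
# From here the frames could be pushed to the DearPyGUI texture one by one,
# and the PCM bytes fed to an audio library, but keeping them in sync is
# exactly what I don't know how to do.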


    


    I still need to do more research, but any pointer to a good place to start (either handling raw data or using different libraries) would be greatly appreciated!
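
    For idea 1, the kind of miniaudio usage I have in mind is the basic playback example from the pyminiaudio README; a minimal sketch, assuming the audio has already been extracted to a format miniaudio can decode (wav/mp3/flac/ogg), and with no syncing to the video frames at all:


import miniaudio

AUDIO_PATH = "extracted_audio.mp3"  # placeholder; assumes the audio track was extracted beforehand

# Stream the file and play it on the default output device in the background.
stream = miniaudio.stream_file(AUDIO_PATH)
device = miniaudio.PlaybackDevice()
device.start(stream)

input("Audio is playing in the background, press Enter to stop: ")
device.close()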

    


    EDIT:
For more context, I'm using a raw texture, as in this example from the official DearPyGUI documentation, to render the video frames that are extracted in the while loop.
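
    Roughly, the raw-texture setup from that documentation example looks like the sketch below (simplified; VIDEO_WIDTH, VIDEO_HEIGHT, VIDEO_CANVAS_TAG and DEFAULT_VIDEO_TEXTURE are the same constants referenced in the code above, and the values here are just placeholders):


import dearpygui.dearpygui as dpg

VIDEO_WIDTH, VIDEO_HEIGHT = 640, 360   # placeholder size
VIDEO_CANVAS_TAG = "video_canvas"      # tag targeted by dpg.set_value() in play_video()
# Start from an all-black RGB texture: 3 floats per pixel in the range 0.0-1.0.
DEFAULT_VIDEO_TEXTURE = [0.0] * (VIDEO_WIDTH * VIDEO_HEIGHT * 3)

dpg.create_context()

with dpg.texture_registry():
    # A raw texture can be updated on every frame with dpg.set_value().
    dpg.add_raw_texture(
        width=VIDEO_WIDTH,
        height=VIDEO_HEIGHT,
        default_value=DEFAULT_VIDEO_TEXTURE,
        format=dpg.mvFormat_Float_rgb,
        tag=VIDEO_CANVAS_TAG,
    )

with dpg.window(label="Player"):
    # The image widget displays whatever the raw texture currently holds.
    dpg.add_image(VIDEO_CANVAS_TAG)

dpg.create_viewport(title="Video player", width=VIDEO_WIDTH, height=VIDEO_HEIGHT)
dpg.setup_dearpygui()
dpg.show_viewport()
dpg.start_dearpygui()
dpg.destroy_context()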

    


  • Discord bot stops playing music at a random time in the song

    25 January 2021, by Jusmejtr

    I have a Discord bot that lets me play a random song from a list.

    


    How the bot works:
The bot is connected to Cloud Firestore (Firebase), where I keep the economy data for my server. The price for playing a random song is 75 coins.

    


    Everything worked as it should, but yesterday I used the command, the bot started playing, and after a while it stopped playing music; no other commands worked either, so the bot probably froze.

    


    I had no errors in the console until, after a minute, it showed me this error:

    


    https://pastebin.com/ay9gV75T

    


    The bot is hosted on Heroku, and I also added this ffmpeg buildpack in the settings:

    


    https://github.com/jonathanong/heroku-buildpack-ffmpeg-latest

    


    This is my code:

    


    module.exports = {
    name: "buy-music",
    description: "buy a music",

    async execute(message, config, db){
        const PREFIX = (config.prefix);

        if(message.content === PREFIX + "buy music"){
            const ytdl = require("ytdl-core");
            message.delete();
            let uzivatel = message.author.tag;

            let voiceChannel = message.member.voice.channel;
            if(!voiceChannel) return message.reply("Musíš byť vo voice roomke");

            let cena = 75;
    
            db.collection('economy').doc(uzivatel).get().then(async (q) => {
                if(!q.exists) return message.reply("Nemáš vytvorený účet");
                var hodnota = q.data().money;
                if(hodnota < cena) return message.reply("Nemáš dostatok financií");

                db.collection('statusy').doc('music').get().then(async (asaj) => {
                    let stav = asaj.data().stav;
                    if(stav == "off"){
                        db.collection('statusy').doc('music').update({
                            "stav": "on",
                            "autor": message.author.tag,
                        });
                        hodnota -= cena;
                        db.collection('economy').doc(uzivatel).update({
                            'money': hodnota
                        });
                        function randomhraj(){
                            var pole = [
                     My YT links

                            ]
                            let rnd = Math.floor(Math.random() * pole.length);
                            let output = pole[rnd];
                            return output;
                        }
        
                        try{
                            var pripojenie = await voiceChannel.join();
                            message.reply(`Úspešne si si kúpil chuťovečku`);
                        }catch(error){
                            console.log(`Error pri pripajani do room (music join) ${error}`);
                        }
                        
                        const dispatcher = pripojenie.play(ytdl(randomhraj())).on("finish", async() => {
                            await voiceChannel.leave();
                            await db.collection('statusy').doc('music').update({
                                "stav": "off",
                                "autor": "nikto",
                            });
                        }).on("error", error => {
                            console.log(error)
                        })
                        dispatcher.setVolumeLogarithmic(5 / 5)
                    }else{
                        message.reply("Momentálne si hudbu kúpil niekto iný alebo ak si hudbu kúpil a chceš ju zastaviť použi príkaz *stop");
                    }
                    
                });
            });
    
        }else if(message.content === PREFIX + "stop"){
            message.delete();
            db.collection('statusy').doc('music').get().then((n) => {
                let kto = n.data().autor;
                let meno = message.author.tag;
                if(!message.member.voice.channel) return message.channel.send("Musíš byť vo voice roomke pre stopnutie hudby");
                if(kto == meno){
                    message.member.voice.channel.leave();
                    message.channel.send("Úspešne odpojený");
                    db.collection('statusy').doc('music').update({
                        "stav": "off",
                        "autor": "nikto",
                    });
                }else{
                    message.reply("Zastaviť hudbu môže len ten kto si ju kúpil");
                }
            });
        }
        
    }
}


    


  • ffmpeg vaapi (intel) hardware decode, drawbox, hardware encode

    11 March 2023, by Tom

    So I am running a go2rtc server, receiving an RTSP stream from a camera, and I want to draw a box on top of the video. The system has a Pentium Silver J5005 with an iGPU. From what I understand, I should be able to use hwmap instead of hwdownload/hwupload in this case, because the iGPU and the CPU share the same system memory. Anyway, leaving out the drawing part, I can tell that hardware decoding and encoding are working, because ffmpeg only uses about 8% CPU. This is the ffmpeg command I have working, but it only decodes and re-encodes the video:

    


    ffmpeg -hide_banner -v error -allowed_media_types video -loglevel verbose -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device /dev/dri/renderD128 \
  -i rtsp://... -c:v h264_vaapi -g 50 -bf 0 -profile:v high -level:v 4.1 -sei:v 0 -an \
  -filter_complex "[0:v]scale_vaapi,hwmap=mode=read+write+direct,format=nv12[in];\
    [in]format=vaapi|nv12,hwmap[out]" -map "[out]" \
  -c:v h264_vaapi -an -user_agent ffmpeg/go2rtc -rtsp_transport tcp -f rtsp rtsp://...


    


    Now I'm trying to insert a drawbox filter:

    


    ffmpeg -hide_banner -v error -allowed_media_types video -loglevel verbose -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device /dev/dri/renderD128 \
  -i rtsp://... -c:v h264_vaapi -g 50 -bf 0 -profile:v high -level:v 4.1 -sei:v 0 -an \
  -filter_complex "[0:v]scale_vaapi,hwmap=mode=read+write+direct,format=nv12[in];\
    [in]drawbox=x=10:y=10:w=100:h=100:color=pink@0.5:t=fill[in2];\
    [in2]format=vaapi|nv12,hwmap[out]" -map "[out]" \
  -c:v h264_vaapi -an -user_agent ffmpeg/go2rtc -rtsp_transport tcp -f rtsp rtsp://...



    


    But this fails immediately:

    


    [h264 @ 0x55bf016ffc40] Reinit context to 2304x1296, pix_fmt: vaapi
[graph 0 input from stream 0:0 @ 0x55bf01fe9100] w:2304 h:1296 pixfmt:vaapi tb:1/90000 fr:20/1 sar:0/1
[auto_scale_0 @ 0x55bf01fee800] w:iw h:ih flags:'' interl:0
[Parsed_drawbox_3 @ 0x55bf01fe8180] auto-inserting filter 'auto_scale_0' between the filter 'Parsed_format_2' and the filter 'Parsed_drawbox_3'
[auto_scale_1 @ 0x55bf01feffc0] w:iw h:ih flags:'' interl:0
[Parsed_format_4 @ 0x55bf01fe8780] auto-inserting filter 'auto_scale_1' between the filter 'Parsed_drawbox_3' and the filter 'Parsed_format_4'
[auto_scale_0 @ 0x55bf01fee800] w:2304 h:1296 fmt:nv12 sar:0/1 -> w:2304 h:1296 fmt:yuv420p sar:0/1 flags:0x0
[Parsed_drawbox_3 @ 0x55bf01fe8180] x:10 y:10 w:100 h:100 color:0xC67B9B7F
[auto_scale_1 @ 0x55bf01feffc0] w:2304 h:1296 fmt:yuv420p sar:0/1 -> w:2304 h:1296 fmt:nv12 sar:0/1 flags:0x0
    Last message repeated 3 times
[Parsed_hwmap_5 @ 0x55bf01fe8bc0] Failed to map frame: -38.
Error while filtering: Function not implemented
Failed to inject frame into filter network: Function not implemented
Error while processing the decoded data for stream #0:0


    


    I found a similar question, but the solution of setting -hwaccel_output_format nv12 causes ffmpeg to fail (even if I don't include the drawbox step):

    


    [Parsed_scale_vaapi_0 @ 0x55ab1c1d2540] auto-inserting filter 'auto_scale_0' between the filter 'graph 0 input from stream 0:0' and the filter 'Parsed_scale_vaapi_0'
Impossible to convert between the formats supported by the filter 'graph 0 input from stream 0:0' and the filter 'auto_scale_0'


    


    It seems like the problem is the nv12 pixel format. I tried countless ways to convert to e.g. rgb24, but everything I tried just caused ffmpeg to fail.