
Other articles (62)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (6690)

  • FFMPEG screen recording by gdigrab "can't find input device." on Windows 10

    7 July 2021, by Mateusz Bugaj

    I need to implement screen recording in my software, but I have a problem getting gdigrab working on Windows 10. This code returns

    


    


    "can't find input device." (av_find_input_format("gdigrab") returns NULL).

    


    


    If I try to record the desktop with ffmpeg.exe using ffmpeg -f gdigrab -i desktop out.avi, the desktop is recorded correctly.

    


    #define _CRT_SECURE_NO_WARNINGS
#include <stdio.h>

extern "C"
{
#include "ffmpeg-x86-4.4/include/libavcodec/avcodec.h" 
#include "ffmpeg-x86-4.4/include/libavformat/avformat.h"
#include "ffmpeg-x86-4.4/include/libswscale/swscale.h"
#include "ffmpeg-x86-4.4/include/libavdevice/avdevice.h"
#include "ffmpeg-x86-4.4/include/libavutil/imgutils.h"
#include "ffmpeg-x86-4.4/include/libavutil/dict.h"
#include "SDL2/include/SDL.h"
}


#define OUTPUT_YUV420P 1
#define OUTPUT_H264 1

int main(int argc, char* argv[])
{
    AVFormatContext* pFormatCtx = NULL; // must be NULL (or allocated) before avformat_open_input()
    AVStream* videoStream;
    AVCodecContext* pCodecCtx;
    AVCodec* pCodec;
    AVFrame* pFrame, * pFrameYUV;
    AVPacket* pPacket;
    SwsContext* pImgConvertCtx;

    int videoIndex = -1;
    unsigned int i = 0;

    SDL_Window* screen;
    SDL_Renderer* sdlRenderer;
    SDL_Texture* sdlTexture;
    SDL_Rect sdlRect;

    int screen_w = 0;
    int screen_h = 0;

    printf("Starting...\n");



    //AVInputFormat* p = NULL;

    //register device
    avdevice_register_all();
    
    //use gdigrab
    AVInputFormat* ifmt = av_find_input_format("gdigrab");

    if (!ifmt)
    {
        printf("can't find input device.\n");
        return -1;
    }

    AVDictionary* options = NULL;
    if (avformat_open_input(&pFormatCtx, "desktop", ifmt, &options) != 0)
    {
        printf("can't open input stream.\n");
        return -1;
    }

    if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
    {
        printf("can't find stream information.\n");
        return -1;
    }
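    Since the command-line capture works, one pragmatic workaround is to drive ffmpeg.exe as a child process with the exact arguments that succeed on the CLI. This is a sketch, not the asker's code: `build_gdigrab_command` and `record_screen` are hypothetical helper names, and it assumes ffmpeg.exe is reachable (on PATH or via an explicit path).

```python
import subprocess

def build_gdigrab_command(output_path, ffmpeg_exe="ffmpeg.exe"):
    # Mirrors the CLI invocation that works in the question:
    #   ffmpeg -f gdigrab -i desktop out.avi
    return [ffmpeg_exe, "-f", "gdigrab", "-i", "desktop", output_path]

def record_screen(output_path, ffmpeg_exe="ffmpeg.exe"):
    # Starts ffmpeg.exe as a child process; stop the capture with proc.terminate().
    return subprocess.Popen(build_gdigrab_command(output_path, ffmpeg_exe))
```

    Note that when av_find_input_format("gdigrab") returns NULL even after avdevice_register_all(), it usually means the FFmpeg build the program is linked against was compiled without the gdigrab device, whereas the standalone ffmpeg.exe evidently has it.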


    


  • FFMPEG using AV_PIX_FMT_D3D11 gives "Error registering the input resource" from NVENC

    13 November 2024, by nbabcock

    Input frames start on the GPU as ID3D11Texture2D pointers.

    


    I encode them to H264 using FFMPEG + NVENC. NVENC works perfectly if I download the textures to CPU memory as format AV_PIX_FMT_BGR0, but I'd like to cut out the CPU texture download entirely, and pass the GPU memory pointer directly into the encoder in native format. I write frames like this:

    


    int write_gpu_video_frame(ID3D11Texture2D* gpuTex, AVFormatContext* oc, OutputStream* ost) {
    AVFrame *hw_frame = ost->hw_frame;

    printf("gpuTex address = 0x%x\n", &gpuTex);

    hw_frame->data[0] = (uint8_t *) gpuTex;
    hw_frame->data[1] = (uint8_t *) (intptr_t) 0;
    hw_frame->pts     = ost->next_pts++;

    return write_frame(oc, ost->enc, ost->st, hw_frame);
    // write_frame is identical to sample code in ffmpeg repo
}


    


    Running the code with this modification gives the following error:

    


    gpuTex address = 0x4582f6d0
[h264_nvenc @ 00000191233e1bc0] Error registering an input resource: invalid call (9):
[h264_nvenc @ 00000191233e1bc0] Could not register an input HW frame
Error sending a frame to the encoder: Unknown error occurred


    



    


    Here's some supplemental code used in setting up and configuring the hw context and encoder:

    


    /* A few config flags */
#define ENABLE_NVENC TRUE
#define USE_D3D11 TRUE // Skip downloading textures to CPU memory and send it straight to NVENC


    


    /* Init hardware frame context */
static int set_hwframe_ctx(AVCodecContext* ctx, AVBufferRef* hw_device_ctx) {
    AVBufferRef*       hw_frames_ref;
    AVHWFramesContext* frames_ctx = NULL;
    int                err        = 0;

    if (!(hw_frames_ref = av_hwframe_ctx_alloc(hw_device_ctx))) {
        fprintf(stderr, "Failed to create HW frame context.\n");
        throw;
    }
    frames_ctx                    = (AVHWFramesContext*) (hw_frames_ref->data);
    frames_ctx->format            = AV_PIX_FMT_D3D11;
    frames_ctx->sw_format         = AV_PIX_FMT_NV12;
    frames_ctx->width             = STREAM_WIDTH;
    frames_ctx->height            = STREAM_HEIGHT;
    //frames_ctx->initial_pool_size = 20;
    if ((err = av_hwframe_ctx_init(hw_frames_ref)) < 0) {
        fprintf(stderr, "Failed to initialize hw frame context. Error code: %s\n", av_err2str(err));
        av_buffer_unref(&hw_frames_ref);
        throw;
    }
    ctx->hw_frames_ctx = av_buffer_ref(hw_frames_ref);
    if (!ctx->hw_frames_ctx)
        err = AVERROR(ENOMEM);

    av_buffer_unref(&hw_frames_ref);
    return err;
}


    


    /* Add an output stream. */
static void add_video_stream(
    OutputStream* ost,
    AVFormatContext* oc,
    const AVCodec** codec,
    enum AVCodecID  codec_id,
    int width,
    int height
) {
    AVCodecContext* c;
    int             i;
    bool            nvenc = false;

    /* find the encoder */
    if (ENABLE_NVENC) {
        printf("Getting nvenc encoder\n");
        *codec = avcodec_find_encoder_by_name("h264_nvenc");
        nvenc  = true;
    }
    
    if (!ENABLE_NVENC || *codec == NULL) {
        printf("Getting standard encoder\n");
        avcodec_find_encoder(codec_id);
        nvenc = false;
    }
    if (!(*codec)) {
        fprintf(stderr, "Could not find encoder for '%s'\n",
                avcodec_get_name(codec_id));
        exit(1);
    }

    ost->st = avformat_new_stream(oc, NULL);
    if (!ost->st) {
        fprintf(stderr, "Could not allocate stream\n");
        exit(1);
    }
    ost->st->id = oc->nb_streams - 1;
    c           = avcodec_alloc_context3(*codec);
    if (!c) {
        fprintf(stderr, "Could not alloc an encoding context\n");
        exit(1);
    }
    ost->enc = c;

    printf("Using video codec %s\n", avcodec_get_name(codec_id));

    c->codec_id = codec_id;
    c->bit_rate = 4000000;
    /* Resolution must be a multiple of two. */
    c->width  = STREAM_WIDTH;
    c->height = STREAM_HEIGHT;
    /* timebase: This is the fundamental unit of time (in seconds) in terms
        * of which frame timestamps are represented. For fixed-fps content,
        * timebase should be 1/framerate and timestamp increments should be
        * identical to 1. */
    ost->st->time_base = {1, STREAM_FRAME_RATE};
    c->time_base       = ost->st->time_base;
    c->gop_size = 12; /* emit one intra frame every twelve frames at most */

    if (nvenc && USE_D3D11) {
        const std::string hw_device_name = "d3d11va";
        AVHWDeviceType    device_type    = av_hwdevice_find_type_by_name(hw_device_name.c_str());

        // set up hw device context
        AVBufferRef *hw_device_ctx;
        // const char*  device = "0"; // Default GPU (may be integrated in the case of switchable graphics!)
        const char*  device = "1";
        int ret = av_hwdevice_ctx_create(&hw_device_ctx, device_type, device, nullptr, 0);

        if (ret < 0) {
            fprintf(stderr, "Could not create hwdevice context; %s", av_err2str(ret));
        }

        set_hwframe_ctx(c, hw_device_ctx);
        c->pix_fmt = AV_PIX_FMT_D3D11;
    } else if (nvenc && !USE_D3D11)
        c->pix_fmt = AV_PIX_FMT_BGR0;
    else
        c->pix_fmt = STREAM_PIX_FMT;

    if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
        /* just for testing, we also add B-frames */
        c->max_b_frames = 2;
    }

    if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
        /* Needed to avoid using macroblocks in which some coeffs overflow.
            * This does not happen with normal video, it just happens here as
            * the motion of the chroma plane does not match the luma plane. */
        c->mb_decision = 2;
    }

    /* Some formats want stream headers to be separate. */
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}


    


  • How to debug "Error in pull function & Input buffer exhausted" ?

    15 September 2021, by user16909319

    Introduction:

    


    I'm working on a Discord Music Bot for personal use only. Basically, when I play a 2min song with the bot, it plays the song for 1min 30secs, then skips it and plays the next one in the queue. The error is shown below:

    


    


    Error in Pull Function
    IO error: Error number -10054 occurred
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0000018f86a6f4c0] Packet corrupt (stream = 0, dts = 11154432).
    Input buffer exhausted before END element found
    Invalid Data found when processing Input
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0000018f86a6f4c0] stream 0, offset 0x3e805c: partial file

    


    


    The code block which I think is causing the problem:

    


    Search song method

    


    async def search_song(self, amount, song, get_url=False):
        info = await self.bot.loop.run_in_executor(None, lambda: youtube_dl.YoutubeDL(ytdl_format_options).extract_info(
            f"ytsearch{amount}:{song}", download=False, ie_key="YoutubeSearch"))
        if len(info["entries"]) == 0:
            return None

        return [entry["webpage_url"] for entry in info["entries"]] if get_url else info


    


    Play_song method

    


    async def play_song(self, ctx, song):
        url = pafy.new(song).getbestaudio().url
        ctx.voice_client.play(discord.PCMVolumeTransformer(discord.FFmpegPCMAudio(url, executable="C:/ffmpeg/bin/ffmpeg.exe")),
                              after=lambda error: self.bot.loop.create_task(self.check_queue(ctx)))
        ctx.voice_client.source.volume = 0.5


    


    Formatting Options I provided:

    


    ytdl_format_options = {
    'format': 'bestaudio/best',
    'outtmpl': '%(extractor)s-%(id)s-%(title)s.%(ext)s',
    'restrictfilenames': True,
    'noplaylist': True,
    'nocheckcertificate': True,
    'ignoreerrors': True,
    'logtostderr': False,
    'quiet': True,
    'no_warnings': True,
    'default_search': 'auto',
    'source_address': '0.0.0.0'
}


    


    Solutions that I've tried:

    


      

    • Running it on both Replit and locally.
    • Redownloading FFmpeg.
    • Ensuring FFmpeg, pafy, and youtube_dl are all up to date.


    


    Things to Note:

    


      

    • Playing a 2min song, it stops after 1min 30secs and displays the error above (75% of the song).
    • Playing a 1hr song, it still continues after 10 minutes.


    


    I do not have much experience with this yet, so I'm not entirely sure what part of my code is actually causing this issue, or what other ways I could use to test and fix it.
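
    One common mitigation for streamed audio cutting out partway (an assumption about the cause, not something stated in the original post) is to pass FFmpeg's reconnect flags to discord.FFmpegPCMAudio via its before_options parameter, so FFmpeg re-establishes a dropped HTTP connection instead of ending playback early. A minimal sketch; `build_ffmpeg_audio_kwargs` is a hypothetical helper name:

```python
# Reconnect flags are a commonly suggested fix for streams that drop mid-song;
# they tell FFmpeg to retry the connection rather than treat the drop as EOF.
FFMPEG_BEFORE_OPTIONS = "-reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5"
FFMPEG_OPTIONS = "-vn"  # audio only; discard any video stream

def build_ffmpeg_audio_kwargs(url, executable="C:/ffmpeg/bin/ffmpeg.exe"):
    # Keyword arguments intended for discord.FFmpegPCMAudio(**kwargs).
    return {
        "source": url,
        "executable": executable,
        "before_options": FFMPEG_BEFORE_OPTIONS,
        "options": FFMPEG_OPTIONS,
    }
```

    In play_song this would replace the bare discord.FFmpegPCMAudio(url, executable=...) call with discord.FFmpegPCMAudio(**build_ffmpeg_audio_kwargs(url)).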