
Other articles (48)

  • Automatic backup of SPIP channels

    1 April 2010

    As part of setting up an open platform, it is important for hosts to have fairly regular backups available in order to cope with any problem that might arise.
    To carry out this task, two SPIP plugins are used: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which creates a zip archive of the site's important data (the documents, the elements (...)

  • Supported formats

    28 January 2010

    The following commands provide information about the formats and codecs handled by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Supported input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    To begin with, we (...)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (8491)

  • Downloading earlier segments from a live m3u8 playlist

    25 July 2016, by Zafer Cesur

    I have an .m3u8 URI from an online live stream. As far as I know, live playlists use a sliding window instead of containing all the segments. My questions are:


    1) Is it possible to find out what the length of the window is (time- or frame-wise)? My intention is to use the playlist I have to download a live stream starting from an earlier time.

    2) If yes, how do I get the earlier segments, i.e., how do I specify where I want to start downloading from? I tried something like ffmpeg -ss -00:00:10 -i "in.m3u8" out.mp4, but it did not work.

    I do not have much experience in video encoding or live streaming, and I would appreciate any direction! The file that I am dealing with is printed below.


    #EXTM3U
    #EXT-X-TWITCH-INFO:NODE="video-edge-913b2c.jfk03",MANIFEST-NODE="video-edge-913b2c.jfk03",SERVER-TIME="1469462316.46",USER-IP="...",SERVING-ID="...",CLUSTER="jfk03",ABS="false",BROADCAST-ID="22500458080",STREAM-TIME="17374.4599299",MANIFEST-CLUSTER="jfk03"
    #EXT-X-MEDIA:TYPE=VIDEO,GROUP-ID="chunked",NAME="Source",AUTOSELECT=YES,DEFAULT=YES
    #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=3566000,RESOLUTION=1920x1080,CODECS="avc1.4D402A,mp4a.40.2",VIDEO="chunked"
    http://video-edge-913b2c.jfk03.hls.ttvnw.net/hls-7e8a7c/imaqtpie_22500458080_490173831/chunked/index-live.m3u8?token=id=...,bid=...,exp=1469548716,node=video-edge-913b2c.jfk03,nname=video-edge-913b2c.jfk03,fmt=chunked&sig=...
    #EXT-X-MEDIA:TYPE=VIDEO,GROUP-ID="high",NAME="High",AUTOSELECT=YES,DEFAULT=YES
    #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1760000,RESOLUTION=1280x720,CODECS="avc1.66.31,mp4a.40.2",VIDEO="high"
    http://video-edge-913b2c.jfk03.hls.ttvnw.net/hls-7e8a7c/imaqtpie_22500458080_490173831/high/index-live.m3u8?token=id=...,bid=...,exp=1469548716,node=video-edge-913b2c.jfk03,nname=video-edge-913b2c.jfk03,fmt=high&sig=...
    ...
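    A note on the two questions, not part of the original post and assuming a typical live HLS setup: the reachable history is exactly the set of segments still listed in the media playlist (here index-live.m3u8), so the window length can be estimated by summing the #EXTINF durations it currently contains; segments that have already slid out of the playlist can no longer be fetched from it. ffmpeg's HLS demuxer does expose a live_start_index option to start at the oldest segment still advertised rather than near the live edge, roughly:

    # Sketch: start from the first (oldest) segment still listed in the live
    # playlist instead of the default position close to the live edge.
    ffmpeg -live_start_index 0 -i "in.m3u8" -c copy out.mp4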
  • Converting uint8_t data to AVFrame with FFmpeg

    30 October 2017, by J.Lefebvre

    I am currently working in C++ with the Autodesk 3DStudio Max 2014 SDK (toolset 100) and the FFmpeg library in Visual Studio 2015, trying to convert a DIB (Device Independent Bitmap) to a uint8_t pointer array and then convert that data to an AVFrame.

    I don't get any errors, but my video is still black and has no metadata (no time display, etc.).

    I did approximately the same thing in a Visual Studio console application to convert a JPEG image sequence from disk, and that works fine.
    (The only difference is that instead of converting JPEGs to AVFrames with the FFmpeg library, I try to convert raw data to an AVFrame.)

    So I think the problem lies either in the conversion from the DIB to the uint8_t data or in the conversion from the uint8_t data to the AVFrame.
    (The second is more plausible, because I used the SFML library to display a window with my RGB uint8_t* data for debugging, and that works fine.)

    I first initialize the FFmpeg library.

    This function is called once at the beginning:

    int Converter::Initialize(AVCodecID codec_id, int width, int height, int fps, const char *filename)
    {
       avcodec_register_all();
       av_register_all();

       AVCodec *codec;
       inputFrame = NULL;
       codecContext = NULL;
       pkt = NULL;
       file = NULL;
       outputFilename = new char[strlen(filename)]();
       *outputFilename = '\0';
       strcpy(outputFilename, filename);

       int ret;

       //Initializing AVCodecContext and getting PixelFormat supported by encoder
       codec = avcodec_find_encoder(codec_id);
       if (!codec)
           return 1;

       AVPixelFormat pixFormat = codec->pix_fmts[0];
       codecContext = avcodec_alloc_context3(codec);
       if (!codecContext)
           return 1;

       codecContext->bit_rate = 400000;
       codecContext->width = width;
       codecContext->height = height;
       codecContext->time_base.num = 1;
       codecContext->time_base.den = fps;
       codecContext->gop_size = 10;
       codecContext->max_b_frames = 1;
       codecContext->pix_fmt = pixFormat;

       if (codec_id == AV_CODEC_ID_H264)
           av_opt_set(codecContext->priv_data, "preset", "slow", 0);

       //Actually opening the encoder
       if (avcodec_open2(codecContext, codec, NULL) < 0)
           return 1;

       file = fopen(outputFilename, "wb");
       if (!file)
           return 1;

       inputFrame = av_frame_alloc();
       inputFrame->format = codecContext->pix_fmt;
       inputFrame->width = codecContext->width;
       inputFrame->height = codecContext->height;

       ret = av_image_alloc(inputFrame->data, inputFrame->linesize, codecContext->width, codecContext->height, codecContext->pix_fmt, 32);

       if (ret < 0)
           return 1;

       return 0;
    }
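    An incidental issue in Initialize, noted as an aside (it is not what makes the video black): new char[strlen(filename)]() leaves no room for the terminating null byte that strcpy writes, so the copy overruns the buffer. A minimal fix could look like:

    // Sketch: reserve one extra byte for the null terminator written by strcpy.
    outputFilename = new char[strlen(filename) + 1];
    strcpy(outputFilename, filename);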

    Then, for each frame, I get the DIB and convert it to a uint8_t* with this function:

    uint8_t* Util::ToUint8_t(RGBQUAD *data, int width, int height)
    {
       uint8_t* buf = (uint8_t*)data;

       int imageSize = width * height;
       size_t rgbquad_size = sizeof(RGBQUAD);
       size_t total_bytes = imageSize * rgbquad_size;
       uint8_t * pCopyBuffer = new uint8_t[total_bytes];

       for (int x = 0; x < width; x++)
       {
           for (int y = 0; y < height; y++)
           {
               int index = (x + width * y) * rgbquad_size;
               int invertIndex = (x + width* (height - y - 1)) * rgbquad_size;

               //BGRA to RGBA
               pCopyBuffer[index] = buf[invertIndex + 2];
               pCopyBuffer[index + 1] = buf[invertIndex + 1];
               pCopyBuffer[index + 2] = buf[invertIndex];
               pCopyBuffer[index + 3] = 0xFF;
           }
       }

       return pCopyBuffer;
    }

    void GetDIBBuffer(Interface* ip, BITMAPINFO *bmi, uint8_t** outBuffer)
    {
       int size;

       ViewExp& view = ip->GetActiveViewExp();

       view.getGW()->getDIB(NULL, &size);

       bmi = (BITMAPINFO *)malloc(size);
       BITMAPINFOHEADER *bmih = (BITMAPINFOHEADER *)bmi;
       view.getGW()->getDIB(bmi, &size);

       uint8_t * pCopyBuffer = Util::ToUint8_t(bmi->bmiColors, bmih->biWidth, bmih->biHeight);

       *outBuffer = pCopyBuffer;
    }

    This function is used to get the DIB:

    void GetViewportDIB(Interface* ip, BITMAPINFO *bmi, BITMAPINFOHEADER *bmih, BitmapInfo biFile, Bitmap *map)
    {
       int size;

       if (!biFile.Name()[0])
           return;

       ViewExp& view = ip->GetActiveViewExp();

       view.getGW()->getDIB(NULL, &size);

       bmi = (BITMAPINFO *)malloc(size);
       bmih = (BITMAPINFOHEADER *)bmi;

       view.getGW()->getDIB(bmi, &size);

       biFile.SetWidth((WORD)bmih->biWidth);
       biFile.SetHeight((WORD)bmih->biHeight);
       biFile.SetType(BMM_TRUE_32);

       map = TheManager->Create(&biFile);
       map->OpenOutput(&biFile);
       map->FromDib(bmi);
       map->Write(&biFile);
       map->Close(&biFile);
    }
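    Another aside, probably unrelated to the black output: in GetDIBBuffer the bmi parameter is passed by value, so the BITMAPINFO allocated with malloc never reaches the caller and is never freed (the same pattern appears with bmi, bmih and map in GetViewportDIB). A sketch of one way to hand the buffer back, assuming the caller takes ownership and frees it:

    void GetDIBBuffer(Interface* ip, BITMAPINFO **outBmi, uint8_t** outBuffer)
    {
       int size;

       ViewExp& view = ip->GetActiveViewExp();
       view.getGW()->getDIB(NULL, &size);         // query the required size

       BITMAPINFO *bmi = (BITMAPINFO *)malloc(size);
       BITMAPINFOHEADER *bmih = (BITMAPINFOHEADER *)bmi;
       view.getGW()->getDIB(bmi, &size);          // fetch the DIB itself

       *outBuffer = Util::ToUint8_t(bmi->bmiColors, bmih->biWidth, bmih->biHeight);
       *outBmi = bmi;                             // caller owns and frees this
    }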

    And then comes the conversion to an AVFrame and the video encoding.

    The EncodeFromMem function is called for each frame:

    int Converter::EncodeFromMem(const char *outputDir, int frameNumber, uint8_t* data)
    {
       int ret;

       inputFrame->pts = frameNumber;
       EncodeFrame(data, codecContext, inputFrame, &pkt, file);

       return 0;
    }

    static void RgbToYuv(uint8_t *rgb, AVCodecContext *c, AVFrame *frame)
    {
       struct SwsContext *swsCtx = NULL;
       const int in_linesize[1] = { 3 * c->width };// RGB stride
       swsCtx = sws_getCachedContext(swsCtx, c->width, c->height, AV_PIX_FMT_RGB24, c->width, c->height, AV_PIX_FMT_YUV420P, 0, 0, 0, 0);
       sws_scale(swsCtx, (const uint8_t * const *)&rgb, in_linesize, 0, c->height, frame->data, frame->linesize);
    }

    static void EncodeFrame(uint8_t *rgb, AVCodecContext *c, AVFrame *frame, AVPacket **pkt, FILE *file)
    {
       int ret, got_output;

       RgbToYuv(rgb, c, frame);

       *pkt = av_packet_alloc();
       av_init_packet(*pkt);
       (*pkt)->data = NULL;
       (*pkt)->size = 0;

       ret = avcodec_encode_video2(c, *pkt, frame, &got_output);
       if (ret < 0)
       {
           fprintf(stderr, "Error encoding frame/n");
           exit(1);
       }
       if (got_output)
       {
           fwrite((*pkt)->data, 1, (*pkt)->size, file);
           av_packet_unref(*pkt);
       }
    }
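    A separate observation about the missing metadata, independent of the black frames: EncodeFrame writes the raw encoded packets straight to a FILE*, so the result is a bare elementary stream with no container, and players therefore have no duration or time display to show. Assuming the encoder is H.264 and the file is saved as, say, out.h264 (a hypothetical name), it can be wrapped in a container afterwards:

    # Sketch: remux the raw H.264 elementary stream into MP4 to get a proper
    # container with duration and other metadata (no re-encoding).
    # Use the fps value passed to Initialize for -framerate; 25 is a placeholder.
    ffmpeg -framerate 25 -i out.h264 -c copy out.mp4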

    Finally, I have a function that writes the remaining packets and frees the memory.
    This function is called once at the end of the time range:

    int Converter::Finalize()
    {
       int ret, got_output;
       uint8_t endcode[] = { 0, 0, 1, 0xb7 };

       /* get the delayed frames */
       do
       {
           fflush(stdout);
           ret = avcodec_encode_video2(codecContext, pkt, NULL, &got_output);
           if (ret < 0)
           {
               fprintf(stderr, "Error encoding frame/n");
               return 1;
           }
           if (got_output)
           {
               fwrite(pkt->data, 1, pkt->size, file);
               av_packet_unref(pkt);
           }
       } while (got_output);

       fwrite(endcode, 1, sizeof(endcode), file);
       fclose(file);

       avcodec_close(codecContext);
       av_free(codecContext);

       av_frame_unref(inputFrame);
       av_frame_free(&inputFrame);
       //av_freep(&inputFrame->data[0]); //Crash

       delete outputFilename;
       outputFilename = 0;

       return 0;
    }
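    About the commented-out av_freep, an observation rather than something from the original post: it crashes because it runs after av_frame_free has already freed the frame and set inputFrame to NULL (and av_frame_unref has cleared the data pointers). Releasing in the opposite order avoids both the crash and the leak of the buffer from av_image_alloc; incidentally, memory allocated with new[] is released with delete[]:

    // Sketch: free the image buffer first, then the frame structure.
    av_freep(&inputFrame->data[0]);   // buffer allocated by av_image_alloc()
    av_frame_free(&inputFrame);

    delete[] outputFilename;          // matches new char[...]
    outputFilename = nullptr;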

    EDIT:

    I modified my RgbToYuv function and created another one to convert the YUV frame back to an RGB one.

    This does not really solve the problem, but it may help narrow the problem down to the YuvToRgb conversion.

    This is the result of the conversion from YUV to RGB:

    [YuvToRgb result]: https://img42.com/kHqpt+

    static void YuvToRgb(AVCodecContext *c, AVFrame *frame)
    {
       struct SwsContext *img_convert_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P, c->width, c->height, AV_PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL);
       AVFrame * rgbPictInfo = av_frame_alloc();
       avpicture_fill((AVPicture*)rgbPictInfo, *(frame)->data, AV_PIX_FMT_RGB24, c->width, c->height);
       sws_scale(img_convert_ctx, frame->data, frame->linesize, 0, c->height, rgbPictInfo->data, rgbPictInfo->linesize);

       Util::DebugWindow(c->width, c->height, rgbPictInfo->data[0]);
    }
    static void RgbToYuv(uint8_t *rgb, AVCodecContext *c, AVFrame *frame)
    {
       AVFrame * rgbPictInfo = av_frame_alloc();
       avpicture_fill((AVPicture*)rgbPictInfo, rgb, AV_PIX_FMT_RGBA, c->width, c->height);

       struct SwsContext *swsCtx = sws_getContext(c->width, c->height, AV_PIX_FMT_RGBA, c->width, c->height, AV_PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);
       avpicture_fill((AVPicture*)frame, rgb, AV_PIX_FMT_YUV420P, c->width, c->height);    
       sws_scale(swsCtx, rgbPictInfo->data, rgbPictInfo->linesize, 0, c->height, frame->data, frame->linesize);

       YuvToRgb(c, frame);
    }
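    A reading of the edited code above, offered as a hypothesis rather than a confirmed fix: switching the source format to AV_PIX_FMT_RGBA matches the 4-byte-per-pixel buffer that ToUint8_t produces, but the second avpicture_fill repoints frame->data away from the YUV planes that av_image_alloc created in Initialize and into the rgb buffer itself, so sws_scale reads and writes overlapping memory and the encoder then sees garbage. Likewise, YuvToRgb fills rgbPictInfo with *(frame)->data (the Y plane) instead of a separately allocated RGB buffer, so the debug picture is not meaningful either. Dropping that avpicture_fill and keeping the frame's own planes as the destination would look roughly like this:

    static void RgbToYuv(uint8_t *rgb, AVCodecContext *c, AVFrame *frame)
    {
       AVFrame *rgbPictInfo = av_frame_alloc();
       avpicture_fill((AVPicture*)rgbPictInfo, rgb, AV_PIX_FMT_RGBA, c->width, c->height);

       struct SwsContext *swsCtx = sws_getContext(c->width, c->height, AV_PIX_FMT_RGBA,
                                                  c->width, c->height, AV_PIX_FMT_YUV420P,
                                                  SWS_BICUBIC, NULL, NULL, NULL);

       // No avpicture_fill() on `frame` here: its data/linesize still describe the
       // YUV planes allocated in Initialize(), which is what the encoder reads.
       sws_scale(swsCtx, rgbPictInfo->data, rgbPictInfo->linesize, 0, c->height,
                 frame->data, frame->linesize);

       sws_freeContext(swsCtx);
       av_frame_free(&rgbPictInfo);   // wrapper only; it does not own the rgb bytes
    }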
  • ytdl python "KeyError: formats"

    7 July 2022, by Mondumkreisung

    I'm trying to make a Discord music bot for personal use, since Groovy and Rythm got shut down. It's working okay-ish, I guess, but I'm having a problem with ytdl. Typing "-play" and a URL works just as intended, but I can't type "-play 'song name'". Typing "-play example" gives me this:

    [download] Downloading playlist: example
[youtube:search] query "example": Downloading page 1
[youtube:search] playlist example: Downloading 1 videos
[download] Downloading video 1 of 1
[youtube] CLXt3yh2g0s: Downloading webpage
Ignoring exception in command play:
[download] Finished downloading playlist: example
Traceback (most recent call last):
  File "C:\Users\Dennis\PycharmProjects\groovy's true successor\venv\lib\site-packages\discord\ext\commands\core.py", line 85, in wrapped
    ret = await coro(*args, **kwargs)
  File "C:\Users\Dennis\PycharmProjects\groovy's true successor\voice.py", line 53, in play
    url2 = info['formats'][0]['url']
KeyError: 'formats'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\Dennis\PycharmProjects\groovy's true successor\venv\lib\site-packages\discord\ext\commands\bot.py", line 939, in invoke
    await ctx.command.invoke(ctx)
  File "C:\Users\Dennis\PycharmProjects\groovy's true successor\venv\lib\site-packages\discord\ext\commands\core.py", line 863, in invoke
    await injected(*ctx.args, **ctx.kwargs)
  File "C:\Users\Dennis\PycharmProjects\groovy's true successor\venv\lib\site-packages\discord\ext\commands\core.py", line 94, in wrapped
    raise CommandInvokeError(exc) from exc
discord.ext.commands.errors.CommandInvokeError: Command raised an exception: KeyError: 'formats'


    


    I'm fairly new to coding, so I'm sorry if something's weird to understand.

    Okay, so: typing -play with a URL works fine, but typing -play with the song name doesn't. It only searches for the first word, downloads the first search result, and then "crashes".

    So "-play Rick Astley - Never Gonna Give You Up", for example, only searches for "Rick" and then it says something about KeyError: 'formats'.
    Here is my code:

@client.command()
async def play(ctx, url):
    channel = ctx.author.voice.channel
    voice = discord.utils.get(client.voice_clients, guild=ctx.guild)
    if voice and voice.is_connected():
        pass
    else:
        await channel.connect()

    ffmpeg_opts = {'before_options': '-reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5', 'options': '-vn'}
    ydl_opts = {'format': "bestaudio/best", 'default_search': 'auto'}
    vc = ctx.voice_client

    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
        info = ydl.extract_info(url, download=False)
        url2 = info['formats'][0]['url']
        source = await discord.FFmpegOpusAudio.from_probe(url2, **ffmpeg_opts)
        vc.play(source)
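
    A sketch of the play command under two assumptions of mine, not from the original post: discord.py passes the whole rest of the message to a keyword-only parameter (*, url), so "-play Rick Astley - Never Gonna Give You Up" arrives as one query, and a text search with 'default_search': 'auto' comes back as a playlist-like dict whose first video sits in info['entries'][0], which is where the 'formats' key actually lives.

@client.command()
async def play(ctx, *, url):
    channel = ctx.author.voice.channel
    voice = discord.utils.get(client.voice_clients, guild=ctx.guild)
    if not (voice and voice.is_connected()):
        await channel.connect()

    ffmpeg_opts = {'before_options': '-reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5', 'options': '-vn'}
    ydl_opts = {'format': 'bestaudio/best', 'default_search': 'auto'}
    vc = ctx.voice_client

    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
        info = ydl.extract_info(url, download=False)
        if 'entries' in info:            # search result or playlist: unwrap it
            info = info['entries'][0]
        url2 = info['formats'][0]['url']
        source = await discord.FFmpegOpusAudio.from_probe(url2, **ffmpeg_opts)
        vc.play(source)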