
Other articles (105)

  • No talk of market, cloud, etc.

    10 April 2011

    The vocabulary used on this site tries to avoid any reference to the buzzwords that flourish
    around Web 2.0 and the companies that live off it.
    You are therefore invited to avoid terms such as "Brand", "Cloud", "Market", etc.
    Our motivation is above all to create a simple tool, accessible to everyone, that encourages
    the sharing of creative work on the Internet and lets authors keep as much autonomy as possible.
    No "Gold or Premium contract" is therefore planned, no (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
    Simply enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded as Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text documents are analyzed to extract the data needed for search-engine indexing, and are then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

On other sites (6065)

  • configure: use check_lib2 for cuda and cuvid

    12 November 2016, by Hendrik Leppkes

    Fixes building for Windows x86 with MSVC using the link libraries distributed with the CUDA SDK.

    check_lib2 is required here because it includes the header to get the full signature of the
    function, including the stdcall calling convention and all of its arguments, which enables
    the linker to determine the fully qualified object name and resolve it through the import
    library, since the CUDA SDK libraries do not include un-qualified aliases.
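
    As a rough illustration of that point (not the actual configure test), the sketch below shows how the prototype drives symbol decoration with MSVC on 32-bit Windows; the simplified CUresult typedef and the main() wrapper are only for the example:

    // Hypothetical sketch: linking against the CUDA SDK import library on x86.
    typedef int CUresult;                           // simplified for the example

    extern "C" CUresult __stdcall cuInit(unsigned int Flags);

    int main()
    {
        // With the __stdcall prototype the compiler references the decorated
        // symbol _cuInit@4, which is what the SDK's cuda.lib exports.  Declaring
        // cuInit without the header (default cdecl convention, unknown argument
        // sizes) makes the object file reference _cuInit instead, so the link fails.
        return cuInit(0);
    }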

    • [DH] configure
  • Segfault while trying to fill the yuv image for rtsp streaming

    21 September 2016, by tankyx

    I am capturing the video stream from a window and want to restream it to my RTSP proxy server. However, it seems I can’t write the frame properly, even though I can display that same frame in an SDL window. Here is my code:

    int StreamHandler::storeStreamData()
    {
    // Allocate video frame
    pFrame = av_frame_alloc();

    // Allocate an AVFrame structure
    pFrameRGB = av_frame_alloc();
    if (pFrameRGB == NULL)
       throw myExceptions("Error : Can't alloc the frame.");

    // Determine required buffer size and allocate buffer
    numBytes = avpicture_get_size(AV_PIX_FMT_YUV420P, pCodecCtx->width,
       pCodecCtx->height);
    buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));

    // Assign appropriate parts of buffer to image planes in pFrameRGB
    avpicture_fill((AVPicture *)pFrameRGB, buffer, AV_PIX_FMT_YUV420P,
       pCodecCtx->width, pCodecCtx->height);

    //InitSdlDrawBack();

    // initialize SWS context for software scaling

    sws_ctx = sws_getContext(pCodecCtx->width,
       pCodecCtx->height,
       pCodecCtx->pix_fmt,
       pCodecCtx->width,
       pCodecCtx->height,
       pCodecCtx->pix_fmt,
       SWS_LANCZOS,
       NULL,
       NULL,
       NULL
    );

    SetPixelArray();
    FfmpegEncoder enc("rtsp://127.0.0.1:1935/live/myStream");

    i = 0;
    while (av_read_frame(pFormatCtx, &packet) >= 0) {
       if (packet.stream_index == videoindex) {
           // Decode video frame
           avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
           if (frameFinished) {
               i++;
               //DrawFrame();

               sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                   pFrame->linesize, 0, pCodecCtx->height,
                   pFrameRGB->data, pFrameRGB->linesize);
               enc.encodeFrame(pFrameRGB, i);
           }
       }
       // Free the packet that was allocated by av_read_frame
       av_free_packet(&packet);
    }
    // Free the RGB image
    av_free(buffer);
    av_frame_free(&pFrameRGB);

    // Free the YUV frame
    av_frame_free(&pFrame);

    // Close the codecs
    avcodec_close(pCodecCtx);
    avcodec_close(pCodecCtxOrig);

    // Close the video file
    avformat_close_input(&pFormatCtx);

    return 0;
    }

    void StreamHandler::SetPixelArray()
    {
    yPlaneSz = pCodecCtx->width * pCodecCtx->height;
    uvPlaneSz = pCodecCtx->width * pCodecCtx->height / 4;
    yPlane = (Uint8*)malloc(yPlaneSz);
    uPlane = (Uint8*)malloc(uvPlaneSz);
    vPlane = (Uint8*)malloc(uvPlaneSz);
    if (!yPlane || !uPlane || !vPlane)
       throw myExceptions("Error : Can't create pixel array.");

    uvPitch = pCodecCtx->width / 2;
    }

    Here I fill the YUV image and write the packet.

    void FfmpegEncoder::encodeFrame(AVFrame * frame, int frameCount)
    {
    AVPacket    pkt = { 0 };
    int         got_pkt;

    av_init_packet(&pkt);
    frame->pts = frameCount;

    FillYuvImage(frame, frameCount, this->pCodecCtx->width, this->pCodecCtx->height);

    if (avcodec_encode_video2(this->pCodecCtx, &pkt, frame, &got_pkt) < 0)
       throw myExceptions("Error: failed to encode the frame. FfmpegEncoder.cpp l:61\n");

    //if the frame is well encoded
    if (got_pkt) {
       pkt.stream_index = this->st->index;
       pkt.pts = av_rescale_q_rnd(pkt.pts, this->pCodecCtx->time_base, this->st->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
       if (av_write_frame(this->outFormatCtx, &pkt) < 0)
           throw myExceptions("Error: failed to write video frame. FfmpegEncoder.cpp l:68\n");
    }
    }

    void FfmpegEncoder::FillYuvImage(AVFrame * pict, int frame_index, int width, int height)
    {
    int x, y, i;

    i = frame_index;

    for (y = 0; y < height; y++)
    {
       for (x = 0; x < width / 2; x++)
           pict->data[0][y * pict->linesize[0] + x] = x + y + i * 3;
    }
    for (y = 0; y < height; y++)
    {
       for (x = 0; x < width / 2; x++)
       {
           pict->data[1][y * pict->linesize[1] + x] = 128 + y + i * 2;
           pict->data[2][y * pict->linesize[2] + x] = 64 + y + i * 5; //segault here
       }
    }
    }

    The "FillYuvImage" method is copied from a FFMPEG example, but It does not work for me. If I don’t call it, the "av_write_frame" function won’t work (segfault too).

    EDIT: Here is my output context and codec initialization.

    FfmpegEncoder::FfmpegEncoder(char *url)
    {
    AVRational      tmp_time_base;
    AVDictionary*   options = NULL;

    this->pCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
    if (this->pCodec == NULL)
       throw myExceptions("Error: Can't initialize the encoder. FfmpegEncoder.cpp l:9\n");

    this->pCodecCtx = avcodec_alloc_context3(this->pCodec);

    //Alloc output context
    if (avformat_alloc_output_context2(&outFormatCtx, NULL, "rtsp", url) < 0)
       throw myExceptions("Error: Can't alloc stream output. FfmpegEncoder.cpp l:17\n");

    this->st = avformat_new_stream(this->outFormatCtx, this->pCodec);

    if (this->st == NULL)
       throw myExceptions("Error: Can't create stream . FfmpegEncoder.cpp l:22\n");

    av_dict_set(&options, "vprofile", "main", 0);
    av_dict_set(&options, "tune", "zerolatency", 0);

    tmp_time_base.num = 1;
    tmp_time_base.den = 60;

    //TODO : parse these values
    this->pCodecCtx->bit_rate = 3000000;
    this->pCodecCtx->width = 1280;
    this->pCodecCtx->height = 720;
    //This set the fps. 60fps at this point.
    this->pCodecCtx->time_base = tmp_time_base;
    //Add a intra frame every 12 frames
    this->pCodecCtx->gop_size = 12;
    this->pCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;

    //Open Codec, using the context + x264 options
    if (avcodec_open2(this->pCodecCtx, this->pCodec, &options) < 0)
       throw myExceptions("Error: Can't open the codec. FfmpegEncoder.cpp l:43\n");

    if (avcodec_copy_context(this->st->codec, this->pCodecCtx) != 0) {
       throw myExceptions("Error : Can't copy codec context. FfmpegEncoder.cpp : l.46");
    }

    av_dump_format(this->outFormatCtx, 0, url, 1);

    if (avformat_write_header(this->outFormatCtx, NULL) != 0)
       throw myExceptions("Error: failed to connect to RTSP server. FfmpegEncoder.cpp l:48\n");
    }
  • Cleaning up after av_frame_get_buffer

    4 November 2016, by Jason C

    There are two aspects to my question. I’m using the libav* libraries from FFmpeg 3.1.


    First, how do you appropriately dispose of a frame whose buffer has been allocated with av_frame_get_buffer? E.g.:

    AVFrame *frame = av_frame_alloc();
    frame->width = ...;
    frame->height = ...;
    frame->format = ...;
    av_frame_get_buffer(frame, ...);

    Do any buffers have to be freed manually, beyond the call to av_frame_free(frame)? The documentation doesn’t mention anything special, but in my experience the ffmpeg documentation often leaves out important details, or at least hides them in places far away from the obvious spots. I took a look at the code for av_frame_free and av_frame_unref but it branched out quite a bit and I couldn’t quite determine if it covered everything.
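
    For what it’s worth, a minimal sketch of how the ownership looks in this first case (assuming FFmpeg 3.x semantics, where av_frame_get_buffer() attaches reference-counted buffers to frame->buf[]); the helper name and the example dimensions are made up:

    extern "C" {
    #include <libavutil/frame.h>
    }

    void first_case_sketch()              // hypothetical helper, illustration only
    {
        AVFrame *frame = av_frame_alloc();
        frame->width  = 1280;             // example values
        frame->height = 720;
        frame->format = AV_PIX_FMT_YUV420P;
        av_frame_get_buffer(frame, 32);

        // The pixel data now lives in frame->buf[0] (an AVBufferRef owned by
        // the frame).  av_frame_free() calls av_frame_unref(), which drops that
        // reference, so no separate free of the data should be needed.
        av_frame_free(&frame);            // frame is also set to NULL
    }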


    Second, if something beyond av_frame_free needs to be done, then is there any catch-all way to clean up a frame if you don’t know how its data has been allocated? For example, assuming someBuffer is already allocated with the appropriate size:

    AVFrame *frame2 = av_frame_alloc();
    frame2->width = ...;
    frame2->height = ...;
    frame2->format = ...;
    av_image_fill_arrays(frame2->data, frame2->linesize, someBuffer,
                        frame2->format, frame2->width, frame2->height, 1);

    Is there a way to free both frame and frame2 in the above examples using the exact same code? That is, frame and its data should be freed, and frame2 should be freed, but not someBuffer, since libav did not allocate it.
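
    A sketch of what this second case looks like under the same assumption (FFmpeg 3.x): data that is merely pointed to via av_image_fill_arrays() is never owned by the frame (frame2->buf[0] stays NULL), so av_frame_free() releases only the AVFrame struct and the caller’s buffer still has to be freed separately. The helper name and dimensions are again made up:

    extern "C" {
    #include <libavutil/frame.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/mem.h>
    }

    void second_case_sketch()             // hypothetical helper, illustration only
    {
        const int w = 1280, h = 720;      // example dimensions

        uint8_t *someBuffer = (uint8_t *)av_malloc(
            av_image_get_buffer_size(AV_PIX_FMT_YUV420P, w, h, 1));

        AVFrame *frame2 = av_frame_alloc();
        frame2->width  = w;
        frame2->height = h;
        frame2->format = AV_PIX_FMT_YUV420P;
        av_image_fill_arrays(frame2->data, frame2->linesize, someBuffer,
                             AV_PIX_FMT_YUV420P, w, h, 1);
        // frame2->buf[0] is still NULL: the frame does not own someBuffer.

        av_frame_free(&frame2);           // frees only the AVFrame struct
        av_free(someBuffer);              // the caller's allocation remains the
                                          // caller's responsibility
    }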