Advanced search

Media (91)

Other articles (41)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable release of MediaSPIP.
    Its official release date is 21 June 2013, announced here.
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    As with the previous version, all software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • MediaSPIP Core: Configuration

    9 November 2010, by

    By default, MediaSPIP Core provides three configuration pages (these pages rely on the CFG configuration plugin): a page for the general configuration of the skeleton; a page for configuring the site's home page; and a page for configuring the sections.
    It also provides an extra page, shown only when certain plugins are enabled, for controlling their display and specific features (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    To get a working installation, all software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)

On other sites (5563)

  • Remove Black Frames from an overlayed Circled Video

    7 July 2017, by amanguel

    I have a video that I need to overlay on top of another video. The first video has sections of black frames that I don't want overlaid, and I also need to mask it with a circle.

    In other words, I will be overlaying a few circled videos on top of a larger rectangular video, and I don't want the black frames from the circled videos to show.

    Could you please help me?

    Thanks!
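    In practice this can often be done entirely with ffmpeg filters (for example `colorkey` against black plus a circular mask applied with `alphamerge`, then `overlay`). As a rough, stdlib-only C++ sketch of the per-pixel logic such a filter chain applies (the function name and thresholds here are made up for illustration):

    ```cpp
    #include <cstdint>
    #include <vector>

    // Build an alpha mask for a tightly packed BGR frame: opaque (255) only
    // where the pixel lies inside the circle AND is not near-black.
    // cx/cy/radius describe the circle; blackThreshold decides which pixels
    // count as "black frames" to be keyed out.
    std::vector<uint8_t> circleAlphaMask(const std::vector<uint8_t>& bgr,
                                         int width, int height,
                                         double cx, double cy, double radius,
                                         uint8_t blackThreshold = 16) {
        std::vector<uint8_t> alpha(static_cast<size_t>(width) * height, 0);
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                const uint8_t* p = &bgr[(static_cast<size_t>(y) * width + x) * 3];
                bool nearBlack = p[0] < blackThreshold &&
                                 p[1] < blackThreshold &&
                                 p[2] < blackThreshold;
                double dx = x - cx, dy = y - cy;
                bool inCircle = dx * dx + dy * dy <= radius * radius;
                if (inCircle && !nearBlack)
                    alpha[static_cast<size_t>(y) * width + x] = 255;
            }
        }
        return alpha;
    }
    ```

    The mask would then be merged into the circled video's alpha channel before compositing, so both the corners outside the circle and the black regions end up transparent.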

  • Libav AVFrame to OpenCV Mat to AVPacket conversion

    14 March 2018, by Davood Falahati

    I am new to libav and I am writing video manipulation software which uses OpenCV at its heart. Briefly, what I do is:

    1- read a video packet

    2- decode the packet into an AVFrame

    3- convert the AVFrame to a cv::Mat

    4- manipulate the Mat

    5- convert the cv::Mat back into an AVFrame

    6- encode the AVFrame into an AVPacket

    7- write the packet

    8- go to 1

    I read dranger's tutorial at http://dranger.com/ffmpeg/tutorial01.html and I also used the decoding_encoding example. I can read the video, extract video frames and convert them to cv::Mat. My problem starts when converting from cv::Mat back to AVFrame and encoding it into an AVPacket.

    Would you please help me with this?

    Here is my code:

    int main(int argc, char **argv)
    {
    AVOutputFormat *ofmt = NULL;
    AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
    AVPacket pkt;
    AVCodecContext    *pCodecCtx = NULL;
    AVCodec           *pCodec = NULL;
    AVFrame           *pFrame = NULL;
    AVFrame           *pFrameRGB = NULL;
    int videoStream=-1;
    int audioStream=-1;
    int               frameFinished;
    int               numBytes;
    uint8_t           *buffer = NULL;
    struct SwsContext *sws_ctx = NULL;
    FrameManipulation *mal_frame;

    const char *in_filename, *out_filename;
    int ret, i;
    if (argc < 3) {

       printf("usage: %s input output\n"
              "API example program to remux a media file with libavformat and libavcodec.\n"
              "The output format is guessed according to the file extension.\n"
              "\n", argv[0]);
       return 1;
    }
    in_filename  = argv[1];
    out_filename = argv[2];
    av_register_all();
    if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
       fprintf(stderr, "Could not open input file '%s'", in_filename);
       goto end;
    }

    if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
       fprintf(stderr, "Failed to retrieve input stream information");
       goto end;
    }

    av_dump_format(ifmt_ctx, 0, in_filename, 0);
    avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);

    if (!ofmt_ctx) {
       fprintf(stderr, "Could not create output context\n");
       ret = AVERROR_UNKNOWN;
       goto end;
    }

    ofmt = ofmt_ctx->oformat;

    for (i = 0; i < ifmt_ctx->nb_streams; i++) {
       AVStream *in_stream = ifmt_ctx->streams[i];
       AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);

       if(ifmt_ctx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO &&
          videoStream < 0) {
              videoStream=i;
       }

       if(ifmt_ctx->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO &&
          audioStream < 0) {
               audioStream=i;
       }

       if (!out_stream) {
           fprintf(stderr, "Failed allocating output stream\n");
           ret = AVERROR_UNKNOWN;
           goto end;
       }

       ret = avcodec_copy_context(out_stream->codec, in_stream->codec);

       if (ret < 0) {
           fprintf(stderr, "Failed to copy context from input to output stream codec context\n");
           goto end;
       }

       out_stream->codec->codec_tag = 0;

       if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
          out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }

    pCodec=avcodec_find_decoder(ifmt_ctx->streams[videoStream]->codec->codec_id);
    pCodecCtx = avcodec_alloc_context3(pCodec);

    if(avcodec_copy_context(pCodecCtx, ifmt_ctx->streams[videoStream]->codec) != 0) {
     fprintf(stderr, "Couldn't copy codec context");
     return -1; // Error copying codec context
    }

    // Open codec
    if(avcodec_open2(pCodecCtx, pCodec, NULL)<0)
      return -1; // Could not open codec

    // Allocate video frame
    pFrame=av_frame_alloc();

    // Allocate an AVFrame structure
    pFrameRGB=av_frame_alloc();

    // Determine required buffer size and allocate buffer
    numBytes=avpicture_get_size(AV_PIX_FMT_RGB24, ifmt_ctx->streams[videoStream]->codec->width,
                    ifmt_ctx->streams[videoStream]->codec->height);

    buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

    // Assign appropriate parts of buffer to image planes in pFrameRGB
    // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
    // of AVPicture
    avpicture_fill((AVPicture *)pFrameRGB, buffer, AV_PIX_FMT_BGR24,
           ifmt_ctx->streams[videoStream]->codec->width, ifmt_ctx->streams[videoStream]->codec->height);

    av_dump_format(ofmt_ctx, 0, out_filename, 1);

    if (!(ofmt->flags & AVFMT_NOFILE)) {
       ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
       if (ret < 0) {
           fprintf(stderr, "Could not open output file '%s'", out_filename);
           goto end;
       }
    }

    ret = avformat_write_header(ofmt_ctx, NULL);
    if (ret < 0) {
       fprintf(stderr, "Error occurred when opening output file\n");
       goto end;
    }

    // Assign appropriate parts of buffer to image planes in pFrameRGB
    // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
    // of AVPicture

    avpicture_fill((AVPicture *)pFrameRGB, buffer, AV_PIX_FMT_BGR24,
                      ifmt_ctx->streams[videoStream]->codec->width,
                      ifmt_ctx->streams[videoStream]->codec->height);

    // initialize SWS context for software scaling
    sws_ctx = sws_getContext(
                ifmt_ctx->streams[videoStream]->codec->width,
                ifmt_ctx->streams[videoStream]->codec->height,
                ifmt_ctx->streams[videoStream]->codec->pix_fmt,
                ifmt_ctx->streams[videoStream]->codec->width,
                ifmt_ctx->streams[videoStream]->codec->height,
                AV_PIX_FMT_BGR24,
                SWS_BICUBIC,
                NULL,
                NULL,
                NULL
                );
    // Loop through packets
    while (1) {

       AVStream *in_stream, *out_stream;
       ret = av_read_frame(ifmt_ctx, &pkt);
       if(pkt.stream_index==videoStream)

        // Decode video frame
         avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &pkt);

         if(frameFinished) {
                   sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                   pFrame->linesize, 0, pCodecCtx->height,
                   pFrameRGB->data, pFrameRGB->linesize);
                   cv::Mat img= mal_frame->process(
                             pFrameRGB,pFrame->width,pFrame->height);
    /* My problem is Here ------------*/


       avpicture_fill((AVPicture*)pFrameRGB,
                        img.data,
                        PIX_FMT_BGR24,
                        outStream->codec->width,
                        outStream->codec->height);

       pFrameRGB->width =  ifmt_ctx->streams[videoStream]->codec->width;
       pFrameRGB->height = ifmt_ctx->streams[videoStream]->codec->height;

               avcodec_encode_video2(ifmt_ctx->streams[videoStream]->codec ,
                                                        &pkt , pFrameRGB , &gotPacket);
    /*
    I get this error
    [swscaler @ 0x14b58a0] bad src image pointers
    [swscaler @ 0x14b58a0] bad src image pointers
    */

    /* My Problem Ends here ---------- */

       }

       if (ret < 0)

           break;

       in_stream  = ifmt_ctx->streams[pkt.stream_index];

       out_stream = ofmt_ctx->streams[pkt.stream_index];



       //log_packet(ifmt_ctx, &pkt, "in");

       /* copy packet */

       pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base,

                                  AV_ROUND_NEAR_INF);



       pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF);

       pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);

       pkt.pos = -1;

       log_packet(ofmt_ctx, &pkt, "out");

       ret = av_interleaved_write_frame(ofmt_ctx, &pkt);

       if (ret < 0) {

           fprintf(stderr, "Error muxing packet\n");

           break;

       }

       av_free_packet(&pkt);

    }

    av_write_trailer(ofmt_ctx);

    end:

    avformat_close_input(&ifmt_ctx);

    /* close output */

    if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))

       avio_closep(&ofmt_ctx->pb);

    avformat_free_context(ofmt_ctx);

    if (ret < 0 && ret != AVERROR_EOF) {

       return 1;

    }

    return 0;

    }

    When I run this code, I get unknown fatal error in this part :

      /* My problem is Here ------------*/


       avpicture_fill((AVPicture*)pFrameRGB,
                        img.data,
                        PIX_FMT_BGR24,
                        outStream->codec->width,
                        outStream->codec->height);

       pFrameRGB->width =  ifmt_ctx->streams[videoStream]->codec->width;
       pFrameRGB->height = ifmt_ctx->streams[videoStream]->codec->height;

               avcodec_encode_video2(ifmt_ctx->streams[videoStream]->codec ,
                                                        &pkt , pFrameRGB , &gotPacket);
    /*
    I get this error
    [swscaler @ 0x14b58a0] bad src image pointers
    [swscaler @ 0x14b58a0] bad src image pointers
    */

    /* My Problem Ends here ---------- */

    This is where I want to convert the cv::Mat back to an AVFrame and encode it into an AVPacket. I appreciate your help.
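    For what it's worth, the usual cause of `bad src image pointers` here is that the frame handed onward never had valid `data`/`linesize` set up for its new contents (in modern libav that is done by setting `width`, `height` and `format` and then calling `av_frame_get_buffer`). One detail that often bites in the Mat-to-AVFrame direction: a continuous `cv::Mat` packs its rows tightly, while an AVFrame's rows may be padded out to `linesize`. A stdlib-only sketch of the row-by-row copy involved (the function name is made up for illustration):

    ```cpp
    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Copy a tightly packed BGR24 image (as in a continuous cv::Mat) into a
    // destination buffer whose rows are padded to dstLinesize bytes, the way
    // an AVFrame's rows may be. Padding bytes are left zeroed.
    std::vector<uint8_t> packedToStrided(const std::vector<uint8_t>& src,
                                         int width, int height,
                                         int dstLinesize) {
        std::vector<uint8_t> dst(static_cast<size_t>(dstLinesize) * height, 0);
        const int rowBytes = width * 3; // 3 bytes per BGR pixel
        for (int y = 0; y < height; ++y) {
            // source rows are exactly rowBytes apart; destination rows are
            // dstLinesize apart
            std::memcpy(&dst[static_cast<size_t>(y) * dstLinesize],
                        &src[static_cast<size_t>(y) * rowBytes],
                        rowBytes);
        }
        return dst;
    }
    ```

    In real code the same copy goes into `pFrameRGB->data[0]` respecting `pFrameRGB->linesize[0]`, after checking `Mat::isContinuous()`; skipping the stride handling is exactly the kind of thing that produces the scaler's "bad src image pointers" complaint.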

  • Display ffmpeg frames on an OpenGL texture

    24 October 2014, by naki

    I am using Dranger's tutorial01 (ffmpeg) to decode the video and get the video frames. I want to use OpenGL to display the video.

    http://dranger.com/ffmpeg/tutorial01.html

    The main function is as follows:

    int main (int argc, char** argv) {
    // opengl stuff
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA);
    glutInitWindowSize(800, 600);
    glutCreateWindow("Hello GL");

    glutReshapeFunc(changeViewport);
    glutDisplayFunc(render);

    GLenum err = glewInit();
    if(GLEW_OK !=err){
       fprintf(stderr, "GLEW error");
       return 1;
    }

    glClear(GL_COLOR_BUFFER_BIT);


    glEnable(GL_TEXTURE_2D);
    GLuint texture;
    glGenTextures(1, &texture); //Make room for our texture
    glBindTexture(GL_TEXTURE_2D, texture);

    //ffmpeg stuff

    AVFormatContext *pFormatCtx = NULL;
    int             i, videoStream;
    AVCodecContext  *pCodecCtx = NULL;
    AVCodec         *pCodec = NULL;
    AVFrame         *pFrame = NULL;
    AVFrame         *pFrameRGB = NULL;
    AVPacket        packet;
    int             frameFinished;
    int             numBytes;
    uint8_t         *buffer = NULL;

    AVDictionary    *optionsDict = NULL;


    if(argc < 2) {
    printf("Please provide a movie file\n");
    return -1;
    }
    // Register all formats and codecs

    av_register_all();

    // Open video file
    if(avformat_open_input(&pFormatCtx, argv[1], NULL, NULL)!=0)
      return -1; // Couldn't open file

    // Retrieve stream information

    if(avformat_find_stream_info(pFormatCtx, NULL)<0)
    return -1; // Couldn't find stream information

    // Dump information about file onto standard error
    av_dump_format(pFormatCtx, 0, argv[1], 0);

    // Find the first video stream

    videoStream=-1;
    for(i=0; i<pFormatCtx->nb_streams; i++)
    if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
     videoStream=i;
     break;
    }
    if(videoStream==-1)
    return -1; // Didn't find a video stream

    // Get a pointer to the codec context for the video stream
    pCodecCtx=pFormatCtx->streams[videoStream]->codec;

    // Find the decoder for the video stream
    pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
    if(pCodec==NULL) {
      fprintf(stderr, "Unsupported codec!\n");
      return -1; // Codec not found
    }
    // Open codec
    if(avcodec_open2(pCodecCtx, pCodec, &optionsDict)<0)
      return -1; // Could not open codec

    // Allocate video frame
    pFrame=av_frame_alloc();

    // Allocate an AVFrame structure
    pFrameRGB=av_frame_alloc();
    if(pFrameRGB==NULL)
    return -1;

    // Determine required buffer size and allocate buffer
    numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,
                 pCodecCtx->height);
    buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

    struct SwsContext      *sws_ctx = sws_getContext(pCodecCtx->width,
              pCodecCtx->height, pCodecCtx->pix_fmt, 800,
              600, PIX_FMT_RGB24, SWS_BICUBIC, NULL,
              NULL, NULL);


    // Assign appropriate parts of buffer to image planes in pFrameRGB
    // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
    // of AVPicture
    avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
        pCodecCtx->width, pCodecCtx->height);

    // Read frames and save first five frames to disk
    i=0;
    while(av_read_frame(pFormatCtx, &packet)>=0) {


    // Is this a packet from the video stream?
    if(packet.stream_index==videoStream) {
     // Decode video frame
     avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished,
              &packet);

     // Did we get a video frame?
     if(frameFinished) {
    // Convert the image from its native format to RGB
     /*  sws_scale
       (
           sws_ctx,
           (uint8_t const * const *)pFrame->data,
           pFrame->linesize,
           0,
           pCodecCtx->height,
           pFrameRGB->data,
           pFrameRGB->linesize
       );
      */
    sws_scale(sws_ctx, pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
     // additional opengl
       glBindTexture(GL_TEXTURE_2D, texture);

           //gluBuild2DMipmaps(GL_TEXTURE_2D, 3, pCodecCtx->width, pCodecCtx->height, GL_RGB, GL_UNSIGNED_INT, pFrameRGB->data[0]);
      // glTexSubImage2D(GL_TEXTURE_2D, 0, 0,0, 840, 460, GL_RGB, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);

           glTexImage2D(GL_TEXTURE_2D,                //Always GL_TEXTURE_2D
               0,                            //0 for now
               GL_RGB,                       //Format OpenGL uses for image
               pCodecCtx->width, pCodecCtx->height,  //Width and height
               0,                            //The border of the image
               GL_RGB, //GL_RGB, because pixels are stored in RGB format
               GL_UNSIGNED_BYTE, //GL_UNSIGNED_BYTE, because pixels are stored
                               //as unsigned numbers
               pFrameRGB->data[0]);               //The actual pixel data
     // additional opengl end  

    // Save the frame to disk
    if(++i<=5)
     SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height,
           i);
     }
    }

    glColor3f(1,1,1);
    glBindTexture(GL_TEXTURE_2D, texture);
    glBegin(GL_QUADS);
       glTexCoord2f(0,1);
       glVertex3f(0,0,0);

       glTexCoord2f(1,1);
       glVertex3f(pCodecCtx->width,0,0);

       glTexCoord2f(1,0);
       glVertex3f(pCodecCtx->width, pCodecCtx->height,0);

       glTexCoord2f(0,0);
       glVertex3f(0,pCodecCtx->height,0);

    glEnd();
    // Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);
    }


     // Free the RGB image
    av_free(buffer);
    av_free(pFrameRGB);

    // Free the YUV frame
    av_free(pFrame);

    // Close the codec
    avcodec_close(pCodecCtx);

    // Close the video file
    avformat_close_input(&pFormatCtx);

    return 0;
    }

    Unfortunately I could not find my solution here:

    ffmpeg video to opengl texture

    The program compiles but does not show any video on the texture; only an empty OpenGL window is created.
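    A few plausible causes, offered tentatively: the code never calls `glutMainLoop()` (or swaps buffers), so GLUT never runs the display callback; the texture's `GL_TEXTURE_MIN_FILTER` defaults to a mipmapped mode, which leaves a single-level texture incomplete (rendered black) unless `glTexParameteri` sets it to `GL_LINEAR`; and tightly packed `GL_RGB` rows violate the default `GL_UNPACK_ALIGNMENT` of 4 whenever `width*3` is not a multiple of 4, which skews or blanks the upload. The last condition is easy to express (the helper name is made up):

    ```cpp
    // OpenGL reads client pixel rows padded to GL_UNPACK_ALIGNMENT bytes
    // (default 4). Tightly packed GL_RGB rows are width*3 bytes, so any
    // width whose row size is not a multiple of 4 needs
    // glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before glTexImage2D.
    bool needsUnpackAlignment1(int width) {
        return (width * 3) % 4 != 0;
    }
    ```

    For the 800-pixel-wide buffer in this listing the alignment happens to work out, but sws_scale here writes frames at the codec's native width, so the check is worth doing before the `glTexImage2D` call.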