
Other articles (10)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or later. If needed, contact the administrator of your MédiaSpip to find out.

  • XMP PHP

    13 May 2011

    According to Wikipedia, XMP stands for:
    Extensible Metadata Platform, an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it manages a set of dynamic tags for use within the Semantic Web.
    XMP makes it possible to record, as an XML document, information about a file: title, author, history (...)

  • About documents

    21 June 2013

    What should you do when a document fails processing, or when its rendering does not match expectations?
    Document stuck in the processing queue?
    Here is an ordered, empirical list of actions you can try to unblock the situation: restart the processing of the document that fails; try inserting the document on the MédiaSPIP site again; for a video or audio medium, rework the produced file with an editor or a transcoder; convert the document to a format (...)

On other sites (3587)

  • Impossible to convert between the formats supported by the filter '...' - Error reinitializing filters

    14 November 2023, by Fabien Biller

    I am using this ffmpeg command (values removed for simplicity):

ffmpeg -hwaccel cuvid -c:v h264_cuvid -y -ss 1 -i "FILE0001.MOV" -ss 0 -i "GOPR0621.MP4" -filter_complex "
  [0:v][1:v]midequalizer[al];
  [al]yadif,lenscorrection,scale[vl];
  [1:v]lenscorrection,scale[vr];
  [vl][vr]hstack=shortest=1
" -an -c:v h264_nvenc -preset slow "output.mp4"


    


    on a machine with a CUDA graphics card.

    I get:
    ffmpeg version N-90979-g08032331ac Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 7.3.0 (GCC)
  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth
  libavutil      56. 18.100 / 56. 18.100
  libavcodec     58. 19.100 / 58. 19.100
  libavformat    58. 13.101 / 58. 13.101
  libavdevice    58.  4.100 / 58.  4.100
  libavfilter     7. 21.100 /  7. 21.100
  libswscale      5.  2.100 /  5.  2.100
  libswresample   3.  2.100 /  3.  2.100
  libpostproc    55.  2.100 / 55.  2.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 00000254a8afc0c0] st: 0 edit list: 1 Missing key frame while searching for timestamp: 6006
[mov,mp4,m4a,3gp,3g2,mj2 @ 00000254a8afc0c0] st: 0 edit list 1 Cannot find an index entry before timestamp: 6006.
....
Stream mapping:
  Stream #0:0 (h264_cuvid) -> midequalizer:in0
  Stream #1:0 (h264) -> midequalizer:in1
  Stream #1:0 (h264) -> lenscorrection
  hstack -> Stream #0:0 (h264_nvenc)
  
Impossible to convert between the formats supported by the filter 'graph 0 input from stream 0:0' and the filter 'auto_scaler_0'
Error reinitializing filters!


    


    The same command works without CUDA, i.e.

    


ffmpeg -y -ss 1 -i "FILE0001.MOV" -ss 0 -i "GOPR0621.MP4" -filter_complex "
  [0:v][1:v]midequalizer[al];
  [al]yadif,lenscorrection,scale[vl];
  [1:v]lenscorrection,scale[vr];
  [vl][vr]hstack=shortest=1
" -an "output.mp4"


    


    How do I make it work on a Windows 10 machine with CUDA?
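
    A common explanation (an assumption, not from this thread): with -hwaccel cuvid the decoder outputs CUDA device frames, while midequalizer, yadif, lenscorrection and scale only accept system-memory pixel formats, so the auto-inserted scaler cannot convert between them. One sketch of a workaround is to drop -hwaccel cuvid, so h264_cuvid delivers frames to system memory for the CPU filters while h264_nvenc still encodes on the GPU:

```shell
# Sketch, untested: decode with h264_cuvid but without "-hwaccel cuvid",
# so frames land in system memory where the CPU-only filters can run;
# encoding still happens on the GPU via h264_nvenc.
ffmpeg -c:v h264_cuvid -y -ss 1 -i "FILE0001.MOV" -ss 0 -i "GOPR0621.MP4" \
  -filter_complex "[0:v][1:v]midequalizer[al];[al]yadif,lenscorrection,scale[vl];[1:v]lenscorrection,scale[vr];[vl][vr]hstack=shortest=1" \
  -an -c:v h264_nvenc -preset slow "output.mp4"
```

    If decoding must stay hardware-accelerated, an alternative worth checking is inserting hwdownload,format=nv12 after each hardware input before the software filters; both approaches are assumptions to verify against your ffmpeg build.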

    


  • Combining JavaCV and openCV

    13 June 2014, by mister-viper

    I have the following problem:
    I have an Android application which uses native OpenCV code. Initially, the frames processed by OpenCV came from the camera; after processing they were drawn on the display.

    However, my requirements have now changed. The frames to be edited come from a video file stored on the SD card. They must be processed by the OpenCV code and then stored in a new video file.

    After a lot of reading, I realized that Android has nothing built in for reading a video file frame by frame while processing the frames along the way. On a desktop, OpenCV has the VideoCapture class, but that does not work on Android because the Android build of OpenCV does not ship with FFmpeg.

    After more reading, I found that JavaCV provides an FFmpegFrameGrabber and an FFmpegFrameRecorder. So I implemented everything, and I can now grab single frames from a video, obtain an IplImage frame, and store that frame in a new video.

    Now the problem:
    Between grabbing and storing, the IplImage frame must be processed by the original OpenCV code, as it is not feasible to port the complete code to JavaCV.

    As a first step, I wrote a small test JNI function which takes the address of a Mat object and draws a small circle on it.

    extern "C" {
    JNIEXPORT void JNICALL Java_de_vion_postprocessing_step2_EyeTracking_editFrame(
            JNIEnv*, jobject, jlong thiz, jlong addrRgba) {
        // Convert the Mat address back into an object
        Mat& rgbFrame = *(Mat*) addrRgba;

        Point2i scaledSmoothPoint(100, 100);
        circle(rgbFrame, scaledSmoothPoint, 20, YELLOW, -1);
    }
    }  // extern "C"

    Since I read that IplImage extends CvArr, I simply call the function in my code as follows:

    captured_frame = grabber.grab();
    if (captured_frame == null) {
       // no new frames
       break;
    }
    editFrame(captured_frame.address());

    However, I now get the following error:

    06-12 18:58:23.135: E/cv::error()(6498): OpenCV Error: Assertion failed (cn <= 4) in
                       void cv::scalarToRawData(const Scalar&, void*, int, int), file
                       /home/reports/ci/slave_desktop/50-SDK/opencv/modules/core/src/matrix.cpp, line 845
    06-12 18:58:23.135: A/libc(6498): Fatal signal 6 (SIGABRT) at 0x00001962 (code=-6),
                       thread 6526 (AsyncTask #1)

    Finally, my question:
    How can I process the IplImage frame using native OpenCV and then store that IplImage frame in the video recorder?

    I am also open to other ideas that do not necessarily involve JavaCV, as long as I do not have to write the FrameGrabber and FrameRecorder myself.

    Best regards,
    André
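
    One hedged guess, not confirmed by the thread: the JNI function reinterprets the jlong directly as a Mat, but FFmpegFrameGrabber hands back an IplImage, whose header layout differs from cv::Mat, which could produce a garbage channel count and the cn <= 4 assertion. OpenCV's cv::cvarrToMat can wrap an IplImage header in a Mat without copying pixels; a sketch, keeping the original JNI signature:

```cpp
// Sketch, not a verified fix: treat the incoming address as an IplImage
// and convert its header with cv::cvarrToMat (no pixel copy), instead of
// reinterpreting the raw address as a cv::Mat.
#include <jni.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

extern "C"
JNIEXPORT void JNICALL Java_de_vion_postprocessing_step2_EyeTracking_editFrame(
        JNIEnv*, jobject, jlong thiz, jlong addrIplImage) {
    IplImage* ipl = reinterpret_cast<IplImage*>(addrIplImage);
    cv::Mat rgbFrame = cv::cvarrToMat(ipl);  // shares the IplImage pixel data

    // Same test drawing as before; the BGR channel order here is an assumption.
    cv::circle(rgbFrame, cv::Point(100, 100), 20, cv::Scalar(0, 255, 255), -1);
}
```

    Whether this resolves the assertion depends on what grabber.grab() actually returns; checking nChannels on the IplImage before drawing would confirm the diagnosis.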

  • Problems using libavfilter for adding overlay to frames

    12 November 2024, by Michael Werner

    On Windows 11 with the latest libav (full build), a C/C++ app reads YUV420P frames from a frame grabber card.

    


    I want to draw a bitmap (BGR24) overlay image from file on every frame via libavfilter. First I convert the BGR24 overlay image via format filter to YUV420P. Then I feed the YUV420P frame from frame grabber and the YUV420P overlay into the overlay filter.

    


    Everything seems fine, but when I try to get a frame out of the filter graph I always get a "Resource temporarily unavailable" (EAGAIN) return code, no matter how many frames I put into the graph.

    


    The frames from the frame grabber card are fine; I can encode them or write them to a .yuv file. The overlay frame looks fine too.

    


    My initialization code is shown below. It reports no errors or warnings, but when I try to get the filtered frame out of the graph via av_buffersink_get_frame, I always get EAGAIN.

    


    Here is my current initialization code:

    


    int init_overlay_filter(AVFilterGraph** graph, AVFilterContext** src_ctx, AVFilterContext** overlay_src_ctx,
                        AVFilterContext** sink_ctx)
{
    AVFilterGraph* filter_graph;
    AVFilterContext* buffersrc_ctx;
    AVFilterContext* overlay_buffersrc_ctx;
    AVFilterContext* buffersink_ctx;
    AVFilterContext* overlay_ctx;
    AVFilterContext* format_ctx;
    const AVFilter *buffersrc, *buffersink, *overlay_buffersrc, *overlay_filter, *format_filter;
    int ret;

    // Create the filter graph
    filter_graph = avfilter_graph_alloc();
    if (!filter_graph)
    {
        fprintf(stderr, "Unable to create filter graph.\n");
        return AVERROR(ENOMEM);
    }

    // Create buffer source filter for main video
    buffersrc = avfilter_get_by_name("buffer");
    if (!buffersrc)
    {
        fprintf(stderr, "Unable to find buffer filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Create buffer source filter for overlay image
    overlay_buffersrc = avfilter_get_by_name("buffer");
    if (!overlay_buffersrc)
    {
        fprintf(stderr, "Unable to find buffer filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Create buffer sink filter
    buffersink = avfilter_get_by_name("buffersink");
    if (!buffersink)
    {
        fprintf(stderr, "Unable to find buffersink filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Create overlay filter
    overlay_filter = avfilter_get_by_name("overlay");
    if (!overlay_filter)
    {
        fprintf(stderr, "Unable to find overlay filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Create format filter
    format_filter = avfilter_get_by_name("format");
    if (!format_filter) 
    {
        fprintf(stderr, "Unable to find format filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Initialize the main video buffer source
    char args[512];
    snprintf(args, sizeof(args),
             "video_size=1920x1080:pix_fmt=yuv420p:time_base=1/25:pixel_aspect=1/1");
    ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in", args, NULL, filter_graph);
    if (ret < 0)
    {
        fprintf(stderr, "Unable to create buffer source filter for main video.\n");
        return ret;
    }

    // Initialize the overlay buffer source
    snprintf(args, sizeof(args),
             "video_size=165x165:pix_fmt=bgr24:time_base=1/25:pixel_aspect=1/1");
    ret = avfilter_graph_create_filter(&overlay_buffersrc_ctx, overlay_buffersrc, "overlay_in", args, NULL,
                                       filter_graph);
    if (ret < 0)
    {
        fprintf(stderr, "Unable to create buffer source filter for overlay.\n");
        return ret;
    }

    // Initialize the format filter to convert overlay image to yuv420p
    snprintf(args, sizeof(args), "pix_fmts=yuv420p");
    ret = avfilter_graph_create_filter(&format_ctx, format_filter, "format", args, NULL, filter_graph);

    if (ret < 0) 
    {
        fprintf(stderr, "Unable to create format filter.\n");
        return ret;
    }

    // Initialize the buffer sink
    ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out", NULL, NULL, filter_graph);
    if (ret < 0)
    {
        fprintf(stderr, "Unable to create buffer sink filter.\n");
        return ret;
    }

    // Initialize the overlay filter
    ret = avfilter_graph_create_filter(&overlay_ctx, overlay_filter, "overlay", "W-w:H-h:enable='between(t,0,20)':format=yuv420", NULL, filter_graph);
    if (ret < 0)
    {
        fprintf(stderr, "Unable to create overlay filter.\n");
        return ret;
    }

    // Connect the filters
    ret = avfilter_link(overlay_buffersrc_ctx, 0, format_ctx, 0);

    if (ret >= 0)
    {
        ret = avfilter_link(buffersrc_ctx, 0, overlay_ctx, 0);
    }
    else
    {
        fprintf(stderr, "Unable to configure filter graph.\n");
        return ret;
    }


    if (ret >= 0) 
    {
        ret = avfilter_link(format_ctx, 0, overlay_ctx, 1);
    }
    else
    {
        fprintf(stderr, "Unable to configure filter graph.\n");
        return ret;
    }

    if (ret >= 0) 
    {
        if ((ret = avfilter_link(overlay_ctx, 0, buffersink_ctx, 0)) < 0)
        {
            fprintf(stderr, "Unable to link filter graph.\n");
            return ret;
        }
    }
    else
    {
        fprintf(stderr, "Unable to configure filter graph.\n");
        return ret;
    }

    // Configure the filter graph
    if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
    {
        fprintf(stderr, "Unable to configure filter graph.\n");
        return ret;
    }

    *graph = filter_graph;
    *src_ctx = buffersrc_ctx;
    *overlay_src_ctx = overlay_buffersrc_ctx;
    *sink_ctx = buffersink_ctx;

    return 0;
}


    


    Feeding the filter graph is done this way:

    


    av_buffersrc_add_frame_flags(buffersrc_ctx, pFrameGrabberFrame, AV_BUFFERSRC_FLAG_KEEP_REF)
av_buffersink_get_frame(buffersink_ctx, filtered_frame)


    


    av_buffersink_get_frame always returns EAGAIN, no matter how many frames I feed into the graph. The frames themselves (from the frame grabber, and the overlay frame) look fine.

    


    I set the libav logging level to maximum, but the log shows no warnings, errors, or otherwise helpful related information.

    


    Here is the log output related to the filter configuration:

    


    [in @ 00000288ee494f40] Setting 'video_size' to value '1920x1080'
[in @ 00000288ee494f40] Setting 'pix_fmt' to value 'yuv420p'
[in @ 00000288ee494f40] Setting 'time_base' to value '1/25'
[in @ 00000288ee494f40] Setting 'pixel_aspect' to value '1/1'
[in @ 00000288ee494f40] w:1920 h:1080 pixfmt:yuv420p tb:1/25 fr:0/1 sar:1/1 csp:unknown range:unknown
[overlay_in @ 00000288ff1013c0] Setting 'video_size' to value '165x165'
[overlay_in @ 00000288ff1013c0] Setting 'pix_fmt' to value 'bgr24'
[overlay_in @ 00000288ff1013c0] Setting 'time_base' to value '1/25'
[overlay_in @ 00000288ff1013c0] Setting 'pixel_aspect' to value '1/1'
[overlay_in @ 00000288ff1013c0] w:165 h:165 pixfmt:bgr24 tb:1/25 fr:0/1 sar:1/1 csp:unknown range:unknown
[format @ 00000288ff1015c0] Setting 'pix_fmts' to value 'yuv420p'
[overlay @ 00000288ff101880] Setting 'x' to value 'W-w'
[overlay @ 00000288ff101880] Setting 'y' to value 'H-h'
[overlay @ 00000288ff101880] Setting 'enable' to value 'between(t,0,20)'
[overlay @ 00000288ff101880] Setting 'format' to value 'yuv420'
[auto_scale_0 @ 00000288ff101ec0] w:iw h:ih flags:'' interl:0
[format @ 00000288ff1015c0] auto-inserting filter 'auto_scale_0' between the filter 'overlay_in' and the filter 'format'
[auto_scale_1 @ 00000288ee4a4cc0] w:iw h:ih flags:'' interl:0
[overlay @ 00000288ff101880] auto-inserting filter 'auto_scale_1' between the filter 'format' and the filter 'overlay'
[AVFilterGraph @ 00000288ee495c80] query_formats: 5 queried, 6 merged, 6 already done, 0 delayed
[auto_scale_0 @ 00000288ff101ec0] w:165 h:165 fmt:bgr24 csp:gbr range:pc sar:1/1 -> w:165 h:165 fmt:yuv420p csp:unknown range:unknown sar:1/1 flags:0x00000004
[auto_scale_1 @ 00000288ee4a4cc0] w:165 h:165 fmt:yuv420p csp:unknown range:unknown sar:1/1 -> w:165 h:165 fmt:yuva420p csp:unknown range:unknown sar:1/1 flags:0x00000004
[overlay @ 00000288ff101880] main w:1920 h:1080 fmt:yuv420p overlay w:165 h:165 fmt:yuva420p
[overlay @ 00000288ff101880] [framesync @ 00000288ff1019a8] Selected 1/25 time base
[overlay @ 00000288ff101880] [framesync @ 00000288ff1019a8] Sync level 2
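
    One thing worth checking, as an assumption rather than something the log proves: the overlay filter is driven by framesync and will not emit output until it has received frames on both inputs, but the feeding code above only pushes frames into the main buffer source ("in"), never into overlay_buffersrc_ctx ("overlay_in"), which would make av_buffersink_get_frame return EAGAIN indefinitely. A sketch of feeding both inputs before draining the sink (pOverlayFrame is a hypothetical name for the BGR24 overlay frame; the other names follow the code above):

```c
/* Sketch under the assumption that the overlay input was never fed:
 * push one frame into each buffer source, then drain the sink until
 * it asks for more input. */
ret = av_buffersrc_add_frame_flags(buffersrc_ctx, pFrameGrabberFrame,
                                   AV_BUFFERSRC_FLAG_KEEP_REF);
if (ret >= 0)
    ret = av_buffersrc_add_frame_flags(overlay_buffersrc_ctx, pOverlayFrame,
                                       AV_BUFFERSRC_FLAG_KEEP_REF);

while (ret >= 0)
{
    ret = av_buffersink_get_frame(buffersink_ctx, filtered_frame);
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        break;                     /* needs more input / end of stream */
    /* ... use filtered_frame, then av_frame_unref(filtered_frame) ... */
}
```

    The overlay frame would also need a valid pts on the declared 1/25 time base; without it framesync may keep waiting as well.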