
Media (91)

Other articles (55)

  • Making files available

    14 April 2011, by

    By default, when it is initialized, MediaSPIP does not allow visitors to download files, whether they are originals or the result of their transformation or encoding. It only allows them to be viewed.
    However, it is possible and easy to grant visitors access to these documents, in several different forms.
    All of this happens on the template configuration page. You need to go to the channel's administration area and choose, in the navigation (...)

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • MediaSPIP in private mode (Intranet)

    17 September 2013, by

    As of version 0.3, a MediaSPIP channel can be made private, blocked to anyone not logged in, thanks to the "Intranet/extranet" plugin.
    When enabled, the Intranet/extranet plugin blocks access to the channel for any unauthenticated visitor, preventing them from reaching the content by systematically redirecting them to the login form.
    This system can be particularly useful for certain cases, such as: a workshop with children whose content must not (...)

On other sites (8234)

  • Error in making animation through ffmpeg (python3.9)

    20 April 2024, by Taehyung Ghim

    When I try to make a 2D animation map for cow tracking (matching two camera views) through ffmpeg, the following error occurs:

    


    
    raise subprocess.CalledProcessError(
subprocess.CalledProcessError: Command '['ffmpeg', '-f', 'rawvideo', '-vcodec', 'rawvideo', '-s', '4000x4000', '-pix_fmt', 'rgba', '-r', '5', '-loglevel', 'error', '-i', 'pipe:', '-vcodec', 'h264', '-pix_fmt', 'yuv420p', '-metadata', 'artist=Me', '-y', '../out_detect/run7/TRACKS_ANIMATION_fused.mp4']' returned non-zero exit status 1.



    


    The following is the full error:

    


    Plotting the last 1800.9391813674797 frames.
    INFO:Animation.save using <class ...>
    INFO:MovieWriter._run: running command: ffmpeg -f rawvideo -vcodec rawvideo -s 4000x4000 -pix_fmt rgba -r 5 -loglevel error -i pipe: -vcodec h264 -pix_fmt yuv420p -metadata artist=Me -y ../out_detect/run7/TRACKS_ANIMATION_fused.mp4
    WARNING:MovieWriter stderr:
    [libopenh264 @ 0x55b93df81fc0] [OpenH264] this = 0x0x55b93df8ef10, Error:ParamValidationExt(), width > 0, height > 0, width * height <= 9437184, invalid 4000 x 4000 in dependency layer settings!
    [libopenh264 @ 0x55b93df81fc0] [OpenH264] this = 0x0x55b93df8ef10, Error:WelsInitEncoderExt(), ParamValidationExt failed return 2.
    [libopenh264 @ 0x55b93df81fc0] [OpenH264] this = 0x0x55b93df8ef10, Error:CWelsH264SVCEncoder::Initialize(), WelsInitEncoderExt failed.
    [libopenh264 @ 0x55b93df81fc0] Initialize failed
    Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height

    Traceback (most recent call last):
      File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/animation.py", line 236, in saving
        yield self
      File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/animation.py", line 1095, in save
        writer.grab_frame(**savefig_kwargs)
      File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/animation.py", line 353, in grab_frame
        self.fig.savefig(self._proc.stdin, format=self.frame_format,
      File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/figure.py", line 3012, in savefig
        self.canvas.print_figure(fname, **kwargs)
      File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/backend_bases.py", line 2314, in print_figure
        result = print_method(
      File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/backend_bases.py", line 1643, in wrapper
        return func(*args, **kwargs)
      File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/_api/deprecation.py", line 412, in wrapper
        return func(*inner_args, **inner_kwargs)
      File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/backends/backend_agg.py", line 486, in print_raw
        fh.write(renderer.buffer_rgba())
    BrokenPipeError: [Errno 32] Broken pipe

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/rom/PycharmProjects/cow_tracking_package/tracking-cows/main.py", line 330, in <module>
        inference_tracking_video(opt=args, device=dev, detector=detector, keypoint_tfm=keypoint_tfm,
      File "/home/rom/PycharmProjects/cow_tracking_package/tracking-cows/tracking.py", line 325, in inference_tracking_video
        postprocess_tracking_results(track_args=track_args, cfg_postprocess=cfg_matching_parameters.POSTPROCESS,
      File "/home/rom/PycharmProjects/cow_tracking_package/tracking-cows/postprocessing/postprocess_results.py", line 90, in postprocess_tracking_results
        postprocess_trajectories(track_args=track_args, analysis_matching_cfg=cfg_analysis)
      File "/home/rom/PycharmProjects/cow_tracking_package/tracking-cows/postprocessing/postprocess_results.py", line 58, in postprocess_trajectories
        analyse_trajectories(analysis_arguments, full_width, full_height, video_fps, frame_rate_animation)
      File "/home/rom/PycharmProjects/cow_tracking_package/tracking-cows/postprocessing/trajectory_postprocess.py", line 115, in analyse_trajectories
        create_virtual_map_animation_final(opt.save_dir, final_matching_both_cams, color_dict3, full_width,
      File "/home/rom/PycharmProjects/cow_tracking_package/tracking-cows/output/output_plot_fused_trajectory_animation.py", line 236, in create_virtual_map_animation_final
        virtual_map_animation.save(traj_file_path, writer=writer)
      File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/animation.py", line 1095, in save
        writer.grab_frame(**savefig_kwargs)
      File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/contextlib.py", line 137, in __exit__
        self.gen.throw(typ, value, traceback)
      File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/animation.py", line 238, in saving
        self.finish()
      File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/animation.py", line 344, in finish
        self._cleanup()  # Inline _cleanup() once cleanup() is removed.
      File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/animation.py", line 375, in _cleanup
        raise subprocess.CalledProcessError(
    subprocess.CalledProcessError: Command '['ffmpeg', '-f', 'rawvideo', '-vcodec', 'rawvideo', '-s', '4000x4000', '-pix_fmt', 'rgba', '-r', '5', '-loglevel', 'error', '-i', 'pipe:', '-vcodec', 'h264', '-pix_fmt', 'yuv420p', '-metadata', 'artist=Me', '-y', '../out_detect/run7/TRACKS_ANIMATION_fused.mp4']' returned non-zero exit status 1.

    The ffmpeg version is 4.3, built with gcc 7.3.0, and the OS is Ubuntu 20.04. My conda env is below.


    channels:
      - pytorch
      - defaults
    dependencies:
      - _libgcc_mutex=0.1=main
      - _openmp_mutex=4.5=1_gnu
      - blas=1.0=mkl
      - bzip2=1.0.8=h7b6447c_0
      - ca-certificates=2021.10.26=h06a4308_2
      - certifi=2021.10.8=py39h06a4308_2
      - cudatoolkit=11.3.1=h2bc3f7f_2
      - ffmpeg=4.3=hf484d3e_0
      - freetype=2.11.0=h70c0345_0
      - giflib=5.2.1=h7b6447c_0
      - gmp=6.2.1=h2531618_2
      - gnutls=3.6.15=he1e5248_0
      - intel-openmp=2021.4.0=h06a4308_3561
      - jpeg=9d=h7f8727e_0
      - lame=3.100=h7b6447c_0
      - lcms2=2.12=h3be6417_0
      - ld_impl_linux-64=2.35.1=h7274673_9
      - libffi=3.3=he6710b0_2
      - libgcc-ng=9.3.0=h5101ec6_17
      - libgomp=9.3.0=h5101ec6_17
      - libiconv=1.15=h63c8f33_5
      - libidn2=2.3.2=h7f8727e_0
      - libpng=1.6.37=hbc83047_0
      - libstdcxx-ng=9.3.0=hd4cf53a_17
      - libtasn1=4.16.0=h27cfd23_0
      - libtiff=4.2.0=h85742a9_0
      - libunistring=0.9.10=h27cfd23_0
      - libuv=1.40.0=h7b6447c_0
      - libwebp=1.2.0=h89dd481_0
      - libwebp-base=1.2.0=h27cfd23_0
      - lz4-c=1.9.3=h295c915_1
      - mkl=2021.4.0=h06a4308_640
      - mkl-service=2.4.0=py39h7f8727e_0
      - mkl_fft=1.3.1=py39hd3c417c_0
      - mkl_random=1.2.2=py39h51133e4_0
      - ncurses=6.3=h7f8727e_2
      - nettle=3.7.3=hbbd107a_1
      - numpy=1.21.2=py39h20f2e39_0
      - numpy-base=1.21.2=py39h79a1101_0
      - olefile=0.46=pyhd3eb1b0_0
      - openh264=2.1.0=hd408876_0
      - openssl=1.1.1m=h7f8727e_0
      - pillow=8.4.0=py39h5aabda8_0
      - pip=21.2.4=py39h06a4308_0
      - python=3.9.7=h12debd9_1
      - pytorch=1.10.0=py3.9_cuda11.3_cudnn8.2.0_0
      - pytorch-mutex=1.0=cuda
      - readline=8.1=h27cfd23_0
      - setuptools=58.0.4=py39h06a4308_0
      - six=1.16.0=pyhd3eb1b0_0
      - sqlite=3.36.0=hc218d9a_0
      - tk=8.6.11=h1ccaba5_0
      - torchaudio=0.10.0=py39_cu113
      - torchvision=0.11.1=py39_cu113
      - typing_extensions=3.10.0.2=pyh06a4308_0
      - wheel=0.37.0=pyhd3eb1b0_1
      - xz=5.2.5=h7b6447c_0
      - zlib=1.2.11=h7b6447c_3
      - zstd=1.4.9=haebb681_0
      - pip:
        - absl-py==1.0.0
        - addict==2.4.0
        - cachetools==4.2.4
        - charset-normalizer==2.0.8
        - cloudpickle==2.0.0
        - cycler==0.11.0
        - cython==0.29.24
        - docutils==0.18.1
        - easydict==1.9
        - filterpy==1.4.5
        - fonttools==4.28.2
        - geohash2==1.1
        - google-auth==2.3.3
        - google-auth-oauthlib==0.4.6
        - grpcio==1.42.0
        - idna==3.3
        - imageio==2.13.5
        - importlib-metadata==4.8.2
        - joblib==1.1.0
        - kiwisolver==1.3.2
        - loguru==0.6.0
        - markdown==3.3.6
        - matplotlib==3.5.0
        - natsort==8.0.2
        - networkx==2.6.3
        - oauthlib==3.1.1
        - opencv-python==4.5.4.60
        - packaging==21.3
        - pandas==1.3.4
        - protobuf==3.19.1
        - pyasn1==0.4.8
        - pyasn1-modules==0.2.8
        - pycocotools==2.0.4
        - pyparsing==3.0.6
        - pyqt5==5.15.6
        - pyqt5-qt5==5.15.2
        - pyqt5-sip==12.9.0
        - python-dateutil==2.8.2
        - pytz==2021.3
        - pytz-deprecation-shim==0.1.0.post0
        - pywavelets==1.2.0
        - pyyaml==6.0
        - requests==2.26.0
        - requests-oauthlib==1.3.0
        - rsa==4.8
        - scikit-image==0.19.1
        - scikit-learn==1.0.2
        - scipy==1.7.3
        - seaborn==0.11.2
        - setuptools-scm==6.3.2
        - shapely==1.8.0
        - sklearn==0.0
        - split-folders==0.4.3
        - tabulate==0.8.9
        - tensorboard==2.7.0
        - tensorboard-data-server==0.6.1
        - tensorboard-plugin-wit==1.8.0
        - terminaltables==3.1.10
        - thop==0.0.31-2005241907
        - threadpoolctl==3.1.0
        - tifffile==2021.11.2
        - timm==0.4.12
        - tomli==1.2.2
        - tqdm==4.62.3
        - traja==0.2.8
        - tzdata==2021.5
        - tzlocal==4.1
        - urllib3==1.26.7
        - werkzeug==2.0.2
        - yacs==0.1.8
        - yapf==0.32.0
        - zipp==3.6.0


    I also installed ffmpy through conda.


    I would be very grateful if anyone could help me.

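    For reference, the failure appears to come from the encoder rather than from matplotlib: the `ffmpeg=4.3` conda build in this environment ships `openh264=2.1.0` as its only H.264 encoder, and the OpenH264 log line shows the validation rule `width * height <= 9437184`, which a 4000x4000 frame (16,000,000 pixels) violates. A minimal sketch of that size check (the constant is taken from the error message above; the helper name is made up for illustration):

    ```python
    # Limit quoted in the OpenH264 error message:
    # "width > 0, height > 0, width * height <= 9437184" (9437184 = 3072 * 3072).
    OPENH264_MAX_PIXELS = 9_437_184

    def fits_openh264(width: int, height: int) -> bool:
        """Hypothetical helper: does a frame of this size pass OpenH264's validation?"""
        return width > 0 and height > 0 and width * height <= OPENH264_MAX_PIXELS

    print(fits_openh264(4000, 4000))  # False: 16,000,000 pixels exceeds the limit
    print(fits_openh264(3072, 3072))  # True: exactly at the limit
    ```

    So two hedged ways out are: shrink the animation frame (for example via the figure size or dpi passed to matplotlib) so that width times height stays at or below 9,437,184, or install an ffmpeg build that includes a different H.264 encoder such as libx264 (the conda-forge ffmpeg packages usually do), so that 4000x4000 frames are accepted.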

  • FFmpeg - MJPEG decoding gives inconsistent values

    28 December 2016, by ahmadh

    I have a set of JPEG frames which I am muxing into an AVI, which gives me an MJPEG video. This is the command I run on the console:

    ffmpeg -y -start_number 0 -i %06d.JPEG -codec copy vid.avi

    When I try to demux the video using the ffmpeg C API, I get frames whose values are slightly different. The demuxing code looks something like this:

    AVFormatContext* fmt_ctx = NULL;
    AVCodecContext* cdc_ctx = NULL;
    AVCodec* vid_cdc = NULL;
    int ret;
    unsigned int height, width;

    ....
    // read_nframes is the number of frames to read
    output_arr = new unsigned char [height * width * 3 *
                                   sizeof(unsigned char) * read_nframes];

    avcodec_open2(cdc_ctx, vid_cdc, NULL);

    int num_bytes;
    uint8_t* buffer = NULL;
    const AVPixelFormat out_format = AV_PIX_FMT_RGB24;

    num_bytes = av_image_get_buffer_size(out_format, width, height, 1);
    buffer = (uint8_t*)av_malloc(num_bytes * sizeof(uint8_t));

    AVFrame* vid_frame = NULL;
    vid_frame = av_frame_alloc();
    AVFrame* conv_frame = NULL;
    conv_frame = av_frame_alloc();

    av_image_fill_arrays(conv_frame->data, conv_frame->linesize, buffer,
                        out_format, width, height, 1);

    struct SwsContext *sws_ctx = NULL;
    sws_ctx = sws_getContext(width, height, cdc_ctx->pix_fmt,
                            width, height, out_format,
                            SWS_BILINEAR, NULL,NULL,NULL);

    int frame_num = 0;
    AVPacket vid_pckt;
    while (av_read_frame(fmt_ctx, &vid_pckt) >= 0) {
       ret = avcodec_send_packet(cdc_ctx, &vid_pckt);
       if (ret < 0)
           break;

       ret = avcodec_receive_frame(cdc_ctx, vid_frame);
       if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
           break;
       if (ret >= 0) {
           // convert the decoded frame from its native pixel format to packed RGB24
           sws_scale(sws_ctx, vid_frame->data,
                     vid_frame->linesize, 0, vid_frame->height,
                     conv_frame->data, conv_frame->linesize);

           unsigned char* r_ptr = output_arr +
               (height * width * sizeof(unsigned char) * 3 * frame_num);
           unsigned char* g_ptr = r_ptr + (height * width * sizeof(unsigned char));
           unsigned char* b_ptr = g_ptr + (height * width * sizeof(unsigned char));
           unsigned int pxl_i = 0;

           // de-interleave packed RGB24 into separate planar R, G, B buffers
           for (unsigned int r = 0; r < height; ++r) {
               uint8_t* avframe_r = conv_frame->data[0] + r*conv_frame->linesize[0];
               for (unsigned int c = 0; c < width; ++c) {
                   r_ptr[pxl_i] = avframe_r[0];
                   g_ptr[pxl_i]   = avframe_r[1];
                   b_ptr[pxl_i]   = avframe_r[2];
                   avframe_r += 3;
                   ++pxl_i;
               }
           }

           ++frame_num;

           if (frame_num >= read_nframes)
               break;
       }
    }

    ...

    In my experience, around two-thirds of the pixel values are different, each by ±1 (in a range of [0,255]). I am wondering whether it is due to some decoding scheme FFmpeg uses for reading JPEG frames? I tried encoding and decoding PNG frames, and it works perfectly fine. I am sure this has something to do with the libav decoding process, because the MD5 values are consistent between the images and the video:

    ffmpeg -i %06d.JPEG -f framemd5 -
    ffmpeg -i vid.avi -f framemd5 -

    In short, my goal is to get the same pixel-by-pixel values for each JPEG frame as I would have gotten if I had been reading the JPEG images directly. Here is the stand-alone Bitbucket code I used. It includes CMake files to build the code, and a couple of JPEG frames with the converted AVI file to test this problem (give '--filetype png' to test the PNG decoding).
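    One plausible (hedged) explanation for the ±1 drift: the JPEG bitstream stores YCbCr samples, so every reader has to convert to RGB, and different converters (libswscale in the code above versus whatever reads the JPEG files "directly") can quantise the same intermediate value to neighbouring 8-bit codes. A toy illustration with the BT.601 red-channel formula; the sample values here are made up for demonstration and are not taken from the question's data:

    ```python
    # BT.601 full-range red channel: R = Y + 1.402 * (Cr - 128).
    # Two common integer-quantisation strategies land one code value apart.

    def ycbcr_red(y: int, cr: int, quantise) -> int:
        r = y + 1.402 * (cr - 128)
        return max(0, min(255, quantise(r)))

    y, cr = 100, 130                                        # R = 102.804 exactly
    r_rounded   = ycbcr_red(y, cr, lambda v: int(v + 0.5))  # round half up
    r_truncated = ycbcr_red(y, cr, int)                     # truncate toward zero

    print(r_rounded, r_truncated)  # 103 102 -- they differ by exactly 1
    ```

    If bit-exact values are the goal, one option is to compare in the codec's native colorspace (skip the swscale RGB conversion entirely), or to force the same conversion path for both the direct JPEG read and the demuxed video.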

  • Stacking 3 gmp4-files using Avisynth and VirtualDub, results in wrong colors and distorted files

    22 June 2015, by Frederik Amann

    I need to perform the seemingly simple task of stacking 3 files next to each other. They are all the same: .avi container, 320x240, 4:3, 25 fps, GeoVision Advanced MPEG-4 GEO codec.
    I installed the GeoVision codec (http://www.geovision.com.tw/english/5_8.asp# - select "other utilities"), so my system (Windows Media Player, Media Player Classic) can play back the files. I can also open and work with them in VirtualDub. I installed AviSynth and wrote the simple script for stacking them next to each other:

    h1 = AVISource("Event20150423075842001.avi")

    h2 = AVISource("Event20150423075842002.avi")

    h3 = AVISource("Event20150423075848003.avi")

    StackHorizontal(h1, h2, h3)

    Now, when I save it as .avs and then open it with VirtualDub, I see three videos nicely placed next to each other, but the colors are weird, parts of the video are upside down, and everything is just... wrong - see the screenshot at http://www.linkfile.de/download-46f71057ed130f9be29510f68ce4ee71.php. First I thought it had something to do with AviSynth picking the wrong codec, so I forced it to gmp4 (as you can also see in the screenshot), but the result is the same. I now also have AviSynth+ installed, as well as VirtualDubMod.
    When I open the .avs in VDMod, I get "couldn't locate decompressor for format YV24", but it still opens the video, which looks a little better (though when I make a direct stream copy, save it, and play it back in MPC, it looks exactly the same as on the first screenshot). So this error points me toward something related to the colorspace.
    Now my questions:

    • How can I find out which format my files are in? YV24, YV12, ...?
    • And then, how can I tell AviSynth to use a format that VirtualDubMod can deal with?
    • Or how can I make VirtualDub deal with YV24? Am I just missing a codec? Is my train of thought even slightly on the right track, or is my problem something totally different?

    I also found this related thread: Editing/Decoding AVI files using system-installed proprietary codecs. But using AviSynth and ffmpeg, I get similar results as with VirtualDub.

    I can't use the solution of converting all my files first and then doing the stacking in a second step, because the actual files I have to work with are about 180 videos, each around 8 hours long, and the time that would consume would stand in no relation to my possibilities.

    I have really looked for solutions during the past week, and I think I'm close, but I sadly just don't know enough about programming to solve it on my own, so I also want to apologize for any apparent stupidities in my explanation ;)
    I'm very thankful for any help.

    Have a good time everybody

    EDIT:
    So I have some more info, and an example file, which I can't link in this post because I - again - don't have enough reputation, very nice. I will try to comment and post the links :)

    Here is what the info() command gave me:
    Colorspace: YV24,
    FieldBased (Separated) Video: NO,
    Parity: Bottom Field First,
    Video Pitch: 320 bytes,
    Audio: NO,
    CPU detected: x87 MMX ISSE SSE4.1 SSSE3
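
    Since info() reports YV24 and VirtualDubMod complains it has no YV24 decompressor, one hedged suggestion is to convert each clip to a more widely supported colorspace inside the script itself before stacking. A sketch only, assuming the GeoVision decoder's output is otherwise correct (ConvertToYV12 halves the chroma resolution; ConvertToRGB24 would preserve it at the cost of larger frames):

    ```
    h1 = AVISource("Event20150423075842001.avi").ConvertToYV12()
    h2 = AVISource("Event20150423075842002.avi").ConvertToYV12()
    h3 = AVISource("Event20150423075848003.avi").ConvertToYV12()
    StackHorizontal(h1, h2, h3)
    ```

    If the wrong-color/upside-down symptom persists, the decoder may be negotiating a format AviSynth misinterprets; in that case it may be worth experimenting with AVISource's pixel_type argument to force the format it hands over, though whether that helps depends on the GeoVision codec.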