
Media (91)
-
Valkaama DVD Cover Outside
4 October 2011, by
Updated: October 2011
Language: English
Type: Image
-
Valkaama DVD Label
4 October 2011, by
Updated: February 2013
Language: English
Type: Image
-
Valkaama DVD Cover Inside
4 October 2011, by
Updated: October 2011
Language: English
Type: Image
-
1,000,000
27 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Demon Seed
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
The Four of Us are Dying
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
Other articles (55)
-
Making files available
14 April 2011, by
By default, when it is initialized, MediaSPIP does not allow visitors to download files, whether they are originals or the result of transformation or encoding. It only allows them to be viewed.
However, it is possible, and easy, to give visitors access to these documents in various forms.
All of this happens on the skeleton configuration page. You need to go to the channel's administration area and choose in the navigation (...) -
Use, discuss, criticize
13 April 2011, by
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users. -
MediaSPIP in private mode (Intranet)
17 September 2013, by
As of version 0.3, a MediaSPIP channel can become private, blocked to anyone who is not logged in, thanks to the "Intranet/extranet" plugin.
When activated, the Intranet/extranet plugin blocks access to the channel for any unidentified visitor, preventing them from reaching the content by systematically redirecting them to the login form.
This system can be particularly useful for certain uses, such as: a workshop with children whose content must not (...)
On other sites (8234)
-
Error in making animation through ffmpeg (python3.9)
20 April 2024, by Taehyung Ghim
When I try to make a 2D animation map for cow tracking (matching two camera views) through ffmpeg, the following error occurs.



raise subprocess.CalledProcessError(subprocess.CalledProcessError: Command '['ffmpeg', '-f', 'rawvideo', '-vcodec', 'rawvideo', '-s', '4000x4000', '-pix_fmt', 'rgba', '-r', '5', '-loglevel', 'error', '-i', 'pipe:', '-vcodec', 'h264', '-pix_fmt', 'yuv420p', '-metadata', 'artist=Me', '-y', '../out_detect/run7/TRACKS_ANIMATION_fused.mp4']' returned non-zero exit status 1.




The following is the full error:



Plotting the last 1800.9391813674797 frames.
INFO:Animation.save using <class>
INFO:MovieWriter._run: running command: ffmpeg -f rawvideo -vcodec rawvideo -s 4000x4000 -pix_fmt rgba -r 5 -loglevel error -i pipe: -vcodec h264 -pix_fmt yuv420p -metadata artist=Me -y ../out_detect/run7/TRACKS_ANIMATION_fused.mp4
WARNING:MovieWriter stderr:
[libopenh264 @ 0x55b93df81fc0] [OpenH264] this = 0x0x55b93df8ef10, Error:ParamValidationExt(), width > 0, height > 0, width * height <= 9437184, invalid 4000 x 4000 in dependency layer settings!
[libopenh264 @ 0x55b93df81fc0] [OpenH264] this = 0x0x55b93df8ef10, Error:WelsInitEncoderExt(), ParamValidationExt failed return 2.
[libopenh264 @ 0x55b93df81fc0] [OpenH264] this = 0x0x55b93df8ef10, Error:CWelsH264SVCEncoder::Initialize(), WelsInitEncoderExt failed.
[libopenh264 @ 0x55b93df81fc0] Initialize failed
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
 

 Traceback (most recent call last):
 File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/animation.py", line 236, in saving
 yield self
 File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/animation.py", line 1095, in save
 writer.grab_frame(**savefig_kwargs)
 File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/animation.py", line 353, in grab_frame
 self.fig.savefig(self._proc.stdin, format=self.frame_format,
 File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/figure.py", line 3012, in savefig
 self.canvas.print_figure(fname, **kwargs)
 File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/backend_bases.py", line 2314, in print_figure
 result = print_method(
 File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/backend_bases.py", line 1643, in wrapper
 return func(*args, **kwargs)
 File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/_api/deprecation.py", line 412, in wrapper
 return func(*inner_args, **inner_kwargs)
 File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/backends/backend_agg.py", line 486, in print_raw
 fh.write(renderer.buffer_rgba())
 BrokenPipeError: [Errno 32] Broken pipe
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
 File "/home/rom/PycharmProjects/cow_tracking_package/tracking-cows/main.py", line 330, in <module>
 inference_tracking_video(opt=args, device=dev, detector=detector, keypoint_tfm=keypoint_tfm,
 File "/home/rom/PycharmProjects/cow_tracking_package/tracking-cows/tracking.py", line 325, in inference_tracking_video
 postprocess_tracking_results(track_args=track_args, cfg_postprocess=cfg_matching_parameters.POSTPROCESS,
 File "/home/rom/PycharmProjects/cow_tracking_package/tracking-cows/postprocessing/postprocess_results.py", line 90, in postprocess_tracking_results
 postprocess_trajectories(track_args=track_args, analysis_matching_cfg=cfg_analysis)
 File "/home/rom/PycharmProjects/cow_tracking_package/tracking-cows/postprocessing/postprocess_results.py", line 58, in postprocess_trajectories
 analyse_trajectories(analysis_arguments, full_width, full_height, video_fps, frame_rate_animation)
 File "/home/rom/PycharmProjects/cow_tracking_package/tracking-cows/postprocessing/trajectory_postprocess.py", line 115, in analyse_trajectories
 create_virtual_map_animation_final(opt.save_dir, final_matching_both_cams, color_dict3, full_width,
 File "/home/rom/PycharmProjects/cow_tracking_package/tracking-cows/output/output_plot_fused_trajectory_animation.py", line 236, in create_virtual_map_animation_final
 virtual_map_animation.save(traj_file_path, writer=writer)
 File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/animation.py", line 1095, in save
 writer.grab_frame(**savefig_kwargs)
 File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/contextlib.py", line 137, in __exit__
 self.gen.throw(typ, value, traceback)
 File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/animation.py", line 238, in saving
 self.finish()
 File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/animation.py", line 344, in finish
 self._cleanup() # Inline _cleanup() once cleanup() is removed.
 File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/animation.py", line 375, in _cleanup
 raise subprocess.CalledProcessError(
subprocess.CalledProcessError: Command '['ffmpeg', '-f', 'rawvideo', '-vcodec', 'rawvideo', '-s', '4000x4000', '-pix_fmt', 'rgba', '-r', '5', '-loglevel', 'error', '-i', 'pipe:', '-vcodec', 'h264', '-pix_fmt', 'yuv420p', '-metadata', 'artist=Me', '-y', '../out_detect/run7/TRACKS_ANIMATION_fused.mp4']' returned non-zero exit status 1.



The ffmpeg version is 4.3, built with gcc 7.3.0, and the OS is Ubuntu 20.04.
My conda env is below.


channels:
 - pytorch
 - defaults
dependencies:
 - _libgcc_mutex=0.1=main
 - _openmp_mutex=4.5=1_gnu
 - blas=1.0=mkl
 - bzip2=1.0.8=h7b6447c_0
 - ca-certificates=2021.10.26=h06a4308_2
 - certifi=2021.10.8=py39h06a4308_2
 - cudatoolkit=11.3.1=h2bc3f7f_2
 - ffmpeg=4.3=hf484d3e_0
 - freetype=2.11.0=h70c0345_0
 - giflib=5.2.1=h7b6447c_0
 - gmp=6.2.1=h2531618_2
 - gnutls=3.6.15=he1e5248_0
 - intel-openmp=2021.4.0=h06a4308_3561
 - jpeg=9d=h7f8727e_0
 - lame=3.100=h7b6447c_0
 - lcms2=2.12=h3be6417_0
 - ld_impl_linux-64=2.35.1=h7274673_9
 - libffi=3.3=he6710b0_2
 - libgcc-ng=9.3.0=h5101ec6_17
 - libgomp=9.3.0=h5101ec6_17
 - libiconv=1.15=h63c8f33_5
 - libidn2=2.3.2=h7f8727e_0
 - libpng=1.6.37=hbc83047_0
 - libstdcxx-ng=9.3.0=hd4cf53a_17
 - libtasn1=4.16.0=h27cfd23_0
 - libtiff=4.2.0=h85742a9_0
 - libunistring=0.9.10=h27cfd23_0
 - libuv=1.40.0=h7b6447c_0
 - libwebp=1.2.0=h89dd481_0
 - libwebp-base=1.2.0=h27cfd23_0
 - lz4-c=1.9.3=h295c915_1
 - mkl=2021.4.0=h06a4308_640
 - mkl-service=2.4.0=py39h7f8727e_0
 - mkl_fft=1.3.1=py39hd3c417c_0
 - mkl_random=1.2.2=py39h51133e4_0
 - ncurses=6.3=h7f8727e_2
 - nettle=3.7.3=hbbd107a_1
 - numpy=1.21.2=py39h20f2e39_0
 - numpy-base=1.21.2=py39h79a1101_0
 - olefile=0.46=pyhd3eb1b0_0
 - openh264=2.1.0=hd408876_0
 - openssl=1.1.1m=h7f8727e_0
 - pillow=8.4.0=py39h5aabda8_0
 - pip=21.2.4=py39h06a4308_0
 - python=3.9.7=h12debd9_1
 - pytorch=1.10.0=py3.9_cuda11.3_cudnn8.2.0_0
 - pytorch-mutex=1.0=cuda
 - readline=8.1=h27cfd23_0
 - setuptools=58.0.4=py39h06a4308_0
 - six=1.16.0=pyhd3eb1b0_0
 - sqlite=3.36.0=hc218d9a_0
 - tk=8.6.11=h1ccaba5_0
 - torchaudio=0.10.0=py39_cu113
 - torchvision=0.11.1=py39_cu113
 - typing_extensions=3.10.0.2=pyh06a4308_0
 - wheel=0.37.0=pyhd3eb1b0_1
 - xz=5.2.5=h7b6447c_0
 - zlib=1.2.11=h7b6447c_3
 - zstd=1.4.9=haebb681_0
 - pip:
 - absl-py==1.0.0
 - addict==2.4.0
 - cachetools==4.2.4
 - charset-normalizer==2.0.8
 - cloudpickle==2.0.0
 - cycler==0.11.0
 - cython==0.29.24
 - docutils==0.18.1
 - easydict==1.9
 - filterpy==1.4.5
 - fonttools==4.28.2
 - geohash2==1.1
 - google-auth==2.3.3
 - google-auth-oauthlib==0.4.6
 - grpcio==1.42.0
 - idna==3.3
 - imageio==2.13.5
 - importlib-metadata==4.8.2
 - joblib==1.1.0
 - kiwisolver==1.3.2
 - loguru==0.6.0
 - markdown==3.3.6
 - matplotlib==3.5.0
 - natsort==8.0.2
 - networkx==2.6.3
 - oauthlib==3.1.1
 - opencv-python==4.5.4.60
 - packaging==21.3
 - pandas==1.3.4
 - protobuf==3.19.1
 - pyasn1==0.4.8
 - pyasn1-modules==0.2.8
 - pycocotools==2.0.4
 - pyparsing==3.0.6
 - pyqt5==5.15.6
 - pyqt5-qt5==5.15.2
 - pyqt5-sip==12.9.0
 - python-dateutil==2.8.2
 - pytz==2021.3
 - pytz-deprecation-shim==0.1.0.post0
 - pywavelets==1.2.0
 - pyyaml==6.0
 - requests==2.26.0
 - requests-oauthlib==1.3.0
 - rsa==4.8
 - scikit-image==0.19.1
 - scikit-learn==1.0.2
 - scipy==1.7.3
 - seaborn==0.11.2
 - setuptools-scm==6.3.2
 - shapely==1.8.0
 - sklearn==0.0
 - split-folders==0.4.3
 - tabulate==0.8.9
 - tensorboard==2.7.0
 - tensorboard-data-server==0.6.1
 - tensorboard-plugin-wit==1.8.0
 - terminaltables==3.1.10
 - thop==0.0.31-2005241907
 - threadpoolctl==3.1.0
 - tifffile==2021.11.2
 - timm==0.4.12
 - tomli==1.2.2
 - tqdm==4.62.3
 - traja==0.2.8
 - tzdata==2021.5
 - tzlocal==4.1
 - urllib3==1.26.7
 - werkzeug==2.0.2
 - yacs==0.1.8
 - yapf==0.32.0
 - zipp==3.6.0



I also installed ffmpy through conda.


I would be very grateful if anyone could help me.
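The openh264 message in the log above is the root cause: the encoder rejects any frame larger than 9437184 pixels, and 4000x4000 is 16,000,000 pixels. A minimal sketch of that check (the constant is taken verbatim from the `ParamValidationExt()` error; the function name is illustrative):

```python
# openh264's validation, per the error above: width * height <= 9437184.
OPENH264_MAX_PIXELS = 9437184

def fits_openh264(width: int, height: int) -> bool:
    """Return True if a frame of this size would pass openh264's size check."""
    return width > 0 and height > 0 and width * height <= OPENH264_MAX_PIXELS

print(fits_openh264(4000, 4000))  # False: 16,000,000 pixels, the failing size
print(fits_openh264(3000, 3000))  # True: 9,000,000 pixels fits the limit
```

Two possible workarounds, both assumptions to verify against your setup: shrink the matplotlib figure (or its dpi) so the rendered frame fits within the limit, or select a different H.264 encoder, e.g. `FFMpegWriter(fps=5, extra_args=['-vcodec', 'libx264'])`, assuming your ffmpeg build actually includes libx264 (conda-packaged ffmpeg builds often ship only openh264 for licensing reasons, which would explain why `h264` resolved to it here).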


-
FFmpeg - MJPEG decoding gives inconsistent values
28 December 2016, by ahmadh
I have a set of JPEG frames which I am muxing into an avi, which gives me an MJPEG video. This is the command I run on the console:
ffmpeg -y -start_number 0 -i %06d.JPEG -codec copy vid.avi
When I try to demux the video using ffmpeg C api, I get frames which are slightly different in values. Demuxing code looks something like this :
AVFormatContext* fmt_ctx = NULL;
AVCodecContext* cdc_ctx = NULL;
AVCodec* vid_cdc = NULL;
int ret;
unsigned int height, width;

....

// read_nframes is the number of frames to read
output_arr = new unsigned char [height * width * 3 *
                                sizeof(unsigned char) * read_nframes];

avcodec_open2(cdc_ctx, vid_cdc, NULL);

int num_bytes;
uint8_t* buffer = NULL;
const AVPixelFormat out_format = AV_PIX_FMT_RGB24;
num_bytes = av_image_get_buffer_size(out_format, width, height, 1);
buffer = (uint8_t*)av_malloc(num_bytes * sizeof(uint8_t));

AVFrame* vid_frame = av_frame_alloc();
AVFrame* conv_frame = av_frame_alloc();
av_image_fill_arrays(conv_frame->data, conv_frame->linesize, buffer,
                     out_format, width, height, 1);

struct SwsContext* sws_ctx = sws_getContext(width, height, cdc_ctx->pix_fmt,
                                            width, height, out_format,
                                            SWS_BILINEAR, NULL, NULL, NULL);

int frame_num = 0;
AVPacket vid_pckt;
while (av_read_frame(fmt_ctx, &vid_pckt) >= 0) {
    ret = avcodec_send_packet(cdc_ctx, &vid_pckt);
    if (ret < 0)
        break;

    ret = avcodec_receive_frame(cdc_ctx, vid_frame);
    if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
        break;

    if (ret >= 0) {
        // convert image from native format to packed RGB24
        sws_scale(sws_ctx, vid_frame->data,
                  vid_frame->linesize, 0, vid_frame->height,
                  conv_frame->data, conv_frame->linesize);

        // de-interleave RGB24 into three planar channel buffers
        unsigned char* r_ptr = output_arr +
            (height * width * sizeof(unsigned char) * 3 * frame_num);
        unsigned char* g_ptr = r_ptr + (height * width * sizeof(unsigned char));
        unsigned char* b_ptr = g_ptr + (height * width * sizeof(unsigned char));
        unsigned int pxl_i = 0;

        for (unsigned int r = 0; r < height; ++r) {
            uint8_t* avframe_r = conv_frame->data[0] + r * conv_frame->linesize[0];
            for (unsigned int c = 0; c < width; ++c) {
                r_ptr[pxl_i] = avframe_r[0];
                g_ptr[pxl_i] = avframe_r[1];
                b_ptr[pxl_i] = avframe_r[2];
                avframe_r += 3;
                ++pxl_i;
            }
        }

        ++frame_num;
        if (frame_num >= read_nframes)
            break;
    }
}
...
In my experience, around two-thirds of the pixel values are different, each by ±1 (in a range of [0, 255]). I am wondering whether this is due to some decoding scheme FFmpeg uses for reading JPEG frames? I tried encoding and decoding PNG frames, and that works perfectly fine. I am sure this has something to do with the libav decoding process, because the MD5 values are consistent between the images and the video:
ffmpeg -i %06d.JPEG -f framemd5 -
ffmpeg -i vid.avi -f framemd5 -
In short, my goal is to get the same pixel-by-pixel values for each JPEG frame as I would have gotten if I had read the JPEG images directly. Here is the stand-alone bitbucket code I used. It includes cmake files to build the code, and a couple of JPEG frames with the converted avi file to test this problem (give `--filetype png` to test the png decoding).
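Since the reported differences are only ±1, one practical check is to compare the two decodes with a one-unit tolerance instead of exact equality. A sketch (the tiny arrays stand in for two decodes of the same frame; the function name is illustrative):

```python
import numpy as np

def frames_match(a: np.ndarray, b: np.ndarray, tol: int = 1) -> bool:
    """True if every pixel differs by at most `tol` (e.g. IDCT rounding)."""
    # Cast to a signed type first so uint8 subtraction cannot wrap around.
    diff = np.abs(a.astype(np.int16) - b.astype(np.int16))
    return bool(diff.max() <= tol)

decode_a = np.array([[100, 101], [102, 103]], dtype=np.uint8)
decode_b = np.array([[101, 101], [101, 103]], dtype=np.uint8)  # off by at most 1
print(frames_match(decode_a, decode_b))          # True
print(frames_match(decode_a, decode_b, tol=0))   # False: exact match fails
```

Note this only works around the symptom: a ±1 drift like this is consistent with two decoders using different inverse-DCT implementations, which the JPEG standard permits, though whether that is the cause here would need to be confirmed.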
-
Stacking 3 gmp4 files using Avisynth and VirtualDub results in wrong colors and distorted files
22 June 2015, by Frederik Amann
I need to do the seemingly simple task of stacking 3 files next to each other. They are all the same: .avi container, 320x240, 4:3, 25 fps, GeoVision Advanced MPEG-4 GEO codec.
I installed the GeoVision codec (http://www.geovision.com.tw/english/5_8.asp# - select "other utilities"), so my system (Windows Media Player, Media Player Classic) can play back the files. Also, I can open and work with them in VirtualDub. I installed AviSynth and wrote this simple script for stacking them next to each other:
h1 = AVISource("Event20150423075842001.avi")
h2 = AVISource("Event20150423075842002.avi")
h3 = AVISource("Event20150423075848003.avi")
StackHorizontal(h1, h2, h3)
Now, when I save it as .avs and then open it using VirtualDub, I see three videos nicely put next to each other, but the colors are weird, parts of the video are upside down, and everything is just wrong - see screenshot http://www.linkfile.de/download-46f71057ed130f9be29510f68ce4ee71.php. First I thought it had something to do with AviSynth picking the wrong codec, so I forced it to gmp4 (as you can also see in the screenshot), but the result is the same. I now also have AviSynth+ installed, as well as VirtualDubMod.
When I open the .avs in VDMod, I get "couldn’t locate decompressor for format YV24", but it still opens the video, which looks a little better (though when I make a direct stream copy, save it, and play it back in MPC, it looks exactly the same as in the first screenshot). So this error points me toward something related to the colorspace.
Now my questions:
- How can I find out which format my files are in? YV24, YV12, ...?
- And then, how can I tell AviSynth to use a format that VirtualDubMod can deal with?
- Or how can I make VirtualDub deal with YV24? Am I just missing a codec? Is my train of thought even slightly on the right track, or is my problem something totally different?
I also found this related thread: Editing/Decoding AVI files using system-installed proprietary codecs, but using AviSynth and ffmpeg I get similar results as with VirtualDub.
I can’t use the solution of converting all my files first and then doing the stacking in a second step, because the actual files I have to work with are about 180 videos, each around 8 hours long, and the time that would consume is far beyond my means.
I have really looked for solutions during the past week, and I think I’m close, but I sadly just don’t know enough about programming to be able to solve this on my own, so I also want to apologize for any apparent stupidities in my explanation ;)
I’m very thankful for any help. Have a good time, everybody!
EDIT:
So I have some more info, and an example file, which I can’t link in this post because I, again, don’t have enough reputation. I will try to post the links in a comment :) Here is what the info() command gave me:
Colorspace : YV24,
FieldBased (Separated) Video : NO,
Parity : Bottom Field First,
Video Pitch : 320 bytes,
Audio : NO,
CPU detected : x87 MMX ISSE SSE4.1 SSSE3
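On the first question (finding out which pixel format the files actually use), ffprobe can report it directly. A hedged sketch, assuming ffprobe is installed and on PATH; the JSON parsing is split into its own function so it can be checked without running ffprobe:

```python
import json
import subprocess

def parse_pix_fmt(ffprobe_json: str) -> str:
    """Extract the first video stream's pixel format from ffprobe JSON output."""
    return json.loads(ffprobe_json)["streams"][0]["pix_fmt"]

def stream_pix_fmt(path: str) -> str:
    # Ask ffprobe (assumed available) for the first video stream's pix_fmt.
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=pix_fmt", "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_pix_fmt(out)

# Example of the JSON shape ffprobe emits; YV24-style 4:4:4 content would
# typically show up as a yuv444p-family format.
sample = '{"streams": [{"pix_fmt": "yuv444p"}]}'
print(parse_pix_fmt(sample))  # yuv444p
```

Once the format is known, the usual AviSynth-side remedy is an explicit colorspace conversion (e.g. appending a `ConvertToYV12()` or `ConvertToRGB32()` call to the script) so VirtualDub receives a format it can decompress, though which conversion is appropriate depends on what ffprobe reports.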