
Other articles (61)
-
Videos
21 April 2011
Like "audio" documents, MediaSPIP displays videos, as far as possible, using the HTML5 <video> tag.
One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, not to name names) and that each browser natively handles only certain video formats.
Its main advantage, on the other hand, is that video playback is supported natively by the browser, which makes it possible to do without Flash and (...)
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Possibility of deployment as a farm
12 April 2011
MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
This makes it possible, for example: to share the setup costs between several projects/individuals; to rapidly deploy a multitude of unique sites; to avoid having to put all creations into a digital catch-all, as is the case with the big general-public platforms scattered across the (...)
On other sites (8467)
-
Salty Game Music
31 May 2011, by Multimedia Mike — General
Have you heard of Google’s Native Client (NaCl) project? Probably not. Basically, it allows native code modules to run inside a browser (where ‘browser’ is defined pretty narrowly as ‘Google Chrome’ in this case). Programs are sandboxed so they aren’t a security menace (or so the whitepapers claim) but are allowed to access a variety of APIs including video and audio. The latter API is significant because sound tends to be forgotten in all the hullabaloo surrounding non-Flash web technologies. At any rate, enjoy NaCl while you can, because I suspect it won’t be around much longer.
After my recent work upgrading some old music synthesis programs to use more modern audio APIs, I got the idea to try porting the same code to run under NaCl in Chrome (first Nosefart, then Game Music Emu/GME). In this exercise, I met with very limited success. This blog post documents some of the pitfalls in my excursion.
Infrastructure
People who know me know that I’m rather partial — to put it gently — to straight-up C vs. C++. The NaCl SDK is heavily skewed towards C++. However, it does provide a Python tool called init_project.py which can create the skeleton of a project, and can do so in C with the '-c' option:

./init_project.py -c -n saltynosefart

This generates something that can be built using a simple ‘make’. When I added Nosefart’s C files, I learned that the project Makefile has places for project-necessary CFLAGS but does not honor them. The problem is that the generated Makefile includes a broader system Makefile that overrides the CFLAGS in the project Makefile. Going into the system Makefile and changing "CFLAGS =" to "CFLAGS +=" solves this problem.
Still, maybe I’m the first person to attempt building something in Native Client this way, so I’m the first person to notice this?
Basic Playback
At least the process to create an audio-enabled NaCl app is well-documented. Too bad it doesn’t seem to compile as advertised. According to my notes on the matter, I filled in PPP_InitializeModule() with the appropriate boilerplate as outlined in the docs but got a linker error concerning get_browser_interface().
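For the curious, here is roughly the shape of the C boilerplate in question. This is a minimal sketch assuming the ppapi C headers of that SDK era (ppapi/c/ppp.h and friends), not the exact code from my tree; the names were already shifting between SDK releases.

/* Minimal sketch of the NaCl C module entry points; assumes the ppapi C
 * headers of this SDK generation. */
#include "ppapi/c/pp_errors.h"
#include "ppapi/c/pp_module.h"
#include "ppapi/c/ppb.h"
#include "ppapi/c/ppp.h"

/* Saved so later code can look up browser interfaces (audio, instance, ...). */
static PPB_GetInterface g_get_browser_interface = NULL;

PP_EXPORT int32_t PPP_InitializeModule(PP_Module module,
                                       PPB_GetInterface get_browser_interface) {
  g_get_browser_interface = get_browser_interface;
  return PP_OK;
}

PP_EXPORT void PPP_ShutdownModule(void) {
}

PP_EXPORT const void* PPP_GetInterface(const char* interface_name) {
  /* A real module returns its instance/audio interfaces here. */
  return NULL;
}

It was the link step, not writing this sort of code, that fell over for me.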
Plan B: C++
Obviously, the straight C stuff is very much a second-class citizen in this NaCl setup. Fortunately, there is already that fully functional tone generator example program in the limited samples suite. Plan B is to copy that project and edit it until it accepts Nosefart/GME audio instead of a sine wave.
The build system assumes all C++ files should have .cc extensions. I have to make some fixes so that it will accept .cpp files (either that, or rename all .cpp to .cc, but that’s not very clean).
Making Noise
You’ll be happy to know that I did successfully swap out the tone generator for either Nosefart or GME. Nosefart has a slightly fickle API that requires revving the emulator frame by frame and generating a certain number of audio samples. GME’s API is much easier to work with in this situation — just tell it how many samples it needs to generate and give it a pointer to a buffer. I played NES and SNES music through this ad hoc browser plugin, and I’m confident all the other supported formats would have worked if I had gone through the bother of converting the music data files into C headers to be included in the NaCl executable binaries (dynamically loading data via the network promised to be a far more challenging prospect reserved for phase 3 of the project).
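To illustrate the difference, here is a rough sketch of the GME call pattern I mean, assuming GME’s plain C interface (gme.h); the file name and buffer size are placeholders and the error handling is abbreviated.

/* Rough sketch of GME's pull-style API (gme.h); "song.nsf" and the buffer
 * size are placeholders. */
#include <stdio.h>
#include "gme/gme.h"

int main(void) {
  Music_Emu *emu;
  short buffer[4096];  /* interleaved stereo, 16-bit samples */
  gme_err_t err;       /* gme_err_t is a const char* message, or NULL on success */

  /* Open a game music file, resampled to 44.1 kHz, and cue up track 0. */
  err = gme_open_file("song.nsf", &emu, 44100);
  if (err) { fprintf(stderr, "%s\n", err); return 1; }
  gme_start_track(emu, 0);

  /* The whole audio-generation "API": ask for N samples into your buffer.
   * In the plugin, this call fills the audio callback's buffer directly. */
  while (!gme_track_ended(emu))
    gme_play(emu, 4096, buffer);

  gme_delete(emu);
  return 0;
}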
Portable?
I wouldn’t say so. I developed it on Linux and things ran fine there. I tried to run the same binaries on the Windows version of Chrome to no avail. It looks like it wasn’t even loading the .nexe files (NaCl executables).
Thinking About The (Lack Of A) Future
As I was working on this project, I noticed that the online NaCl documentation materialized explicit banners warning that my NaCl binaries compiled for Chrome 11 won’t work for Chrome 12 and that I need to code to the newly-released 0.3 SDK version. Not a fuzzy feeling. I also don’t feel good that I’m working from examples using bleeding edge APIs that feature deprecation as part of their naming convention, e.g., pp::deprecated::ScriptableObject().
Ever-changing API + minimal API documentation + API that only works in one browser brand + requiring the end user to explicitly enable the feature = … well, that’s why I didn’t bother to release any showcase pertaining to this little experiment. It would have been neat, but I strongly suspect that this is yet another one of those APIs that Google will decide to deprecate soon.
See Also:
-
FFmpeg with Nvidia GPU - full HW transcode with 50i to 50p deinterlacing
5 January 2018, by Jernej Stopinšek
I’m trying to do a full hardware transcode of a UDP stream to HLS with 50i to 50p deinterlacing, using ffmpeg and an Nvidia GPU. Since HLS requires deinterlacing, I would like to deinterlace the interlaced source stream while preserving as much motion smoothness and picture quality as possible.

My hardware, software and driver info:
GPU: Tesla P100-PCIE-12GB
Nvidia Driver Version: 387.26
Cuda compilation tools, release 9.1, V9.1.85
FFmpeg from git on 20171218

ffmpeg version N-89520-g3f88744067 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 6.3.0 (Debian 6.3.0-18) 20170516
configuration: --enable-gpl --enable-cuda-sdk --enable-libx264 --enable-libx265 --enable-nonfree --enable-libnpp --enable-opengl --enable-opencl --enable-libfreetype --enable-openssl --enable-libzvbi --enable-libfontconfig --enable-libfreetype --enable-libfribidi --extra-cflags=-I/usr/local/cuda/include --extra-ldflags=-L/usr/local/cuda/lib64 --arch=x86_64
libavutil      56.  6.100 / 56.  6.100
libavcodec     58.  8.100 / 58.  8.100
libavformat    58.  3.100 / 58.  3.100
libavdevice    58.  0.100 / 58.  0.100
libavfilter     7.  7.100 /  7.  7.100
libswscale      5.  0.101 /  5.  0.101
libswresample   3.  0.101 /  3.  0.101
libpostproc    55.  0.100 / 55.  0.100

Input stream info:
ffmpeg -t 00:05:00 -i udp://xxx.xxx.xxx.xxx:xxxx -map 0:0 -vf idet -c rawvideo -y -f rawvideo /dev/null

Input #0, mpegts, from 'udp://xxx.xxx.xxx.xxx:xxxx':
  Duration: N/A, start: 49634.159411, bitrate: N/A
  Program xxxxx
    Metadata:
      service_name    :
      service_provider:
    Stream #0:0[0x44d]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709, top first), 1920x1080 [SAR 1:1 DAR 16:9], 25 fps, 50 tbr, 90k tbn, 50 tbc
    Stream #0:1[0x19de]: Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, s16p, 192 kb/s
    Stream #0:2[0x19e1]: Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
Output #0, rawvideo, to '/dev/null':
  Metadata:
    encoder         : Lavf58.3.100
  Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 622080 kb/s, 25 fps, 25 tbn, 25 tbc
    Metadata:
      encoder         : Lavc58.8.100 rawvideo
frame= 7538 fps= 25 q=-0.0 Lsize=22896675kB time=00:05:01.52 bitrate=622080.0kbits/s dup=38 drop=0 speed=1.02x
video:22896675kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000000%
[Parsed_idet_0 @ 0x56370b3c5080] Repeated Fields: Neither: 7458 Top: 24 Bottom: 18
[Parsed_idet_0 @ 0x56370b3c5080] Single frame detection: TFF: 281 BFF: 13 Progressive: 5639 Undetermined: 1567
[Parsed_idet_0 @ 0x56370b3c5080] Multi frame detection: TFF: 380 BFF: 0 Progressive: 7120 Undetermined: 0
This is my command for adaptive hardware deinterlacing. It gives great picture results, but the sound is out of sync.
ffmpeg -y -err_detect ignore_err -loglevel debug -vsync -1 -hwaccel cuvid -hwaccel_device 1 -c:v h264_cuvid -deint adaptive -r:v 50 -gpu:v 1 -i "udp://xxx.xxx.xxx.xxx:xxxx=?overrun_nonfatal=1&fifo_size=84450&buffer_size=33554432" -map 0:0 -map 0:1 -c:a aac -b:a 196k -c:v h264_nvenc -flags -global_header+cgop -gpu:v 1 -g:v 50 -bf:v 4 -coder:v cabac -b_adapt:v false -b:v 5184000 -minrate:v 5184000 -maxrate:v 5184000 -bufsize:v 2488320 -rc:v cbr_hq -2pass:v true -rc-lookahead:v 25 -no-scenecut:v 1 -profile:v high -preset:v slow -color_range:v 1 -color_trc:v 1 -color_primaries:v 1 -colorspace:v 1 -f hls -hls_time 5 -hls_list_size 3 -start_number 0 -hls_flags delete_segments /srv/hls/program_01/1080p/index.m3u8
If I add the option "-drop_second_field 1" to h264_cuvid, remove -r:v 50 from the input side, and put it on h264_nvenc instead, then the transcoded stream has synced audio, but I think I’m losing quality due to the drop_second_field option.
ffmpeg -y -err_detect ignore_err -loglevel debug -vsync -1 -hwaccel cuvid -hwaccel_device 1 -c:v h264_cuvid -deint adaptive -drop_second_field 1 -gpu:v 1 -i "udp://xxx.xxx.xxx.xxx:xxxx=?overrun_nonfatal=1&fifo_size=84450&buffer_size=33554432" -map 0:0 -map 0:1 -c:a aac -b:a 196k -c:v h264_nvenc -flags -global_header+cgop -gpu:v 1 -g:v 50 -r:v 50 -bf:v 4 -coder:v cabac -b_adapt:v false -b:v 5184000 -minrate:v 5184000 -maxrate:v 5184000 -bufsize:v 2488320 -rc:v cbr_hq -2pass:v true -rc-lookahead:v 25 -no-scenecut:v 1 -profile:v high -preset:v slow -color_range:v 1 -color_trc:v 1 -color_primaries:v 1 -colorspace:v 1 -f hls -hls_time 5 -hls_list_size 3 -start_number 0 -hls_flags delete_segments /srv/hls/program_01/1080p/index.m3u8
Could someone please point me in the right direction on how to properly deinterlace with cuvid with the smallest possible loss of quality?
-
ffmpeg capture from ip camera video in h264 stream [closed]
23 March 2023, by Иванов Иван
I can’t read frames from the camera and then write them to a video file (of any format). Worse, even the frames I do get are distorted: the positions of individual pixels seem to be wrong, so the picture comes out crooked.


C++ code:


https://drive.google.com/file/d/1W2sZMR5D5pvVmnhiQyhiaQhC9frhdeII/view?usp=sharing


#define INBUF_SIZE 4096

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavdevice/avdevice.h>
#include <libavformat/avformat.h>
}
#include <cstdio>
#include <iostream>

//Writes one luma (grayscale) plane of a decoded frame to a pgm file.
static void save_gray_frame(unsigned char *buf, int wrap, int xsize, int ysize, const char *filename)
{
    FILE *f = fopen(filename, "wb");
    int i;

    //writing the minimal required header for a pgm file format
    //portable graymap format -> https://en.wikipedia.org/wiki/Netpbm_format#PGM_example
    fprintf(f, "P5\n%d %d\n%d\n", xsize, ysize, 255);

    //writing line by line (wrap is the stride of the plane, xsize the visible width)
    for (i = 0; i < ysize; i++)
        fwrite(buf + i * wrap, 1, xsize, f);
    fclose(f);
}

int main(int argc, char *argv[])
{
    //AVCodecContext contains data on a configuration of media content, such as bitrate,
    //frame rate, sampling frequency, channels, height and many other things.
    AVCodecContext *AVCodecContext_ = NULL;
    //This structure describes decoded (raw) audio or video data.
    AVFrame *frame;
    AVFormatContext *AVfc = NULL;
    int ERRORS;
    const AVCodec *codec;
    const char *filename;
    const char *outfilename;

    //https://habr.com/ru/post/137793/
    //Stores one compressed frame (packet).
    AVPacket *pkt;

    //**********************************************************************
    //Beginning of reading video from the camera.
    //**********************************************************************

    avdevice_register_all();

    filename = "rtsp://admin:754HG@192.168.1.75:554/11";
    //filename = "c:\\1.avi";
    outfilename = "C:\\2.MP4";

    //We open a video stream (a file or the camera).
    ERRORS = avformat_open_input(&AVfc, filename, NULL, NULL);
    if (ERRORS < 0) {
        fprintf(stderr, "ffmpeg: could not open file\n");
        return -1;
    }

    //After opening, we can print out information on the video file (iformat = the name of a format;
    //duration = duration). But since I connected a camera it printed: Duration: N/A,
    //start: 0.000000, bitrate: N/A
    printf("Format %s, duration %lld us\n", AVfc->iformat->long_name, (long long)AVfc->duration);

    ERRORS = avformat_find_stream_info(AVfc, NULL);
    if (ERRORS < 0) {
        fprintf(stderr, "ffmpeg: Unable to find stream info\n");
        return -1;
    }

    //Look for the video stream (there is also an audio stream).
    unsigned int video_stream;
    for (video_stream = 0; video_stream < AVfc->nb_streams; ++video_stream) {
        if (AVfc->streams[video_stream]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
            break;
        }
    }

    if (video_stream == AVfc->nb_streams) {
        fprintf(stderr, "ffmpeg: Unable to find video stream\n");
        return -1;
    }

    //Here we define the type of the codec; for my camera it is AV_CODEC_ID_HEVC
    //(this is what my camera broadcasts).
    codec = avcodec_find_decoder(AVfc->streams[video_stream]->codecpar->codec_id);

    //Allocate a decoding context for libavcodec.
    AVCodecContext_ = avcodec_alloc_context3(codec);
    if (!AVCodecContext_) {
        fprintf(stderr, "Failed to allocate a video codec context; the codec is not supported\n");
        exit(1);
    }

    //Copy the stream parameters (width, height, pixel format, extradata with SPS/PPS, ...)
    //into the decoder context. Without this step the decoder lacks the correct
    //configuration and produces exactly the kind of distorted frames described above.
    ERRORS = avcodec_parameters_to_context(AVCodecContext_, AVfc->streams[video_stream]->codecpar);
    if (ERRORS < 0) {
        fprintf(stderr, "ffmpeg: Unable to copy codec parameters\n");
        return -1;
    }

    //avcodec_open2() initializes the AVCodecContext of a video or audio codec.
    //Its declaration is in libavcodec/avcodec.h. We open the codec.
    ERRORS = avcodec_open2(AVCodecContext_, codec, NULL);
    if (ERRORS < 0) {
        fprintf(stderr, "ffmpeg: It is not possible to open codec\n");
        return -1;
    }

    //This is for sound processing - kept in reserve.
    //swr_alloc_set_opts()
    //swr_init();

    //Print out all information on the video file.
    av_dump_format(AVfc, 0, filename, 0);

    //=========================================================================================
    //Further, we receive frames. Up to this point we only collected information about the
    //incoming video.
    //=========================================================================================

    //Now we are going to read packets from the stream and decode them into frames, but first
    //we need to allocate memory for both components (AVPacket and AVFrame).
    frame = av_frame_alloc();
    if (!frame) {
        fprintf(stderr, "It is not possible to allocate memory for video frames\n");
        exit(1);
    }
    //Allocate memory for a packet.
    pkt = av_packet_alloc();
    //Define a file name for saving the picture.
    const char *FileName1 = "C:\\Users\\Павел\\Desktop\\NyFile.PGM";
    //Read data while there is some.
    while (av_read_frame(AVfc, pkt) >= 0) {
        //Is this a packet from the video stream? Because there is also a soundtrack.
        if (pkt->stream_index == (int)video_stream) {
            //Send the raw packet data as input to the decoder.
            int ret = avcodec_send_packet(AVCodecContext_, pkt);
            if (ret < 0) {
                std::cout << "avcodec_send_packet: " << ret << std::endl;
                break;
            }
            while (ret >= 0) {
                //Return decoded output data (a frame) from the decoder.
                ret = avcodec_receive_frame(AVCodecContext_, frame);
                if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                    //std::cout << "avcodec_receive_frame: " << ret << std::endl;
                    break;
                }
                std::cout << "frame " << AVCodecContext_->frame_number << std::endl;

                //Experimentally - we save a frame as a picture.
                save_gray_frame(frame->data[0], frame->linesize[0], frame->width, frame->height, FileName1);
            }
        }
        //Release the packet buffer before reading the next one (otherwise it leaks).
        av_packet_unref(pkt);
    }

    //av_parser_close(parser);
    avcodec_free_context(&AVCodecContext_);
    av_frame_free(&frame);
    av_packet_free(&pkt);

    return 0;
}