
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (77)
-
Organise by category
17 May 2013, by — In MediaSPIP, a section has two names: "category" and "rubrique".
The various documents stored in MediaSPIP can be filed under different categories. A category can be created by clicking "publish a category" in the publish menu at the top right (after authentication). A category can itself be filed under another category, which means you can build a tree of categories.
When a document is next published, the newly created category will be offered (...) -
Retrieving information from the master site when installing an instance
26 November 2010, by — Purpose
On the main site, a pooled instance is defined by several things: its data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the pooled instance;
It can therefore be quite sensible to want to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...) -
Support for all media types
10 April 2011 — Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (HTML, CSS), LaTeX, Google Earth) (...)
On other sites (5141)
-
Why is there an audio delay when recording a video stream with ffmpeg?
25 December 2023, by mqwerty — I am trying to record the video and audio streams (line-in microphone analog audio) that a broadcaster computer sends, using the following parameters on the recorder computer:


ffmpeg record parameters:


/usr/bin/ffmpeg -y -buffer_size max -thread_queue_size 8192 -i udp://225.0.5.11:1026 -buffer_size max -thread_queue_size 8192 -i udp://225.0.5.11:1032 -map 0:v -map 1:a -metadata title=COMPUTER-01_metadata_file -metadata creation_time="2023-12-25 13:25:29" -threads 0 -c:v copy -c:a copy -movflags +faststart -f segment -segment_time 01:00:00 -segment_atclocktime 1 -reset_timestamps 1 -strftime 1 -segment_format mp4 -t 120 test_record_video_with_audio_%Y-%m-%d_%H-%M-%S.mp4



The ffmpeg process started and finished successfully, but when I open the recorded video with mpv (mpv test_record_video_with_audio.mp4), I can see a 5-6 second delay on the audio. How can I prevent this audio delay in the recorded MP4 file without using an offset? Setting an offset is my last resort, because I think a fixed offset is not safe against changes in the network, etc.
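Before reaching for a fixed offset, it can help to measure the actual start-time gap between the two streams in the recorded file. A minimal sketch, assuming ffprobe-style JSON as input (the start_time values below are illustrative, not taken from this recording; in practice they would come from `ffprobe -v quiet -print_format json -show_streams file.mp4`):

```python
import json

# Illustrative ffprobe-style output; the values are made up for this sketch.
probe_output = json.dumps({
    "streams": [
        {"codec_type": "video", "start_time": "0.000000"},
        {"codec_type": "audio", "start_time": "5.400000"},
    ]
})

def av_start_offset(probe_json: str) -> float:
    """Return audio start_time minus video start_time, in seconds."""
    streams = json.loads(probe_json)["streams"]
    starts = {s["codec_type"]: float(s["start_time"]) for s in streams}
    return starts["audio"] - starts["video"]

print(av_start_offset(probe_output))  # positive => audio starts late
```

A consistently positive value across recordings points at a fixed pipeline delay; a value that drifts between runs points at the two UDP inputs simply being opened at different times.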


FFmpeg version on both computers:


ffmpeg version 4.2.9 Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 8 (GCC)



BROADCASTER COMPUTER:


sysctl.conf:


No added configurations.



ethtool output:


Supported ports: [ TP ]
Supported link modes: 100baseT/Full
 1000baseT/Full
 10000baseT/Full
 2500baseT/Full
 5000baseT/Full
Supported pause frame use: Symmetric
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 100baseT/Full
 1000baseT/Full
 10000baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: 10000Mb/s
Duplex: Full
Auto-negotiation: on
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
MDI-X: Unknown
Supports Wake-on: d
Wake-on: d
 Current message level: 0x00000007 (7)
 drv probe link
Link detected: yes



ffmpeg video stream:


ffmpeg -fflags +genpts -f x11grab -framerate 30 -video_size uhd2160 -i :0 -c:v hevc_nvenc -preset fast -pix_fmt bgr0 -b:v 3M -g 25 -an -f mpegts udp://225.0.5.11:1026



ffmpeg audio stream:


ffmpeg -f alsa -i hw:0,0 -c:a aac -ar 48000 -b:a 1024K -ab 512k -f rtp_mpegts rtp://225.0.5.11:1032



nvidia-smi:


| NVIDIA-SMI 535.129.03 Driver Version: 535.129.03 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA T400 4GB Off | 00000000:5B:00.0 Off | N/A |
| 38% 38C P8 N/A / 31W | 207MiB / 4096MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 1 NVIDIA RTX A4000 Off | 00000000:9E:00.0 Off | Off |
| 41% 59C P2 41W / 140W | 766MiB / 16376MiB | 17% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
 
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 3227 G /usr/libexec/Xorg 114MiB |
| 0 N/A N/A 3423 G /usr/bin/gnome-shell 87MiB |
| 1 N/A N/A 3227 G /usr/libexec/Xorg 285MiB |
| 1 N/A N/A 3423 G /usr/bin/gnome-shell 91MiB |
| 1 N/A N/A 3762 C ffmpeg 372MiB |
+---------------------------------------------------------------------------------------+



lscpu output:




Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 5220R CPU @ 2.20GHz
BIOS Model name: Intel(R) Xeon(R) Gold 5220R CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2200.000
CPU max MHz: 4000.0000
CPU min MHz: 1000.0000
BogoMIPS: 4400.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 36608K
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95



OS: CentOS Stream release 8



RECORDER COMPUTER:


sysctl.conf:


net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.ipv4.tcp_rmem= 4096 87380 16777216
net.ipv4.tcp_wmem= 4096 65536 16777216
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_no_metrics_save = 0
net.core.netdev_max_backlog = 50000
net.core.optmem_max=25165824



lscpu output:


Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 5318Y CPU @ 2.10GHz
BIOS Model name: Intel(R) Xeon(R) Gold 5318Y CPU @ 2.10GHz
Stepping: 6
CPU MHz: 3400.000
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 36864K



ethtool output:


Supported ports: [ TP ]
 Supported link modes: 1000baseT/Full
 10000baseT/Full
 Supported pause frame use: Symmetric Receive-only
 Supports auto-negotiation: Yes
 Supported FEC modes: Not reported
 Advertised link modes: 1000baseT/Full
 10000baseT/Full
 Advertised pause frame use: Symmetric
 Advertised auto-negotiation: Yes
 Advertised FEC modes: Not reported
 Speed: 10000Mb/s
 Duplex: Full
 Auto-negotiation: on
 Port: Twisted Pair
 PHYAD: 12
 Transceiver: internal
 MDI-X: Unknown
 Supports Wake-on: d
 Wake-on: d
 Current message level: 0x00002081 (8321)
 drv tx_err hw
 Link detected: yes



No NVIDIA graphics driver.


OS: CentOS Stream release 8



I tried encoding the audio while recording, like:


"-c:a", "aac", 
"-ar", "48000", 
"-b:a", "128k",



I also tried:


"aresample=async=1"




Unfortunately, these did not improve the audio latency at all.


-
How to read a UDP stream and forward it as SRT?
27 December 2020, by andrea-f — Over the holidays I started a small hobby project to learn how SRT works. I set up a simple Android app with NodeMediaClient (https://github.com/NodeMedia/NodeMediaClient-Android/tree/master/nodemediaclient/src) which publishes a UDP stream, which I read with:


private final Object txFrameLock = new Object();
[...]

t = new Thread(new Runnable() {
    public void run() {
        byte[] message = new byte[MAX_UDP_DATAGRAM_LEN];
        try {
            socket = new DatagramSocket(UDP_SERVER_PORT);
            while (!Thread.currentThread().isInterrupted()) {
                while (!socket.isClosed()) {
                    DatagramPacket packet = new DatagramPacket(message, message.length);
                    socket.receive(packet);

                    ByteBuffer tsUDPPack = ByteBuffer.wrap(packet.getData());
                    int ret = parseTSPack(tsUDPPack);

                    Log.i("srtstreaming SRT packet sent", String.format("%d", ret));
                }
                synchronized (txFrameLock) {
                    try {
                        txFrameLock.wait(10);
                        //Thread.sleep(500);
                    } catch (InterruptedException ie) {
                        t.interrupt();
                    }
                }
            }
        } catch (SocketException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (socket != null) {
                socket.close();
            }
        }
    }
});
t.start();



parseTSPack looks like:


private int parseTSPack(ByteBuffer tsUDPPack)
 {
 byte[] ts_pack = new byte[TS_PACK_LEN];
 if (tsUDPPack.remaining() != TS_UDP_PACK_LEN) {
 Log.i(TAG, "srtestreaming ts udp len is not 1316.");
 return 0;
 }
 int count = 0;
 while (tsUDPPack.remaining() > 0) {
 tsUDPPack.get(ts_pack);
 int ret = mSrt.send(ts_pack);
 count++;
 Log.i("srtstreaming ts packets ", String.format("%d", count));
 }
 return count;
 }
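As an aside, the splitting logic above relies on each 1316-byte UDP payload holding exactly seven 188-byte TS packets, each beginning with the 0x47 sync byte. A small Python model of the same loop, for illustration (the SRT send is replaced by simply returning the packets, since the real sender lives behind JNI):

```python
TS_PACK_LEN = 188       # fixed MPEG-TS packet size
TS_UDP_PACK_LEN = 1316  # 7 x 188: TS packets carried per UDP datagram

def split_ts_datagram(payload: bytes) -> list[bytes]:
    """Split one UDP payload into 188-byte TS packets, as parseTSPack does.

    Returns an empty list when the payload is not exactly 1316 bytes,
    mirroring the early return in the Java version."""
    if len(payload) != TS_UDP_PACK_LEN:
        return []
    packets = [payload[i:i + TS_PACK_LEN]
               for i in range(0, TS_UDP_PACK_LEN, TS_PACK_LEN)]
    # Every valid TS packet begins with the 0x47 sync byte.
    assert all(p[0] == 0x47 for p in packets)
    return packets

# A fake datagram: seven packets, each led by the sync byte.
datagram = (b"\x47" + b"\x00" * 187) * 7
print(len(split_ts_datagram(datagram)))  # 7
```

One thing such a model makes visible: the Java loop reuses a fixed-size receive buffer, so if a datagram shorter than 1316 bytes arrives, packet.getData() still returns the full buffer and stale bytes from the previous datagram can be forwarded.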



Then the JNI implementation takes care of opening the stream (srt://192.168.1.238:4200?streamid=test/live/1234) and handling the mSrt.send(ts_pack) call to forward the packets as an SRT caller.
On the receiving side I am using: ffmpeg -v 9 -loglevel 99 -report -re -i 'srt://192.168.1.238:4200?streamid=test/live/1234&mode=listener' -c copy -copyts -f mpegts ./srt_recording_ffmpeg.ts
The video arrives broken and no frames can be decoded.
In the FFmpeg output I am getting something along the lines of:

[mpegts @ 0x7f9a0f008200] Probe: 176, score: 1, dvhs_score: 1, fec_score: 1 
[mpegts @ 0x7f9a0f008200] Probe: 364, score: 2, dvhs_score: 1, fec_score: 1 
[mpegts @ 0x7f9a0f008200] Probe: 552, score: 3, dvhs_score: 1, fec_score: 1 
[mpegts @ 0x7f9a0f008200] Probe: 740, score: 3, dvhs_score: 1, fec_score: 1 
[mpegts @ 0x7f9a0f008200] Packet corrupt (stream = 1, dts = 980090).
[mpegts @ 0x7f9a0f008200] rfps: 7.583333 0.011807
[mpegts @ 0x7f9a0f008200] rfps: 8.250000 0.014541
[...]
[mpegts @ 0x7f9a10809000] Non-monotonous DTS in output stream 0:1; previous: 1193274, current: 1189094; changing to 1193275. This may result in incorrect timestamps in the output file.
[mpegts @ 0x7f9a10809000] Non-monotonous DTS in output stream 0:1; previous: 1199543, current: 1199543; changing to 1199544. This may result in incorrect timestamps in the output file.



Any ideas on how to correctly forward the UDP packets to the SRT library?
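For what it's worth, the "Non-monotonous DTS" warnings show what the muxer does to recover: whenever a DTS is not greater than the previous one, it is bumped to previous + 1, which is why 1189094 becomes 1193275 in the log above. A sketch of that rule:

```python
def make_monotonic(dts_values):
    """Bump any non-increasing DTS to previous + 1, as ffmpeg's muxer
    warns it is doing ('changing to ...' in the log)."""
    fixed, prev = [], None
    for dts in dts_values:
        if prev is not None and dts <= prev:
            dts = prev + 1
        fixed.append(dts)
        prev = dts
    return fixed

# Values from the log: the out-of-order 1189094 is raised to 1193275.
print(make_monotonic([1193274, 1189094]))  # [1193274, 1193275]
```

The warnings therefore point at packets arriving reordered or interleaved incorrectly before muxing, not at the muxer itself.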


-
Angular CLI 6: cannot get fluent-ffmpeg to run? Runtime errors from core Node modules (fs, path, child_process)
9 September 2018, by user6041243 — I just created a new Angular CLI 6 app and did an npm install of ffmpeg. First it says it can't resolve child_process, fs and path. I added those to "browser" with false in package.json. It then builds, but I get runtime errors in ffmpeg when I run it at http://localhost:4200 in the browser: when it tries to call a method of fs or path, the function doesn't exist. I just want to know how to get ffmpeg or fluent-ffmpeg working in an Angular CLI project. Has anyone done this? Any examples?
It seems like fs, os, path and child_process aren't available in the browser? Can the ffmpeg Node plugin be run in a browser? I would like to run a static web app like this on a local machine, running a local ffmpeg executable.
Can this be done?