
Media (91)
-
Les Miserables
9 December 2019
Updated: December 2019
Language: French
Type: Text
-
VideoHandle
8 November 2019
Updated: November 2019
Language: French
Type: Video
-
Somos millones 1
21 July 2014
Updated: June 2015
Language: French
Type: Video
-
Un test - mauritanie
3 April 2014
Updated: April 2014
Language: French
Type: Text
-
Pourquoi Obama lit il mes mails ?
4 February 2014
Updated: February 2014
Language: French
-
IMG 0222
6 October 2013
Updated: October 2013
Language: French
Type: Image
Other articles (101)
-
Videos
21 April 2011. Like "audio" documents, MediaSPIP displays videos wherever possible using the HTML5 video tag.
One drawback of this tag is that it is not recognised correctly by some browsers (Internet Explorer, to name one) and that each browser natively handles only certain video formats.
Its main advantage is native video support in the browser, which makes it possible to do without Flash and (...)
-
Improving the base version
13 September 2013. Nicer multiple selection
The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
To do this, simply activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)
-
Emballe médias: what is it for?
4 February 2011. This plugin is designed to manage sites that publish documents of all types.
It creates "media": a "medium" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a "media" article;
On other sites (7658)
-
ffmpeg concat image with video, but output extra long
11 October 2020, by Morris
My goal: to concatenate an image (foo.jpg) with a video (bar.mp4, 3 seconds long), showing foo.jpg for 2 seconds only. The output video should be just around 5 seconds long.


I used:


ffmpeg -loop 1 -t 2 -framerate 1 -i foo.jpg -f lavfi -t 2 -i anullsrc -i bar.mp4 -filter_complex "[0][1][2:v][2:a] concat=n=2:v=1:a=1 [vpre][a];[vpre]fps=24,scale=32:24[v]" -map "[v]" -map "[a]" out.mp4



I think the command means:
Loop foo.jpg for 2 seconds at a framerate of 1 frame per second. At the same time, add a silent audio track to the 2-second foo.jpg segment.
Then concat with bar.mp4.
Make the final output framerate 24 fps and scale it to 32x24 (intentionally tiny for testing).
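For reference, here is the same command written with line continuations and comments that map each input to the steps above (the options themselves are unchanged):

# input 0: foo.jpg, looped for 2 seconds at 1 fps
# input 1: 2 seconds of silent audio (anullsrc) to pair with the image segment
# input 2: bar.mp4, the source video with its own audio
# filter:  concat the two segments, then force 24 fps and scale to 32x24
ffmpeg -loop 1 -t 2 -framerate 1 -i foo.jpg -f lavfi -t 2 -i anullsrc -i bar.mp4 \
  -filter_complex "[0][1][2:v][2:a] concat=n=2:v=1:a=1 [vpre][a];[vpre]fps=24,scale=32:24[v]" \
  -map "[v]" -map "[a]" out.mp4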


Expected: output about 5 seconds long in total.


Reality: the output is 3 minutes and 38 seconds long. The first 5 seconds are perfect. After that, the video stays silent with the frame at the 5-second mark frozen until the end.


My research suggests it might be related to -video_track_timescale.


This command also fails with the same over-long result (I added -video_track_timescale 600):

ffmpeg -loop 1 -t 2 -framerate 1 -i foo.jpg -f lavfi -t 2 -i anullsrc -i bar.mp4 -filter_complex "[0][1][2:v][2:a] concat=n=2:v=1:a=1 [vpre][a];[vpre]fps=24,scale=32:24[v]" -map "[v]" -map "[a]" -video_track_timescale 600 out.mp4



Additional info about the file bar.mp4:

$ ffmpeg -i bar.mp4 
ffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers
 built with Apple clang version 11.0.3 (clang-1103.0.32.62)
 configuration: --prefix=/usr/local/Cellar/ffmpeg/4.3.1_1 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libdav1d --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --disable-libjack --disable-indev=jack
 libavutil 56. 51.100 / 56. 51.100
 libavcodec 58. 91.100 / 58. 91.100
 libavformat 58. 45.100 / 58. 45.100
 libavdevice 58. 10.100 / 58. 10.100
 libavfilter 7. 85.100 / 7. 85.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 7.100 / 5. 7.100
 libswresample 3. 7.100 / 3. 7.100
 libpostproc 55. 7.100 / 55. 7.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'bar.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.29.100
 Duration: 00:00:01.94, start: 0.000000, bitrate: 641 kb/s
 Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, smpte170m/unknown/smpte170m), 480x360 [SAR 1:1 DAR 4:3], 354 kb/s, 24.58 fps, 24.58 tbr, 113734695.00 tbn, 49.16 tbc (default)
 Metadata:
 handler_name : VideoHandler
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 280 kb/s (default)
 Metadata:
 handler_name : SoundHandler
At least one output file must be specified
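One thing that stands out in the stream info above is the unusually large video timebase on bar.mp4 (113734695.00 tbn). Whether or not that is the cause here, a workaround sometimes tried for concat problems of this kind is to re-encode bar.mp4 with a conventional track timescale first and then concatenate against the normalized file. This sketch is untested against these exact files, and bar_fixed.mp4 is an assumed intermediate name:

# Untested sketch: normalize bar.mp4's container timescale, then rerun the original concat
ffmpeg -i bar.mp4 -c:v libx264 -c:a aac -video_track_timescale 90000 bar_fixed.mp4
ffmpeg -loop 1 -t 2 -framerate 1 -i foo.jpg -f lavfi -t 2 -i anullsrc -i bar_fixed.mp4 \
  -filter_complex "[0][1][2:v][2:a] concat=n=2:v=1:a=1 [vpre][a];[vpre]fps=24,scale=32:24[v]" \
  -map "[v]" -map "[a]" out.mp4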



-
How to process remote audio/video stream on WebRTC server in real-time? [closed]
7 September 2020, by Kartik Rokde
I'm new to audio/video streaming. I'm using AntMedia Pro for audio/video conferencing. There will be 5-8 hosts speaking, and the expected audience size is 15-20k (worth mentioning because this won't be P2P conferencing but an MCU architecture).


I want to offer a feature where a user can request "convert voice to female / robot / whatever", which would let that user hear the manipulated voice in the conference.


From what I know, I need real-time processing on the server to do this: intercept the stream on the server, apply some processing (change the voice) to each of the tracks, and stream the result back to the requestor.


The first challenge I'm facing is how to get the stream and/or the individual tracks on the server.


I did some research on how to process remote WebRTC streams in real time on the server. I came across keywords like RTMP ingestion and ffmpeg.

Here are a few questions I went through, but they didn't have the answers I'm looking for:


- Receive webRTC video stream using python opencv in real-time
- Extract frames as images from an RTMP stream in real-time
- android stream real time video to streaming server








I need help receiving the real-time stream on the server (any technology, preferably Python or Golang) and streaming it back.
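Not an answer specific to AntMedia, but as a rough illustration of the ffmpeg / RTMP-ingestion route mentioned above: if the media server can expose a speaker's stream over RTMP, a single ffmpeg process could pull it, pitch-shift the audio, and push the processed stream back to a second endpoint for the requesting user to play. The URLs below are placeholders, and the filter chain is just one simple way to raise the voice pitch:

# Hypothetical sketch: pull a speaker's stream via RTMP, raise the voice pitch,
# and republish it under a new stream name (URLs are placeholders)
ffmpeg -i rtmp://media-server/live/speaker1 \
  -af "asetrate=48000*1.25,aresample=48000,atempo=0.8" \
  -c:v copy -c:a aac \
  -f flv rtmp://media-server/live/speaker1_processed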


-
mencoder. Encoding from multiple input image files compatible with web browser (No video support and MIME type) [duplicate]
23 June 2020, by iblasi
I have multiple JPG files that I want to use to make a time-lapse video compatible with web browsers, so I can upload it to my web page.
Creating a video with mencoder from multiple images is explained on several web pages, such as here, which show how to create a video.

ls -Ltr my_Pics/*.jpg >files.txt
mencoder -nosound -ovc lavc -lavcopts vcodec=mpeg4 -o video.avi -mf type=jpeg:fps=4 mf://@files.txt



The video is set with no sound and one picture every 250 ms (4 fps).
These commands create an AVI video that I can view correctly with VLC. However, if I try to open it in a web browser it shows an error:




No Video with Supported Format and MIME type found




So, based on other similar comments (as here), I tried ffmpeg, renaming all my files because ffmpeg requires a numbered sequence. But the same thing happens: I can see the video in VLC but not in the browser.

ffmpeg -r 4 -i ./output/%04d.jpg -vcodec libx264 video.mp4
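The renaming step itself is not shown in the question; for completeness, a small shell loop like the following (paths assumed to match the commands here) produces the sequentially numbered files that ffmpeg's image input expects:

# Hypothetical rename step: copy the JPGs into a %04d.jpg sequence for ffmpeg
mkdir -p ./output
i=1
for f in my_Pics/*.jpg; do
  cp "$f" "$(printf './output/%04d.jpg' "$i")"
  i=$((i + 1))
done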



Based on research on the internet, I am fairly sure it is due to the encoding and/or the container. I tried multiple codec and container options from the documentation (here) but was still not able to find one that works.


If, once I create the video, I use VLC to manually convert it to ".m4v", I get a video that the web browser recognizes. But I would like to do it with command lines to automate it.
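Offered as an untested suggestion rather than a confirmed fix: a common cause of this browser error is the pixel format. H.264 encodes from JPEG sources often come out as yuv422p or yuv444p, which most browsers refuse to play even though VLC handles them, so forcing yuv420p (and optionally moving the index to the front of the file for web playback) keeps the whole pipeline on the command line:

# Untested sketch: same image-sequence input, but with a browser-friendly pixel format
# and the moov atom moved to the start of the file for progressive web playback
ffmpeg -framerate 4 -i ./output/%04d.jpg -c:v libx264 -pix_fmt yuv420p -movflags +faststart video.mp4

The web server also needs to serve the file with the video/mp4 MIME type, which is the other half of the error message.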