
Other articles (54)
-
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)
-
Media-specific libraries and software
10 December 2010
For correct and optimal operation, several things need to be taken into consideration.
After installing apache2, mysql and php5, it is important to install the other required software, whose installation is described in the related links: a set of multimedia libraries (x264, libtheora, libvpx) used for encoding and decoding video and audio so as to support as many file types as possible (see this tutorial); FFMpeg with the maximum number of decoders and (...)
-
List of compatible distributions
26 April 2011
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

Distribution name    Version name            Version number
Debian               Squeeze                 6.x.x
Debian               Wheezy                  7.x.x
Debian               Jessie                  8.x.x
Ubuntu               The Precise Pangolin    12.04 LTS
Ubuntu               The Trusty Tahr         14.04
If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)
On other sites (7400)
-
ffmpeg - embed metadata that updates regularly in video
11 July 2018, by ketil
I have a video that was recorded with an ROV, or an underwater drone if you will. The video is stored in raw H.264, and lots of data is logged during a dive, like temperature, depth, pitch/roll/yaw, etc. Each log entry is timestamped with seconds since the epoch.
Copying the raw H264 into an mp4 container at the correct framerate is easy, but I'd like to create a video that displays some or all of the metadata. I'd like to automate the process, so that I can come back from a trip and run a batch conversion tool that applies the metadata from new dives to the new video recordings. I'm hoping to avoid manual steps.
What are my options to display details on the video? Displaying text on the video is a common thing to do, but it isn't as clear to me how I could update it every few seconds based on an epoch timestamp from the logs. If I use -vf and try to use frame ranges for when to display each value, that'll be a very long filter. Would it help somehow if I generate an overlay video first? I don't see how that will be much easier, though.
Examples of some of the details I am hoping to embed are:
- depth
- temperature
- pitch, roll and yaw, perhaps by using "sprites" that can be rotated based on the logged rotation around each axis
Here's a small sample of some of the logged data:
1531168847.588000000 depth: 5.7318845
1531168848.229000000 attitude.calc.roll: -178.67145730705903
1531168848.229000000 attitude.calc.pitch: 8.89832326120461
1531168848.598000000 pressure.fluid_pressure_: 1598.800048828125
1531168848.898000000 temp.water.temperature.temperature_: 13.180000305175781
1531168849.229000000 attitude.calc.roll: -177.03372656909875
1531168849.229000000 attitude.calc.pitch: 3.697970708364904
1531168849.605000000 pressure.fluid_pressure_: 1594.0999755859375
1531168850.235000000 attitude.calc.yaw: 19.87041354690573
1531168850.666000000 pressure.fluid_pressure_: 1593.300048828125
The various values are logged at fairly irregular intervals, and they are not all updated at the same time.
I can massage the data into any necessary format. I also have a timestamp (epoch based) of when each recording started, so I can calculate approximate frame numbers as necessary. I have been searching for ways to apply the epoch timestamp to the video (PTS, RTCTIME/RTCSTART), without luck so far. If I manage that, I imagine use of the enable filter might be easier, but I'm still not sure very, very long video filters are the way to go.
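One way to avoid a very long filter chain, assuming the log can be parsed as in the sample above, is to pre-generate the overlay as a timed .ass subtitle file and let ffmpeg burn it in with the subtitles/ass filter. The following is a hypothetical Python sketch, not the asker's tooling; the log file name, the recording's start epoch and the one-second refresh step are assumptions.

#!/usr/bin/env python3
# Hypothetical sketch: turn a telemetry log like the sample above into an
# .ass subtitle file whose text changes at the logged timestamps, so ffmpeg
# can burn it in without a huge -vf chain.
# LOG_FILE, VIDEO_START_EPOCH and STEP are assumptions, not from the question.
from collections import defaultdict

LOG_FILE = "dive.log"             # lines like "1531168847.588000000 depth: 5.7318845"
VIDEO_START_EPOCH = 1531168845.0  # epoch seconds at which the recording starts (assumed)
STEP = 1.0                        # refresh the overlay text once per second

def ass_time(t):
    # Format seconds as H:MM:SS.cc, the timestamp format used in ASS Dialogue lines.
    cs = int(round(t * 100))
    return "%d:%02d:%02d.%02d" % (cs // 360000, cs // 6000 % 60, cs // 100 % 60, cs % 100)

# Parse the log into {key: [(seconds into video, value), ...]}.
samples = defaultdict(list)
with open(LOG_FILE) as fh:
    for line in fh:
        parts = line.split()
        if len(parts) >= 3:
            epoch, key, value = float(parts[0]), parts[1].rstrip(":"), parts[2]
            samples[key].append((epoch - VIDEO_START_EPOCH, value))

def latest(key, t):
    # Most recent logged value for `key` at video time t, or "-" if none yet.
    current = "-"
    for ts, val in samples[key]:
        if ts > t:
            break
        current = val
    return current

header = (
    "[Script Info]\n"
    "ScriptType: v4.00+\n"
    "PlayResX: 1920\n"
    "PlayResY: 1080\n"
    "\n"
    "[V4+ Styles]\n"
    "Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, "
    "Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, "
    "Alignment, MarginL, MarginR, MarginV, Encoding\n"
    "Style: Default,Arial,36,&H00FFFFFF,&H000000FF,&H00000000,&H64000000,0,0,0,0,100,100,0,0,1,2,0,7,30,30,30,1\n"
    "\n"
    "[Events]\n"
    "Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text\n"
)

end = max(ts for vals in samples.values() for ts, _ in vals)
with open("overlay.ass", "w") as out:
    out.write(header)
    t = 0.0
    while t < end:
        # One subtitle event per STEP, showing the most recent value of each field.
        text = "depth: %s m\\Ntemp: %s C" % (
            latest("depth", t),
            latest("temp.water.temperature.temperature_", t),
        )
        out.write("Dialogue: 0,%s,%s,Default,,0,0,0,,%s\n" % (ass_time(t), ass_time(t + STEP), text))
        t += STEP

The result could then be burned in with something along the lines of ffmpeg -i dive.mp4 -vf ass=overlay.ass -c:a copy dive-overlay.mp4 (file names assumed; ffmpeg must be built with libass). drawtext driven by a sendcmd file would be another route, and the rotating pitch/roll/yaw sprites could be added later with timed overlay and rotate filters, but keeping the timing in a subtitle file keeps the filter graph short.
-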
PHP-FFMPEG On Progress doesn't display until process is finished when encoding a video
10 August 2022, by Ryan D
I'm running PHP-FFmpeg to encode videos, which works great.

https://github.com/PHP-FFMpeg/PHP-FFMpeg

Using their example to get the current progress:

// Assuming Composer autoloading so the example can run as a standalone script.
require 'vendor/autoload.php';

$ffmpeg = FFMpeg\FFMpeg::create();
$video = $ffmpeg->open('test.mp4');

$format = new FFMpeg\Format\Video\X264();

// The progress listener is called repeatedly while ffmpeg encodes.
$format->on('progress', function ($video, $format, $percentage) {
    echo "$percentage % transcoded";
});

$video->save($format, 'encoded.mp4');

The issue is that the progress percentage doesn't display until the encoding is finished, which doesn't really help much. I'd like to get the current percentage as the encoding progresses. I'm just running this PHP file standalone; maybe I need to do an AJAX call or something to return the data?

-
FFMPEG Keyframes Exception
3 April 2017, by newToRacket
I was hoping someone could help me: when pulling keyframes from a .mp4 file using FFMPEG in Python, I get this error:
C:\Python27\python.exe "C:/Coll. Detection/UAS_Detection/GUI.py"
in GUI main
('filePath', PyQt4.QtCore.QString(u'C:\\Users\\Razor\\Downloads\\video_test_512kb.mp4'), '\n')
File Path -> C:\Users\Razor\Downloads\video_test_512kb.mp4
SingleDetect C:\Users\Razor\Downloads\video_test_512kb.mp4
ffmpeg version N-84679-gd65b595 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 6.3.0 (GCC)
configuration: --enable-gpl --enable-version3 --enable-cuda --enable-cuvid --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-zlib
libavutil 55. 51.100 / 55. 51.100
libavcodec 57. 86.103 / 57. 86.103
libavformat 57. 67.100 / 57. 67.100
libavdevice 57. 3.101 / 57. 3.101
libavfilter 6. 78.100 / 6. 78.100
libswscale 4. 3.101 / 4. 3.101
libswresample 2. 4.100 / 2. 4.100
libpostproc 54. 2.100 / 54. 2.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C:\Users\Razor\Downloads\video_test_512kb.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: mp41
creation_time : 1970-01-01T00:00:00.000000Z
title : test file mp4 - http://www.archive.org/details/Pbtestfilemp4videotestmp4
encoder : Lavf51.10.0
comment : license: http://creativecommons.org/licenses/by-nc-sa/2.5/
Duration: 00:00:16.27, start: 0.000000, bitrate: 562 kb/s
Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 320x240 [SAR 1:1 DAR 4:3], 512 kb/s, 15 fps, 15 tbr, 15 tbn, 30 tbc (default)
Metadata:
creation_time : 1970-01-01T00:00:00.000000Z
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 45 kb/s (default)
Metadata:
creation_time : 1970-01-01T00:00:00.000000Z
handler_name : SoundHandler
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))
Press [q] to stop, [?] for help
[swscaler @ 0000000003500e00] deprecated pixel format used, make sure you did set range correctly
Output #0, image2, to 'tmp/%d.jpeg':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: mp41
comment : license: http://creativecommons.org/licenses/by-nc-sa/2.5/
title : test file mp4 - http://www.archive.org/details/Pbtestfilemp4videotestmp4
encoder : Lavf57.67.100
Stream #0:0(und): Video: mjpeg, yuvj420p(pc), 320x240 [SAR 1:1 DAR 4:3], q=2-31, 200 kb/s, 15 fps, 15 tbn, 15 tbc (default)
Metadata:
creation_time : 1970-01-01T00:00:00.000000Z
handler_name : VideoHandler
encoder : Lavc57.86.103 mjpeg
Side data:
cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
2017-04-02 18:36:07.832000
Traceback (most recent call last):
File "Detection.py", line 59, in <module>
draw_detections(frame, found)
File "Detection.py", line 32, in draw_detections
result = Result.objects.create(picture=img, x=x, y=y, w=w, h=h)
AttributeError: type object 'Result' has no attribute 'objects'
frame= 21 fps=0.0 q=19.3 Lsize=N/A time=00:00:16.06 bitrate=N/A speed= 191x
video:160kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Process finished with exit code 0
I'm trying to run these keyframes through an OpenCV human-shape-recognition API. It saves close to 300 keyframes to the folder I specified, but for some reason the program just hangs afterwards instead of running the OpenCV HOG API on the newly saved keyframes. I will post the script if it helps. Thank you for any help you can provide!
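For what it's worth, the traceback above is raised by the Result.objects.create(...) call in Detection.py, not by ffmpeg itself. Below is a minimal, hypothetical Python sketch of the two steps being attempted: dumping the keyframes (I-frames) with ffmpeg, then running OpenCV's stock HOG people detector over the saved JPEGs. The input path and the tmp/%d.jpeg output pattern come from the log above; everything else is an assumption, not the asker's actual script.

#!/usr/bin/env python3
# Hypothetical sketch: extract keyframes with ffmpeg, then run OpenCV's
# built-in HOG + linear SVM people detector on the saved images.
import glob
import os
import subprocess

import cv2

VIDEO = r"C:\Users\Razor\Downloads\video_test_512kb.mp4"  # path from the question
OUT_DIR = "tmp"                                           # matches the 'tmp/%d.jpeg' output above

os.makedirs(OUT_DIR, exist_ok=True)

# Keep only I-frames and write one JPEG per kept frame, similar to the
# image2 output in the log above.
subprocess.run(
    ["ffmpeg", "-y", "-i", VIDEO,
     "-vf", "select=eq(pict_type\\,I)", "-vsync", "vfr",
     os.path.join(OUT_DIR, "%d.jpeg")],
    check=True,
)

# Standard OpenCV HOG people detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# Detection is per image, so the iteration order does not matter here.
for path in sorted(glob.glob(os.path.join(OUT_DIR, "*.jpeg"))):
    frame = cv2.imread(path)
    if frame is None:
        continue
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8), padding=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (int(x), int(y)), (int(x + w), int(y + h)), (0, 255, 0), 2)
    cv2.imwrite(path, frame)  # overwrite the keyframe with detections drawn in
    print(path, len(rects), "detections")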