
Other articles (34)
-
Requesting the creation of a channel
12 March 2010 — Depending on how the platform is configured, a user may have two different ways to request the creation of a channel. The first is at the moment of registration; the second, after registration, by filling in a request form.
The two methods ask for the same things and work in much the same way: the prospective user must fill in a series of form fields that first of all give the administrators information about (...)
-
What is an editorial?
21 June 2013 — Write your point of view in an article. It will be filed in a section set aside for this purpose.
An editorial is a text-only article. Its purpose is to gather points of view in a dedicated section. A single editorial is featured on the home page; to consult earlier ones, browse the dedicated section.
You can customize the editorial creation form.
Editorial creation form: in the case of a document of the editorial type, the (...)
-
Multilang: improving the interface for multilingual blocks
18 February 2011 — Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
Once it is activated, a preconfiguration is applied automatically by MediaSPIP init so that the new feature is immediately operational. No configuration step is therefore required.
On other sites (5019)
-
MP4Box / FFMPEG concat loses audio after first clip
17 November 2017, by user1615343 — So I am certainly no expert when it comes to either of these tools, but I have a web-based project that executes commands on an Amazon Linux server to concatenate two video files that are uploaded.
Both files are converted to mp4s first using FFMPEG, and those play perfectly in a browser after conversion:
ffmpeg -i file1.mpg -c:v libx264 -crf 22 -c:a aac -strict -2 -movflags faststart file2.mp4
Then, I attempt to combine these two resulting mp4s into a single mp4. I tried using FFMPEG to do this, but to no avail. Switching to MP4Box got me much closer: the videos are concatenated together, but the audio stops playing at the end of the first clip, and the second clip is silent.
MP4Box -force-cat -keepsys -add file.mp4 -cat file2.mp4 out.mp4
I’ve tried varying versions of the above command with no better results. Any input is greatly appreciated.
EDIT: info on the .mp4 files, obtained with
ffmpeg -i file1.mp4 -i file2.mp4

ffmpeg -i 1510189259715DogRunsintoGlassDoor_315a03a8e20acfc.mp4 -i 1510189273549NewhouseMoonMoonneverseenstairsbeforefunnydog_285a03a8e6aab25.mp4
ffmpeg version N-61041-g52a2138 Copyright (c) 2000-2014 the FFmpeg developers
built on Mar 2 2014 05:45:04 with gcc 4.6 (Debian 4.6.3-1)
configuration: --prefix=/root/ffmpeg-static/64bit --extra-cflags='-I/root/ffmpeg-static/64bit/include -static' --extra-ldflags='-L/root/ffmpeg-static/64bit/lib -static' --extra-libs='-lxml2 -lexpat -lfreetype' --enable-static --disable-shared --disable-ffserver --disable-doc --enable-bzlib --enable-zlib --enable-postproc --enable-runtime-cpudetect --enable-libx264 --enable-gpl --enable-libtheora --enable-libvorbis --enable-libmp3lame --enable-gray --enable-libass --enable-libfreetype --enable-libopenjpeg --enable-libspeex --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-version3 --enable-libvpx
libavutil 52. 66.100 / 52. 66.100
libavcodec 55. 52.102 / 55. 52.102
libavformat 55. 33.100 / 55. 33.100
libavdevice 55. 10.100 / 55. 10.100
libavfilter 4. 2.100 / 4. 2.100
libswscale 2. 5.101 / 2. 5.101
libswresample 0. 18.100 / 0. 18.100
libpostproc 52. 3.100 / 52. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '1510189259715DogRunsintoGlassDoor_315a03a8e20acfc.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf55.33.100
  Duration: 00:00:04.92, start: 0.023220, bitrate: 634 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 360x360 [SAR 1:1 DAR 1:1], 501 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 132 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from '1510189273549NewhouseMoonMoonneverseenstairsbeforefunnydog_285a03a8e6aab25.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf55.33.100
  Duration: 00:00:18.79, start: 0.023220, bitrate: 455 kb/s
    Stream #1:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 362x360 [SAR 1:1 DAR 181:180], 320 kb/s, 29.94 fps, 29.94 tbr, 11976 tbn, 59.88 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #1:1(eng): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 129 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
At least one output file must be specified
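The probe output above suggests a likely culprit: the two clips do not share stream parameters (mono vs. stereo audio, 360x360 at 30 fps vs. 362x360 at 29.94 fps), and both MP4Box -cat and FFMPEG's concat demuxer expect matching streams. A sketch of one commonly suggested workaround (the norm1.mp4, norm2.mp4 and list.txt names are placeholders, not from the original post): re-encode both clips to identical parameters, then concatenate with stream copy:
ffmpeg -i file1.mp4 -vf scale=360:360,fps=30 -c:v libx264 -crf 22 -c:a aac -strict -2 -ar 44100 -ac 2 norm1.mp4
ffmpeg -i file2.mp4 -vf scale=360:360,fps=30 -c:v libx264 -crf 22 -c:a aac -strict -2 -ar 44100 -ac 2 norm2.mp4
printf "file 'norm1.mp4'\nfile 'norm2.mp4'\n" > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy -movflags faststart out.mp4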
-
How do I extract the color matrix from an MP4/x264 stream in Media Foundation
23 August 2016, by Jules — I am playing a video (an mp4 containing an x264-encoded video stream) with a custom player using Media Foundation.
When I convert the YUV information into RGB I need to account for the color matrix and range used at encode time.
Some of my videos have this information; I can use MediaInfo.exe or FFMPEG to see that it is present.
However, for such videos, if I look at the relevant Media Foundation properties (Extended Color Information), the properties are not present in the files.
So, somehow I need to find a way to access the information.
Media Foundation does provide access to MF_MT_MPEG4_SAMPLE_DESCRIPTION and MF_MT_MPEG_SEQUENCE_HEADER for the video stream but I can’t find descriptions of what these contain.
I noticed that the MF_MT_MPEG_SEQUENCE_HEADER is much longer for the videos with the information present and this (MPEG Headers Quick Reference) seems to suggest headers might contain the information I need.
I’m looking for Color Range (limited/full), Color Primaries, Transfer Characteristics and Matrix Coefficients (BT.709 etc).
I’d greatly appreciate any help finding this information from a Media Foundation video stream.
Thanks
Jules
Update - Sequence Header
The sequence header appears to be a subset of the MPEG4 sample description, though I can’t find anything that indicates what either piece of data actually contains or doesn’t contain specifically.
The sequence header appears to contain data structured as an H.264 byte stream, as described in the H.264 standard, and includes the VUI (Video Usability Information – Annex E of the document), which may then include the colour information I’m interested in.
Given that it’s a byte stream, I need to know where it starts and whether there’s some existing code I could use to decode it.
In FFMPEG in libavcodec/h264_ps.c there is a function called ff_h264_decode_seq_parameter_set which ends up calling decode_vui_parameters. It seems possible that seq_parameter_set maps to MF_MT_MPEG_SEQUENCE_HEADER and it may be possible to use that code to decode the data.
If anyone one has any direct experience with decoding this data it would be very useful.
Thanks again
Update - Related posts
I found How to decode sprop-parameter-sets in a H264 SDP? and Possible Locations for Sequence/Picture Parameter Set(s) for H.264 Stream, which are fairly helpful.
The sequence header would appear to be the sequence or picture parameter set (SPS/PPS), and the parameters I want are in the VUI extension subset.
Plus this post, H.264 stream structure, gives the high-level view of how the stream data is structured, and the MF_MT_MPEG_SEQUENCE_HEADER appears to start with a NAL start code (0x00 0x00 0x01), so I’m guessing it is a NAL unit containing the parameter sets.
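Building on that, here is a minimal sketch (untested, and assuming MF_MT_MPEG_SEQUENCE_HEADER really does hold an Annex B style byte stream, as guessed above) of pulling the blob from the stream’s IMFMediaType and locating the SPS NAL unit, whose VUI carries the colour description:

#include <mfapi.h>
#include <mfidl.h>
#include <vector>

// Fetch the MF_MT_MPEG_SEQUENCE_HEADER blob from the stream's media type.
HRESULT GetSequenceHeader(IMFMediaType *type, std::vector<BYTE> &header)
{
    UINT32 size = 0;
    HRESULT hr = type->GetBlobSize(MF_MT_MPEG_SEQUENCE_HEADER, &size);
    if (FAILED(hr) || size == 0)
        return hr; // attribute absent or empty: no in-band parameter sets
    header.resize(size);
    return type->GetBlob(MF_MT_MPEG_SEQUENCE_HEADER, header.data(), size, &size);
}

// Scan for an SPS NAL unit (nal_unit_type == 7) after a 00 00 01 start code;
// the VUI parameters (Annex E) live at the end of the SPS.
const BYTE *FindSps(const std::vector<BYTE> &h)
{
    for (size_t i = 0; i + 3 < h.size(); ++i)
        if (h[i] == 0 && h[i + 1] == 0 && h[i + 2] == 1 && (h[i + 3] & 0x1F) == 7)
            return &h[i + 3];
    return nullptr; // parameter sets may instead sit in the avcC box
}

The VUI fields themselves (video_full_range_flag, colour_primaries, transfer_characteristics, matrix_coefficients) would still have to be bit-parsed per Annex E – for example along the lines of FFMPEG’s decode_vui_parameters mentioned above.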
-
Use deck.js as a remote presentation tool
8 January 2014, by silvia — deck.js is one of the new HTML5-based presentation tools. It’s simple to use, in particular for your basic, every-day presentation needs. You can also create more complex slides with animations etc. if you know your HTML and CSS.
Yesterday at linux.conf.au (LCA), I gave a presentation using deck.js. But I didn’t give it from the lectern in the room in Perth where LCA is being held – instead I gave it from the comfort of my home office at the other end of the country.
I used my laptop with its built-in webcam and my Chrome browser to give this presentation. Beforehand, I had uploaded the presentation to a Web server and shared the link with the organiser of my speaker track, who was on site in Perth and had set up his laptop in the same fashion. His screen was projecting the Chrome tab in which my slides were loaded, and he had hooked up the audio output of his laptop to the room speaker system. His camera was pointed at the audience so I could see their reaction.
I loaded a slide master URL:
http://html5videoguide.net/presentations/lca_2014_webrtc/?master
and the room loaded the URL without the query string:
http://html5videoguide.net/presentations/lca_2014_webrtc/
Then I gave my talk exactly as I would have if I had been in the same room. Yes, it felt exactly as though I was there, including nervousness and audience feedback.
How did we do that? WebRTC (Web Real-time Communication) to the rescue, of course!
We used one of the modules of the rtc.io project called rtc-glue to add the video conferencing functionality and the slide navigation to deck.js. It was actually really really simple!
Here are the few things we added to deck.js to make it work:
- Code added to index.html to make the video connection work:
<meta name="rtc-signalhost" content="http://rtc.io/switchboard/">
<meta name="rtc-room" content="lca2014">
...
<video id="localV" rtc-capture="camera" muted></video>
<video id="peerV" rtc-peer rtc-stream="localV"></video>
...
<script src="glue.js"></script>
<script>
glue.config.iceServers = [{ url: 'stun:stun.l.google.com:19302' }];
</script>
The iceServers config is required to punch through firewalls – you may also need a TURN server. Note that you need a signalling server – in our case we used http://rtc.io/switchboard/, which runs the code from rtc-switchboard.
- Added glue.js library to deck.js:
Downloaded from https://raw.github.com/rtc-io/rtc-glue/master/dist/glue.js into the source directory of deck.js.
- Code added to index.html to synchronize slide navigation:
glue.events.once('connected', function(signaller) {
if (location.search.slice(1) !== '') {
$(document).bind('deck.change', function(evt, from, to) {
signaller.send('/slide', {
idx: to,
sender: signaller.id
});
});
}
signaller.on('slide', function(data) {
console.log('received notification to change to slide: ', data.idx);
$.deck('go', data.idx);
});
});
This simply registers a callback on the slide master end to send a slide position message to the room end, and a callback on the room end that initiates the slide navigation.
And that’s it!
You can find my slide deck on GitHub.
Feel free to write your own slides in this manner – I would love to have more users of this approach. It should also be fairly simple to extend this to share pointer positions, so you can actually use the mouse pointer to point to things on your slides remotely – see the sketch below. Would love to hear your experiences!
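For example, pointer sharing could reuse the same signalling pattern as the slide messages. A sketch (the '/pointer' message name and the #pointer overlay element are assumptions, not part of the current code), placed inside the same 'connected' callback:

// broadcast the local mouse position to the peer
$(document).on('mousemove', function(evt) {
  signaller.send('/pointer', { x: evt.pageX, y: evt.pageY, sender: signaller.id });
});
// mirror the remote pointer with an absolutely positioned overlay element
signaller.on('pointer', function(data) {
  $('#pointer').css({ left: data.x + 'px', top: data.y + 'px' });
});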
Note that the slides are actually a talk about the rtc.io project, so if you want to find out more about these modules and what other things you can do, read the slide deck or watch the talk when it has been published by LCA.
Many thanks to Damon Oehlman for his help in getting this working.
BTW: somebody should really fix that print style sheet for deck.js – I’m only ever getting the one slide that is currently showing.