
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (32)
-
Sites built with MediaSPIP
2 May 2011, by
This page presents a few of the sites running MediaSPIP.
You can of course add your own via the form at the bottom of the page.
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, as long as your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact your MediaSPIP administrator to find out.
-
Submitting improvements and additional plugins
10 April 2011
If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the official distribution will be considered.
You can use the development mailing list to announce it or to ask for help with building the plugin. Since MediaSPIP is based on SPIP, it is also possible to use SPIP’s SPIP-zone mailing list to (...)
On other sites (6936)
-
ffmpeg ... "Impossible to convert between the formats supported by the filter"
27 November 2017, by hydra3333
Using the latest ffmpeg master built as at 2017.11.26, I’m having trouble deciphering what the error messages mean and, more importantly, what to do about them.
Changing from -vf to -filter_complex did nothing (I had to try).
The main error message seems to be: Impossible to convert between the formats supported by the filter
I have tried to insert "format=" and "scale" before/between/after yadif and unsharp_opencl but to no avail.
I wonder, could it be something to do with needing hwupload/hwdownload/hwmap, or is that a red herring?
What am I doing wrong?
".\ffmpeg_3.latest_master.exe" -hide_banner -v verbose -init_hw_device opencl=ocl:1.0 -filter_hw_device ocl -i ".\test_01.mpg" -an -map_metadata -1 -sws_flags lanczos+accurate_rnd+full_chroma_int+full_chroma_inp -filter_complex "[0:v]yadif=0:0:0,scale=flags=lanczos+accurate_rnd+full_chroma_int+full_chroma_inp,unsharp_opencl=lx=3:ly=3:la=0.5:cx=3:cy=3:ca=0.5,setdar=dar=16/9" -r 25 -c:v h264_nvenc -preset slow -bf 2 -g 50 -refs 3 -rc:v vbr_hq -rc-lookahead:v 32 -cq 22 -qmin 16 -qmax 25 -coder cabac -movflags +faststart -profile:v high -level 4.1 -pixel_format yuv420p -y ".\test_01.newest.MP4"
[AVHWDeviceContext @ 0000020e229aad40] 1.0: NVIDIA CUDA / GeForce GTX 750 Ti
[AVHWDeviceContext @ 0000020e229aad40] DXVA2 to OpenCL mapping function found (clCreateFromDX9MediaSurfaceKHR).
[AVHWDeviceContext @ 0000020e229aad40] DXVA2 in OpenCL acquire function found (clEnqueueAcquireDX9MediaSurfacesKHR).
[AVHWDeviceContext @ 0000020e229aad40] DXVA2 in OpenCL release function found (clEnqueueReleaseDX9MediaSurfacesKHR).
[AVHWDeviceContext @ 0000020e229aad40] The cl_khr_d3d11_sharing extension is required for D3D11 to OpenCL mapping.
[AVHWDeviceContext @ 0000020e229aad40] D3D11 to OpenCL mapping not usable.
[mpeg @ 0000020e229ae240] max_analyze_duration 5000000 reached at 5000000 microseconds st:0
Input #0, mpeg, from '.\test_01.mpg':
Duration: 00:06:29.96, start: 0.240000, bitrate: 2799 kb/s
Stream #0:0[0x1e0]: Video: mpeg2video (Main), 1 reference frame, yuv420p(tv, top first, left), 720x576 [SAR 64:45 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream #0:1[0x1c0]: Audio: mp2, 48000 Hz, stereo, s16p, 256 kb/s
[Parsed_scale_1 @ 0000020e29951a20] w:iw h:ih flags:'lanczos+accurate_rnd+full_chroma_int+full_chroma_inp' interl:0
Stream mapping:
Stream #0:0 (mpeg2video) -> yadif
setdar -> Stream #0:0 (h264_nvenc)
Press [q] to stop, [?] for help
[Parsed_scale_1 @ 0000020e29cccda0] w:iw h:ih flags:'lanczos+accurate_rnd+full_chroma_int+full_chroma_inp' interl:0
[graph 0 input from stream 0:0 @ 0000020e29ccc980] w:720 h:576 pixfmt:yuv420p tb:1/90000 fr:25/1 sar:64/45 sws_param:flags=2
[auto_scaler_0 @ 0000020e29ccc580] w:iw h:ih flags:'bilinear' interl:0
[Parsed_unsharp_opencl_2 @ 0000020e29ccc4a0] auto-inserting filter 'auto_scaler_0' between the filter 'Parsed_scale_1' and the filter 'Parsed_unsharp_opencl_2'
Impossible to convert between the formats supported by the filter 'Parsed_scale_1' and the filter 'auto_scaler_0'
Error reinitializing filters!
Failed to inject frame into filter network: Function not implemented
Error while processing the decoded data for stream #0:0
Conversion failed!
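For what it’s worth, the hwupload/hwdownload hunch above looks like the right track rather than a red herring: FFmpeg’s *_opencl filters operate on frames in OpenCL device memory, while yadif and scale emit ordinary software frames, which is why the auto-inserted scaler cannot negotiate a common format. A sketch of a reworked filtergraph, untested against this exact build (only the -filter_complex argument changes; everything else in the command above stays the same):
-filter_complex "[0:v]yadif=0:0:0,scale=flags=lanczos+accurate_rnd+full_chroma_int+full_chroma_inp,format=yuv420p,hwupload,unsharp_opencl=lx=3:ly=3:la=0.5:cx=3:cy=3:ca=0.5,hwdownload,format=yuv420p,setdar=dar=16/9"
Here hwupload moves each frame into OpenCL memory for unsharp_opencl, and hwdownload brings it back so that setdar and h264_nvenc receive plain yuv420p software frames; this mirrors the hwupload ... hwdownload pattern shown in FFmpeg’s own OpenCL filter documentation.
-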
How to use C++ to achieve the function of the command line "ffmpeg -f alsa -i hw:0 alsaout.wav"? [closed]
31 January 2018, by OtakuFitness
My supervisor asked me to write a C++ program that does what "ffmpeg -f alsa -i hw:0 alsaout.wav" does, using FFmpeg; the program will be used in a future project of ours. (Basically, I need it to record sound data.)
I have never written code that talks to hardware (I work on iOS & ARKit), so I’m unsure whether this is a heavy job and, if possible, how I could reach the goal step by step.
Update:
I’m not sure why this question has been closed, but if somebody is struggling with FFmpeg, especially on the coding side, there is a useful example to learn from as a newbie. It definitely solves my problem, so take your time to go through it.
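For reference, here is a minimal sketch (my own illustration, not from the thread) of how that command line maps onto the libav* C API from C++ on recent FFmpeg versions: open the "alsa" input device on "hw:0", then remux the raw PCM packets into a WAV file. Error handling is stripped down, and the 500-packet cap is an arbitrary placeholder for a real stop condition:

// record_alsa.cpp - approximates "ffmpeg -f alsa -i hw:0 alsaout.wav"
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavdevice/avdevice.h>
#include <libavformat/avformat.h>
}

int main() {
    avdevice_register_all();  // makes the "alsa" input device visible

    const AVInputFormat *alsa = av_find_input_format("alsa");
    AVFormatContext *in = nullptr;
    if (avformat_open_input(&in, "hw:0", alsa, nullptr) < 0)
        return 1;                            // device missing or busy
    avformat_find_stream_info(in, nullptr);

    // WAV output; the muxer is guessed from the file extension.
    AVFormatContext *out = nullptr;
    avformat_alloc_output_context2(&out, nullptr, nullptr, "alsaout.wav");
    AVStream *ost = avformat_new_stream(out, nullptr);
    avcodec_parameters_copy(ost->codecpar, in->streams[0]->codecpar);
    ost->codecpar->codec_tag = 0;            // let the WAV muxer pick the tag

    avio_open(&out->pb, "alsaout.wav", AVIO_FLAG_WRITE);
    avformat_write_header(out, nullptr);

    // Copy captured PCM packets straight through; no re-encoding is needed.
    AVPacket *pkt = av_packet_alloc();
    for (int i = 0; i < 500 && av_read_frame(in, pkt) >= 0; ++i) {
        pkt->stream_index = 0;
        av_packet_rescale_ts(pkt, in->streams[0]->time_base, ost->time_base);
        av_interleaved_write_frame(out, pkt);
        av_packet_unref(pkt);
    }

    av_write_trailer(out);
    av_packet_free(&pkt);
    avio_closep(&out->pb);
    avformat_close_input(&in);
    avformat_free_context(out);
    return 0;
}

Build with something like g++ record_alsa.cpp -lavdevice -lavformat -lavcodec -lavutil. The key point is that avdevice_register_all() exposes ALSA as just another libavformat input, so "recording" reduces to the usual demux-and-remux loop.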
-
Converting a "gif" to video using Swift
3 December 2019, by James Woodrow
I’ve looked around and found a few things here and there, mainly that I should be using AVAssetWriter for this, but I have zero experience with video editing/creation, so it doesn’t help me much: I can’t find anything I can easily modify (at my level of knowledge, at least) so that it works the way I intend.
I have an app which takes n photos every cft seconds (capture frame time, which I get from a backend server; it’s a double for obvious reasons). I then display these frames using a UIImageView, and the frames change every dft seconds (display frame time, which I also get from a backend server and which can differ from cft). Up until this point, nothing complicated.
The current workflow is that these frames are sent back to a server along with any relevant information I want, and the server then uses ImageMagick to create a real gif file and ffmpeg to create a 15-second video from that gif.
The issue is that this keeps my Heroku bills higher than I would like, because of the limited memory on the dynos, and generating these videos takes about 5-10 seconds, I believe (not sure, but it’s longer than I’d like).
So my idea was to have the app create the video, since it already has all the information it needs, and then simply upload it along with the rest of the frames and relevant data. Bandwidth nowadays is much cheaper than extra processing power on a server. On the app side:
- it has n frames to loop over
- it has a float value dft representing how long each frame should last
- it has a GPU, or at least a much better CPU than the dynos Heroku has to offer
I’ve also looked around to see whether anyone has made an extensive tutorial on using ffmpeg in Swift, but I still haven’t found anything at my level; I didn’t even find a tutorial per se, only some GitHub projects that were partially completed and/or lacked a link to the original tutorial explaining the thought process.
I would appreciate any tips/code samples/tutorials on the subject.
I’m adding the ffmpeg command-line equivalent of what I would love to be able to do (if I could use ffmpeg directly on iOS, that would be nice too):
ffmpeg -framerate 100/13 -loop 1 -i frame%02d.png -c:v libx264 -r 100/13 -pix_fmt yuv420p -t 0:15 instagram.mp4
where basically I did 100 / (dft * 100) for the input frame rate and just output at the same fps for 15 seconds. By the way, if there are any ways to optimise this command to make it run faster without losing quality, I might be able to keep the current Heroku workflow, although I would still prefer an iOS solution.
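On that closing optimisation question: with libx264, the -preset option trades encoding time for compression efficiency at roughly constant quality, and -tune stillimage is intended for slideshow-like material, so a variant along these lines (untested; same inputs and timing as the command above, only the two extra options added) may shorten the 5-10 second encode at the cost of a somewhat larger file:
ffmpeg -framerate 100/13 -loop 1 -i frame%02d.png -c:v libx264 -preset veryfast -tune stillimage -r 100/13 -pix_fmt yuv420p -t 0:15 instagram.mp4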