
Other articles (77)

  • Request to create a channel

    12 March 2010

    Depending on how the platform is configured, the user may have two different ways of requesting the creation of a channel. The first is at the time of registration; the second, after registration, by filling in a request form.
    Both methods ask for the same information and work in much the same way: the prospective user fills in a series of form fields that first of all give the administrators information about (...)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields; compare the two images below.
    All it takes is to activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed to manage sites that publish documents of all kinds.
    It creates "médias": a "media" is an article, in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a "media" article;

On other sites (8804)

  • ffmpeg ... "Impossible to convert between the formats supported by the filter"

    27 November 2017, by hydra3333

    Using the latest ffmpeg master, built as of 2017.11.26, I’m having trouble deciphering what the error messages mean and, more importantly, what to do about them.

    Changing from -vf to -filter_complex did nothing (I had to try).
    The main error message seems to be:

    Impossible to convert between the formats supported by the filter

    I have tried to insert "format=" and "scale" before/between/after yadif and unsharp_opencl but to no avail.

    I wonder: could it be something to do with needing hwupload/hwdownload/hwmap, or is that a red herring?

    What am I doing wrong?

    ".\ffmpeg_3.latest_master.exe" -hide_banner -v verbose -init_hw_device opencl=ocl:1.0 -filter_hw_device ocl -i ".\test_01.mpg" -an -map_metadata -1 -sws_flags lanczos+accurate_rnd+full_chroma_int+full_chroma_inp -filter_complex "[0:v]yadif=0:0:0,scale=flags=lanczos+accurate_rnd+full_chroma_int+full_chroma_inp,unsharp_opencl=lx=3:ly=3:la=0.5:cx=3:cy=3:ca=0.5,setdar=dar=16/9" -r 25 -c:v h264_nvenc -preset slow -bf 2 -g 50 -refs 3 -rc:v vbr_hq -rc-lookahead:v 32 -cq 22 -qmin 16 -qmax 25 -coder cabac -movflags +faststart -profile:v high -level 4.1 -pixel_format yuv420p -y ".\test_01.newest.MP4"
    [AVHWDeviceContext @ 0000020e229aad40] 1.0: NVIDIA CUDA / GeForce GTX 750 Ti
    [AVHWDeviceContext @ 0000020e229aad40] DXVA2 to OpenCL mapping function found (clCreateFromDX9MediaSurfaceKHR).
    [AVHWDeviceContext @ 0000020e229aad40] DXVA2 in OpenCL acquire function found (clEnqueueAcquireDX9MediaSurfacesKHR).
    [AVHWDeviceContext @ 0000020e229aad40] DXVA2 in OpenCL release function found (clEnqueueReleaseDX9MediaSurfacesKHR).
    [AVHWDeviceContext @ 0000020e229aad40] The cl_khr_d3d11_sharing extension is required for D3D11 to OpenCL mapping.
    [AVHWDeviceContext @ 0000020e229aad40] D3D11 to OpenCL mapping not usable.
    [mpeg @ 0000020e229ae240] max_analyze_duration 5000000 reached at 5000000 microseconds st:0
    Input #0, mpeg, from '.\test_01.mpg':
     Duration: 00:06:29.96, start: 0.240000, bitrate: 2799 kb/s
       Stream #0:0[0x1e0]: Video: mpeg2video (Main), 1 reference frame, yuv420p(tv, top first, left), 720x576 [SAR 64:45 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
       Stream #0:1[0x1c0]: Audio: mp2, 48000 Hz, stereo, s16p, 256 kb/s
    [Parsed_scale_1 @ 0000020e29951a20] w:iw h:ih flags:'lanczos+accurate_rnd+full_chroma_int+full_chroma_inp' interl:0
    Stream mapping:
     Stream #0:0 (mpeg2video) -> yadif
     setdar -> Stream #0:0 (h264_nvenc)
    Press [q] to stop, [?] for help
    [Parsed_scale_1 @ 0000020e29cccda0] w:iw h:ih flags:'lanczos+accurate_rnd+full_chroma_int+full_chroma_inp' interl:0
    [graph 0 input from stream 0:0 @ 0000020e29ccc980] w:720 h:576 pixfmt:yuv420p tb:1/90000 fr:25/1 sar:64/45 sws_param:flags=2
    [auto_scaler_0 @ 0000020e29ccc580] w:iw h:ih flags:'bilinear' interl:0
    [Parsed_unsharp_opencl_2 @ 0000020e29ccc4a0] auto-inserting filter 'auto_scaler_0' between the filter 'Parsed_scale_1' and the filter 'Parsed_unsharp_opencl_2'
    Impossible to convert between the formats supported by the filter 'Parsed_scale_1' and the filter 'auto_scaler_0'
    Error reinitializing filters!
    Failed to inject frame into filter network: Function not implemented
    Error while processing the decoded data for stream #0:0
    Conversion failed!
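
    For what it’s worth, the likely cause of this message is that unsharp_opencl consumes and produces frames in OpenCL device memory, while yadif and scale work on ordinary system-memory frames, and the auto-inserted software scaler cannot convert between the two. So hwupload/hwdownload is probably not a red herring: a sketch of the same filter chain with explicit transfers, untested against this exact build and input, would be

    -filter_complex "[0:v]yadif=0:0:0,scale=flags=lanczos+accurate_rnd+full_chroma_int+full_chroma_inp,format=yuv420p,hwupload,unsharp_opencl=lx=3:ly=3:la=0.5:cx=3:cy=3:ca=0.5,hwdownload,format=yuv420p,setdar=dar=16/9"

    where format=yuv420p pins the software pixel format on both sides of the OpenCL round trip; whether that exact format is accepted depends on the build.
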
  • How to use C++ to achieve the function of the command line "ffmpeg -f alsa -i hw:0 alsaout.wav"? [closed]

    31 January 2018, by OtakuFitness

    My supervisor asked me to write a C++ program that does the same thing as "ffmpeg -f alsa -i hw:0 alsaout.wav" using FFmpeg, and this program will be used in a future project of ours. (Basically, I need it to record sound data.)

    I have never written code that touches hardware (I work on iOS & ARKit), so I’m wondering: is this a heavy job, and if it’s feasible, how could I achieve the goal step by step?

    Update:

    I’m not sure why this question was closed, but if somebody is struggling with FFmpeg, especially on the coding side, there is a useful example to learn from as a newbie. That example definitely solves my problem, so take your time to go through it.
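
    For reference, here is the command above with the ALSA input device’s defaults spelled out (the sample_rate and channels options exist on FFmpeg’s ALSA demuxer, but the values shown are assumptions to verify against the actual device):

    ffmpeg -f alsa -sample_rate 48000 -channels 2 -i hw:0 -c:a pcm_s16le alsaout.wav

    A C++ port would drive the same pipeline through the libav* API: avdevice_register_all(), then av_find_input_format("alsa") and avformat_open_input() on "hw:0", then an av_read_frame() loop handing the PCM packets to a WAV muxer created with avformat_alloc_output_context2().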

  • Converting a "gif" to video using Swift

    3 December 2019, by James Woodrow

    I’ve looked around and found a few things here and there, mainly that I should be using AVAssetWriter for this. But I have zero experience with it, or with video editing/creation in general, so that doesn’t help much: I can’t find anything I could easily modify (at my level of knowledge, at least) so that it works as I intend.

    I have an app that takes n photos, one every cft seconds (the capture frame time, which I get from a backend server; it’s a double for obvious reasons). I then display these frames in a UIImageView, and the frames change every dft seconds (the display frame time, also from the backend server, which can differ from cft). Up to this point, nothing complicated.

    The current workflow is that these frames are sent back to a server along with any relevant information, and the server then uses ImageMagick to create a real GIF file and ffmpeg to create a 15-second video from that GIF.

    The issue is that this keeps my Heroku bill higher than I’d like, because of the limited memory on the dynos, and generating these videos takes about 5-10 seconds, I believe (not sure, but it’s longer than I’d like).

    So my idea was to have the app create the video, since it already has all the information it needs, and then simply upload it along with the rest of the frames and the relevant data. Bandwidth nowadays is much cheaper than buying extra processing power on a server:

    • it has n frames to loop over
    • it has a float value, dft, saying how long each frame should last
    • it has a GPU, or at least a much better CPU than Heroku’s dynos offer

    I’ve also looked around for an extensive tutorial on using ffmpeg from Swift, but I still haven’t found anything at my level; in fact, not a tutorial per se at all, only some GitHub projects that were partially complete and/or missing a link to the original tutorial that would explain the thought process.

    I would appreciate any tips, code samples, or tutorials on the subject.

    I’m adding the ffmpeg command-line equivalent of what I would love to be able to do (if I could use ffmpeg directly on iOS, that could be nice too):

    ffmpeg -framerate 100/13 -loop 1 -i frame%02d.png -c:v libx264 -r 100/13 -pix_fmt yuv420p -t 0:15 instagram.mp4

    where basically I used 100 / (dft * 100) for the input frame rate, which for dft = 0.13 gives 100/13 ≈ 7.69 fps, and just output at the same fps for 15 seconds. By the way, if there are any ways to optimise this command to make it run faster without losing quality, I might be able to keep the current Heroku setup, although I would still prefer some iOS solution.
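
    Since the question explicitly asks for code samples: below is a minimal AVAssetWriter sketch of the on-device approach, assuming the frames are UIImages already in memory. The names writeVideo and pixelBuffer(from:size:) are made up for illustration, error handling is skipped, and the function blocks, so it must run off the main thread; a starting point under those assumptions, not a definitive implementation.

    import AVFoundation
    import UIKit

    // Illustrative sketch: encode `frames` as an H.264 MP4, each frame shown for
    // `frameDuration` seconds, looping the sequence until `totalDuration` is filled.
    func writeVideo(frames: [UIImage], frameDuration: Double, totalDuration: Double,
                    to url: URL) throws {
        let size = frames[0].size
        let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: size.width,
            AVVideoHeightKey: size.height
        ])
        let adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: input, sourcePixelBufferAttributes: nil)
        writer.add(input)
        writer.startWriting()
        writer.startSession(atSourceTime: .zero)

        // Presentation timestamps advance by frameDuration; the index wraps to loop.
        var time = CMTime.zero
        let step = CMTime(seconds: frameDuration, preferredTimescale: 600)
        var index = 0
        while time.seconds < totalDuration {
            while !input.isReadyForMoreMediaData { Thread.sleep(forTimeInterval: 0.01) }
            if let buffer = pixelBuffer(from: frames[index % frames.count], size: size),
               !adaptor.append(buffer, withPresentationTime: time) {
                break                           // the writer hit an error; stop feeding frames
            }
            time = CMTimeAdd(time, step)
            index += 1
        }
        input.markAsFinished()
        let done = DispatchSemaphore(value: 0)
        writer.finishWriting { done.signal() }  // finishWriting is asynchronous
        done.wait()
    }

    // Draw a UIImage into a freshly created CVPixelBuffer.
    func pixelBuffer(from image: UIImage, size: CGSize) -> CVPixelBuffer? {
        var buffer: CVPixelBuffer?
        let attrs = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                     kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]
        CVPixelBufferCreate(kCFAllocatorDefault, Int(size.width), Int(size.height),
                            kCVPixelFormatType_32ARGB, attrs as CFDictionary, &buffer)
        guard let buf = buffer, let cg = image.cgImage else { return nil }
        CVPixelBufferLockBaseAddress(buf, [])
        defer { CVPixelBufferUnlockBaseAddress(buf, []) }
        let ctx = CGContext(data: CVPixelBufferGetBaseAddress(buf),
                            width: Int(size.width), height: Int(size.height),
                            bitsPerComponent: 8,
                            bytesPerRow: CVPixelBufferGetBytesPerRow(buf),
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
        ctx?.draw(cg, in: CGRect(origin: .zero, size: size))
        return buf
    }

    Calling writeVideo(frames: frames, frameDuration: dft, totalDuration: 15, to: outputURL) would mirror the ffmpeg command above, and since AVAssetWriter drives the device’s hardware encoder, it should comfortably beat the dynos.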