
Other articles (75)
-
Emballe médias: what is it for?
4 February 2011
This plugin is designed to manage sites that publish documents of all types.
It creates "media": a "media" is an article in the SPIP sense, created automatically when a document of any kind (audio, video, image or text) is uploaded; only a single document can be linked to a "media" article;
-
Installation in farm mode
4 February 2011
Farm mode makes it possible to host several MediaSPIP-type sites while installing the functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge, since SPIP's usual private area is no longer used.
To begin with, you must have installed the same files as the installation (...) -
Creating farms of unique websites
13 avril 2011, parMediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (4497)
-
h264_nvenc: InitializeEncoder failed: supported only with separate color plane on this architecture
4 January 2021, by gpuguy
I am learning ffmpeg for the first time. I want to see how it performs on the CPU versus the GPU when it comes to transcoding.


My machine has a GT640 GPU (the GT640 is a Kepler card) and I am running Windows 7, 64-bit.
I issue the following command:


ffmpeg -i lec_2.mp4 -c:v h264_nvenc -profile high444p -pixel_format yuv444p -preset default output.mp4



But I am getting the following errors:


[h264_nvenc @ 0000000000363a80] The selected preset is deprecated. Use p1 to p7 + -tune or fast/medium/slow.
[h264_nvenc @ 0000000000363a80] InitializeEncoder failed: invalid param (8): 444 is supported only with separate color plane on this architecture.
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
[aac @ 0000000000365980] Qavg: 18941.322
[aac @ 0000000000365980] 2 frames left in the queue on closing
Conversion failed!



My questions are:

- From this output, can I assume that NVENC is supported on my card, since it does not complain about an unsupported GPU?
- How should the above command be modified so that it works correctly?







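On the first question: most likely yes. The failure above is a parameter rejection ("invalid param (8): 444"), not the missing-device error you would get if NVENC were absent entirely. On the second question, here is a hedged sketch of the command with the 4:4:4 profile and pixel format dropped (Kepler-generation chips like the GT640 support 4:4:4 only with separate colour planes, as the log says) and one of the p1-p7 presets the deprecation warning asks for, wrapped in Python for easy inspection. The preset choice is an assumption; adjust to taste.

```python
# Sketch: same transcode without yuv444p, which Kepler NVENC only accepts
# with separate colour planes. Plain "high" profile + yuv420p should pass
# parameter validation; "p4" is an arbitrary middle choice from p1-p7.
import subprocess

def nvenc_cmd(src, dst):
    return [
        "ffmpeg", "-i", src,
        "-c:v", "h264_nvenc",
        "-profile:v", "high",   # "high444p" forces 4:4:4; plain "high" does not
        "-pix_fmt", "yuv420p",  # 4:2:0 is the safe choice on this generation
        "-preset", "p4",        # p1 = fastest ... p7 = best quality
        dst,
    ]

# subprocess.run(nvenc_cmd("lec_2.mp4", "output.mp4"), check=True)
```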

-
Is it possible to combine audio and video from ffmpeg-python without writing to files first?
22 January 2021, by nullUser
I'm using the ffmpeg-python library.


I have used the example code (https://github.com/kkroening/ffmpeg-python/tree/master/examples) to asynchronously read in and process audio and video streams. The processing is custom and not something a built-in ffmpeg command can achieve (imagine something like TensorFlow deep dreaming on both the audio and video). I then want to recombine the audio and video streams that I have created.

Currently, the only way I can see to do it is to write both streams out to separate files (as is done e.g. in this answer: How to combine The video and audio files in ffmpeg-python), then use ffmpeg to combine them afterwards. This has the major disadvantage that the result cannot be streamed, i.e. the audio and video must be completely done processing before you can start playing the combined audio/video. Is there any way to combine them without going to files as an intermediate step?


Technically, the fact that the streams were initially read in from ffmpeg is irrelevant. You may as well assume that I'm in the following situation :


def audio_stream():
    for i in range(10):
        yield bytes(44100 * 2 * 4)  # one second of audio: 44.1k sample rate, 2 channels, s32le

def video_stream():
    for i in range(10):
        yield bytes(60 * 1080 * 1920 * 3)  # one second of video: 60 fps, 1920x1080, rgb24

# how to write both streams to one output without writing each to its own file first?



I would like to use ffmpeg.concat, but this requires streams produced by ffmpeg.input, which only accepts filenames as inputs. Is there any other way? Here are the docs: https://kkroening.github.io/ffmpeg-python/.
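For what it's worth, one workaround often suggested for this situation is to feed both raw streams to a single ffmpeg process through named pipes (FIFOs), so nothing goes through a regular file and the output can in principle be consumed as it is produced. A minimal sketch, assuming a POSIX system, ffmpeg on PATH, and the stream formats from the generators above; build_cmd and combine are illustrative helper names, not ffmpeg-python API:

```python
# Sketch: mux raw audio + raw video from two generators with one ffmpeg
# process, using named pipes instead of intermediate files.
# Assumes POSIX (os.mkfifo) and ffmpeg on PATH; formats match the generators.
import os
import subprocess
import tempfile
import threading

def build_cmd(audio_fifo, video_fifo, out_path):
    # Raw inputs need explicit format/rate/geometry flags.
    return [
        "ffmpeg",
        "-f", "s32le", "-ar", "44100", "-ac", "2", "-i", audio_fifo,
        "-f", "rawvideo", "-pix_fmt", "rgb24", "-s", "1920x1080",
        "-r", "60", "-i", video_fifo,
        "-c:v", "libx264", "-c:a", "aac", out_path,
    ]

def combine(audio_stream, video_stream, out_path):
    d = tempfile.mkdtemp()
    a_fifo = os.path.join(d, "a.pcm")
    v_fifo = os.path.join(d, "v.raw")
    os.mkfifo(a_fifo)
    os.mkfifo(v_fifo)

    def feed(path, chunks):
        # open() blocks until ffmpeg opens the other end of the FIFO
        with open(path, "wb") as f:
            for chunk in chunks:
                f.write(chunk)

    proc = subprocess.Popen(build_cmd(a_fifo, v_fifo, out_path))
    for path, gen in ((a_fifo, audio_stream()), (v_fifo, video_stream())):
        threading.Thread(target=feed, args=(path, gen), daemon=True).start()
    return proc.wait()
```

With the generators above, combine(audio_stream, video_stream, "out.mp4") should produce a single muxed file; pointing out_path at a pipe or streaming URL is what would make live playback possible in principle.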

-
How to get an ffmpeg configuration similar to YouTube's for 480p and 1080p video? I have a function but the output quality is too low
14 April 2021, by prehistoricbeast
Hey guys, I am learning to develop a website that converts videos to YouTube quality (or close enough) at 480p and 1080p. I am not very familiar with ffmpeg and am struggling with its documentation.


I have this function:


video_480p = subprocess.call([
    FFMPEG_PATH, '-i', input_file,
    '-codec:v', 'libx264', '-crf', '20', '-preset', 'medium',
    '-b:v', '1000k', '-maxrate', '1000k', '-bufsize', '2000k',
    '-vf', 'scale=-2:480',
    '-codec:a', 'aac', '-b:a', '128k', '-strict', '-2',
    file_480p])



Similarly, I have another function:


new_video = subprocess.call([
    FFMPEG_PATH, '-i', input_file,
    '-codec:v', 'libx264', '-crf', '20', '-preset', 'medium',
    '-b:v', '1000k', '-maxrate', '1000k', '-bufsize', '2000k',
    '-vf', 'scale=-2:1080',
    '-codec:a', 'aac', '-b:a', '128k', '-strict', '-2',
    output_file])



Both of these functions transcode the video but return low-quality output. Can anyone provide the right settings for 480p and 1080p, similar or close to YouTube quality?


Thanks
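Not a full answer, but one likely culprit stands out: both calls pair -crf 20 with -maxrate 1000k, and the hard cap wins, so even the 1080p encode is squeezed into roughly 1 Mbps, far below the ~8 Mbps YouTube suggests for 1080p uploads. A hedged sketch that keeps CRF in charge and raises the ceiling per resolution; the maxrate numbers are rough guesses in the spirit of YouTube's upload recommendations, and -strict -2 is no longer needed for the built-in AAC encoder:

```python
# Sketch: CRF-driven encode with a per-resolution bitrate ceiling instead of
# a flat 1000k cap (which overrides CRF and starves the 1080p encode).
# The maxrate values are rough, in the spirit of YouTube's upload guidance.
import subprocess

def transcode(src, dst, height, maxrate_k):
    return [  # argument list; run with subprocess.call(transcode(...))
        "ffmpeg", "-i", src,
        "-c:v", "libx264", "-crf", "20", "-preset", "medium",
        "-vf", f"scale=-2:{height}",
        "-maxrate", f"{maxrate_k}k", "-bufsize", f"{2 * maxrate_k}k",
        "-c:a", "aac", "-b:a", "128k",
        dst,
    ]

# subprocess.call(transcode(input_file, file_480p, 480, 2500))    # ~2.5 Mbps cap
# subprocess.call(transcode(input_file, output_file, 1080, 8000)) # ~8 Mbps cap
```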