
Other articles (92)
-
Accepted formats
28 January 2010, by
The following commands provide information about the formats and codecs supported by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
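For example, the output of these commands can be filtered to check for a specific codec or format. A minimal sketch, assuming ffmpeg is on the PATH:

```shell
# Check whether the local ffmpeg build supports a given codec or format.
# -hide_banner just suppresses the version/configuration header.
ffmpeg -hide_banner -codecs | grep -i h264
ffmpeg -hide_banner -formats | grep -i flv
```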
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use:
h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
m4v: raw MPEG-4 video format
flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
Theora
wmv:
Possible output video formats
Initially, we (...)
-
Contribute to translation
13 April 2011
You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, which allows it to spread to new linguistic communities.
To do this, we use the SPIP translation interface, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
MediaSPIP is currently available in French and English (...)
-
Keeping control of your media in your hands
13 April 2011, by
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)
On other sites (7059)
-
swscale: use 16-bit intermediate precision for RGB/XYZ conversion
16 December 2024, by Niklas Haas
swscale: use 16-bit intermediate precision for RGB/XYZ conversion
The current logic uses 12-bit linear light math, which is woefully insufficient
and leads to nasty posterization artifacts. This patch simply switches the
internal logic to 16-bit precision.
This raises the memory requirement of these tables from 32 kB to 272 kB.
All relevant FATE tests updated for improved accuracy.
Fixes: #4829
Signed-off-by: Niklas Haas <git@haasn.dev>
Sponsored-by: Sovereign Tech Fund
- [DH] libswscale/swscale.c
- [DH] libswscale/swscale_internal.h
- [DH] libswscale/utils.c
- [DH] tests/ref/fate/filter-pixdesc-xyz12be
- [DH] tests/ref/fate/filter-pixdesc-xyz12le
- [DH] tests/ref/fate/filter-pixfmts-copy
- [DH] tests/ref/fate/filter-pixfmts-crop
- [DH] tests/ref/fate/filter-pixfmts-field
- [DH] tests/ref/fate/filter-pixfmts-fieldorder
- [DH] tests/ref/fate/filter-pixfmts-hflip
- [DH] tests/ref/fate/filter-pixfmts-il
- [DH] tests/ref/fate/filter-pixfmts-null
- [DH] tests/ref/fate/filter-pixfmts-scale
- [DH] tests/ref/fate/filter-pixfmts-transpose
- [DH] tests/ref/fate/filter-pixfmts-vflip
- [DH] tests/ref/pixfmt/gbrp-xyz12le
- [DH] tests/ref/pixfmt/gbrp10-xyz12le
- [DH] tests/ref/pixfmt/gbrp12-xyz12le
- [DH] tests/ref/pixfmt/rgb24-xyz12le
- [DH] tests/ref/pixfmt/rgb48-xyz12le
- [DH] tests/ref/pixfmt/xyz12le
- [DH] tests/ref/pixfmt/yuv444p-xyz12le
- [DH] tests/ref/pixfmt/yuv444p10-xyz12le
- [DH] tests/ref/pixfmt/yuv444p12-xyz12le
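As a rough illustration of why the intermediate precision matters (a toy model, not the libswscale code itself): the worst-case error of storing a linear-light value in an n-bit intermediate is half a quantization step, so going from 12 to 16 bits shrinks it by roughly (2^16-1)/(2^12-1) ≈ 16×:

```shell
# Toy model: measure the worst-case round-trip error of quantizing a
# linear-light ramp to a 12-bit vs. a 16-bit intermediate value.
for bits in 12 16; do
  awk -v bits="$bits" 'BEGIN {
    scale = 2^bits - 1
    max = 0
    for (i = 0; i <= 10000; i++) {
      v = i / 10000                      # linear-light sample in [0, 1]
      q = int(v * scale + 0.5) / scale   # round to the nearest n-bit code
      e = (v > q) ? v - q : q - v
      if (e > max) max = e
    }
    printf "%d-bit worst-case error: %.2e\n", bits, max
  }'
done
```

At 12 bits the worst-case error sits around 1.2e-4, large enough to produce visible banding once the nonlinear transfer function amplifies differences in the dark range; at 16 bits it drops to about 7.6e-6.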
-
FFMPEG merge audio tracks into one and encode using NVENC
5 November 2019, by L0Lock
I often shoot films with several audio inputs, resulting in video files with multiple audio tracks that are supposed to play together at the same time.
I usually edit those files, and there I do whatever I want with them; but sometimes I would also like to just send the files online right away without editing, in which case I would enjoy FFMPEG's fast, simple, quality encoding.
But here's the catch: most online video streaming services don't support multiple audio tracks, so I have to merge them into one so we can hear everything.
I also want to upscale the video (it's a little trick to get the streaming service to trigger its higher-quality encoding).
And finally, since it's just an encoding meant to be shared on a streaming service, I prefer a fast and light encoding over quality, which HEVC NVENC is good for.
So far I've tried using the amix filter, and I use the Lanczos filter for upscaling, which seems to give a better result in my case.
The input file is quite simple:
Stream 0:0: video track
Stream 0:1: main audio recording
Stream 0:2: secondary audio recording
The audio tracks are at the correct volume, duration and position in time, so the only thing I need is really just to turn them into one track:
ffmpeg -i "ow_raw.mp4" -filter_complex "[0:1][0:2]amix=inputs=2[a]" -map "0:0" -map "[a]" -c:v hevc_nvenc -preset fast -level 4.1 -pix_fmt yuv420p -vf scale=2560:1:flags=lanczos "ow_share.mkv" -y
But it doesn't work:
ffmpeg version N-94905-g8efc9fcc56 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 9.1.1 (GCC) 20190807
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf
libavutil 56. 35.100 / 56. 35.100
libavcodec 58. 56.101 / 58. 56.101
libavformat 58. 32.104 / 58. 32.104
libavdevice 58. 9.100 / 58. 9.100
libavfilter 7. 58.102 / 7. 58.102
libswscale 5. 6.100 / 5. 6.100
libswresample 3. 6.100 / 3. 6.100
libpostproc 55. 6.100 / 55. 6.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'ow_raw.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: isommp42
creation_time : 2019-11-02T16:43:32.000000Z
date : 2019
Duration: 00:15:49.79, start: 0.000000, bitrate: 30194 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, smpte170m/smpte170m/bt470m), 1920x1080 [SAR 1:1 DAR 16:9], 29805 kb/s, 60 fps, 60 tbr, 90k tbn, 120 tbc (default)
Metadata:
creation_time : 2019-11-02T16:43:32.000000Z
handler_name : VideoHandle
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 196 kb/s (default)
Metadata:
creation_time : 2019-11-02T16:43:32.000000Z
handler_name : SoundHandle
Stream #0:2(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 184 kb/s (default)
Metadata:
creation_time : 2019-11-02T16:43:32.000000Z
handler_name : SoundHandle
Stream mapping:
Stream #0:1 (aac) -> amix:input0 (graph 0)
Stream #0:2 (aac) -> amix:input1 (graph 0)
Stream #0:0 -> #0:0 (h264 (native) -> hevc (hevc_nvenc))
amix (graph 0) -> Stream #0:1 (libvorbis)
Press [q] to stop, [?] for help
[hevc_nvenc @ 000002287e34a040] InitializeEncoder failed: invalid param (8)
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Conversion failed!
-
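For reference, the error log above points at the likely culprit: scale=2560:1 requests a 2560×1-pixel output, which hevc_nvenc rejects as an invalid dimension. A hedged sketch of a fix (untested against this exact file) is to move the scaling into the same -filter_complex graph and use -2 for the height, so ffmpeg derives an even height that preserves the aspect ratio:

```shell
# Scale and mix in one filter graph; scale=2560:-2 derives an even height
# from the aspect ratio (even dimensions are required by most encoders).
ffmpeg -i "ow_raw.mp4" \
  -filter_complex "[0:0]scale=2560:-2:flags=lanczos[v];[0:1][0:2]amix=inputs=2[a]" \
  -map "[v]" -map "[a]" \
  -c:v hevc_nvenc -preset fast -level 4.1 -pix_fmt yuv420p \
  "ow_share.mkv" -y
```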
Intercept ffmpeg stdout with Process() in Swift
3 April 2021, by TAFKAS
I have to admit that I'm quite inexperienced in Swift, but nonetheless I'm trying to build an OS X app that converts a video to a particular format and size with ffmpeg, using Swift.
My goal is to have the ffmpeg stdout in a separate window to show the progress to the user.
Before posting here, I've read everything that exists on the internet about the subject :-) and I haven't yet found a solution to my problem of getting no output whatsoever in my textView, only in the Xcode console.
I've found this post:
Real time NSTask output to NSTextView with Swift
which seems very promising but isn't working anyway. I've tried using the /bin/sh command from that example with the provided arguments in my code, and it works like a charm. Probably it's me and my inexperience, but I think it's something related to the way ffmpeg outputs its progress. It seems that even after removing the -v and -stat options, the ffmpeg command still outputs to the Xcode console but not to my TextField.
I hope that someone can shed some light on my Swift coding darkness.
Thanks in advance


STEFANO


UPDATE - SOLVED


I had a EUREKA moment: ffmpeg writes its progress to stderr, so I assigned the pipe to standardError and voilà, it worked:


import Cocoa

var videoLoadURL = ""

class ViewController: NSViewController {
    @IBOutlet weak var filenameLabel: NSTextField!
    @IBOutlet weak var progressText: NSTextField!
    @IBOutlet weak var progressIndicator: NSProgressIndicator!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        progressIndicator.isHidden = true
    }

    @IBAction func loadVIdeo(_ sender: NSButton) {
        let openPanel = NSOpenPanel()
        openPanel.allowsMultipleSelection = false
        openPanel.canChooseFiles = true
        openPanel.runModal()

        if let path = openPanel.url?.lastPathComponent {
            filenameLabel.stringValue = path
        }

        if let path = openPanel.url?.absoluteURL {
            videoLoadURL = path.absoluteString
        }
    }

    @IBAction func convert(_ sender: NSButton) {
        progressIndicator.isHidden = false
        let savePanel = NSSavePanel()
        var videoSaveURL: String = ""
        savePanel.nameFieldStringValue = "Converted_\(filenameLabel.stringValue).mp4"
        savePanel.runModal()

        if let path = savePanel.url?.absoluteURL {
            videoSaveURL = path.absoluteString
        }

        let startLaunch: CFTimeInterval = CACurrentMediaTime()
        progressIndicator.startAnimation(self)

        let task = Process()
        task.launchPath = "/usr/local/bin/ffmpeg"
        task.arguments = ["-i", "\(videoLoadURL)", "-y", "-g", "1", "-crf", "29", "-b", "0", "-pix_fmt", "yuv420p", "-strict", "-2", "\(videoSaveURL)"]

        // ffmpeg writes its progress to stderr, so the pipe must be attached
        // to standardError rather than standardOutput.
        let pipe = Pipe()
        task.standardError = pipe
        let outHandle = pipe.fileHandleForReading
        outHandle.waitForDataInBackgroundAndNotify()

        var observer1: NSObjectProtocol!
        observer1 = NotificationCenter.default.addObserver(forName: NSNotification.Name.NSFileHandleDataAvailable, object: outHandle, queue: nil, using: { notification -> Void in
            let data = outHandle.availableData
            if data.count > 0 {
                if let str = NSString(data: data, encoding: String.Encoding.utf8.rawValue) {
                    self.progressText.stringValue = str as String
                }
                // Re-arm the notification for the next chunk of output.
                outHandle.waitForDataInBackgroundAndNotify()
            } else {
                print("EOF on stderr from process")
                NotificationCenter.default.removeObserver(observer1)
            }
        })

        var observer2: NSObjectProtocol!
        observer2 = NotificationCenter.default.addObserver(forName: Process.didTerminateNotification, object: task, queue: nil, using: { notification -> Void in
            print("terminated")
            NotificationCenter.default.removeObserver(observer2)
        })

        do {
            try task.run()
        } catch {
            print("error")
        }

        task.waitUntilExit()

        let elapsedTime: CFTimeInterval = CACurrentMediaTime() - startLaunch
        NSSound.beep()
        progressIndicator.stopAnimation(self)
        progressIndicator.isHidden = true

        if task.terminationStatus == 0 {
            let alertOK = NSAlert()
            alertOK.messageText = "Tutto bene"
            alertOK.addButton(withTitle: "OK")
            alertOK.addButton(withTitle: "Cancel")
            alertOK.informativeText = "Conversione eseguita in \(elapsedTime) secondi"
            alertOK.runModal()
        } else {
            let alertOK = NSAlert()
            alertOK.messageText = "Errore"
            alertOK.addButton(withTitle: "OK")
            alertOK.informativeText = "La conversione è fallita"
            alertOK.runModal()
        }
    }
}