Advanced search

Media (0)


No media matching your criteria is available on this site.

Other articles (92)

  • Les formats acceptés

    28 January 2010, by

    The following commands provide information about the formats and codecs handled by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    As a first step, we (...)
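
    As a quick illustration of the commands above, here is a minimal sketch of how one might query a local ffmpeg build for specific format and codec support (the grep patterns and the libx264 encoder name are only examples and may not be present in every build):

    # List every codec this build knows about, filtered for H.264
    ffmpeg -codecs | grep -i h264
    # List the supported (de)muxers, filtered for the FLV container
    ffmpeg -formats | grep -i flv
    # Show the detailed options of a particular encoder, if it is compiled in
    ffmpeg -h encoder=libx264

    The output of -codecs and -formats depends entirely on the configure flags used when ffmpeg was built, which is why the excerpt points at the local installation.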

  • Contribute to translation

    13 April 2011

    You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, which allows it to spread to new linguistic communities.
    To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
    MediaSPIP is currently available in French and English (...)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (7059)

  • swscale : use 16-bit intermediate precision for RGB/XYZ conversion

    16 December 2024, by Niklas Haas
    swscale : use 16-bit intermediate precision for RGB/XYZ conversion
    

    The current logic uses 12-bit linear light math, which is woefully insufficient
    and leads to nasty posterization artifacts. This patch simply switches the
    internal logic to 16-bit precision.

    This raises the memory requirement of these tables from 32 kB to 272 kB.

    All relevant FATE tests updated for improved accuracy.

    Fixes: #4829
    Signed-off-by: Niklas Haas <git@haasn.dev>
    Sponsored-by: Sovereign Tech Fund

    • [DH] libswscale/swscale.c
    • [DH] libswscale/swscale_internal.h
    • [DH] libswscale/utils.c
    • [DH] tests/ref/fate/filter-pixdesc-xyz12be
    • [DH] tests/ref/fate/filter-pixdesc-xyz12le
    • [DH] tests/ref/fate/filter-pixfmts-copy
    • [DH] tests/ref/fate/filter-pixfmts-crop
    • [DH] tests/ref/fate/filter-pixfmts-field
    • [DH] tests/ref/fate/filter-pixfmts-fieldorder
    • [DH] tests/ref/fate/filter-pixfmts-hflip
    • [DH] tests/ref/fate/filter-pixfmts-il
    • [DH] tests/ref/fate/filter-pixfmts-null
    • [DH] tests/ref/fate/filter-pixfmts-scale
    • [DH] tests/ref/fate/filter-pixfmts-transpose
    • [DH] tests/ref/fate/filter-pixfmts-vflip
    • [DH] tests/ref/pixfmt/gbrp-xyz12le
    • [DH] tests/ref/pixfmt/gbrp10-xyz12le
    • [DH] tests/ref/pixfmt/gbrp12-xyz12le
    • [DH] tests/ref/pixfmt/rgb24-xyz12le
    • [DH] tests/ref/pixfmt/rgb48-xyz12le
    • [DH] tests/ref/pixfmt/xyz12le
    • [DH] tests/ref/pixfmt/yuv444p-xyz12le
    • [DH] tests/ref/pixfmt/yuv444p10-xyz12le
    • [DH] tests/ref/pixfmt/yuv444p12-xyz12le
  • FFMPEG merge audio tracks into one and encode using NVENC

    5 November 2019, by L0Lock

    I often shoot films with several audio inputs, resulting in video files with multiple audio tracks that are supposed to play together at the same time.
    I usually edit those files, doing whatever I want with them, but sometimes I would also like to send the files online right away without editing, in which case I would enjoy FFmpeg's fast, simple, high-quality encoding.

    But here's the catch: most online video streaming services don't support multiple audio tracks, so I have to merge them into one so we can hear everything.

    I also want to upscale the video (a little trick to get the streaming service to trigger its higher-quality encoding).
    And finally, since this encode is only meant to be shared on a streaming service, I prefer a fast and light encode over quality, which HEVC NVENC is good for.

    So far I've tried the amix filter, and I use the Lanczos scaler for upscaling, which seems to give a better result in my case.

    The input file is quite simple:

    Stream 0:0: video track
    Stream 0:1: main audio recording
    Stream 0:2: secondary audio recording

    The audio tracks already have the correct volume, duration, and position in time, so the only thing I need is to turn them into one track:

    ffmpeg -i "ow_raw.mp4" -filter_complex "[0:1][0:2]amix=inputs=2[a]" -map "0:0" -map "[a]" -c:v hevc_nvenc -preset fast -level 4.1 -pix_fmt yuv420p -vf scale=2560:1:flags=lanczos "ow_share.mkv" -y

    But it doesn't work:

    ffmpeg version N-94905-g8efc9fcc56 Copyright (c) 2000-2019 the FFmpeg developers
     built with gcc 9.1.1 (GCC) 20190807
     configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf
     libavutil      56. 35.100 / 56. 35.100
     libavcodec     58. 56.101 / 58. 56.101
     libavformat    58. 32.104 / 58. 32.104
     libavdevice    58.  9.100 / 58.  9.100
     libavfilter     7. 58.102 /  7. 58.102
     libswscale      5.  6.100 /  5.  6.100
     libswresample   3.  6.100 /  3.  6.100
     libpostproc    55.  6.100 / 55.  6.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'ow_raw.mp4':
     Metadata:
       major_brand     : mp42
       minor_version   : 0
       compatible_brands: isommp42
       creation_time   : 2019-11-02T16:43:32.000000Z
       date            : 2019
     Duration: 00:15:49.79, start: 0.000000, bitrate: 30194 kb/s
       Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, smpte170m/smpte170m/bt470m), 1920x1080 [SAR 1:1 DAR 16:9], 29805 kb/s, 60 fps, 60 tbr, 90k tbn, 120 tbc (default)
       Metadata:
         creation_time   : 2019-11-02T16:43:32.000000Z
         handler_name    : VideoHandle
       Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 196 kb/s (default)
       Metadata:
         creation_time   : 2019-11-02T16:43:32.000000Z
         handler_name    : SoundHandle
       Stream #0:2(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 184 kb/s (default)
       Metadata:
         creation_time   : 2019-11-02T16:43:32.000000Z
         handler_name    : SoundHandle
    Stream mapping:
     Stream #0:1 (aac) -> amix:input0 (graph 0)
     Stream #0:2 (aac) -> amix:input1 (graph 0)
     Stream #0:0 -> #0:0 (h264 (native) -> hevc (hevc_nvenc))
     amix (graph 0) -> Stream #0:1 (libvorbis)
    Press [q] to stop, [?] for help
    [hevc_nvenc @ 000002287e34a040] InitializeEncoder failed: invalid param (8)
    Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
    Conversion failed!
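
    A likely immediate cause of the failure above is the scale filter: scale=2560:1 requests an output that is 2560 pixels wide but only 1 pixel high, which is below what hevc_nvenc accepts and matches the "InitializeEncoder failed: invalid param" error. Below is a hedged sketch of one possible fix (filenames are the asker's, and the 2560-pixel width is the stated target): do the scaling inside -filter_complex instead of -vf, and let a height of -2 preserve the aspect ratio while keeping it divisible by 2:

    ffmpeg -i "ow_raw.mp4" -filter_complex "[0:0]scale=2560:-2:flags=lanczos[v];[0:1][0:2]amix=inputs=2[a]" -map "[v]" -map "[a]" -c:v hevc_nvenc -preset fast -pix_fmt yuv420p "ow_share.mkv" -y

    Two further notes: -level 4.1 is dropped here because a 2560x1440 stream at 60 fps exceeds what HEVC level 4.1 allows (level 5 or higher would be needed), and amix scales its inputs down to avoid clipping, so the merged track may come out quieter than the originals.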
  • Intercept ffmpeg stdout with Process() in Swift

    3 April 2021, by TAFKAS

    I have to admit that I'm quite inexperienced in Swift, but nonetheless I'm trying to build an OS X app that converts a video to a particular format and size with ffmpeg, using Swift.
    My goal is to show the ffmpeg output in a separate window so the user can follow the progress.
    Before posting here, I've read everything I could find on the internet about the subject :-) and I have not yet found a solution to my problem: no output whatsoever appears in my textView, only in the Xcode console.
    I've found this post:
    Real time NSTask output to NSTextView with Swift
    which seems very promising, but it is not working for me. I've tried using the command /bin/sh from that example with its provided arguments in my code, and it works like a charm. It's probably me and my inexperience, but I think it is something related to the way ffmpeg outputs its progress that won't work. Even after removing the -v and -stats options, the ffmpeg command still outputs to the Xcode console but not to my TextField.
    I hope someone can shed a light on my Swift coding darkness.
    Thanks in advance


    STEFANO


    UPDATE - SOLVED


    I had a eureka moment: I assigned the pipe to standardError and voilà, it worked. (ffmpeg writes its progress and statistics to stderr rather than stdout, which is why capturing standardError does the trick.)


    import Cocoa

    var videoLoadURL = ""

    class ViewController: NSViewController {
        @IBOutlet weak var filenameLabel: NSTextField!
        @IBOutlet weak var progressText: NSTextField!
        @IBOutlet weak var progressIndicator: NSProgressIndicator!

        override func viewDidLoad() {
            super.viewDidLoad()
            // Do any additional setup after loading the view.
            progressIndicator.isHidden = true
        }

        @IBAction func loadVIdeo(_ sender: NSButton) {
            // Let the user pick the source video and remember its URL.
            let openPanel = NSOpenPanel()
            openPanel.allowsMultipleSelection = false
            openPanel.canChooseFiles = true
            openPanel.runModal()

            if let path = openPanel.url?.lastPathComponent {
                filenameLabel.stringValue = path
            }
            if let path = openPanel.url?.absoluteURL {
                videoLoadURL = path.absoluteString
            }
        }

        @IBAction func convert(_ sender: NSButton) {
            progressIndicator.isHidden = false
            let savePanel = NSSavePanel()
            var videoSaveURL: String = ""
            savePanel.nameFieldStringValue = "Converted_\(filenameLabel.stringValue).mp4"
            savePanel.runModal()

            if let path = savePanel.url?.absoluteURL {
                videoSaveURL = path.absoluteString
            }

            let startLaunch: CFTimeInterval = CACurrentMediaTime()
            progressIndicator.startAnimation(self)

            let task = Process()
            task.launchPath = "/usr/local/bin/ffmpeg"
            task.arguments = ["-i", "\(videoLoadURL)", "-y", "-g", "1", "-crf", "29", "-b", "0", "-pix_fmt", "yuv420p", "-strict", "-2", "\(videoSaveURL)"]

            // ffmpeg writes its progress to stderr, so capture standardError (not standardOutput).
            let pipe = Pipe()
            task.standardError = pipe
            let outHandle = pipe.fileHandleForReading
            outHandle.waitForDataInBackgroundAndNotify()

            // Forward each chunk of ffmpeg output to the text field as it arrives.
            var observer1: NSObjectProtocol!
            observer1 = NotificationCenter.default.addObserver(forName: NSNotification.Name.NSFileHandleDataAvailable, object: outHandle, queue: nil, using: { notification -> Void in
                let data = outHandle.availableData
                if data.count > 0 {
                    if let str = NSString(data: data, encoding: String.Encoding.utf8.rawValue) {
                        self.progressText.stringValue = str as String
                    }
                    outHandle.waitForDataInBackgroundAndNotify()
                } else {
                    print("EOF on stdout from process")
                    NotificationCenter.default.removeObserver(observer1)
                }
            })

            var observer2: NSObjectProtocol!
            observer2 = NotificationCenter.default.addObserver(forName: Process.didTerminateNotification, object: task, queue: nil, using: { notification -> Void in
                print("terminated")
                NotificationCenter.default.removeObserver(observer2)
            })

            do {
                try task.run()
            } catch {
                print("error")
            }

            task.waitUntilExit()

            let elapsedTime: CFTimeInterval = CACurrentMediaTime() - startLaunch
            NSSound.beep()
            progressIndicator.stopAnimation(self)
            progressIndicator.isHidden = true

            if task.terminationStatus == 0 {
                let alertOK = NSAlert()
                alertOK.messageText = "Tutto bene"
                alertOK.addButton(withTitle: "OK")
                alertOK.addButton(withTitle: "Cancel")
                alertOK.informativeText = "Conversione eseguita in \(elapsedTime) secondi"
                alertOK.runModal()
            } else {
                let alertOK = NSAlert()
                alertOK.messageText = "Errore"
                alertOK.addButton(withTitle: "OK")
                alertOK.informativeText = "La conversione è fallita"
                alertOK.runModal()
            }
        }


    }
