Advanced search

Media (91)

Other articles (60)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

        Distribution | Version name         | Version number
        Debian       | Squeeze              | 6.x.x
        Debian       | Wheezy               | 7.x.x
        Debian       | Jessie               | 8.x.x
        Ubuntu       | The Precise Pangolin | 12.04 LTS
        Ubuntu       | The Trusty Tahr      | 14.04

    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

  • User profiles

    12 April 2011

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
    The user can also edit their profile from their author page: a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To enable support for new languages, go to the "Administrer" (administration) section of the site.
    From there, the navigation menu gives access to a "Gestion des langues" (language management) section where support for new languages can be activated.
    Each newly added language can still be deactivated as long as no object has been created in that language. Once one has, the language becomes greyed out in the configuration and (...)

On other sites (4584)

  • Sidekiq not processing video metadata using streamio-ffmpeg in Rails app

    22 July 2017, by Arnold

    I am trying to process different video resolutions in the background using carrierwave_backgrounder, carrierwave-video and Sidekiq. Everything works well (all the versions are created in the background) except for the metadata captured using streamio-ffmpeg. I am completely stuck and can't figure out why the metadata is not being processed.

    Below is my sample code:

    video_uploader.rb

    require 'streamio-ffmpeg'

    class VideoUploader < CarrierWave::Uploader::Base
      include CarrierWave::Video  # for your video processing
      include CarrierWave::Video::Thumbnailer
      include ::CarrierWave::Backgrounder::Delay

      # Include RMagick or MiniMagick support:
      # include CarrierWave::RMagick
      # include CarrierWave::MiniMagick

      # Choose what kind of storage to use for this uploader:
      storage :file
      # storage :fog

      process :save_metadata

      # Override the directory where uploaded files will be stored.
      # This is a sensible default for uploaders that are meant to be mounted:
      def store_dir
        default_path = "/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
        # if FLAVOR == "uglive"
        #   default_path = "/uglive#{default_path}"
        # end
        "#{VIDEO_STORAGE}#{default_path}"
      end

      version :thumb do
        process thumbnail: [{ format: 'png', quality: 10, size: 360, strip: false,
                              square: false, logger: Rails.logger }]
        def full_filename(for_file)
          png_name for_file, version_name
        end
      end

      def png_name(for_file, version_name)
        # remove all accents
        I18n.transliterate(%Q{#{version_name}_#{for_file.chomp(File.extname(for_file))}.png})
      end

      # save the video metadata in the model
      def save_metadata
        video = FFMPEG::Movie.new(file.file)
        if video.valid?
          model.duration = video.duration       # duration of the video in seconds
          model.size = video.size               # size in bytes
          model.video_codec = video.video_codec # "h264"
          model.width = video.width             # 640 (width of the video in pixels)
          model.height = video.height           # 480 (height of the video in pixels)
          model.audio_codec = video.audio_codec # "aac"
        end
      end

      # Different video resolutions
      version :res_480p, do_not_delay: true do
        process encode_video: [:mp4, resolution: "720x480"]
      end

      version :res_720p do
        process encode_video: [:mp4, resolution: "1280x720"]
      end

      version :res_360p do
        process encode_video: [:mp4, resolution: "640x360"]
      end

      version :res_240p do
        process encode_video: [:mp4, resolution: "426x240"]
      end
    end

    In video.rb I call mount_uploader and process_in_background (from the carrierwave_backgrounder gem):

    mount_uploader :file, VideoUploader
    process_in_background :file

    When I run bundle exec sidekiq -q carrierwave, the versions are created in the background, but the save_metadata method is never processed, so duration, size, width, height, video_codec and audio_codec are nil in the Video model.

    I have been stuck on this the whole day. Any help would be highly welcome.
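    A possible explanation, offered as a hedged sketch rather than a confirmed diagnosis: with carrierwave_backgrounder, process callbacks like save_metadata run inside the Sidekiq worker, after the record has already been saved, so plain attribute assignment on `model` is never written back to the database. One pattern is to persist the values explicitly. `FakeMovie` and `FakeVideo` below are hypothetical stand-ins for `FFMPEG::Movie` and the ActiveRecord model, so the sketch runs without ffmpeg or Rails installed:

```ruby
require 'ostruct'

# Hypothetical stand-in for FFMPEG::Movie with the same reader methods.
FakeMovie = Struct.new(:duration, :size, :video_codec, :width, :height, :audio_codec) do
  def valid?
    true
  end
end

# Hypothetical stand-in for the mounted model; a real app would use the
# ActiveRecord Video model, whose update_columns writes straight to the row.
class FakeVideo < OpenStruct
  def update_columns(attrs)
    attrs.each { |k, v| self[k] = v }
  end
end

def save_metadata(model, movie)
  return unless movie.valid?
  # Persist explicitly: assigning model.duration = ... in the worker is
  # lost because the record was saved before the background job ran.
  model.update_columns(
    duration:    movie.duration,
    size:        movie.size,
    video_codec: movie.video_codec,
    width:       movie.width,
    height:      movie.height,
    audio_codec: movie.audio_codec
  )
end
```

    In the real uploader the same idea would be `model.update_columns(duration: video.duration, ...)` inside save_metadata, bypassing callbacks so the values survive.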

  • How can I re-encode a video to match another's codec exactly?

    24 January 2020, by Stephen Schrauger

    When I’m on vacation, I usually use our camcorder to record videos. Since they’re all the same format, I can use ffmpeg to concat them into one large, smooth video without re-encoding.

    However, sometimes I will use a phone or other camera to record a video (if the camcorder ran out of space/battery or was left at a hotel).

    I’d like to determine the codec, framerate, etc. used by my camcorder and use those parameters to convert the phone videos into the same format. That way, I will be able to concatenate all the videos without re-encoding the camcorder videos.

    Using ffprobe, I found my camcorder has this encoding:

     Input #0, mpegts, from 'camcorderfile.MTS':
     Duration: 00:00:09.54, start: 1.936367, bitrate: 24761 kb/s
     Program 1
       Stream #0:0[0x1011]: Video: h264 (High) (HDPR / 0x52504448), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc
       Stream #0:1[0x1100]: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, fltp, 256 kb/s
       Stream #0:2[0x1200]: Subtitle: hdmv_pgs_subtitle ([144][0][0][0] / 0x0090), 1920x1080

    The phone (iPhone 5s) encoding is:

     Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'mov.MOV':
     Metadata:
       major_brand     : qt  
       minor_version   : 0
       compatible_brands: qt  
       creation_time   : 2017-01-02T03:04:05.000000Z
       com.apple.quicktime.location.ISO6709: +12.3456-789.0123+456.789/
       com.apple.quicktime.make: Apple
       com.apple.quicktime.model: iPhone 5s
       com.apple.quicktime.software: 10.2.1
       com.apple.quicktime.creationdate: 2017-01-02T03:04:05-0700
     Duration: 00:00:14.38, start: 0.000000, bitrate: 11940 kb/s
       Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080, 11865 kb/s, 29.98 fps, 29.97 tbr, 600 tbn, 1200 tbc (default)
       Metadata:
         creation_time   : 2017-01-02T03:04:05.000000Z
         handler_name    : Core Media Data Handler
         encoder         : H.264
       Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 63 kb/s (default)
       Metadata:
         creation_time   : 2017-01-02T03:04:05.000000Z
         handler_name    : Core Media Data Handler
       Stream #0:2(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
       Metadata:
         creation_time   : 2017-01-02T03:04:05.000000Z
         handler_name    : Core Media Data Handler
       Stream #0:3(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
       Metadata:
         creation_time   : 2017-01-02T03:04:05.000000Z
         handler_name    : Core Media Data Handler

    I’m presuming that ffmpeg will automatically take any acceptable video format, and that I only need to figure out the output settings. I think I need to use -s 1920x1080 and -pix_fmt yuv420p for the output, but what other flags do I need in order to make the phone video match the camcorder video's encoding?

    Can I get some pointers as to how I can translate the ffprobe output into the flags I need to give to ffmpeg?

    Edit: Added the entire Input #0 for both media files.
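    As a sketch of how the ffprobe output above maps to flags: h264 (High) suggests -c:v libx264 -profile:v high, 59.94 fps is -r 60000/1001, the ac3 stereo 256 kb/s 48000 Hz audio maps to -c:a ac3 -b:a 256k -ar 48000 -ac 2, and .MTS files are MPEG transport streams (-f mpegts). This is only a starting point under those assumptions; GOP size, H.264 level and video bitrate may still need tuning before lossless concat works. A small helper that assembles the command:

```ruby
# Build an ffmpeg command whose output parameters mirror the camcorder's
# ffprobe readout above. The flag choices are assumptions inferred from
# that readout, not a verified one-to-one translation.
def matching_ffmpeg_command(input, output)
  flags = [
    '-c:v libx264',     # video stream is h264
    '-profile:v high',  # "(High)" in the ffprobe line
    '-pix_fmt yuv420p', # yuv420p(progressive)
    '-s 1920x1080',     # same frame size
    '-r 60000/1001',    # 59.94 fps
    '-c:a ac3',         # audio codec ac3
    '-b:a 256k',        # 256 kb/s
    '-ar 48000',        # 48000 Hz
    '-ac 2',            # stereo
    '-f mpegts'         # .MTS container is an MPEG transport stream
  ]
  "ffmpeg -i #{input} #{flags.join(' ')} #{output}"
end

puts matching_ffmpeg_command('mov.MOV', 'converted.mts')
```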

  • Merge selected separate videos and pass the variable into an ffmpeg merge command

    28 June 2017, by Fearhunter

    I am using the following method to cut a stream into pieces and save mp4 files to my hard disk.

    Method:

    public Process SliceStream()
           {
               var url = model.Url;
               var cuttime = model.Cuttime;
               var output = model.OutputPath;

               return Process.Start(new ProcessStartInfo
                   {
                       FileName = "ffmpeg.exe",
                       Arguments = $"-i \"{url}\" -c copy -flags +global_header -f segment -segment_time \"{cuttime}\" -segment_format_options movflags=+faststart -reset_timestamps 1 \"{output}\"",
                       UseShellExecute = false,
                       RedirectStandardOutput = true
                   });

           }

    The var variables are the user's inputs on the front end. This method works fine.

    Now I have the separate video files loaded in a list on the front end. I want to merge these video files into one. I was looking at the following link:

    Concatenate two mp4 files using ffmpeg

    I see there are 3 ways to merge my separate video files. What is the best way to merge my videos? I was thinking about this scenario:

    1: the user selects the videos he wants to merge
    2: the user clicks the merge button
    3: I can use this array variable in the ffmpeg command in reverse order

    ffmpeg -i '{reversed array input}' -codec copy output
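    Of the approaches in the linked answer, the concat demuxer is usually the safest for segments that share one encoding (as these do, having come from the same -c copy split): write the selected file names to a list file, then run ffmpeg with stream copy. A sketch of those two steps, shown in Ruby rather than the C# of the method above, with the list assumed to be already reversed by the caller:

```ruby
require 'tempfile'

# Write the user's selection to a concat list file and build the ffmpeg
# command. Returns the command string and the Tempfile (kept alive so it
# is not deleted before ffmpeg reads it).
def concat_command(files, output)
  list = Tempfile.new(['concat', '.txt'])
  files.each { |f| list.puts "file '#{f}'" }  # one "file '...'" line per segment
  list.flush
  # -f concat -safe 0 reads the list; -c copy merges without re-encoding
  ["ffmpeg -f concat -safe 0 -i #{list.path} -c copy #{output}", list]
end

selected = ['part3.mp4', 'part2.mp4', 'part1.mp4']  # user's reversed selection
cmd, = concat_command(selected, 'merged.mp4')
puts cmd
```

    The same two steps translate directly to the C# method above: write the list with File.WriteAllLines, then pass "-f concat -safe 0 -i list.txt -c copy output.mp4" as the ProcessStartInfo arguments.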