
Other articles (55)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed to manage sites for publishing documents of all types.
    It creates "media" items, namely: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a given "media" article;

  • Helping to translate it

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. Just sign up for the translators' mailing list to ask for more information.
    MediaSPIP is currently available only in French and (...)

On other sites (4263)

  • Detecting video volume

    27 December 2016, by Johnathan Kanarek

    I’m streaming a few RTMP streams through nginx, and every few seconds I want to check which stream has the highest volume.
    Specifically, these streams are of talking heads, and I assume that usually only one of them is speaking at a time; I’m trying to find which one.
    Since nginx can output HLS (Apple HTTP Live Streaming), I decided to check the last segment of each stream every few seconds using ffmpeg.
    Example:

    ffmpeg -f mp3 -i /my/path/camera67/123.ts -af "volumedetect" -f null /dev/null

    For some reason max_volume is always zero (max_volume: 0.0 dB), and mean_volume doesn’t seem to bear any relation to the actual volume.

    1. Do you have any idea why it’s always zero?
    2. Is there a helpful way to understand mean_volume?
    3. Can you think of a different tool that may give me the volume (e.g. mediainfo or ffprobe)?
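
    For reference, a minimal sketch of the same check that lets ffmpeg probe the .ts container itself, instead of forcing the mp3 demuxer with -f mp3 (this assumes the segment is an ordinary MPEG-TS file with a single AAC audio stream), would be:

    ffmpeg -hide_banner -i /my/path/camera67/123.ts -vn -af volumedetect -f null /dev/null 2>&1 | grep -E "mean_volume|max_volume"

    The mean_volume and max_volume lines are printed on stderr, which is why the command pipes stderr into grep.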

    I also tried:

    ffmpeg -f lavfi -i amovie=/my/path/camera67/123.ts,volumedetect

    This time I got:

    [mpegts @ 0x130bf40] start time for stream 1 is not set in estimate_timings_from_pts
    [mpegts @ 0x130bf40] Could not find codec parameters for stream 1 (Audio: aac ([15][0][0][0] / 0x000F), 0 channels, fltp): unspecified sample rate
    Consider increasing the value for the 'analyzeduration' and 'probesize' options
    [Parsed_amovie_0 @ 0x130bcc0] No audio stream with index '-1' found
    [lavfi @ 0x130abc0] Error initializing filter 'amovie' with args '/my/path/camera67/123.ts'
    amovie=/my/path/camera67/123.ts,volumedetect: Invalid argument
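
    Following the hint in that log, one possible variant (a sketch only; the 10M values are arbitrary and simply give the demuxer a longer probe window before it gives up on the stream parameters) would be:

    ffmpeg -analyzeduration 10M -probesize 10M -i /my/path/camera67/123.ts -af volumedetect -f null /dev/null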

    Any idea?

    Thanks,
    T.

  • How to Simply Remove Duplicate Frames from a Video using ffmpeg

    29 January 2017, by Skeeve

    First of all, I’d preface this by saying I’m NO EXPERT with video manipulation,
    although I’ve been fiddling with ffmpeg for years (in a fairly limited way). Hence, I’m not too flash with all the language folk often use... and how it affects what I’m trying to do in my manipulations... but I’ll have a go with this anyway...

    I’ve checked a few links here, for example:
    ffmpeg - remove sequentially duplicate frames

    ...but the content didn’t really help me.

    I have some hundreds of video clips that have been created under both Windows and Linux using both ffmpeg and other similar applications. However, they have some problems with times in the video where the display is ’motionless’.

    As an example, let’s say we have some web site that streams a live video into, say, a Flash video player/plugin in a web browser. In this case, we’re talking about a traffic camera video stream, for example.

    There’s an instance of ffmpeg running that is capturing a region of the (Windows) desktop into a video file, viz:-

    ffmpeg -hide_banner -y -f dshow ^
         -i video="screen-capture-recorder" ^
         -vf "setpts=1.00*PTS,crop=448:336:620:360" ^
         -an -r 25 -vcodec libx264 -crf 0 -qp 0 ^
         -preset ultrafast SAMPLE.flv

    Let’s say the actual ’display’ that is being captured looks like this:-

    123456789 XXXXX 1234567 XXXXXXXXXXX 123456789 XXXXXXX
    ^---a---^ ^-P-^ ^--b--^ ^----Q----^ ^---c---^ ^--R--^

    ...where each character position represents a (sequence of) frame(s). Owing to a poor internet connection, a "single frame" can be displayed for an extended period (the ’X’ characters being an (almost) exact copy of the immediately previous frame). So this means we have segments of the captured video where the image doesn’t change at all (to the naked eye, anyway).

    How can we deal with the duplicate frames?... and how does our approach change if the ’duplicates’ are NOT the same to ffmpeg but LOOK more-or-less the same to the viewer?

    If we simply remove the duplicate frames, the ’pacing’ of the video is lost, and what used to take maybe 5 seconds to display now takes a fraction of a second, giving very jerky, unnatural motion, although there are no duplicate images in the video. This seems to be achievable using ffmpeg with the ’mpdecimate’ filter, viz:-

        ffmpeg -i SAMPLE.flv ^                      ... (i)
           -r 25 ^
           -vf mpdecimate,setpts=N/FRAME_RATE/TB DEC_SAMPLE.mp4
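
    If the aim is to shorten the frozen sections rather than collapse them completely, one possible variant (a sketch only, assuming mpdecimate’s ’max’ option, which caps the number of consecutive frames that may be dropped, so during a long freeze roughly one frame in 26 survives instead of none) would be:

        ffmpeg -i SAMPLE.flv ^
           -r 25 ^
           -vf "mpdecimate=max=25,setpts=N/FRAME_RATE/TB" DEC_SAMPLE.mp4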

    That reference I quoted uses a command that shows which frames ’mpdecimate’ will remove when it considers them to be ’the same’, viz:-

        ffmpeg -i SAMPLE.flv ^                      ... (ii)
           -vf mpdecimate ^
           -loglevel debug -f null -

    ...but knowing that (complicatedly formatted) information, how can we re-organize the video without executing multiple runs of ffmpeg to extract ’slices’ of video for re-combining later?

    In that case, I’m guessing we’d have to run something like:-

    • user specifies a ’threshold duration’ for the duplicates
      (maybe run for 1 sec only)
    • determine & save main video information (fps, etc - assuming
      constant frame rate; see the ffprobe sketch below)
    • map the (frame/time where duplicates start)->no. of
      frames/duration of duplicates
    • if the duration of duplicates is less than the user threshold,
      don’t consider this period as a ’series of duplicate frames’
      and move on
    • extract the ’non-duplicate’ video segments (a, b & c in the
      diagram above)
    • create ’new video’ (empty) with original video’s specs
    • for each video segment
      extract the last frame of the segment
      create a short video clip with repeated frames of the frame
      just extracted (duration = user spec. = 1 sec)
      append (current video segment+short clip) to ’new video’
      and repeat

    ...but in my case, a lot of the captured videos might be 30 minutes long and have hundreds of 10 sec long pauses, so the ’rebuilding’ of the videos will take a long time using this method.
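
    For the ’determine & save main video information’ step, a minimal sketch using ffprobe (assuming it is installed alongside ffmpeg) could be:

        ffprobe -v error -select_streams v:0 ^
            -show_entries stream=width,height,avg_frame_rate,nb_frames:format=duration ^
            -of default=noprint_wrappers=1 SAMPLE.flv

    This prints the video stream’s size, average frame rate and frame count, plus the container duration, one key=value pair per line.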

    This is why I’m hoping there’s some "reliable" and "more intelligent" way to use
    ffmpeg (with or without the ’mpdecimate’ filter) to do the ’decimate’ work in only a couple of passes or so... Maybe there’s a way that the required segments could even be specified (in a text file, for example) and, as ffmpeg runs, it will
    stop/restart its transcoding at the specified times/frame numbers?
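
    On the ’specify the segments in a text file’ idea, one possible sketch uses ffmpeg’s concat demuxer, whose list files accept inpoint/outpoint directives. Assuming a hypothetical list file segments.txt with made-up times:

        file 'SAMPLE.flv'
        inpoint 0
        outpoint 12.5
        file 'SAMPLE.flv'
        inpoint 18.0
        outpoint 33.2

    ...a single re-encoding run over all the wanted slices would then be:

        ffmpeg -f concat -i segments.txt -r 25 -c:v libx264 TRIMMED.mp4

    (Seeking to an inpoint is keyframe-based, so the cuts will not be frame-exact.)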

    Short of this, is there another application (for use on Windows or Linux) that could do what I’m looking for, without my having to manually set start/stop points and
    extract/combine video segments by hand...?

    I’ve been trying to do all this with ffmpeg N-79824-gcaee88d under Win7-SP1 and (a different version I don’t currently remember) under Puppy Linux Slacko 5.6.4.

    Thanks a heap for any clues.

  • want to stream mobile camera using ffserver and ffmpeg

    6 January 2017, by Vinay Pandya

    First I will tell you my requirement, then I will tell you what I have done.

    I am a noob in media streaming; I am still learning and I am very confused about it.

    Basically I want to do the following:

    1: The mobile app will stream video to a server through a URL (the server is on my laptop).
    2: My laptop should run ffserver/ffmpeg, which stores the video stream coming from the mobile app and lets other clients watch it (here I am talking about VLC as the client).

    So this is my requirement.

    I am running ffserver on my laptop.

    My ffserver config is like:

    HTTPPort 8090
    HTTPBindAddress 0.0.0.0

    RTSPPort 8091
    RTSPBindAddress 0.0.0.0


    MaxHTTPConnections 2000
    MaxClients 1000
    MaxBandwidth 1000
    CustomLog -

    #NoDaemon

    <Feed feed1.ffm>
       File /tmp/feed1.ffm
       FileMaxSize 200K
       ACL allow 127.0.0.1
    </Feed>

    # if you want to use mpegts format instead of flv
    # then change "live.flv" to "live.ts"
    # and also change "Format flv" to "Format mpegts"
    <Stream live.flv>
       Format flv
       Feed feed1.ffm

       VideoCodec libx264
       VideoFrameRate 30
       VideoBitRate 512
       VideoSize 320x240
       AVOptionVideo crf 23
       AVOptionVideo preset medium
       # for more info on crf/preset options, type: x264 --help
       AVOptionVideo flags +global_header

       AudioCodec aac
       Strict -2
       AudioBitRate 128
       AudioChannels 2
       AudioSampleRate 44100
       AVOptionAudio flags +global_header
    </Stream>

    ##################################################################
    # Special streams
    ##################################################################
    <Stream stat.html>
       Format status
       # Only allow local people to get the status
       ACL allow localhost
       ACL allow 192.168.0.0 192.168.255.255
    </Stream>

    # Redirect index.html to the appropriate site
    <Redirect index.html>
       URL http://www.ffmpeg.org/
    </Redirect>
    ##################################################################

    Then I am adding the following URL to my mobile app to stream video:
    rtsp://:8091/feed1.ffm
    My mobile app starts streaming; at least, my mobile developer team says it does.
    But I am not getting any log output on ffserver; when I stop streaming, a TEARDOWN request does come in:

    [TEARDOWN] "rtsp://192.168.1.57:8091/feed1.ffm RTSP/1.0" 200 7034

    I have done this so far. I don’t know how to use ffmpeg with live streaming; please show me some example of that.

    I am not able to watch that live stream in the VLC client. Also, tell me what URL I should enter in VLC to watch the stream; I have tried almost every URL combination.

    And one more thing: I want to do it with the RTSP protocol.

    I think this info will help you understand my requirement.
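
    As a point of reference, a minimal sketch of feeding the configuration above from ffmpeg on the laptop itself (assuming the feed1.ffm feed and live.flv stream defined in the config, and a local test file, here the hypothetical test.mp4, standing in for the phone camera) would be:

        ffmpeg -re -i test.mp4 http://127.0.0.1:8090/feed1.ffm

    ffserver then encodes the feed according to the live.flv stream section, and a client such as VLC should be able to open it over HTTP at http://<laptop-ip>:8090/live.flv. Whether the same content is also reachable over RTSP on port 8091 depends on the stream format, which this sketch does not cover.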