Other articles (33)

  • Adding user-specific information and other author-related behaviour changes

    12 April 2011, by

    The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviours (see its documentation for more information).
    It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, announced here.
    The zip file provided here contains only the MediaSPIP sources, in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)

  • Making files available

    14 April 2011, by

    By default, when it is first set up, MediaSPIP does not let visitors download files, whether originals or the results of transformation or encoding; it only lets them be viewed.
    However, it is possible, and easy, to give visitors access to these documents in various forms.
    All of this is handled on the template configuration page: go to the channel's administration area and choose, in the navigation (...)

On other sites (4685)

  • Pipe FFMPEG MPEG-DASH livestream to AWS S3

    17 August 2019, by Alexander

    So I’m currently trying to livestream the rendering of a GPU-heavy video (it renders at about 1 fps), encode it into a 30 fps MPEG-DASH livestream, and output this to AWS S3 so Shaka Player can display the live rendering.

    The first issue is that the livestream keeps looping; it doesn’t stop after the rendering for-loop is done.

    I use a Python script to pipe the output of the rendering to FFmpeg, and pipe the output of FFmpeg to the AWS S3 CLI like this:

    from subprocess import Popen, PIPE

    # stdout=PIPE is required here, otherwise p1.stdout is None and p2 has
    # nothing to read from
    p1 = Popen(['ffmpeg', '-y', '-hwaccel', 'cuvid', '-f', 'image2pipe',
                '-r', '24', '-i', '-', '-c:v', 'h264_nvenc', '-b:v', '5M',
                '-f', 'dash', '-movflags', 'frag_keyframe+empty_moov', '-'],
               stdin=PIPE, stdout=PIPE)
    # also tried: '-method', 'PUT', 'https://example.s3.amazonaws.com/test1/test1.mpd'

    p2 = Popen(['aws', 's3', 'cp', '-', 's3://example/test1/test1.mpd'],
               stdin=p1.stdout)
    p1.stdout.close()  # close our copy so p2 sees EOF when ffmpeg exits


    # The following commented-out aws s3 sync command uploads successfully to S3,
    # but it stops once the syncing is done and it's hacky:
    #p1 = Popen(['ffmpeg', '-y', '-vsync', '0', '-hwaccel', 'cuvid', '-f', 'image2pipe', '-r', '24', '-i', '-', '-c:v', 'h264_nvenc', '-b:v', '5M', '-f', 'dash', '-movflags', 'frag_keyframe+empty_moov', 'test2.mpd'], stdin=PIPE)
    #p2 = Popen(['aws', 's3', 'sync', '.', 's3://teststream/test1', '--exclude', '"*"', '--include', '"*.m4s"', '--include', '"*.mpd"'], stdin=PIPE)

    # pseudocode: render each frame and stream it into ffmpeg as a PNG
    for ci, (content, contentName) in enumerate(content_loader):
        im = renderframe(content)
        im.save(p1.stdin, 'PNG')

    p1.stdin.close()
    p1.wait()
    p2.wait()
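
    A side note on why the pipe approach may be fighting the format: the dash muxer produces an .mpd manifest plus separate .m4s segment files, so piping one continuous stdout stream into a single aws s3 cp object is unlikely to ever yield a playable live stream. Below is a minimal sketch of an alternative, not a confirmed fix: let ffmpeg write the DASH files to a local directory and upload new or changed files from a background thread. It assumes boto3 is installed and credentials are configured; the dash_out directory name is made up, and the example bucket and test1/ prefix mirror the command above.

    import os
    import time
    import threading
    import boto3  # assumption: boto3 is available and AWS credentials are set up
    from subprocess import Popen, PIPE

    s3 = boto3.client('s3')
    OUT = 'dash_out'
    os.makedirs(OUT, exist_ok=True)

    # ffmpeg writes the manifest and segments into OUT instead of stdout
    p1 = Popen(['ffmpeg', '-y', '-hwaccel', 'cuvid', '-f', 'image2pipe',
                '-r', '24', '-i', '-', '-c:v', 'h264_nvenc', '-b:v', '5M',
                '-f', 'dash', os.path.join(OUT, 'stream.mpd')], stdin=PIPE)

    def uploader():
        seen = {}
        while True:
            done = p1.poll() is not None
            for name in os.listdir(OUT):
                path = os.path.join(OUT, name)
                mtime = os.path.getmtime(path)
                if seen.get(name) != mtime:  # new or rewritten file
                    s3.upload_file(path, 'example', 'test1/' + name)
                    seen[name] = mtime
            if done:  # one final pass after ffmpeg exits, then stop
                break
            time.sleep(1)

    t = threading.Thread(target=uploader)
    t.start()

    # ... the render loop above writes PNG frames to p1.stdin ...

    p1.stdin.close()
    p1.wait()
    t.join()

    The final pass after ffmpeg exits picks up the last manifest update, so the upload stops cleanly when rendering is done instead of looping.
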
  • ffmpeg/sox audio processing: Merging files with envelope changes

    2 October 2020, by March Hare

    So I have two audio files. One is a music bed with an intro that segues into a looping music clip (let's call this *1). The second is the voice-over audio track (referenced as *2, length n).

    Audio *1 is fixed, while the voice-over (*2) is downloaded about 3 times a day and can vary in length. *1 is longer than we ever expect *2 to be.
    What I need to do is:

    1. Alter the overall gain of *1 to -7.5 dB.
    2. Begin merging VO *2 at time m, while reducing the volume envelope of *1 by -11 dB. This point is fixed, based on the length of the intro.
    3. Fade everything out to -∞ dB around the end of *2.
    4. Trim off the silence at the end. For reference, the total length of the final track should be m+n.
    Unfortunately, I'm not versed enough in ffmpeg or SoX to know exactly what I'm after here, and a lot of the examples tend to do one thing or the other and aren't always clear about how to combine them. I didn't get much prior notice about this coming down the pipeline, so I'd like to get something working relatively quickly. We're able to do all of this nicely in Adobe Audition (and I can do something similar in Audacity), but the idea is to automate it. For our envelope adjustments we were just using linear ramps rather than smoothsteps, and that sounded fine.
    The TL;DR: the VO track *2 governs how long the file winds up being, while the audio bed *1 needs to be ducked when *2 begins, and the whole thing faded out right when *2 ends.
    We also have an automation system (radio station automation, specialized for something different from what I need), so in a pinch, if we have to just cut off the audio at the end of *2, we can get the fadeout from the radio automation system.
    I've been using the information at this link to some effect (specifically the bit about ffmpeg volumes), but it still isn't dynamic enough for the situation: Envelope pattern in SoX (Sound eXchange) or ffmpeg
    Anyone have any advice on this one? I've got SoX and ffmpeg available, and if need be I can probably install other tools as well.
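
    A rough sketch of how the four steps might be wired into a single ffmpeg call, driven from Python since the VO download runs on a schedule anyway. This is an untested outline, not a known-good recipe: the file names bed.wav/vo.wav and the intro offset M are stand-ins, the VO length n is measured with ffprobe, the duck here is a hard step rather than the linear ramp described above, and amix's normalize=0 option needs a reasonably recent FFmpeg (4.4+).

    import subprocess

    M = 8.0     # stand-in: time m where the VO starts (length of the intro)
    FADE = 2.0  # fade-out length in seconds

    # measure the VO length n with ffprobe
    N = float(subprocess.check_output(
        ['ffprobe', '-v', 'error', '-show_entries', 'format=duration',
         '-of', 'default=noprint_wrappers=1:nokey=1', 'vo.wav']).decode())
    end = M + N  # total length of the final track, m + n

    filtergraph = (
        # step 1: overall bed gain -7.5 dB; step 2: a further -11 dB duck
        # from the moment the VO starts
        "[0:a]volume=-7.5dB,volume=-11dB:enable='gte(t,{m})'[bed];"
        # delay the VO so it begins at t = m (adelay takes milliseconds)
        "[1:a]adelay={ms}:all=1[vo];"
        # mix the two; normalize=0 keeps the levels set above
        "[bed][vo]amix=inputs=2:duration=longest:normalize=0,"
        # step 3: fade to silence at the end of the VO; step 4: trim to m+n
        "afade=t=out:st={fade_start}:d={fade},atrim=end={end}[out]"
    ).format(m=M, ms=int(M * 1000), fade_start=end - FADE, fade=FADE, end=end)

    subprocess.run(['ffmpeg', '-y', '-i', 'bed.wav', '-i', 'vo.wav',
                    '-filter_complex', filtergraph, '-map', '[out]',
                    'mixed.wav'], check=True)
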
  • Capturing network video using opencv

    3 September 2014, by Subhendu Sinha Chaudhuri

    I am using an ffserver and ffmpeg combination to capture webcam video and transmit it over my network.

    I want to capture this video using OpenCV and Python from another computer.
    I can see the video (cam1.asf) in the browser of another computer, but my OpenCV + Python code cannot capture any frames.

    Code for ffserver

    HTTPPort 8090
    HTTPBindAddress 0.0.0.0
    MaxHTTPConnections 2000
    MaxClients 1000
    MaxBandWidth 2000

    <Feed feed1.ffm>
      File ./tmp/feed1.ffm
      FileMaxSize 1G
      ACL allow 127.0.0.1
    </Feed>

    <Stream cam1.asf>
      Feed feed1.ffm
      Format asf
      VideoCodec msmpeg4v2
      VideoFrameRate 30
      VideoSize vga
    </Stream>

    FFmpeg

    $ ffmpeg -f video4linux2 -i /dev/video0 http://192.168.1.3:8090/cam1.ffm

    This stream can be seen in the browser

    But with the OpenCV code:

    import sys
    import cv2.cv as cv
    import numpy

    video = "http://192.168.1.3:8090/cam1.asf"  # stream URL served by ffserver
    capture = cv.CaptureFromFile(video)
    cv.NamedWindow('Video Stream', 1)

    while True:
        # capture the current frame
        frame = cv.QueryFrame(capture)
        if frame is None:
            break
        cv.ShowImage('Video Stream', frame)
        if cv.WaitKey(10) == 27:
            print 'ESC pressed. Exiting ...'
            break

    I do not get any output in the stream.

    My aim is to work with the webcam video both at the base station (i.e. where the webcam is connected) and also at the network location.
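
    For what it's worth, a minimal sketch with the newer cv2 API, which can read network streams directly through VideoCapture's FFmpeg backend (same URL as above, assumed reachable):

    import cv2

    cap = cv2.VideoCapture('http://192.168.1.3:8090/cam1.asf')
    if not cap.isOpened():
        raise SystemExit('could not open the network stream')

    while True:
        ok, frame = cap.read()
        if not ok:  # stream ended or a read error occurred
            break
        cv2.imshow('Video Stream', frame)
        if cv2.waitKey(10) == 27:  # ESC
            break

    cap.release()
    cv2.destroyAllWindows()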