Advanced search

Media (1)

Keyword: - Tags -/censure

Other articles (53)

  • Installation in farm mode

    4 February 2011, by

    Farm mode lets you host several MediaSPIP-type sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP's usual private area is no longer used.
    To begin with, you must have installed the same files as the installation (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as MP4, Ogv and WebM (supported by HTML5), with MP4 also used for Flash playback.
    Audio files are encoded as MP3 and Ogg (supported by HTML5), with MP3 also used for Flash playback.
    Where possible, text is analyzed to extract the data needed for search indexing, and the document is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
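
    MediaSPIP's actual encoding pipeline is not shown on this page; as a rough sketch of the kind of conversion the article describes (the source file name, codecs and quality settings below are illustrative assumptions, not MediaSPIP's real presets), one could drive ffmpeg from Python like this:

    import subprocess

    # Hypothetical uploaded file; MediaSPIP's real storage layout is not shown here.
    SOURCE = "upload/original_video.mov"

    # One ffmpeg invocation per target format mentioned in the article.
    TARGETS = [
        ["-c:v", "libvpx",    "-b:v", "1M", "-c:a", "libvorbis", "out.webm"],  # WebM (HTML5)
        ["-c:v", "libtheora", "-q:v", "6",  "-c:a", "libvorbis", "out.ogv"],   # Ogv (HTML5)
        ["-c:v", "libx264",   "-crf", "23", "-c:a", "aac",       "out.mp4"],   # MP4 (HTML5/Flash)
    ]

    for extra in TARGETS:
        # -y overwrites existing outputs; check=True raises if ffmpeg exits with an error.
        subprocess.run(["ffmpeg", "-y", "-i", SOURCE] + extra, check=True)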

On other sites (7356)

  • Making animations by matplotlib and saving the video files

    20 January 2016, by Richard.L

    I have been studying the 1D wave equation and making an animation of its solution. But I run into problems when using anim.save from matplotlib.animation to save the video file. I have already installed ffmpeg on my computer (a Windows machine) and set the environment variables, but it keeps telling me this:

    UserWarning: MovieWriter ffmpeg unavailable
    ...
    RuntimeError: Error creating movie, return code: 4 Try running with --verbose-debug

    Here is my code:

    from numpy import zeros,linspace,sin,pi
    import matplotlib.pyplot as mpl
    from matplotlib import animation
    mpl.rcParams['animation.ffmpeg_path'] = 'C:\\ffmpeg\\bin\\ffmpeg.exe'

    def I(x):
       return sin(2*x*pi/L)

    def f(x,t):
       return sin(0.5*x*t)

    def solver0(I,f,c,L,n,dt,t):
       # f is a function of x and t, I is a function of x
       x = linspace(0,L,n+1)
       dx = L/float(n)
       if dt <= 0:
           dt = dx/float(c)
       C2 = (c*dt/dx)**2
       dt2 = dt*dt
       up = zeros(n+1)
       u = up.copy()
       um = up.copy()
       for i in range(0,n):
           u[i] = I(x[i])
       for i in range(1,n-1):
           um[i] = u[i]+0.5*C2*(u[i-1] - 2*u[i] + u[i+1]) + dt2*f(x[i],t)
       um[0] = 0
       um[n] = 0    
       while t <= tstop:
           t_old = t
           t += dt
           #update all inner points:
           for i in range(1,n-1):
               up[i] = -um[i] + 2*u[i] + C2*(u[i-1] - 2*u[i] + u[i+1]) + dt2*f(x[i],t_old)    
           #insert boundary conditions:
           up[0] = 0
           up[n] = 0
           #update data structures for next step
           um = u.copy()
           u = up.copy()  
       return u

    c = 1.0
    L = 10
    n = 100
    dt = 0
    tstop = 40

    fig = mpl.figure()
    ax = mpl.axes(xlim=(0,10),ylim=(-1.0,1.0))
    line, = ax.plot([],[],lw=2)

    def init():
       line.set_data([],[])
       return line,

    def animate(t):
       x = linspace(0,L,n+1)
       y = solver0(I, f, c, L, n, dt, t)
       line.set_data(x,y)
       return line,

    anim = animation.FuncAnimation(fig, animate, init_func=init,
                                  frames=200, interval=20, blit=True)

    anim.save('seawave_1d_ani.mp4',writer='ffmpeg',fps=30)
    mpl.show()

    I believe the problem lies in the animation part rather than in the three functions above. Please help me find the mistake I have made.
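
    A minimal way to check whether matplotlib can actually find the ffmpeg binary before calling anim.save (a diagnostic sketch, assuming the same ffmpeg.exe path as in the question; exact writer-registry behaviour varies between matplotlib versions):

    import matplotlib
    # Path assumed from the question; point this at the real ffmpeg.exe location.
    # On some matplotlib versions it must be set before importing the animation module.
    matplotlib.rcParams['animation.ffmpeg_path'] = r'C:\ffmpeg\bin\ffmpeg.exe'

    from matplotlib import animation

    # True only if matplotlib can execute the configured ffmpeg binary.
    print(animation.FFMpegWriter.isAvailable())
    # Movie writers matplotlib has registered; 'ffmpeg' should be listed once the path is right.
    print(animation.writers.list())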

  • avutil/threadmessage: split the pthread condition in two

    1 December 2015, by Clément Bœsch

    Fix a deadlock under certain conditions. Let's assume we have a queue holding
    at most 1 message, 2 senders, and 1 receiver.

    Scenario (real record obtained with debugging added):
    [...]
    SENDER #0 : acquired lock
    SENDER #0 : queue is full, wait
    SENDER #1 : acquired lock
    SENDER #1 : queue is full, wait
    RECEIVER : acquired lock
    RECEIVER : reading a msg from the queue
    RECEIVER : signal the cond
    RECEIVER : acquired lock
    RECEIVER : queue is empty, wait
    SENDER #0 : writing a msg the queue
    SENDER #0 : signal the cond
    SENDER #0 : acquired lock
    SENDER #0 : queue is full, wait
    SENDER #1 : queue is full, wait

    Translated:
    - Initially the queue contains 1/1 message, with 2 senders blocked on it,
    waiting to push another message.
    - Meanwhile the receiver obtains the lock, reads the message, signals the
    condition and releases the lock. For some reason it manages to acquire the
    lock again before the signal wakes up one of the senders. Since it has just
    emptied the queue, the reader waits for the queue to fill up again.
    - The signal finally reaches one of the senders, which writes a message and
    then signals the condition. Unfortunately, instead of waking up the reader,
    it actually wakes up the other sender (a signal notifies only 1 waiter on
    the condition), which cannot push another message into the queue because it
    is full.
    - Meanwhile, the receiver is still waiting. Deadlock.

    This scenario can be triggered with, for example:
    tests/api/api-threadmessage-test 1 2 100 100 1 1000 1000

    One working solution is to make av_thread_message_queue_send,recv() call
    pthread_cond_broadcast() instead of pthread_cond_signal(), so that both
    senders and receivers are unblocked when work is done (be it reading or
    writing).

    This second solution replaces the single condition with two: one to notify
    the senders, and one to notify the receivers. This prevents senders from
    notifying other senders instead of a reader, and the other way around. It
    also avoids broadcasting to everyone as the first solution would, and should
    therefore, in theory, be more efficient (the general two-condition pattern
    is sketched below).

    • [DH] libavutil/threadmessage.c
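
    As a rough, language-neutral sketch of the two-condition idea described above (hypothetical class and method names, not FFmpeg's C implementation in libavutil/threadmessage.c): a bounded queue pairs one lock with separate "not full" and "not empty" conditions, so senders only ever wake receivers and vice versa:

    import threading
    from collections import deque

    class BoundedQueue:
        """Two-condition pattern: senders wait on not_full and signal not_empty;
        receivers do the opposite. Both conditions share the same lock."""
        def __init__(self, maxlen):
            self.items = deque()
            self.maxlen = maxlen
            self.lock = threading.Lock()
            self.not_full = threading.Condition(self.lock)   # senders wait here
            self.not_empty = threading.Condition(self.lock)  # receivers wait here

        def send(self, msg):
            with self.not_full:
                while len(self.items) >= self.maxlen:
                    self.not_full.wait()
                self.items.append(msg)
                self.not_empty.notify()   # wakes a receiver, never another sender

        def recv(self):
            with self.not_empty:
                while not self.items:
                    self.not_empty.wait()
                msg = self.items.popleft()
                self.not_full.notify()    # wakes a sender, never another receiver
                return msg
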
  • How to filter motion vectors?

    2 August 2019, by vdletg

    My video is temporally very noisy; it was shot in low light at a high frame rate.

    Currently I have tried:

    ffplay -flags2 +export_mvs -i test.mp4 -vf "edgedetect=low=0.05:high=0.17,hqdn3d=4.0:3.0:6.0:4.5,codecview=mv=pf+bf+bb,lutyuv=y='if(lt(val,19),0,val)'"

    The motion vectors are tracking noise: in the near-dark areas the vectors vary greatly in magnitude and angle.

    How do I decimate or filter the displayed motion vectors based on magnitude and/or location?