Advanced search

Media (0)

Keyword: - Tags - /organisation

No media matching your criteria is available on the site.

Other articles (66)

  • The MediaSPIP configuration area

    29 November 2010

    The MediaSPIP configuration area is reserved for administrators. An "administer" menu link is usually displayed at the top of the page [1].
    It lets you configure your site in detail.
    The navigation of this configuration area is divided into three parts: the general site configuration, which notably lets you modify: the main information about the site (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player MediaSPIP uses was created specifically for it and can easily be adapted to fit a chosen theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash fallback is used.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and audio both to conventional computers (...)

On other sites (6607)

  • ffmpeg can't transcode DVD ac3 audio stream, but VLC can play it

    21 February 2020, by RalphORama

    I’m attempting to transcode a DVD to a single MKV file. I’ve had success in the past with other DVDs, but I’m running into an error I haven’t seen before.

    First I concatenate the VOB files I want to transcode:

    cat VTS_02_1.VOB VTS_02_2.VOB VTS_02_3.VOB > WMAV.VOB

    ffprobe output:

    $ ffprobe -analyzeduration 100M -probesize 100M WMAV.VOB
    Input #0, mpeg, from 'WMAV.VOB':
     Duration: 01:05:19.42, start: 0.300300, bitrate: 5686 kb/s
       Stream #0:0[0x1bf]: Data: dvd_nav_packet
       Stream #0:1[0x1e0]: Video: mpeg2video (Main), yuv420p(tv, smpte170m, top first), 720x480 [SAR 32:27 DAR 16:9], 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc
       Stream #0:2[0x80]: Audio: ac3, 48000 Hz, stereo, fltp, 192 kb/s
    Unsupported codec with id 100357 for input stream 0

    Then I run this command to transcode the file:

    ffmpeg -analyzeduration 100M -probesize 100M \
     -i WMAV.VOB \
     -map 0:1 -map 0:2 \
     -c:v libx264 -preset slow -tune film -crf 21 \
     -c:a aac -b:a 192k \
     wmav.mkv

    However, when I include -c:a aac, I get thousands of errors like this:

    Error while decoding stream #0:2: Error number -16976906 occurred
    [ac3 @ 000002bd24d8eec0] expacc 127 is out-of-range
    [ac3 @ 000002bd24d8eec0] error decoding the audio block

    There doesn’t seem to be any issue with the audio stream since it plays back fine in VLC. The transcode succeeds if I use -c:a copy.

    What is causing this error, and how can I fix it?
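
    For comparison, the stream-copy variant the poster reports as working amounts to replacing -c:a aac -b:a 192k with -c:a copy in the command above. Below is a minimal sketch of that run, wrapped in Python's subprocess purely for illustration; it assumes ffmpeg is on the PATH and reuses the file names from the question.

    import subprocess

    # Same stream mapping and video settings as above, but the AC-3 track is
    # copied as-is instead of being re-encoded to AAC (the variant the poster
    # says succeeds).
    subprocess.run([
        "ffmpeg", "-analyzeduration", "100M", "-probesize", "100M",
        "-i", "WMAV.VOB",
        "-map", "0:1", "-map", "0:2",
        "-c:v", "libx264", "-preset", "slow", "-tune", "film", "-crf", "21",
        "-c:a", "copy",
        "wmav.mkv",
    ], check=True)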

  • What does FFmpeg expect me to send to the first rawvideo input pipe?

    23 October 2023, by Somebody

    I'm using two named pipes, in this order:

    1. video_pipe

       -f rawvideo
       -video_size 1x1
       -pix_fmt gray
       -r 1

    2. audio_pipe

       -f s16le
       -ar 32000
       -channels 1

    I thought FFmpeg needed to read individual frames from a rawvideo pipe, but I must be mistaken, because it doesn't start reading from the second pipe until I feed 11 bytes to the first pipe, even though, in the example given, a grayscale frame of one pixel is exactly one byte. I have experimented with increasing video_size, and here is the table I could infer:

    Actual frame size (bytes)    Bytes that must be sent before FFmpeg moves on
    1                            11
    2                            17
    3                            25
    4                            33
    5                            41
    6                            49

    I can't just send multiple frames, as I want to output a one-second video.
    I tested most of the parameters on this page: https://github.com/FFmpeg/FFmpeg/blob/ff5a3575fec2d49d5fae4ec1198a939e203314db/libavformat/options_table.h
    but none of them solved it. (I also tried "-re", with no luck.)
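
    As a side note on the table above, the observed counts fit a simple pattern: for frame sizes of 2 bytes and more they equal 8 × frame size + 1 (the 1-byte case is the outlier at 11), which hints that FFmpeg buffers several raw frames from the first input before it opens the second one. A quick sketch checking that fit, with the values taken straight from the table; the "several frames of read-ahead" reading is only an inference, not documented behaviour.

    # Frame size in bytes (width * height * 1 for the "gray" pixel format)
    # mapped to the observed number of bytes needed before FFmpeg moves on.
    observed = {1: 11, 2: 17, 3: 25, 4: 33, 5: 41, 6: 49}
    for frame_bytes, needed in observed.items():
        predicted = 8 * frame_bytes + 1  # rough fit; does not hold for 1 byte
        print(f"{frame_bytes} byte(s): observed {needed}, 8*n+1 = {predicted}")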

    This is an example command in case you want to reproduce the issue:
ffmpeg -y -re -f rawvideo -video_size 1x1 -pixel_format gray -framerate 1 -i \\.\pipe\video_pipe -f s16le -ar 32000 -channels 1 -i \\.\pipe\audio_pipe -map 0:v:0 -map 1:a:0 out.mp4

    Any idea how I could send exactly the number of bytes in one frame, instead of being forced to send far more?

  • How to set background transparency for animation with ffmpeg

    23 October 2023, by Jan Turowski

    I am creating animated physics graphs with a transparent background for later use in an NLE. On my old machine at work they display and render with background transparency just fine. The exact same code, however, loses background transparency in the ffmpeg render on both my Linux and my Windows machine at home. The animations are displayed just fine on all machines.

    Since I first thought it was a Linux issue, I tried running the code on my Windows machine, expecting it to work there. Unfortunately, it did not.

    Reduced code:

    import numpy as np
import matplotlib.pylab as plt
from matplotlib.animation import FuncAnimation
import matplotlib.animation as animation
from matplotlib.pyplot import figure
from matplotlib import style
import locale
from matplotlib.ticker import (MultipleLocator, AutoMinorLocator)
# Set to German locale to get a comma decimal separator
locale.setlocale(locale.LC_NUMERIC, "de_DE")
# Tell matplotlib to use the locale we set above
plt.rcParams['axes.formatter.use_locale'] = True

# plt.clf()
# plt.rcdefaults()

# Define style and font

style.use('dark_background')

# Create axis arrows
def arrowed_spines(fig, ax):

    xmin, xmax = ax.get_xlim()
    ymin, ymax = ax.get_ylim()

    # removing the default axis on all sides:
    for side in ['bottom','right','top','left']:
        ax.spines[side].set_visible(False)

    # removing the axis ticks
    # plt.xticks([]) # labels
    # plt.yticks([])
    # ax.xaxis.set_ticks_position('none') # tick markers
    # ax.yaxis.set_ticks_position('none')

    # get width and height of axes object to compute
    # matching arrowhead length and width
    dps = fig.dpi_scale_trans.inverted()
    bbox = ax.get_window_extent().transformed(dps)
    width, height = bbox.width, bbox.height

    # manual arrowhead width and length
    hw = 1./20.*(ymax-ymin)
    hl = 1./20.*(xmax-xmin)
    lw = 1. # axis line width
    ohg = 0.3 # arrow overhang

    # compute matching arrowhead length and width
    yhw = hw/(ymax-ymin)*(xmax-xmin)* height/width
    yhl = hl/(xmax-xmin)*(ymax-ymin)* width/height

    # draw x and y axis
    ax.arrow(xmin, 0, xmax-xmin, 0., fc='w', ec='w', lw = lw,
             head_width=hw, head_length=hl, overhang = ohg,
             length_includes_head= True, clip_on = False)

    ax.arrow(0, ymin, 0., ymax-ymin, fc='w', ec='w', lw = lw,
             head_width=yhw, head_length=yhl, overhang = ohg,
             length_includes_head= True, clip_on = False)

# My easing function
def ease(n):
    if n < 0.0:
        return 0
    elif n > 1.0:
        return 1
    else:
        return 3*n**2-2*n**3

# My floor/wait function
def wait(n):
    if n < 0.0:
        return 0
    else:
        return n

# Create the canvas
fig = plt.figure()
ax = fig.add_subplot(111)
fig.set_size_inches([8,9])

def f(x):
    return -0.05*x**2+125
xlin = np.linspace(0,60,100)


# Labels and appearance

plt.xlabel(r"$x$ in $\rm{m}$", horizontalalignment='right', x=1.0)
plt.ylabel(r"$y$ in $\rm{m}$", horizontalalignment='right', y=1.0)
ax.set_xlim(0,100)
ax.set_ylim(0,139)
plt.grid(alpha=.4)
plt.xticks(np.arange(0, 100, 20))
plt.yticks(np.arange(0, 140, 20))
ax.yaxis.set_minor_locator(MultipleLocator(10))
ax.xaxis.set_minor_locator(MultipleLocator(10))
ax.tick_params(axis='x', direction = "inout", length= 10.0, which='both', width=3)
ax.tick_params(axis='y', direction = "inout", length= 10.0, which='both', width=3)


xsub = np.array([0])

# define the static lines
line2, = ax.plot(xsub,f(xsub),linewidth=5,zorder=0,c = 'b')
arrowed_spines(fig, ax)
plt.tight_layout()

# animate the lines
def animate(i):

    xsub = xlin[0:wait(i-20)]
    global line2
    line2.remove()
    line2, = ax.plot(xsub, f(xsub), linewidth=5, zorder=0,c = "b")
    plt.tight_layout()

animation = FuncAnimation(fig, animate, np.arange(0, 130, 1), interval=100)

plt.show()

# animation.save(r"YOUR\PATH\HERE\reduced_x-y.mov", codec="png",
#                dpi=100, bitrate=-1,
#                savefig_kwargs={'transparent': True, 'facecolor': 'none'})
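
For reference, here is the same save call with the ffmpeg writer configured explicitly. This is only a sketch, not a confirmed fix for the transparency loss: it assumes ffmpeg is on the PATH, uses fps=10 to match the 100 ms frame interval above, and relies on the "png" codec in a .mov container being able to carry an alpha channel; the output file name is illustrative.

from matplotlib.animation import FFMpegWriter

# Explicit writer: "png" video in a QuickTime container can store alpha,
# and savefig_kwargs keep the figure background transparent per frame.
# Note that "animation" here is the FuncAnimation instance created above
# (it shadows the "matplotlib.animation as animation" module import).
writer = FFMpegWriter(fps=10, codec="png", bitrate=-1)
animation.save("reduced_x-y.mov", writer=writer, dpi=100,
               savefig_kwargs={'transparent': True, 'facecolor': 'none'})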