Advanced search

Media (0)

Keyword: - Tags -/presse-papier

No media matching your criteria is available on the site.

Other articles (36)

  • Farm notifications

    1 December 2010, by

    To ensure proper management of the farm, several things need to be notified when specific actions occur, both to the user and to all of the farm administrators.
    Status-change notifications
    When an instance changes status, all of the farm administrators must be notified of the change, as well as the user who administers the instance.
    When a channel is requested
    Change to "publie" (published) status
    Change to (...)

  • Final creation of the channel

    12 March 2010, by

    Once your request has been approved, you can proceed with the actual creation of the channel. Each channel is a full-fledged site placed under your responsibility. The platform administrators have no access to it.
    Upon approval, you receive an email inviting you to create your channel.
    To do so, simply go to its address, in our example "http://votre_sous_domaine.mediaspip.net".
    At that point you are asked for a password; you simply have to (...)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites for publishing documents of all kinds.
    It creates "media" items, namely: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a so-called "media" article;

On other sites (3396)

  • Trying to get the current FPS and Frametime value into Matplotlib title

    16 June 2022, by TiSoBr

    I'm trying to turn an exported CSV of benchmark logs into an animated graph. It works so far, but I can't get the titles on top of both plots to animate with their current FPS and frame time (in ms) values.

    


    That's the output I'm getting. It looks like it simply stores all the values in the title instead of updating them?

    [Screengrab of CLI output]
    [Screengrab of the final output (inverted)]


    from __future__ import division
import sys, getopt
import time
import matplotlib
import numpy as np
import subprocess
import math
import re
import argparse
import os
import glob

import matplotlib.animation as animation
import matplotlib.pyplot as plt


def check_pos(arg):
    ivalue = int(arg)
    if ivalue <= 0:
        raise argparse.ArgumentTypeError("%s Not a valid positive integer value" % arg)
    return True
    
def moving_average(x, w):
    return np.convolve(x, np.ones(w), 'valid') / w
    

parser = argparse.ArgumentParser(
    description = "Example Usage python frame_scan.py -i mangohud -c '#fff' -o mymov",
    formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument("-i", "--input", help = "Input data set from mangohud", required = True, nargs='+', type=argparse.FileType('r'), default=sys.stdin)
parser.add_argument("-o", "--output", help = "Output file name", required = True, type=str, default = "")
parser.add_argument("-r", "--framerate", help = "Set the desired framerate", required = False, type=float, default = 60)
parser.add_argument("-c", "--colors", help = "Colors for the line graphs; must be in quotes", required = True, type=str, nargs='+', default = 60)
parser.add_argument("--fpslength", help = "Configures how long the data will be shown on the FPS graph", required = False, type=float, default = 5)
parser.add_argument("--fpsthickness", help = "Changes the line width for the FPS graph", required = False, type=float, default = 3)
parser.add_argument("--frametimelength", help = "Configures how long the data will be shown on the frametime graph", required = False, type=float, default = 2.5)
parser.add_argument("--frametimethickness", help = "Changes the line width for the frametime graph", required = False, type=float, default = 1.5)
parser.add_argument("--graphcolor", help = "Changes all of the line colors on the graph; expects hex value", required = False, default = '#FFF')
parser.add_argument("--graphthicknes", help = "Changes the line width of the graph", required = False, type=float, default = 1)
parser.add_argument("-ts","--textsize", help = "Changes the size of the numbers marking the ticks", required = False, type=float, default = 23)
parser.add_argument("-fsM","--fpsmax", help = "Sets the upper limit of the FPS axis", required = False, type=float, default = 180)
parser.add_argument("-fsm","--fpsmin", help = "Sets the lower limit of the FPS axis", required = False, type=float, default = 0)
parser.add_argument("-fss","--fpsstep", help = "Sets the tick step of the FPS axis", required = False, type=float, default = 30)
parser.add_argument("-ftM","--frametimemax", help = "Sets the upper limit of the frametime axis", required = False, type=float, default = 50)
parser.add_argument("-ftm","--frametimemin", help = "Sets the lower limit of the frametime axis", required = False, type=float, default = 0)
parser.add_argument("-fts","--frametimestep", help = "Sets the tick step of the frametime axis", required = False, type=float, default = 10)

arg = parser.parse_args()
status = False


if arg.input:
    status = True
if arg.output:
    status = True
if arg.framerate:
    status = check_pos(arg.framerate)
if arg.fpslength:
    status = check_pos(arg.fpslength)
if arg.fpsthickness:
    status = check_pos(arg.fpsthickness)
if arg.frametimelength:
    status = check_pos(arg.frametimelength)
if arg.frametimethickness:
    status = check_pos(arg.frametimethickness)
if arg.colors:
    if len(arg.input) == len(arg.colors): # one color per input file
        for i in arg.colors:
            if re.match(r"^#([A-Fa-f0-9]{6}|[A-Fa-f0-9]{3})$", i):
                status = True
            else:
                print('{} : Isn\'t a valid hex value!'.format(i))
                status = False
    else:
        print('You must have the same amount of colors as files in input!')
        status = False
if arg.graphcolor:
    if re.match(r"^#([A-Fa-f0-9]{6}|[A-Fa-f0-9]{3})$", arg.graphcolor):
        status = True
    else:
        print('{} : Isn\'t a valid hex value!'.format(arg.graphcolor))
        status = False
if arg.graphthicknes:
    status = check_pos(arg.graphthicknes)
if arg.textsize:
    status = check_pos(arg.textsize)
if not status:
    print("For a list of arguments try -h or --help") 
    exit()


# Empty output folder
files = glob.glob('/output/*')
for f in files:
    os.remove(f)


# We need to know the longest recording out of all inputs so we know when to stop the video
longest_data = 0

# Format the raw data into a list of tuples (fps, frame time in ms, time from start in micro seconds)
# The first three lines of our data are setup so we ignore them
data_formated = []
for li, i in enumerate(arg.input):
    t = 0
    sublist = []
    for line in i.readlines()[3:]:
        x = line[:-1].split(',')
        fps = float(x[0])
        frametime = int(x[1])/1000 # convert from microseconds to milliseconds
        elapsed = int(x[11])/1000 # convert from nanosecond to microseconds
        data = (fps, frametime, elapsed)
        sublist.append(data)
    # Compare the last entry (elapsed time) of each list with the longest recording so far
    if sublist[-1][2] >= longest_data:
        longest_data = sublist[-1][2]
    data_formated.append(sublist)


max_blocksize = max(arg.fpslength, arg.frametimelength) * arg.framerate
blockSize = arg.framerate * arg.fpslength


# Get step time in microseconds
step = (1/arg.framerate) * 1000000 # 1000000 is one second in microseconds
frame_size_fps = (arg.fpslength * arg.framerate) * step
frame_size_frametime = (arg.frametimelength * arg.framerate) * step


# Total frames will have to be updated for more than one source
total_frames = int(int(longest_data) / step)


if True: # Gonna be honest, this only exists so I can collapse this block of code

    # Sets up our figures to be next to each other (horizontally) and with a ratio 3:1 to each other
    fig, (ax1, ax2) = plt.subplots(1, 2, gridspec_kw={'width_ratios': [3, 1]})

    # Size of whole output 1920x360 1080/3=360
    fig.set_size_inches(19.20, 3.6)

    # Make the background transparent
    fig.patch.set_alpha(0)


    # Loop through all active axes; saves a lot of lines in ax1.do_thing(x) ax2.do_thing(x)
    for axes in fig.axes:

        # Set all splines to the same color and width
        for loc, spine in axes.spines.items():
            axes.spines[loc].set_color(arg.graphcolor)
            axes.spines[loc].set_linewidth(arg.graphthicknes)

        # Make sure we don't render any data points as this will be our background
        axes.set_xlim(-(max_blocksize * step), 0)
        

        # Make both plots transparent as well as the background
        axes.patch.set_alpha(.5)
        axes.patch.set_color('#020202')

        # Change the Y axis info to be on the right side
        axes.yaxis.set_label_position("right")
        axes.yaxis.tick_right()

        # Add the white lines across the graphs; the location of the lines are based off set_{}ticks
        axes.grid(alpha=.8, b=True, which='both', axis='y', color=arg.graphcolor, linewidth=arg.graphthicknes)

        # Remove X axis info
        axes.set_xticks([])

    # Add a another Y axis so ticks are on both sides
    tmp_ax1 = ax1.secondary_yaxis("left")
    tmp_ax2 = ax2.secondary_yaxis("left")

    # Set both to the same values
    ax1.set_yticks(np.arange(arg.fpsmin, arg.fpsmax + 1, step=arg.fpsstep))
    ax2.set_yticks(np.arange(arg.frametimemin, arg.frametimemax + 1, step=arg.frametimestep))
    tmp_ax1.set_yticks(np.arange(arg.fpsmin , arg.fpsmax + 1, step=arg.fpsstep))
    tmp_ax2.set_yticks(np.arange(arg.frametimemin, arg.frametimemax + 1, step=arg.frametimestep))

    # Change the "ticks" to be white and correct size also change font size
    ax1.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=16, labelsize=arg.textsize, labelcolor=arg.graphcolor)
    ax2.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=16, labelsize=arg.textsize, labelcolor=arg.graphcolor)
    tmp_ax1.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=8, labelsize=0) # Label size of 0 disables the fps/frame numbers
    tmp_ax2.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=8, labelsize=0)


    # Limits Y scale
    ax1.set_ylim(arg.fpsmin,arg.fpsmax + 1)
    ax2.set_ylim(arg.frametimemin,arg.frametimemax + 1)

    # Add an empty plot
    line = ax1.plot([], lw=arg.fpsthickness)
    line2 = ax2.plot([], lw=arg.frametimethickness)

    # Sets all the data for our benchmark
    for benchmarks, color in zip(data_formated, arg.colors):
        y = moving_average([x[0] for x in benchmarks], 25)
        y2 = [x[1] for x in benchmarks]
        x = [x[2] for x in benchmarks]
        line += ax1.plot(x[12:-12],y, c=color, lw=arg.fpsthickness)
        line2 += ax2.step(x,y2, c=color, lw=arg.frametimethickness)
    
    # Add titles with values
    ax1.set_title("Avg. frames per second: {}".format(y2), color=arg.graphcolor, fontsize=20, fontweight='bold', loc='left')
    ax2.set_title("Frametime in ms: {}".format(y2), color=arg.graphcolor, fontsize=20, fontweight='bold', loc='left')  

    # Removes unwanted white space; also controls the space between the two graphs
    plt.tight_layout(pad=0, h_pad=0, w_pad=2.5)
    
    fig.canvas.draw()

    # Cache the background
    axbackground = fig.canvas.copy_from_bbox(ax1.bbox)
    ax2background = fig.canvas.copy_from_bbox(ax2.bbox)


# Create a ffmpeg instance as a subprocess we will pipe the finished frame into ffmpeg
# encoded in Apple QuickTime (qtrle) for small(ish) file size and alpha support
# There are free and opensource types that will also do this but with much larger sizes
canvas_width, canvas_height = fig.canvas.get_width_height()
outf = '{}.mov'.format(arg.output)
cmdstring = ('ffmpeg',
                '-stats', '-hide_banner', '-loglevel', 'error', # Makes ffmpeg less noisy / less console output
                '-y', '-r', '60', # set the fps of the video
                '-s', '%dx%d' % (canvas_width, canvas_height), # size of image string
                '-pix_fmt', 'argb', # format can't be changed since this is what `fig.canvas.tostring_argb()` outputs
                '-f', 'rawvideo',  '-i', '-', # tell ffmpeg to expect raw video from the pipe
                '-vcodec', 'qtrle', outf) # output encoding must support alpha channel
pipe = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)

def render_frame(frame : int):

    # Set the bounds of the graph for each frame to render the correct data
    start = (frame * step) - frame_size_fps
    end = start + frame_size_fps
    ax1.set_xlim(start,end)
     
     
    start = (frame * step) - frame_size_frametime
    end = start + frame_size_frametime
    ax2.set_xlim(start,end)
    

    # Restore background
    fig.canvas.restore_region(axbackground)
    fig.canvas.restore_region(ax2background)

    # Redraw just the points will only draw points with in `axes.set_xlim`
    for i in line:
        ax1.draw_artist(i)
        
    for i in line2:
        ax2.draw_artist(i)

    # Fill in the axes rectangle
    fig.canvas.blit(ax1.bbox)
    fig.canvas.blit(ax2.bbox)
    
    fig.canvas.flush_events()

    # Converts the finished frame to ARGB
    string = fig.canvas.tostring_argb()
    return string




#import multiprocessing
#p = multiprocessing.Pool()
#for i, _ in enumerate(p.imap(render_frame, range(0, int(total_frames + max_blocksize))), 20):
#    pipe.stdin.write(_)
#    sys.stderr.write('\rdone {0:%}'.format(i/(total_frames + max_blocksize)))
#p.close()

# Single-threaded; not much slower than multi-threading
if __name__ == "__main__":
    for i, frame in enumerate(range(0, int(total_frames + max_blocksize))):
        pipe.stdin.write(render_frame(frame)) # render each frame once and pipe it to ffmpeg
        sys.stderr.write('\rdone {0:%}'.format(i/(total_frames + max_blocksize)))


    
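    A minimal sketch of the title-update approach (made-up data, not the frame_scan.py inputs above): keep a reference to the Text artist returned by set_title and call set_text() inside the per-frame function, instead of formatting the whole y2 list once before the render loop.

    ```python
    import matplotlib
    matplotlib.use("Agg")  # headless backend so this runs without a display
    import matplotlib.pyplot as plt
    import numpy as np

    # Hypothetical benchmark data: 300 FPS samples and derived frame times in ms
    fps = np.random.default_rng(0).uniform(55.0, 65.0, 300)
    frametime = 1000.0 / fps

    fig, (ax1, ax2) = plt.subplots(1, 2)
    # set_title returns a Text artist; keep it so it can be updated per frame
    title1 = ax1.set_title("", loc="left")
    title2 = ax2.set_title("", loc="left")

    def update_titles(frame, window=60):
        # average FPS over the currently visible window, latest frametime sample
        chunk = fps[max(0, frame - window):frame + 1]
        title1.set_text("Avg. frames per second: {:.1f}".format(chunk.mean()))
        title2.set_text("Frametime in ms: {:.2f}".format(frametime[frame]))

    update_titles(120)
    ```

    Note that with manual blitting as in the script above, the background cached by copy_from_bbox already contains whatever the title looked like at draw() time, and the title sits outside ax.bbox; so the updated Text artist must also be re-drawn each frame and blitted over a region that includes it, or the old text will persist.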


  • ffmpeg - when cutting video, last frame is green [on hold]

    4 July 2016, by Mister Fresh

    In a PHP web app, the user can upload a video and cut it into samples. PHP launches a shell command with shell_exec() to start ffmpeg and cut the video.

    It works except that the last frame shows a green splash in the cut sample.

    The server runs Linux. In the dev environment (Mac) with the latest ffmpeg, the problem cannot be reproduced.

    The command is the following:

    ffmpeg -i /var/www/user_dir/web/projects/127472/cuts/1146 -ss 00:00:45 -t 15 -vcodec libx264 -s 640x360 -strict experimental -v quiet -y /var/www/user_dir/web/projects/127472/cuts/sample1954_small.mp4

    Console output:

    ffmpeg version 0.8.17-6:0.8.17-1,

    Copyright (c) 2000-2014 the Libav developers

    built on Mar 15 2015 17:00:31 with gcc 4.7.2

    configuration: --arch=amd64 --enable-pthreads --enable-runtime-cpudetect --extra-version='6:0.8.17-1' --libdir=/usr/lib/x86_64-linux-gnu --prefix=/usr --enable-bzlib --enable-libdc1394 --enable-libdirac --enable-libfreetype --enable-frei0r --enable-gnutls --enable-libgsm --enable-libmp3lame --enable-librtmp --enable-libopencv --enable-libopenjpeg --enable-libpulse --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-vaapi --enable-vdpau --enable-libvorbis --enable-libvpx --enable-zlib --enable-gpl --enable-postproc --enable-swscale --enable-libcdio --enable-x11grab --enable-libx264 --enable-libxvid --shlibdir=/usr/lib/x86_64-linux-gnu --enable-shared --disable-static

    libavutil    51. 22. 3 / 51. 22. 3

    libavcodec   53. 35. 0 / 53. 35. 0

    libavformat  53. 21. 1 / 53. 21. 1

    libavdevice  53.  2. 0 / 53.  2. 0

    libavfilter   2. 15. 0 /  2. 15. 0

    libswscale    2.  1. 0 /  2.  1. 0

    libpostproc  52.  0. 0 / 52.  0. 0

    The ffmpeg program is only provided for script compatibility and will be removed
    in a future release. It has been deprecated in the Libav project to allow for
    incompatible command line syntax improvements in its replacement called avconv
    (see Changelog for details). Please use avconv instead.

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/var/www/user_dir/web/projects/127472/cuts/1146':

    Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf56.15.103
       genre           : Blues

    Duration: 00:01:29.47, start: 0.036281, bitrate: 808 kb/s

    Stream #0.0(und): Video: h264 (High), yuv420p, 1920x1080 [PAR 1:1 DAR 16:9], 675 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc

    Stream #0.1(und): Audio: aac, 44100 Hz, stereo, s16, 128 kb/s
    [buffer @ 0x30a2ca0] w:1920 h:1080 pixfmt:yuv420p
    [scale @ 0x309bf00] w:1920 h:1080 fmt:yuv420p -> w:640 h:360 fmt:yuv420p flags:0x4
    [libx264 @ 0x30a3200] using SAR=1/1
    [libx264 @ 0x30a3200] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2
    [libx264 @ 0x30a3200] profile Main, level 3.0
    [libx264 @ 0x30a3200] 264 - core 123 r2189 35cf912 - H.264/MPEG-4 AVC codec -

       Copyleft 2003-2012 - http://www.videolan.org/x264.html
    - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x1:0x111 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=1 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=0 b_adapt=1 b_bias=0 direct=1 weightb=0 open_gop=1 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.25 aq=1:1.00

    Output #0, mp4, to '/var/www/user_dir/web/projects/127472/cuts/sample1954_small.mp4':

    Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       genre           : Blues
       encoder         : Lavf53.21.1

    Stream #0.0(und): Video: libx264, yuv420p, 640x360 [PAR 1:1 DAR 16:9], q=-1--1, 25 tbn, 25 tbc

    Stream #0.1(und): Audio: aac, 44100 Hz, stereo, s16, 200 kb/s
    Stream mapping:
     Stream #0.0 -> #0.0
     Stream #0.1 -> #0.1

    Press ctrl-c to stop encoding

    [buffer @ 0x30a2ca0] Buffering several frames is not supported. Please consume all available frames before adding a new one.

    Last message repeated 1124 times 565kB time=12.80 bitrate= 361.7kbits/s    ts/s    
    frame=  375 fps= 17 q=28.0 Lsize=     742kB time=14.96 bitrate= 406.4kbits/s    
    video:360kB audio:371kB global headers:0kB muxing overhead 1.515374%
    frame I:3     Avg QP:16.97  size:  6922

    [libx264 @ 0x30a3200] frame P:158   Avg QP:24.40  size:  1858

    [libx264 @ 0x30a3200] frame B:214   Avg QP:28.69  size:   253

    [libx264 @ 0x30a3200] consecutive B-frames: 13.9% 28.3%  5.6% 52.3%

    [libx264 @ 0x30a3200] mb I  I16..4: 70.8%  0.0% 29.2%

    [libx264 @ 0x30a3200] mb P  I16..4:  4.8%  0.0%  1.6%  P16..4: 16.0%  3.9%  2.0%  0.0%  0.0%    skip:71.9%

    [libx264 @ 0x30a3200] mb B  I16..4:  0.5%  0.0%  0.1%  B16..8: 11.1%  0.7%  0.1%  direct: 0.1%  skip:87.4%  L0:44.2% L1:44.7% BI:11.1%

    [libx264 @ 0x30a3200] coded y,uvDC,uvAC intra: 19.3% 30.9% 23.5% inter: 2.3% 2.3% 1.3%

    [libx264 @ 0x30a3200] i16 v,h,dc,p: 39% 47%  2% 12%

    [libx264 @ 0x30a3200] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20% 25% 32%  5%  5%  3%  4%  3%  3%

    [libx264 @ 0x30a3200] i8c dc,h,v,p: 51% 40%  7%  2%

    [libx264 @ 0x30a3200] Weighted P-Frames: Y:0.0% UV:0.0%

    [libx264 @ 0x30a3200] ref P L0: 63.7%  4.9% 19.3% 12.1%

    [libx264 @ 0x30a3200] ref B L0: 67.6% 32.4%

    [libx264 @ 0x30a3200] kb/s:196.48
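    For reference (not from the original post): on old Libav/FFmpeg builds like the 0.8 shown in this log, placing -ss after -i triggers a slow decode-and-discard seek that is a known source of artifacts at cut boundaries; a commonly suggested workaround is to move -ss before -i for fast input-side seeking. A sketch of how such a command could be assembled before handing it to shell_exec() (file names here are placeholders, not the real paths from the question):

    ```python
    # Build the cut command with input seeking (-ss before -i).
    def build_cut_command(src, dst, start="00:00:45", duration=15):
        return [
            "ffmpeg",
            "-ss", str(start),   # seek on the input side (fast, keyframe-based)
            "-i", src,
            "-t", str(duration),
            "-vcodec", "libx264",
            "-s", "640x360",
            "-y", dst,
        ]

    cmd = build_cut_command("in.mp4", "sample_small.mp4")
    ```

    Upgrading past the deprecated Libav 0.8 ffmpeg shim, as the dev machine apparently already has, is the other usual fix.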
  • high memory & cpu usage in ffmpeg C sample when muxing audio and video

    24 June 2016, by chandu

    I am using the muxing.c example provided with ffmpeg 3.0 to create an MP4 file (H.264 & AAC) with VS 2013.

    The sample works fine with the default width & height for video, but when I changed the width to 1920 and the height to 1080, the sample takes nearly 400 MB & 60-70% CPU usage (per Task Manager, in Release mode) throughout the program. I have used multi-threading as well.

    I tried to free the encoded packet after calling write_frame(), but to no success.
    The memory is being released only after calling avcodec_close().

    Could anybody please tell me what I am doing wrong ?

    I am adding link (https://drive.google.com/open?id=0B75_-V7se7tmWUhyM0ItS0kzUVk) to code I tested with VS 2013.

    The screenshot link https://drive.google.com/open?id=0B75_-V7se7tmVm4tUjFtSnNNSHc

    The STREAM_DURATION value in the sample is set to 120 seconds (I even tested with 600 seconds), and I changed the default height & width values of the AVCodecContext in the add_stream function for the video type to 1080 & 1920 respectively. Throughout the program it takes 355 MB, not changing at all. I think that once a frame is encoded using avcodec_encode_video2 and written to the file, the memory should be released.

    Please correct me if I am wrong.
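    As a back-of-the-envelope check (not from the post itself): a raw 1080p YUV 4:2:0 frame is about 3 MB, so an encoder that holds on to a few dozen frames for lookahead, references and B-frame reordering can plausibly account for a few hundred MB that are only released when avcodec_close() frees its buffers. The frame count here is a hypothetical depth, purely for illustration:

    ```python
    # Rough memory estimate for raw 1080p YUV 4:2:0 frames held by an encoder.
    width, height = 1920, 1080
    bytes_per_frame = width * height * 3 // 2   # Y plane + quarter-size U and V planes
    frames_held = 100                           # hypothetical lookahead/reference depth
    total_mb = frames_held * bytes_per_frame / (1024 * 1024)
    ```

    That works out to roughly 3.1 MB per frame and just under 300 MB for 100 buffered frames, the same order of magnitude as the 355 MB observed.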