Other articles (46)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Depositing media and themes via FTP

    31 May 2013

    The MédiaSPIP tool also handles media transferred over FTP. If you prefer to deposit files this way, retrieve the access credentials for your MédiaSPIP site and use your favourite FTP client.
    From the start you will find the following directories in your FTP space: config/: the site's configuration directory; IMG/: media already processed and online on the site; local/: the site's cache directory; themes/: custom themes and stylesheets; tmp/: working directory (...)
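
    As a quick illustration (not from the article itself; the host, credentials and file name below are placeholders), a deposit over FTP from Python might look like this:

    from ftplib import FTP

    #hypothetical credentials: use the access identifiers supplied for your MédiaSPIP site
    ftp = FTP('ftp.example.org')
    ftp.login(user='mediaspip', passwd='secret')
    #upload a media file into the FTP space
    with open('clip.mp4', 'rb') as f:
        ftp.storbinary('STOR clip.mp4', f)
    ftp.quit()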

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media-sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (2061)

  • TypeError: 'function' object is not subscriptable (Python with ffmpeg)

    25 November 2019, by S1mple
    import os
    import random
    import subprocess

    clips = []

    def clipFinder(CurrentDir, fileType):
        #collect every file of the given type under CurrentDir, then shuffle
        clips.clear()
        for r, d, f in os.walk(CurrentDir):
            for file in f:
                if fileType in file:
                    clips.append(r + file)
        random.shuffle(clips)

    def removeVods(r):
        #delete clips whose file name contains 'vod'
        for f in clips:
            if 'vod' in f:
                os.remove(r + f)

    def clipString():
        #build the 'intermediate1.ts|intermediate2.ts|...' string for ffmpeg's concat protocol
        string = 'intermediate'
        clipList = []
        clipNum = 1
        for f in clips:
            clipList.append(string + str(clipNum) + '.ts' + '|')
            clipNum += 1
        string1 = ''.join(clipList)
        string2 = string1[0:len(string1) - 1]  #drop the trailing '|'
        return string2

    def concatFiles():
        clipFinder('***', '.mp4')
        removeVods('***')
        i = 0
        intermediates = []
        for f in clips:
            #remux each clip to an MPEG-TS intermediate
            subprocess.call(['***', '-i', clips[i], '-c', 'copy', '-bsf:v', 'h264_mp4toannexb', '-f', 'mpegts', 'intermediate' + str(i + 1) + '.ts'])
            i += 1
        clipsLength = len(clips)
        #this call raises the TypeError
        subprocess.call['***', '-i', '"concat:' + clipString() + '"', '-c', 'copy', '-bsf:a aac_adtstoasc', 'output.mp4']

    I am trying to make a clip concatenator, but the last subprocess call won’t run and gives me the error shown in the question.

    Problematic code:

    subprocess.call['***', '-i', '"concat:' + clipString() + '"', '-c', 'copy', '-bsf:a aac_adtstoasc', 'output.mp4']

    All places with *** were paths, such as:
    /davidscomputer/bin/ffmpeg/
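
    The likely fix, sketched with the same *** path placeholder: subprocess.call is a function, so it must be called with parentheses rather than indexed with square brackets, and each ffmpeg option belongs in its own list element:

    import subprocess

    #parentheses call the function; square brackets try to subscript the function
    #object, which is what raises "TypeError: 'function' object is not subscriptable"
    #'-bsf:a' and 'aac_adtstoasc' are separate list elements, and the concat string
    #needs no extra quotes since subprocess passes arguments without a shell
    subprocess.call(['***', '-i', 'concat:' + clipString(), '-c', 'copy',
                     '-bsf:a', 'aac_adtstoasc', 'output.mp4'])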

  • Concatenate list of streams with ffmpeg

    16 November 2019, by oioioi

    I’d like to write Python code that automatically rearranges the scenes of a video. How do I use ffmpeg-python to concatenate a list of n streams/scenes? What is the easiest way to merge those scenes with their audio counterparts?
    Unfortunately, I couldn’t figure out how to do it properly.

    The status quo:

    import os

    import scenedetect
    from scenedetect.video_manager import VideoManager
    from scenedetect.scene_manager import SceneManager
    from scenedetect.frame_timecode import FrameTimecode
    from scenedetect.stats_manager import StatsManager
    from scenedetect.detectors import ContentDetector

    import ffmpeg

    STATS_FILE_PATH = 'stats.csv'

    def main():

       for root, dirs, files in os.walk('material'):
           for file in files:
               file = os.path.join(root, file)
               print(file)
               video_manager = VideoManager([file])
               stats_manager = StatsManager()
               scene_manager = SceneManager(stats_manager)
               scene_manager.add_detector(ContentDetector())
               base_timecode = video_manager.get_base_timecode()
               end_timecode = video_manager.get_duration()

               start_time = base_timecode
               end_time = base_timecode + 100.0
               #end_time = end_timecode[2]

               video_manager.set_duration(start_time=start_time, end_time=end_time)
               video_manager.set_downscale_factor()
               video_manager.start()
               scene_manager.detect_scenes(frame_source=video_manager)

               scene_list = scene_manager.get_scene_list(base_timecode)

               print('List of scenes obtained:')
               for i, scene in enumerate(scene_list):
                   print('    Scene %2d: Start %s / Frame %d, End %s / Frame %d' % (
                       i+1,
                       scene[0].get_timecode(), scene[0].get_frames(),
                       scene[1].get_timecode(), scene[1].get_frames(),))

                   start = scene[0].get_frames()
                   end = scene[1].get_frames()

                   print(start)
                   print(end)

                   (
                       ffmpeg
                       .input(file)
                       .trim(start_frame=start, end_frame=end)
                       .setpts('PTS-STARTPTS')
                       .output('scene %d.mp4' % (i+1))
                       .run()
                   )

               if stats_manager.is_save_required():
                   with open(STATS_FILE_PATH, 'w') as stats_file:
                       stats_manager.save_to_csv(stats_file, base_timecode)

    if __name__ == "__main__":
       main()
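
    One possible answer, sketched as an assumption rather than a verified solution (file and scene_list are the variables from the code above; rearranged.mp4 is a placeholder name): instead of writing each scene to its own file, keep the trimmed video and audio of every scene as streams and join them with ffmpeg-python's concat filter:

    import ffmpeg

    inp = ffmpeg.input(file)
    segments = []
    for scene in scene_list:
        start, end = scene[0].get_seconds(), scene[1].get_seconds()
        #trim video and audio separately and reset their timestamps
        v = inp.video.trim(start=start, end=end).setpts('PTS-STARTPTS')
        a = inp.audio.filter('atrim', start=start, end=end).filter('asetpts', 'PTS-STARTPTS')
        segments.extend([v, a])

    #concat expects the streams interleaved (v0, a0, v1, a1, ...) when v=1, a=1
    joined = ffmpeg.concat(*segments, v=1, a=1).node
    ffmpeg.output(joined[0], joined[1], 'rearranged.mp4').run()

    Reordering the (video, audio) pairs in segments before the concat call is what rearranges the scenes.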

  • WARN: Tried to pass invalid video frame, marking as broken: Your frame has data type int64, but we require uint8

    5 September 2019, by Tavo Diaz

    I am doing some Udemy AI courses and came across one that "teaches" a two-dimensional cheetah how to walk. I was doing the exercises on my computer, but it took too much time, so I decided to use Google Cloud to run the code and check the results a few hours later. However, when I run the code I get the following error: "WARN: Tried to pass invalid video frame, marking as broken: Your frame has data type int64, but we require uint8 (i.e. RGB values from 0-255)".

    After the code executes, I look into the folder and don't see any videos (just the meta info).

    Some more info (if it helps):
    I have 1 CPU (4 GB), an SSD, and Ubuntu 16.04 LTS

    I have not tried anything yet to solve it because I don't know what to try. I'm looking for solutions on the web, but haven't found anything I could try.

    This is the code:

    import os
    import numpy as np
    import gym
    from gym import wrappers
    import pybullet_envs


    class Hp():
       def __init__(self):
           self.nb_steps = 1000
           self.episode_length = 1000
           self.learning_rate = 0.02
           self.nb_directions = 32
           self.nb_best_directions = 32
           assert self.nb_best_directions <= self.nb_directions
           self.noise = 0.03
           self.seed = 1
           self.env_name = 'HalfCheetahBulletEnv-v0'


    class Normalizer():
       def __init__(self, nb_inputs):
           self.n = np.zeros(nb_inputs)
           self.mean = np.zeros(nb_inputs)
           self.mean_diff = np.zeros(nb_inputs)
           self.var = np.zeros(nb_inputs)

       def observe(self, x):
           self.n += 1.
           last_mean = self.mean.copy()
           self.mean += (x - self.mean) / self.n
           #below is the online numerator update
           self.mean_diff += (x - last_mean) * (x - self.mean)
           #below: online computation of the variance
           self.var = (self.mean_diff / self.n).clip(min = 1e-2)  

       def normalize(self, inputs):
           obs_mean = self.mean
           obs_std = np.sqrt(self.var)
           return (inputs - obs_mean) / obs_std

    class Policy():
       def __init__(self, input_size, output_size):
           self.theta = np.zeros((output_size, input_size))

       def evaluate(self, input, delta = None, direction = None):
           if direction is None:
               return self.theta.dot(input)
           elif direction == 'positive':
               return (self.theta + hp.noise * delta).dot(input)
           else:
               return (self.theta - hp.noise * delta).dot(input)

       def sample_deltas(self):
           return [np.random.randn(*self.theta.shape) for _ in range(hp.nb_directions)]

       def update(self, rollouts, sigma_r):
           step = np.zeros(self.theta.shape)
           for r_pos, r_neg, d in rollouts:
               step += (r_pos - r_neg) * d
           self.theta += hp.learning_rate / (hp.nb_best_directions * sigma_r) * step


    def explore(env, normalizer, policy, direction = None, delta = None):
       state = env.reset()
       done = False
       num_plays = 0.
       #below: this could be an average of the rewards
       sum_rewards = 0
       while not done and num_plays < hp.episode_length:
           normalizer.observe(state)
           state = normalizer.normalize(state)
           action = policy.evaluate(state, delta, direction)
           state, reward, done, _ = env.step(action)
           reward = max(min(reward, 1), -1)
           #below: an average would go here
           sum_rewards += reward
           num_plays += 1
       return sum_rewards

    def train (env, policy, normalizer, hp):
       for step in range(hp.nb_steps):
           #initialize the delta perturbations and the positive/negative rewards
           deltas = policy.sample_deltas()
           positive_rewards = [0] * hp.nb_directions
           negative_rewards = [0] * hp.nb_directions
           #get the rewards in the positive direction
           for k in range(hp.nb_directions):
               positive_rewards[k] = explore(env, normalizer, policy, direction = 'positive', delta = deltas[k])
           #get the rewards in the negative direction
           for k in range(hp.nb_directions):
               negative_rewards[k] = explore(env, normalizer, policy, direction = 'negative', delta = deltas[k])
           #gather all the rewards to compute the standard deviation
           all_rewards = np.array(positive_rewards + negative_rewards)
           sigma_r = all_rewards.std()
           #rank the rollouts by max(r_pos, r_neg) and keep the best directions
           scores = {k:max(r_pos, r_neg) for k, (r_pos, r_neg) in enumerate(zip(positive_rewards, negative_rewards))}
           order = sorted(scores.keys(), key = lambda x:scores[x], reverse = True)[:hp.nb_best_directions]
           rollouts = [(positive_rewards[k], negative_rewards[k], deltas[k]) for k in order]
           #update the policy
           policy.update(rollouts, sigma_r)
           #evaluate the final reward of the policy after the update
           reward_evaluation = explore(env, normalizer, policy)
           print('Step: ', step, 'Distance: ', reward_evaluation)

    def mkdir(base, name):
       path = os.path.join(base, name)
       if not os.path.exists(path):
           os.makedirs(path)
       return path

    work_dir = mkdir('exp', 'brs')
    monitor_dir = mkdir(work_dir, 'monitor')

    hp = Hp()
    np.random.seed(hp.seed)
    env = gym.make(hp.env_name)
    env = wrappers.Monitor(env, monitor_dir, force = True)
    nb_inputs = env.observation_space.shape[0]
    nb_outputs = env.action_space.shape[0]
    policy = Policy(nb_inputs, nb_outputs)
    normalizer = Normalizer(nb_inputs)
    train(env, policy, normalizer, hp)
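
    The warning comes from gym's video recorder rejecting rendered frames whose dtype is int64 instead of uint8. One possible workaround, sketched as an assumption (Uint8FrameWrapper is a hypothetical helper, not part of the course code): cast the rendered frames to uint8 before the Monitor records them.

    import numpy as np
    import gym

    class Uint8FrameWrapper(gym.Wrapper):
       #hypothetical wrapper: casts rendered frames to uint8 (RGB values 0-255),
       #which is the dtype gym's video recorder requires
       def render(self, *args, **kwargs):
           frame = self.env.render(*args, **kwargs)
           if isinstance(frame, np.ndarray) and frame.dtype != np.uint8:
               frame = np.clip(frame, 0, 255).astype(np.uint8)
           return frame

    env = gym.make(hp.env_name)
    env = Uint8FrameWrapper(env)  #wrap before the Monitor so recorded frames are uint8
    env = wrappers.Monitor(env, monitor_dir, force = True)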