
Other articles (74)
-
Organise by category
17 May 2013
In MédiaSPIP, a rubrique has two names: category and rubrique.
The various documents stored in MédiaSPIP can be filed under different categories. You can create a category by clicking on "publish a category" in the "publish" menu at the top right (after logging in). A category can itself be filed under another category, which means you can build a tree of categories.
When you next publish a document, the newly created category will be offered (...)
-
Publish on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.
-
Use, discuss, criticize
13 April 2011
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
On other sites (6237)
-
avformat/vobsub : fix several issues.
29 September 2013, by Clément Bœsch
avformat/vobsub : fix several issues.
Here is an extract of fate-samples/sub/vobsub.idx, with an additional
text at the end of each line to better identify each bitmap :
timestamp : 00:04:55:445, filepos : 00001b000 Ace !
timestamp : 00:05:00:049, filepos : 00001b800 Wake up, honey !
timestamp : 00:05:02:018, filepos : 00001c800 I gotta go to work.
timestamp : 00:05:02:035, filepos : 00001d000 < ???>
timestamp : 00:05:04:203, filepos : 00001d800 Look after Clayton, okay ?
timestamp : 00:05:05:947, filepos : 00001e800 I’ll be back tonight.
timestamp : 00:05:07:957, filepos : 00001f800 Bye ! Love you.
timestamp : 00:05:21:295, filepos : 000020800 Hey, Ace ! What’s up ?
timestamp : 00:05:23:356, filepos : 000021800 Hey, how’s it going ?
timestamp : 00:05:24:640, filepos : 000022800 Remember what today is ? The 3rd !
timestamp : 00:05:27:193, filepos : 000023800 Look over there !
timestamp : 00:05:28:369, filepos : 000024800 Where are they going ?
timestamp : 00:05:28:361, filepos : 000025000 < ???>
timestamp : 00:05:29:946, filepos : 000025800 Let’s go see.
timestamp : 00:05:31:230, filepos : 000026000 I can’t, man. I got Clayton.

Note the two "< ???>" : they are basically split subtitles (with the
previous one), which the dvdsub decoder is now supposed to reconstruct
with a previous commit. But also note that while the first chunk has
increasing timestamps,

timestamp : 00:05:02:018, filepos : 00001c800
timestamp : 00:05:02:035, filepos : 00001d000

...it’s not the case of the second one (and this is not an exception in the
original file) :

timestamp : 00:05:28:369, filepos : 000024800
timestamp : 00:05:28:361, filepos : 000025000

For the dvdsub decoder, they need to be "filepos’ed" ordered, but the
FFDemuxSubtitlesQueue is timestamps ordered, which is the reason for the
introduction of a sub sort method in the context, to allow giving
priority to the position, and then the timestamps. With that change, the
dvdsub decoder gets fed with ordered packets.
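To illustrate the ordering described above, here is a minimal Python sketch (not FFmpeg's actual C implementation; the packet is reduced to the two fields that matter here):

from dataclasses import dataclass

@dataclass
class SubPacket:
    filepos: int  # byte offset of the chunk in the .sub file
    pts: int      # timestamp converted to milliseconds

def order_for_dvdsub(packets):
    # Give priority to the file position, then to the timestamp, so that a
    # split chunk with a slightly earlier timestamp keeps its file order.
    return sorted(packets, key=lambda p: (p.filepos, p.pts))

# The two out-of-order entries quoted above, in timestamp order:
pkts = [SubPacket(0x25000, 328361), SubPacket(0x24800, 328369)]
assert [hex(p.filepos) for p in order_for_dvdsub(pkts)] == ['0x24800', '0x25000']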
Now the packet size estimation was also broken : the filepos differences
in the vobsub index define the full data read between two subtitle
chunks, and it is necessary to take into account what is read by the
mpegps_read_pes_header() function, since the length returned by that
function doesn’t count the size of the data it reads. This is fixed with
the introduction of total_read, and old,new_pos. By doing this change,
we can drop the unreliable len16 heuristic and simplify the whole loop.
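A rough Python sketch of that bookkeeping (hypothetical names; the real code is C inside the vobsub demuxer and parses MPEG-PS from the .sub file):

def read_sub_chunk(f, filepos_cur, filepos_next, read_pes_header):
    # psize is everything stored between two consecutive index entries.
    psize = filepos_next - filepos_cur
    total_read = 0
    payload = b''
    f.seek(filepos_cur)
    while total_read < psize:
        old_pos = f.tell()
        pkt_size = read_pes_header(f)      # returns the payload length only
        total_read += f.tell() - old_pos   # header bytes the parser consumed
        if total_read + pkt_size > psize:  # parser already ran into the next chunk
            break
        payload += f.read(pkt_size)
        total_read += pkt_size
    return payload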
Note that mpegps_read_pes_header() often reads more than one PES packet
(typically in one call it can read 0x1ba and 0x1be chunks along with the
relevant 0x1bd packet), which triggers the "total_read + pkt_size >
psize" check. This is an expected behaviour, which could be avoided by
having a more chunked version of mpegps_read_pes_header().

The latest change is the extraction of each stream into its own
subtitles queue. If we don’t do this, the maximum size for a subtitle
chunk is broken, and the previous changes can not work. Having each
stream in a different queue requires some little adjustments in the
seek code of the demuxer.

This commit is only meaningful as a whole change and can not be easily
split. The FATE test changes because it uses the vobsub demuxer.
-
Android recorded video getting rotated after using ffmpeg
6 November 2014, by Vaeianor
I’m developing an Android app in which users can record a video, trim it, and then upload it to my server. I’m using the MediaRecorder class to handle the recording and ffmpeg to trim the recorded video. The problem I’m having with ffmpeg is that the video always gets rotated either 90 or 180 degrees after being trimmed. I know I can add a video filter (transpose=1) to the ffmpeg command to rotate the video, but that would require re-encoding the video. In my case, I don’t want to re-encode the video, as it takes too long. Instead, I’m using "-vcodec copy" in the ffmpeg command to keep the same video codec.
Because I’m setting an orientation hint to the media recorder, the media recorder always adds "rotate=90" or "rotate=180" to the video metadata. I think that’s why the video is always getting rotated by ffmpeg.
So I was wondering if there is a way to rotate the video without re-encoding it. Or if there is a way to modify the metadata (rotate) of a recorded video before trimming it with ffmpeg.
Please help ! The problem has been driving me crazy...
Thanks in advance !
Here is the ffmpeg command :
/data/data/com.xxx.xxx/app_bin/ffmpeg -y -ss 00:00:00 -t 4.000000 -i file:/storage/sdcard0/Movies/xxx/vid.mp4 -vcodec copy -acodec copy -metadata:s:v:0 rotate=0 - strict -2 file:/storage/sdcard0/Movies/xxx/vid_new.mp4
Below is the console output :
I/ShellCallback : shellOut()(9781): ffmpeg version 0.11.1 Copyright (c) 2000-2012 the FFmpeg developers
I/ShellCallback : shellOut()(9781): built on Nov 15 2013 00:50:10 with gcc 4.6 20120106 (prerelease)
I/ShellCallback : shellOut()(9781): configuration: --arch=arm --cpu=cortex-a8 --target-os=linux --enable-runtime-cpudetect --enable-small --prefix=/data/data/info.guardianproject.ffmpeg/app_opt --enable-pic --disable-shared --enable-static --cross-prefix=/home/n8fr8/dev/android/ndk//toolchains/arm-linux-androideabi-4.6/prebuilt/linux-x86_64/bin/arm-linux-androideabi- --sysroot=/home/n8fr8/dev/android/ndk//platforms/android-3/arch-arm --extra-cflags='-I../x264 -mfloat-abi=softfp -mfpu=neon' --extra-ldflags=-L../x264 --enable-version3 --enable-gpl --disable-doc --enable-yasm --enable-decoders --enable-encoders --enable-muxers --enable-demuxers --enable-parsers --enable-protocols --enable-filters --enable-avresample --enable-libfreetype --disable-indevs --enable-indev=lavfi --disable-outdevs --enable-hwaccels --enable-ffmpeg --disable-ffplay --disable-ffprobe --disable-ffserver --disable-network --enable-libx264 --enable-zlib --enable-muxer=md5
I/ShellCallback : shellOut()(9781): libavutil 51. 54.100 / 51. 54.100
I/ShellCallback : shellOut()(9781): libavcodec 54. 23.100 / 54. 23.100
I/ShellCallback : shellOut()(9781): libavformat 54. 6.100 / 54. 6.100
I/ShellCallback : shellOut()(9781): libavdevice 54. 0.100 / 54. 0.100
I/ShellCallback : shellOut()(9781): libavfilter 2. 77.100 / 2. 77.100
I/ShellCallback : shellOut()(9781): libswscale 2. 1.100 / 2. 1.100
I/ShellCallback : shellOut()(9781): libswresample 0. 15.100 / 0. 15.100
I/ShellCallback : shellOut()(9781): libpostproc 52. 0.100 / 52. 0.100
I/ShellCallback : shellOut()(9781): Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'file:/storage/sdcard0/Movies/xxx/vid.mp4':
I/ShellCallback : shellOut()(9781): Metadata:
I/ShellCallback : shellOut()(9781): major_brand : isom
I/ShellCallback : shellOut()(9781): minor_version : 0
I/ShellCallback : shellOut()(9781): compatible_brands: isom3gp4
I/ShellCallback : shellOut()(9781): creation_time : 2014-09-17 17:25:50
I/ShellCallback : shellOut()(9781): Duration: 00:00:04.69, start: 0.000000, bitrate: 2969 kb/s
I/ShellCallback : shellOut()(9781): Stream #0:0(eng): Video: h264 (avc1 / 0x31637661), yuv420p, 720x480, 2989 kb/s, 29.89 fps, 30 tbr, 90k tbn, 180k tbc
I/ShellCallback : shellOut()(9781): Metadata:
I/ShellCallback : shellOut()(9781): rotate : 90
I/ShellCallback : shellOut()(9781): creation_time : 2014-09-17 17:25:50
I/ShellCallback : shellOut()(9781): handler_name : VideoHandle
I/ShellCallback : shellOut()(9781): Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 44100 Hz, mono, s16, 128 kb/s
I/ShellCallback : shellOut()(9781): Metadata:
I/ShellCallback : shellOut()(9781): creation_time : 2014-09-17 17:25:50
I/ShellCallback : shellOut()(9781): handler_name : SoundHandle
I/ShellCallback : shellOut()(9781): Output #0, mp4, to 'file:/storage/sdcard0/Movies/xxx/vid_new.mp4':
I/ShellCallback : shellOut()(9781): Metadata:
I/ShellCallback : shellOut()(9781): major_brand : isom
I/ShellCallback : shellOut()(9781): minor_version : 0
I/ShellCallback : shellOut()(9781): compatible_brands: isom3gp4
I/ShellCallback : shellOut()(9781): creation_time : 2014-09-17 17:25:50
I/ShellCallback : shellOut()(9781): encoder : Lavf54.6.100
I/ShellCallback : shellOut()(9781): Stream #0:0(eng): Video: h264 (![0][0][0] / 0x0021), yuv420p, 720x480, q=2-31, 2989 kb/s, 29.89 fps, 90k tbn, 90k tbc
I/ShellCallback : shellOut()(9781): Metadata:
I/ShellCallback : shellOut()(9781): handler_name : VideoHandle
I/ShellCallback : shellOut()(9781): creation_time : 2014-09-17 17:25:50
I/ShellCallback : shellOut()(9781): rotate : 0
I/ShellCallback : shellOut()(9781): Stream #0:1(eng): Audio: aac (@[0][0][0] / 0x0040), 44100 Hz, mono, 128 kb/s
I/ShellCallback : shellOut()(9781): Metadata:
I/ShellCallback : shellOut()(9781): creation_time : 2014-09-17 17:25:50
I/ShellCallback : shellOut()(9781): handler_name : SoundHandle
I/ShellCallback : shellOut()(9781): Stream mapping:
I/ShellCallback : shellOut()(9781): Stream #0:0 -> #0:0 (copy)
I/ShellCallback : shellOut()(9781): Stream #0:1 -> #0:1 (copy)
I/ShellCallback : shellOut()(9781): Press [q] to stop, [?] for help
I/ShellCallback : shellOut()(9781): frame= 120 fps=0.0 q=-1.0 Lsize= 1530kB time=00:00:03.98 bitrate=3147.1kbits/s
I/ShellCallback : shellOut()(9781): video:1462kB audio:62kB global headers:0kB muxing overhead 0.329934%
I/ShellCallback : shellOut()(9781): ret 0, stream_spec v:0
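For context, the rotate tag the question is about lives in the MP4 container, not in the H.264 bitstream, so rewriting it is a pure stream-copy operation. A hedged sketch of that step (the ffmpeg path is a placeholder, and whether a given ffmpeg build honours the overridden tag on playback varies by version):

import subprocess

FFMPEG = '/path/to/bundled/ffmpeg'  # placeholder for the app's ffmpeg binary

def rewrite_rotate_tag(src, dst, rotate=0):
    # Copy both streams untouched and only override the container-level tag.
    subprocess.check_call([
        FFMPEG, '-y', '-i', src,
        '-c:v', 'copy', '-c:a', 'copy',
        '-metadata:s:v:0', 'rotate=%d' % rotate,
        dst,
    ])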
-
WARN : Tried to pass invalid video frame, marking as broken : Your frame has data type int64, but we require uint8
5 September 2019, by Tavo Diaz
I am doing some Udemy AI courses and came across one that "teaches" a two-dimensional cheetah how to walk. I was doing the exercises on my computer, but it takes too much time, so I decided to use Google Cloud to run the code and check the results a few hours later. Nevertheless, when I run the code I get the following error: "WARN : Tried to pass invalid video frame, marking as broken : Your frame has data type int64, but we require uint8 (i.e. RGB values from 0-255)". After the code is executed, I look into the output folder and I don’t see any videos (just the meta info).
Some more info (if it helps) : I have 1 CPU (4g), SSD, Ubuntu 16.04 LTS.
I have not tried anything yet to solve it because I don’t know what to try. I’m looking for solutions on the web, but haven’t found anything I could try.
This is the code :
import os
import numpy as np
import gym
from gym import wrappers
import pybullet_envs

class Hp():

    def __init__(self):
        self.nb_steps = 1000
        self.episode_lenght = 1000
        self.learning_rate = 0.02
        self.nb_directions = 32
        self.nb_best_directions = 32
        assert self.nb_best_directions <= self.nb_directions
        self.noise = 0.03
        self.seed = 1
        self.env_name = 'HalfCheetahBulletEnv-v0'

class Normalizer():

    def __init__(self, nb_inputs):
        self.n = np.zeros(nb_inputs)
        self.mean = np.zeros(nb_inputs)
        self.mean_diff = np.zeros(nb_inputs)
        self.var = np.zeros(nb_inputs)

    def observe(self, x):
        self.n += 1.
        last_mean = self.mean.copy()
        self.mean += (x - self.mean) / self.n
        # below: the online numerator update
        self.mean_diff += (x - last_mean) * (x - self.mean)
        # below: online computation of the variance
        self.var = (self.mean_diff / self.n).clip(min = 1e-2)

    def normalize(self, inputs):
        obs_mean = self.mean
        obs_std = np.sqrt(self.var)
        return (inputs - obs_mean) / obs_std

class Policy():

    def __init__(self, input_size, output_size):
        self.theta = np.zeros((output_size, input_size))

    def evaluate(self, input, delta = None, direction = None):
        if direction is None:
            return self.theta.dot(input)
        elif direction == 'positive':
            return (self.theta + hp.noise * delta).dot(input)
        else:
            return (self.theta - hp.noise * delta).dot(input)

    def sample_deltas(self):
        return [np.random.randn(*self.theta.shape) for _ in range(hp.nb_directions)]

    def update (self, rollouts, sigma_r):
        step = np.zeros(self.theta.shape)
        for r_pos, r_neg, d in rollouts:
            step += (r_pos - r_neg) * d
        self.theta += hp.learning_rate / (hp.nb_best_directions * sigma_r) * step

def explore(env, normalizer, policy, direction = None, delta = None):
    state = env.reset()
    done = False
    num_plays = 0.
    # below: could be the average of the rewards
    sum_rewards = 0
    while not done and num_plays < hp.episode_lenght:
        normalizer.observe(state)
        state = normalizer.normalize(state)
        action = policy.evaluate(state, delta, direction)
        state, reward, done, _ = env.step(action)
        reward = max(min(reward, 1), -1)
        # below: this is where an average could be taken
        sum_rewards += reward
        num_plays += 1
    return sum_rewards

def train (env, policy, normalizer, hp):
    for step in range(hp.nb_steps):
        # initialise the delta perturbations and the positive/negative rewards
        deltas = policy.sample_deltas()
        positive_rewards = [0] * hp.nb_directions
        negative_rewards = [0] * hp.nb_directions
        # get the rewards in the positive direction
        for k in range(hp.nb_directions):
            positive_rewards[k] = explore(env, normalizer, policy, direction = 'positive', delta = deltas[k])
        # get the rewards in the negative direction
        for k in range(hp.nb_directions):
            negative_rewards[k] = explore(env, normalizer, policy, direction = 'negative', delta = deltas[k])
        # gather all the rewards to compute the standard deviation
        all_rewards = np.array(positive_rewards + negative_rewards)
        sigma_r = all_rewards.std()
        # rank the rollouts by max(r_pos, r_neg) and select the best directions
        scores = {k:max(r_pos, r_neg) for k, (r_pos, r_neg) in enumerate(zip(positive_rewards, negative_rewards))}
        order = sorted(scores.keys(), key = lambda x:scores[x])[:hp.nb_best_directions]
        rollouts = [(positive_rewards[k], negative_rewards[k], deltas[k]) for k in order]
        # update the policy
        policy.update (rollouts, sigma_r)
        # evaluate the final reward of the policy after the update
        reward_evaluation = explore (env, normalizer, policy)
        print('Step: ', step, 'Distance: ', reward_evaluation)

def mkdir(base, name):
    path = os.path.join(base, name)
    if not os.path.exists(path):
        os.makedirs(path)
    return path

work_dir = mkdir('exp', 'brs')
monitor_dir = mkdir(work_dir, 'monitor')

hp = Hp()
np.random.seed(hp.seed)
env = gym.make(hp.env_name)
env = wrappers.Monitor(env, monitor_dir, force = True)
nb_inputs = env.observation_space.shape[0]
nb_outputs = env.action_space.shape[0]
policy = Policy(nb_inputs, nb_outputs)
normalizer = Normalizer(nb_inputs)
train(env, policy, normalizer, hp)
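For reference, the warning in the title comes from the Monitor wrapper's video recorder, which expects rendered frames as uint8 RGB arrays (values 0-255). A hedged sketch of one way to satisfy that requirement, assuming the frames returned by render(mode='rgb_array') are the culprit (the wrapper below is illustrative, not part of the course code):

import numpy as np
import gym

class Uint8RenderWrapper(gym.Wrapper):
    # Clip and cast whatever the underlying env renders so the video
    # recorder always receives uint8 RGB frames.
    def render(self, mode='rgb_array', **kwargs):
        frame = self.env.render(mode=mode, **kwargs)
        if isinstance(frame, np.ndarray) and frame.dtype != np.uint8:
            frame = np.clip(frame, 0, 255).astype(np.uint8)
        return frame

# Usage would replace the two env lines above, e.g.:
# env = Uint8RenderWrapper(gym.make(hp.env_name))
# env = wrappers.Monitor(env, monitor_dir, force = True)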