
Other articles (45)
-
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats:
- images: png, gif, jpg, bmp and more
- audio: MP3, Ogg, Wav and more
- video: AVI, MP4, OGV, mpg, mov, wmv and more
- text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)
-
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013 and is announced here.
The zip file provided here contains only the MediaSPIP sources, in standalone form.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)
-
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP deemed "usable".
The zip file provided here contains only the MediaSPIP sources, in standalone form.
To get a working installation, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)
On other sites (6362)
-
How to ensure plt.savefig saves multiple images instead of one?
18 March 2018, by Natalie
I'm trying to save an image after each iteration to show how my neural network is learning.
for iterations in range(1, 1000):
    model.fit(x_train,
              y_train,
              batch_size=20,
              epochs=1,
              verbose=2)
    predictions = model.predict(X)
    plt.plot(X, predictions, 'o')
    plt.plot(r, morse(r, De, Re, alpha))
    plt.xlabel(r'$r$')     # internuclear separation distance
    plt.ylabel(r'$V(r)$')  # Morse potential energy
    plt.savefig('myfig' + str(iterations))
    plt.clf()

Originally, I was able to save every image; however, it now only saves the image from the last iteration. How might I solve this issue?
Also, related to the first question: using the images I'm saving, I'm trying to merge them all into a quick movie to show the training process. I've been using ffmpeg (see the screenshot of the syntax error below), but I keep getting syntax errors. Could anyone guide me through what I might be doing wrong? [screenshot: the ffmpeg syntax error I'm getting]
Thanks in advance for helping me out. I'm completely new to machine learning but am using it for a university project, so apologies for my lack of understanding/mistakes!
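For the movie step, a common approach is to let ffmpeg's image2 demuxer read the numbered frames directly. Below is a minimal sketch, assuming the figures were saved as myfig1.png, myfig2.png, ... in the current directory (matplotlib writes PNG when no extension is given); the frame rate and the output name training.mp4 are placeholders:

ffmpeg -framerate 10 -start_number 1 -i myfig%d.png -c:v libx264 -pix_fmt yuv420p training.mp4

-start_number 1 matches the loop above, which starts at iteration 1, and -pix_fmt yuv420p keeps the H.264 output playable in most players.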
-
aacenc_tns: tune and reduce artifacts
6 December 2015, by Rostislav Pehlivanov
aacenc_tns: tune and reduce artifacts
There are a couple of major changes here:
1. Start using TNS coefficient compression.
2. Start using 3 bits per coefficient maximum for short windows.
The bits we save from these 2 changes seem to make a nice impact on the rest of the file/windows.
3. Remove special-case gain checking for short windows.
4. Modify the coefficient loop to support up to 3 windows.
The additional restrictions on TNS were something that was not in the specifications, and furthermore restricting TNS to only low-energy short windows was done to compensate for bugs elsewhere in the code.
Overall, the improvements here reduce crackling artifacts heard in very noisy tracks.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
-
ffmpeg extracting frames from video
22 November 2017, by TheOtherguyz4kj
I am using Writingminds FFmpeg to use FFmpeg with Android. I am currently trying to extract frames from a video.
I would like to extract 8 frames from the video, evenly distributed throughout it. I found this tutorial; here is my implementation of it.
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(videoCroppedFile.getAbsolutePath());
// METADATA_KEY_DURATION is reported in milliseconds
String time = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
Long videoLength = Long.parseLong(time);

final String cmd[] = {
        "-i",
        videoCroppedFile.getAbsolutePath(),
        "-f",
        "image2",
        "-ss",
        String.valueOf(videoLength.intValue() / ((8 + 1) * 10)),
        "-r",
        String.valueOf((8 + 1) / videoLength.intValue()),
        mediaStorageDir.getAbsolutePath() + "/%d.jpg"
};

However, when I go to the folder where the frames should be saved, there isn't anything there. There are also no error messages.
I feel like it's something about the way this library takes the String parameters. I've been stuck on this for some time and have tried lots of different versions of the cmd array. I was hoping someone could help. Here is my output from ffmpeg:
1-22 19:33:14.904 30981-30981/com.firebase.android D/videoFrames: failure reason ffmpeg version n3.0.1 Copyright (c) 2000-2016 the FFmpeg developers
built with gcc 4.8 (GCC)
configuration: --target-os=linux --cross-prefix=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi- --arch=arm --cpu=cortex-a8 --enable-runtime-cpudetect --sysroot=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/sysroot --enable-pic --enable-libx264 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-fontconfig --enable-pthreads --disable-debug --disable-ffserver --enable-version3 --enable-hardcoded-tables --disable-ffplay --disable-ffprobe --enable-gpl --enable-yasm --disable-doc --disable-shared --enable-static --pkg-config=/home/vagrant/SourceCode/ffmpeg-android/ffmpeg-pkg-config --prefix=/home/vagrant/SourceCode/ffmpeg-android/build/armeabi-v7a --extra-cflags='-I/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all' --extra-ldflags='-L/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie' --extra-libs='-lpng -lexpat -lm' --extra-cxxflags=
libavutil 55. 17.103 / 55. 17.103
libavcodec 57. 24.102 / 57. 24.102
libavformat 57. 25.100 / 57. 25.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 31.100 / 6. 31.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/data/user/0/com.firebase.android/cache/videovgntp6q5ar4dglkaflaoobfpcv945824159.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.25.100
Duration: 00:00:08.28, start: 0.000000, bitrate: 391 kb/s
Stream #0:0(eng): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 720x764 [SAR 1:1 DAR 180:191], 132 kb/s, 16.67 fps, 16.67 tbr, 12800 tbn, 33.33 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 256 kb/s (default)
Metadata:
handler_name : SoundHandler
Invalid framerate value: 0
Update 2
Here is my Android code:
try {
    ffmpeg.loadBinary(new LoadBinaryResponseHandler() {
        @Override
        public void onStart() {
            Log.d("videoFrames", "starting to load binary");
        }

        @Override
        public void onFailure() {
            Log.d("videoFrames", "failed to load binary");
        }

        @Override
        public void onSuccess() {
            Log.d("videoFrames", "loaded binary");
            try {
                ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
                    @Override
                    public void onStart() {
                        Log.d("videoFrames", " starting to get frames from video");
                        if (mediaStorageDir.isDirectory() && mediaStorageDir.list().length > 0) {
                            String[] children = mediaStorageDir.list();
                            for (int i = 0; i < children.length; i++) {
                                new File(mediaStorageDir, children[i]).delete();
                            }
                        }
                    }

                    @Override
                    public void onProgress(String message) {
                        Log.d("videoFrames", " progress getting frames from video");
                    }

                    @Override
                    public void onFailure(String message) {
                        Log.d("videoFrames", " failed to get frames from video");
                        Log.d("videoFrames", " failure reason " + message);
                        Log.d("videoFrames", " ----------------------------------------------- ");
                    }

                    @Override
                    public void onSuccess(String message) {
                        Log.d("videoFrames", " success getting frames from video");
                    }

                    @Override
                    public void onFinish() {
                        Log.d("videoFrames", " finished getting frames from video");
                    }
                });
            } catch (FFmpegCommandAlreadyRunningException e) {
                Log.d("videoFrames", " command already running in fmpeg");
                e.printStackTrace();
            }
        }

        @Override
        public void onFinish() {}
    });
} catch (FFmpegNotSupportedException e) {
    // Handle if FFmpeg is not supported by device
}

Update 3
I fixed my code by doing:
final String cmd[] = {
        "-i",
        videoCroppedFile.getAbsolutePath(),
        "-f",
        "image2",
        "-ss",
        String.valueOf(videoLength.floatValue() / (8.0 * 10.0)),
        "-r",
        String.valueOf(8.0 / videoLength.floatValue()),
        mediaStorageDir.getAbsolutePath() + "/%d.jpg"
};
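The "Invalid framerate value: 0" in the log above comes from the original command: (8 + 1) / videoLength.intValue() is integer division by a duration in milliseconds, so it evaluates to 0; the Update 3 command avoids that by switching to floating-point division. As a possible alternative, here is a minimal sketch that lets ffmpeg space the frames itself with its fps filter instead of computing -ss and -r by hand; altCmd and durationSeconds are hypothetical names, the other variables are the ones defined above, and the 8-frame target is only approximate:

// hypothetical sketch: ask ffmpeg for roughly 8 frames spread evenly over the clip
double durationSeconds = videoLength.doubleValue() / 1000.0;  // METADATA_KEY_DURATION is in milliseconds
final String altCmd[] = {
        "-i",
        videoCroppedFile.getAbsolutePath(),
        "-vf",
        "fps=" + (8.0 / durationSeconds),  // e.g. fps=0.966... for an 8.28 s clip
        mediaStorageDir.getAbsolutePath() + "/%d.jpg"
};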