
Media (91)
-
Les Miserables
9 December 2019
Updated: December 2019
Language: French
Type: Text
-
VideoHandle
8 November 2019
Updated: November 2019
Language: French
Type: Video
-
Somos millones 1
21 July 2014
Updated: June 2015
Language: French
Type: Video
-
Un test - mauritanie
3 April 2014
Updated: April 2014
Language: French
Type: Text
-
Pourquoi Obama lit il mes mails ?
4 February 2014
Updated: February 2014
Language: French
-
IMG 0222
6 October 2013
Updated: October 2013
Language: French
Type: Image
Other articles (14)
-
From upload to the final video [standalone version]
31 January 2010
The path an audio or video document follows through SPIPMotion is divided into three distinct steps.
Uploading and retrieving information about the source video
First, a SPIP article has to be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page.
On other sites (2289)
-
calling ffmpeg.c function inside native.cpp in android cmake
10 September 2020, by yejafot
I generated and attached libavcodec.so, libavdevice.so, libavfilter.so, libavformat.so, libavutil.so, libswresample.so and libswscale.so using a CMakeLists.txt file:

add_library( # Sets the name of the library.
 native-lib

 # Sets the library as a shared library.
 SHARED

 # Provides a relative path to your source file(s).
 native-lib.cpp )

# Add an example of an .so library.
# avdevice
add_library(avdevice
 SHARED
 IMPORTED)
set_target_properties(avdevice
 PROPERTIES IMPORTED_LOCATION
 ${ARM_DIR}/${ANDROID_ABI}/lib/libavdevice.so)
.
.
.
etc



In the native-lib.cpp file I can now edit videos by calling functions such as avformat_open_input(). As far as I can see this is very difficult, and there are very few examples available for the FFmpeg C function API (native code). I have now decided to call ffmpeg's main() instead and pass the commands as arguments:

ffmpeg -i input.mp4



I can see that many people use only this method. I don't know the exact difference between the two methods, but I think I can customise more if I do it with the C function API (native code).


Right now I only have the libraries' shared objects, such as libavcodec.so. How do I attach the ffmpeg.c file? I think it also needs a config.h file, which I believe I have to build for each environment (arm64-v8a, armeabi-v7a). Since I only have the native.cpp C++ file, how do I attach the ffmpeg.c and config.h files to it, so that I can run commands with:

run(new String[]{
 "ffmpeg", "-h"
 });
 -------------------------------------
extern "C" int main(int argc, char **argv); // entry point compiled from fftools/ffmpeg.c

extern "C" JNIEXPORT void JNICALL Java_uk_co_halfninja_videokit_Videokit_run(JNIEnv *env, jobject obj, jobjectArray args)
{
 // Build a C-style argv from the Java String[] before handing it to ffmpeg's main().
 int argc = env->GetArrayLength(args);
 char **argv = new char *[argc + 1]();
 for (int i = 0; i < argc; i++) {
  jstring str = (jstring) env->GetObjectArrayElement(args, i);
  const char *utf = env->GetStringUTFChars(str, nullptr);
  argv[i] = strdup(utf); // needs <cstring>
  env->ReleaseStringUTFChars(str, utf);
 }

 main(argc, argv);
}

I found this git repo. They first generated the .so files and then attached ffmpeg.c, which I think they took from the ffmpeg/fftools directory, using an Android.mk file:
github.com/halfninja/android-ffmpeg-x264/blob/master/Project/jni/videokit/ffmpeg.c

Since I am using CMake I think I have to do it a different way. I'm confused and don't understand how this works. Can anyone help? Thanks.
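
One common pattern, sketched below for orientation only, is to compile the fftools sources directly into the existing native-lib target and put a per-ABI ffmpeg build directory (the one containing the generated config.h) on the include path. The FFMPEG_SRC and FFMPEG_BUILD variables and the exact fftools file list are assumptions, not values taken from the question; the file list varies between ffmpeg versions:

add_library( native-lib
 SHARED
 native-lib.cpp
 # command-line sources from the ffmpeg source tree (ffmpeg 4.x layout)
 ${FFMPEG_SRC}/fftools/ffmpeg.c
 ${FFMPEG_SRC}/fftools/ffmpeg_opt.c
 ${FFMPEG_SRC}/fftools/ffmpeg_filter.c
 ${FFMPEG_SRC}/fftools/ffmpeg_hw.c
 ${FFMPEG_SRC}/fftools/cmdutils.c )

# config.h is produced by running ffmpeg's ./configure once per ABI,
# so each ABI gets its own build directory on the include path.
target_include_directories( native-lib PRIVATE
 ${FFMPEG_SRC}
 ${FFMPEG_BUILD}/${ANDROID_ABI} )

# link against the imported shared libraries declared earlier in this file
target_link_libraries( native-lib
 avdevice avformat avfilter avcodec swresample swscale avutil
 log )

With something like that in place, the JNI wrapper above can call the main() defined in fftools/ffmpeg.c.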


-
Use FFMPEG to put images separately inside a "box", keeping their original X and Y positions relative to the boundaries, and maybe modify their offsets
14 September 2020, by karl-police
I have a collection of images. For this example I created 3 images that have different sizes, but the image itself is the same, except that it is moved further down, left or right.


1: [image]  2: [image]  3: [image]


These are the images. The total size of all images together is 35x39, which means they need to go inside a 35x39 image so that they can later be turned into a GIF, for example. "crop" does not really work here, since it only makes images smaller and can't make them bigger, and I can't imagine it being the best solution for this anyway.


So this is the invisible 35x39 sized box.


[image]


What I'm trying to do is figure out how I can put each of these images separately into the 35x39 box while maintaining the original X position, the Y position, or both, relative to the boundaries of the images. I'd like to do this for other transparent images as well, mostly to craft animations: here a GIF out of image collections, but the images need to be fixed first.


I tried to look in the FFMPEG documentation, but there are so many filters that I had trouble finding the right one. I'm also not sure whether it is then possible to change the X and Y offsets as well, because if there's something that keeps the original X and Y position, there's probably also something to change the offset.




The end result of the images could basically be:


1: [image]  2: [image]  3: [image]


In this end-result example the images basically have their X aligned on the top and their Y aligned on the left. I'm not sure you can call it the "original X position", because if I compare it to Photoshop's special paste-at-original-position, it puts the first image a bit further down, for some reason. So I just moved the X all the way to the top and the Y all the way to the left.
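
If it helps to see the shape of a possible command: the pad filter can enlarge the canvas to 35x39 and place the input at a chosen x:y offset while keeping transparency, which is essentially the inverse of crop. The file names, offsets and frame rate below are placeholders, not values taken from the question:

# place each source image inside a transparent 35x39 canvas at its own x:y offset
ffmpeg -i 1.png -vf "format=rgba,pad=35:39:0:4:color=black@0" padded1.png
ffmpeg -i 2.png -vf "format=rgba,pad=35:39:3:0:color=black@0" padded2.png
ffmpeg -i 3.png -vf "format=rgba,pad=35:39:5:2:color=black@0" padded3.png

# then assemble the padded frames into a GIF
ffmpeg -framerate 5 -i padded%d.png out.gif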


-
How to escape quotes, double quotes and colons inside double quotes when using FFMpeg
23 September 2020, by DnerD.Dev
I'm trying to get text written onto a video. I can get it to work when I do it on a single line, but not when I need to quote the drawtext inside the double quotes to write two lines.


ffmpeg.exe -i input.ts -vf "[in]drawtext=fontfile=Bebas-Regular.ttf:text='"Day: Sunday"':fontcolor=white:y=(h-h*0.2):x=(w-w*0.95):fontsize=36, drawtext=fontfile=Bebas-Regular.ttf:text='"thing1, thing2, thing3"':fontcolor=white:y=(h-h*0.1):x=(w-w*0.95):fontsize=36[out]" -codec:a copy output1.mp4


I've tried so many combinations of \ but I can't get it to work. The error I get is the following:


Unable to find a suitable output format for 'Sunday':fontcolor=white:y=(h-h*0.2):x=(w-w*0.95):fontsize=36, drawtext=fontfile=Bebas-Regular.ttf:text='thing1,'
Sunday':fontcolor=white:y=(h-h*0.2):x=(w-w*0.95):fontsize=36, drawtext=fontfile=Bebas-Regular.ttf:text='thing1,: Invalid argument



I need the video to have this:


Day: Sunday
Thing1, Thing2, Thing3
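
For what it's worth, the usual fix for this kind of command is to drop the inner double quotes and escape the literal colon with a backslash inside the single-quoted drawtext value, along the lines of the sketch below (adapted from the command above, not a tested answer):

ffmpeg.exe -i input.ts -vf "[in]drawtext=fontfile=Bebas-Regular.ttf:text='Day\: Sunday':fontcolor=white:y=(h-h*0.2):x=(w-w*0.95):fontsize=36, drawtext=fontfile=Bebas-Regular.ttf:text='Thing1, Thing2, Thing3':fontcolor=white:y=(h-h*0.1):x=(w-w*0.95):fontsize=36[out]" -codec:a copy output1.mp4

The single quotes protect the space and the comma from the filtergraph parser, while the backslash stops the colon from being read as an option separator by drawtext.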