
Advanced search
Media (91)
-
#3 The Safest Place
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
15 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#2 Typewriter Dance
15 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#1 The Wires
11 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
ED-ME-5 1-DVD
11 October 2011, by
Updated: October 2011
Language: English
Type: Audio
-
Revolution of Open-source and film making towards open film making
6 October 2011, by
Updated: July 2013
Language: English
Type: Text
Other articles (23)
-
Farm-mode installation
4 February 2011, by
Farm mode makes it possible to host several MediaSPIP-type sites while installing the functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which requires no real specific knowledge since SPIP's usual private area is no longer used.
As a first step, you must have installed the same files as the installation (...) -
Adding user-specific information and other author-related behaviour changes
12 April 2011, by
The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you modify certain user-related behaviours (refer to its documentation for more information).
It is also possible to add fields to authors by installing the plugins "champs extras 2" and "Interface pour champs extras". -
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.
On other sites (5257)
-
ffmpeg blend filter does not work properly
16 May 2018, by tainguyen
I'm working on an Android project that uses the blend filter to create an "uncover down" video transition. My test device is a Samsung S4 (Android version 4.4.2).
This is my command string:

ffmpeg
-loop 1 -t 1 -i img001.jpg
-loop 1 -t 1 -i img002.jpg
-loop 1 -t 1 -i img003.jpg
-loop 1 -t 1 -i img004.jpg
-loop 1 -t 1 -i img005.jpg
-filter_complex
"[1:v][0:v]blend=all_expr='if(lte(Y,N*H/24),A,B)'[b1v];
[2:v][1:v]blend=all_expr='if(lte(Y,H*N/24),A,B)'[b2v];
[3:v][2:v]blend=all_expr='if(lte(Y,H*N/24),A,B)'[b3v];
[4:v][3:v]blend=all_expr='if(lte(Y,H*N/24),A,B)'[b4v];
[0:v][b1v][1:v][b2v][2:v][b3v][3:v][b4v]
[4:v]concat=n=9:v=1:a=0,format=yuv420p[v]" -map "[v]" out_cover_top.mp4 -y

The expected output is that the top image (yellow) uncovers from the top to the bottom over the bottom image (red), like this:
I tested on my PC (Windows 7) and it works correctly, but this is what I get on Android.
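As an aside (this helper is not part of the original post, and the name `build_uncover_filter` is made up for illustration), generating the filter_complex string in code makes it easy to log and compare the exact graph the Android app passes to ffmpeg with the one that works on the desktop:

```python
# Hypothetical sketch: rebuild the uncover-down filter graph for n input
# images, mirroring the structure of the command in the question.
def build_uncover_filter(n_images, frames_per_transition=24):
    blends = []
    concat_inputs = []
    for i in range(1, n_images):
        # Image i (top) uncovers over image i-1 (bottom): rows above
        # N*H/frames_per_transition come from A (top), the rest from B.
        expr = f"if(lte(Y,H*N/{frames_per_transition}),A,B)"
        blends.append(f"[{i}:v][{i - 1}:v]blend=all_expr='{expr}'[b{i}v]")
        concat_inputs.append(f"[{i - 1}:v][b{i}v]")
    concat_inputs.append(f"[{n_images - 1}:v]")
    total = 2 * (n_images - 1) + 1  # alternating stills and transitions
    concat = "".join(concat_inputs) + f"concat=n={total}:v=1:a=0,format=yuv420p[v]"
    return ";".join(blends) + ";" + concat

# For five images this reproduces the nine-segment graph in the question.
print(build_uncover_filter(5))
```

Logging the generated string on both platforms quickly shows whether something on the Android side (for example shell quoting around all_expr) is altering the graph before ffmpeg sees it — one common suspect when the same command behaves differently on desktop and device.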
My logcat:
onProgress : ffmpeg version n3.0.1 Copyright (c) 2000-2016 the FFmpeg developers
onProgress : built with gcc 4.8 (GCC)
onProgress : configuration : --target-os=linux --cross-prefix=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi- --arch=arm --cpu=cortex-a8 --enable-runtime-cpudetect --sysroot=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/sysroot --enable-pic --enable-libx264 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-fontconfig --enable-pthreads --disable-debug --disable-ffserver --enable-version3 --enable-hardcoded-tables --disable-ffplay --disable-ffprobe --enable-gpl --enable-yasm --disable-doc --disable-shared --enable-static --pkg-config=/home/vagrant/SourceCode/ffmpeg-android/ffmpeg-pkg-config --prefix=/home/vagrant/SourceCode/ffmpeg-android/build/armeabi-v7a --extra-cflags='-I/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all' --extra-ldflags='-L/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie' --extra-libs='-lpng -lexpat -lm' --extra-cxxflags=
onProgress : libavutil 55. 17.103 / 55. 17.103
onProgress : libavcodec 57. 24.102 / 57. 24.102
onProgress : libavformat 57. 25.100 / 57. 25.100
onProgress : libavdevice 57. 0.101 / 57. 0.101
onProgress : libavfilter 6. 31.100 / 6. 31.100
onProgress : libswscale 4. 0.100 / 4. 0.100
onProgress : libswresample 2. 0.101 / 2. 0.101
onProgress : libpostproc 54. 0.100 / 54. 0.100
onProgress : [mjpeg @ 0x43240d90] Changing bps to 8
onProgress : Input #0, image2, from ’/storage/emulated/0/com.example.mrtai.test_animationtovideo/temporary/0.jpg’ :
onProgress : Duration : 00:00:00.04, start : 0.000000, bitrate : 4096 kb/s
onProgress : Stream #0:0 : Video : mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 480x480 [SAR 1:1 DAR 1:1], 25 fps, 25 tbr, 25 tbn, 25 tbc
onProgress : [mjpeg @ 0x43242a70] Changing bps to 8
onProgress : Input #1, image2, from ’/storage/emulated/0/com.example.mrtai.test_animationtovideo/temporary/1.jpg’ :
onProgress : Duration : 00:00:00.04, start : 0.000000, bitrate : 4971 kb/s
onProgress : Stream #1:0 : Video : mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 480x480 [SAR 1:1 DAR 1:1], 25 fps, 25 tbr, 25 tbn, 25 tbc
onProgress : [mjpeg @ 0x43280780] Changing bps to 8
onProgress : Input #2, image2, from ’/storage/emulated/0/com.example.mrtai.test_animationtovideo/temporary/2.jpg’ :
onProgress : Duration : 00:00:00.04, start : 0.000000, bitrate : 9413 kb/s
onProgress : Stream #2:0 : Video : mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 480x480 [SAR 1:1 DAR 1:1], 25 fps, 25 tbr, 25 tbn, 25 tbc
onProgress : [mjpeg @ 0x43244260] Changing bps to 8
onProgress : Input #3, image2, from ’/storage/emulated/0/com.example.mrtai.test_animationtovideo/temporary/3.jpg’ :
onProgress : Duration : 00:00:00.04, start : 0.000000, bitrate : 4096 kb/s
onProgress : Stream #3:0 : Video : mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 480x480 [SAR 1:1 DAR 1:1], 25 fps, 25 tbr, 25 tbn, 25 tbc
onProgress : [swscaler @ 0x438292b0] deprecated pixel format used, make sure you did set range correctly
onProgress : [swscaler @ 0x438320f0] deprecated pixel format used, make sure you did set range correctly
onProgress : [swscaler @ 0x4383af40] deprecated pixel format used, make sure you did set range correctly
onProgress : [swscaler @ 0x44044100] deprecated pixel format used, make sure you did set range correctly
onProgress : [swscaler @ 0x4404cf50] deprecated pixel format used, make sure you did set range correctly
onProgress : [swscaler @ 0x44055d90] deprecated pixel format used, make sure you did set range correctly
onProgress : [swscaler @ 0x4405ebe0] deprecated pixel format used, make sure you did set range correctly
onProgress : [swscaler @ 0x44067a20] deprecated pixel format used, make sure you did set range correctly
onProgress : [swscaler @ 0x44070870] deprecated pixel format used, make sure you did set range correctly
onProgress : [swscaler @ 0x440796b0] deprecated pixel format used, make sure you did set range correctly
onProgress : [libx264 @ 0x43df2fd0] using SAR=1/1
onProgress : [libx264 @ 0x43df2fd0] using cpu capabilities : none !
onProgress : [libx264 @ 0x43df2fd0] profile Constrained Baseline, level 3.0
onProgress : [libx264 @ 0x43df2fd0] 264 - core 148 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options : cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=250 keyint_min=25 scenecut=0 intra_refresh=0 rc=crf mbtree=0 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0
onProgress : Output #0, mp4, to ’/storage/emulated/0/com.example.mrtai.test_animationtovideo/saves/test.mp4’ :
onProgress : Metadata :
onProgress : encoder : Lavf57.25.100
onProgress : Stream #0:0 : Video : h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 480x480 [SAR 1:1 DAR 1:1], q=-1--1, 25 fps, 12800 tbn, 25 tbc (default)
onProgress : Metadata :
onProgress : encoder : Lavc57.24.102 libx264
onProgress : Side data :
onProgress : unknown side data type 10 (24 bytes)
onProgress : Stream mapping :
onProgress : Stream #0:0 (mjpeg) -> blend:bottom
onProgress : Stream #0:0 (mjpeg) -> concat:in0:v0
onProgress : Stream #1:0 (mjpeg) -> blend:top
onProgress : Stream #1:0 (mjpeg) -> blend:bottom
onProgress : Stream #1:0 (mjpeg) -> concat:in2:v0
onProgress : Stream #2:0 (mjpeg) -> blend:top
onProgress : Stream #2:0 (mjpeg) -> blend:bottom
onProgress : Stream #2:0 (mjpeg) -> concat:in4:v0
onProgress : Stream #3:0 (mjpeg) -> blend:top
onProgress : Stream #3:0 (mjpeg) -> concat:in6:v0
onProgress : format -> Stream #0:0 (libx264)
onProgress : Press [q] to stop, [?] for help
onProgress : frame= 16 fps=0.0 q=12.0 size= 51kB time=00:00:00.36 bitrate=1171.2kbits/s speed=0.685x
onProgress : frame= 29 fps= 25 q=25.0 size= 64kB time=00:00:00.88 bitrate= 593.2kbits/s speed=0.762x
onProgress : frame= 33 fps= 19 q=25.0 size= 72kB time=00:00:01.04 bitrate= 565.8kbits/s speed=0.611x
onProgress : frame= 37 fps= 17 q=16.0 size= 113kB time=00:00:01.20 bitrate= 769.9kbits/s speed=0.537x
onProgress : frame= 41 fps= 15 q=13.0 size= 147kB time=00:00:01.36 bitrate= 886.3kbits/s speed=0.495x
onProgress : frame= 46 fps= 14 q=12.0 size= 180kB time=00:00:01.56 bitrate= 943.1kbits/s speed=0.463x
onProgress : frame= 50 fps= 13 q=12.0 size= 188kB time=00:00:01.72 bitrate= 896.8kbits/s speed=0.439x
onProgress : frame= 77 fps= 17 q=12.0 size= 209kB time=00:00:02.80 bitrate= 610.9kbits/s speed=0.63x
onProgress : frame= 82 fps= 16 q=27.0 size= 213kB time=00:00:03.00 bitrate= 580.6kbits/s speed=0.594x
onProgress : frame= 87 fps= 15 q=17.0 size= 288kB time=00:00:03.20 bitrate= 737.1kbits/s speed=0.566x
onProgress : frame= 91 fps= 15 q=13.0 size= 349kB time=00:00:03.36 bitrate= 851.1kbits/s speed=0.536x
onProgress : frame= 95 fps= 14 q=12.0 size= 407kB time=00:00:03.52 bitrate= 946.2kbits/s speed=0.515x
onProgress : frame= 100 fps= 13 q=12.0 size= 416kB time=00:00:03.72 bitrate= 917.1kbits/s speed=0.501x
onProgress : frame= 128 fps= 16 q=17.0 size= 429kB time=00:00:04.84 bitrate= 725.7kbits/s speed=0.604x
onProgress : frame= 133 fps= 16 q=24.0 size= 438kB time=00:00:05.04 bitrate= 712.4kbits/s speed=0.589x
onProgress : frame= 138 fps= 15 q=14.0 size= 487kB time=00:00:05.24 bitrate= 761.2kbits/s speed=0.577x
onProgress : frame= 142 fps= 15 q=12.0 size= 519kB time=00:00:05.40 bitrate= 787.4kbits/s speed=0.564x
onProgress : frame= 147 fps= 14 q=12.0 size= 545kB time=00:00:05.60 bitrate= 796.5kbits/s speed=0.551x
onProgress : frame= 175 fps= 16 q=12.0 size= 561kB time=00:00:06.72 bitrate= 683.7kbits/s speed=0.619x
onProgress : frame= 175 fps= 16 q=-1.0 Lsize= 566kB time=00:00:07.00 bitrate= 662.9kbits/s speed=0.642x
onProgress : video:565kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.262062%
onProgress : [libx264 @ 0x43df2fd0] frame I:1 Avg QP:20.00 size : 28930
onProgress : [libx264 @ 0x43df2fd0] frame P:174 Avg QP:14.14 size : 3155
onProgress : [libx264 @ 0x43df2fd0] mb I I16..4 : 100.0% 0.0% 0.0%
onProgress : [libx264 @ 0x43df2fd0] mb P I16..4 : 1.7% 0.0% 0.0% P16..4 : 28.8% 0.0% 0.0% 0.0% 0.0% skip:69.4%
onProgress : [libx264 @ 0x43df2fd0] coded y,uvDC,uvAC intra : 74.7% 79.5% 49.5% inter : 11.8% 14.3% 5.3%
onProgress : [libx264 @ 0x43df2fd0] i16 v,h,dc,p : 19% 46% 18% 18%
onProgress : [libx264 @ 0x43df2fd0] i8c dc,h,v,p : 26% 46% 17% 11%
onProgress : [libx264 @ 0x43df2fd0] kb/s:660.46
onSuccess : -------------------------
onFinish :
Logcat of setSar behavior:
onProgress : ffmpeg version n3.0.1 Copyright (c) 2000-2016 the FFmpeg developers
onProgress : built with gcc 4.8 (GCC)
onProgress : configuration : --target-os=linux --cross-prefix=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi- --arch=arm --cpu=cortex-a8 --enable-runtime-cpudetect --sysroot=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/sysroot --enable-pic --enable-libx264 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-fontconfig --enable-pthreads --disable-debug --disable-ffserver --enable-version3 --enable-hardcoded-tables --disable-ffplay --disable-ffprobe --enable-gpl --enable-yasm --disable-doc --disable-shared --enable-static --pkg-config=/home/vagrant/SourceCode/ffmpeg-android/ffmpeg-pkg-config --prefix=/home/vagrant/SourceCode/ffmpeg-android/build/armeabi-v7a --extra-cflags='-I/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all' --extra-ldflags='-L/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie' --extra-libs='-lpng -lexpat -lm' --extra-cxxflags=
onProgress : libavutil 55. 17.103 / 55. 17.103
onProgress : libavcodec 57. 24.102 / 57. 24.102
onProgress : libavformat 57. 25.100 / 57. 25.100
onProgress : libavdevice 57. 0.101 / 57. 0.101
onProgress : libavfilter 6. 31.100 / 6. 31.100
onProgress : libswscale 4. 0.100 / 4. 0.100
onProgress : libswresample 2. 0.101 / 2. 0.101
onProgress : libpostproc 54. 0.100 / 54. 0.100
onProgress : [mjpeg @ 0x41be8890] Changing bps to 8
onProgress : Input #0, image2, from ’/storage/emulated/0/test.jpg’ :
onProgress : Duration : 00:00:00.04, start : 0.000000, bitrate : 16530 kb/s
onProgress : Stream #0:0 : Video : mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 900x900 [SAR 300:300 DAR 1:1], 25 tbr, 25 tbn, 25 tbc
onProgress : [swscaler @ 0x41b1c020] deprecated pixel format used, make sure you did set range correctly
onProgress : Output #0, image2, to ’/storage/emulated/0/com.example.mrtai.test_animationtovideo/temporary/0.jpg’ :
onProgress : Metadata :
onProgress : encoder : Lavf57.25.100
onProgress : Stream #0:0 : Video : mjpeg, yuvj420p(pc), 480x480 [SAR 1:1 DAR 1:1], q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc
onProgress : Metadata :
onProgress : encoder : Lavc57.24.102 mjpeg
onProgress : Side data :
onProgress : unknown side data type 10 (24 bytes)
onProgress : Stream mapping :
onProgress : Stream #0:0 -> #0:0 (mjpeg (native) -> mjpeg (native))
onProgress : Press [q] to stop, [?] for help
onProgress : frame= 1 fps=0.0 q=4.6 Lsize=N/A time=00:00:00.04 bitrate=N/A speed=0.293x
onProgress : video:20kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
onSuccess : -------------------------
onFinish :
onProgress : ffmpeg version n3.0.1 Copyright (c) 2000-2016 the FFmpeg developers
onProgress : built with gcc 4.8 (GCC)
onProgress : configuration : --target-os=linux --cross-prefix=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi- --arch=arm --cpu=cortex-a8 --enable-runtime-cpudetect --sysroot=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/sysroot --enable-pic --enable-libx264 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-fontconfig --enable-pthreads --disable-debug --disable-ffserver --enable-version3 --enable-hardcoded-tables --disable-ffplay --disable-ffprobe --enable-gpl --enable-yasm --disable-doc --disable-shared --enable-static --pkg-config=/home/vagrant/SourceCode/ffmpeg-android/ffmpeg-pkg-config --prefix=/home/vagrant/SourceCode/ffmpeg-android/build/armeabi-v7a --extra-cflags='-I/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all' --extra-ldflags='-L/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie' --extra-libs='-lpng -lexpat -lm' --extra-cxxflags=
onProgress : libavutil 55. 17.103 / 55. 17.103
onProgress : libavcodec 57. 24.102 / 57. 24.102
onProgress : libavformat 57. 25.100 / 57. 25.100
onProgress : libavdevice 57. 0.101 / 57. 0.101
onProgress : libavfilter 6. 31.100 / 6. 31.100
onProgress : libswscale 4. 0.100 / 4. 0.100
onProgress : libswresample 2. 0.101 / 2. 0.101
onProgress : libpostproc 54. 0.100 / 54. 0.100
onProgress : [mjpeg @ 0x41ffe890] Changing bps to 8
onProgress : Input #0, image2, from ’/storage/emulated/0/test2.jpg’ :
onProgress : Duration : 00:00:00.04, start : 0.000000, bitrate : 14981 kb/s
onProgress : Stream #0:0 : Video : mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 720x960 [SAR 72:72 DAR 3:4], 25 tbr, 25 tbn, 25 tbc
onProgress : [swscaler @ 0x420171b0] deprecated pixel format used, make sure you did set range correctly
onProgress : Output #0, image2, to ’/storage/emulated/0/com.example.mrtai.test_animationtovideo/temporary/1.jpg’ :
onProgress : Metadata :
onProgress : encoder : Lavf57.25.100
onProgress : Stream #0:0 : Video : mjpeg, yuvj420p(pc), 480x480 [SAR 1:1 DAR 1:1], q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc
onProgress : Metadata :
onProgress : encoder : Lavc57.24.102 mjpeg
onProgress : Side data :
onProgress : unknown side data type 10 (24 bytes)
onProgress : Stream mapping :
onProgress : Stream #0:0 -> #0:0 (mjpeg (native) -> mjpeg (native))
onProgress : Press [q] to stop, [?] for help
onProgress : frame= 1 fps=0.0 q=5.8 Lsize=N/A time=00:00:00.04 bitrate=N/A speed=0.428x
onProgress : video:24kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
onSuccess : -------------------------
onFinish :
onProgress : ffmpeg version n3.0.1 Copyright (c) 2000-2016 the FFmpeg developers
onProgress : built with gcc 4.8 (GCC)
onProgress : configuration : --target-os=linux --cross-prefix=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi- --arch=arm --cpu=cortex-a8 --enable-runtime-cpudetect --sysroot=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/sysroot --enable-pic --enable-libx264 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-fontconfig --enable-pthreads --disable-debug --disable-ffserver --enable-version3 --enable-hardcoded-tables --disable-ffplay --disable-ffprobe --enable-gpl --enable-yasm --disable-doc --disable-shared --enable-static --pkg-config=/home/vagrant/SourceCode/ffmpeg-android/ffmpeg-pkg-config --prefix=/home/vagrant/SourceCode/ffmpeg-android/build/armeabi-v7a --extra-cflags='-I/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all' --extra-ldflags='-L/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie' --extra-libs='-lpng -lexpat -lm' --extra-cxxflags=
onProgress : libavutil 55. 17.103 / 55. 17.103
onProgress : libavcodec 57. 24.102 / 57. 24.102
onProgress : libavformat 57. 25.100 / 57. 25.100
onProgress : libavdevice 57. 0.101 / 57. 0.101
onProgress : libavfilter 6. 31.100 / 6. 31.100
onProgress : libswscale 4. 0.100 / 4. 0.100
onProgress : libswresample 2. 0.101 / 2. 0.101
onProgress : libpostproc 54. 0.100 / 54. 0.100
onProgress : [mjpeg @ 0x42dac890] Changing bps to 8
onProgress : Input #0, image2, from ’/storage/emulated/0/test3.jpg’ :
onProgress : Duration : 00:00:00.04, start : 0.000000, bitrate : 6901 kb/s
onProgress : Stream #0:0 : Video : mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 500x283, 25 tbr, 25 tbn, 25 tbc
onProgress : [swscaler @ 0x42dcd8a0] deprecated pixel format used, make sure you did set range correctly
onProgress : Output #0, image2, to ’/storage/emulated/0/com.example.mrtai.test_animationtovideo/temporary/2.jpg’ :
onProgress : Metadata :
onProgress : encoder : Lavf57.25.100
onProgress : Stream #0:0 : Video : mjpeg, yuvj420p(pc), 480x480 [SAR 1:1 DAR 1:1], q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc
onProgress : Metadata :
onProgress : encoder : Lavc57.24.102 mjpeg
onProgress : Side data :
onProgress : unknown side data type 10 (24 bytes)
onProgress : Stream mapping :
onProgress : Stream #0:0 -> #0:0 (mjpeg (native) -> mjpeg (native))
onProgress : Press [q] to stop, [?] for help
onProgress : frame= 1 fps=0.0 q=6.0 Lsize=N/A time=00:00:00.04 bitrate=N/A speed=0.497x
onProgress : video:46kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
onSuccess : -------------------------
onFinish :
onProgress : ffmpeg version n3.0.1 Copyright (c) 2000-2016 the FFmpeg developers
onProgress : built with gcc 4.8 (GCC)
onProgress : configuration : --target-os=linux --cross-prefix=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi- --arch=arm --cpu=cortex-a8 --enable-runtime-cpudetect --sysroot=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/sysroot --enable-pic --enable-libx264 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-fontconfig --enable-pthreads --disable-debug --disable-ffserver --enable-version3 --enable-hardcoded-tables --disable-ffplay --disable-ffprobe --enable-gpl --enable-yasm --disable-doc --disable-shared --enable-static --pkg-config=/home/vagrant/SourceCode/ffmpeg-android/ffmpeg-pkg-config --prefix=/home/vagrant/SourceCode/ffmpeg-android/build/armeabi-v7a --extra-cflags='-I/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all' --extra-ldflags='-L/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie' --extra-libs='-lpng -lexpat -lm' --extra-cxxflags=
onProgress : libavutil 55. 17.103 / 55. 17.103
onProgress : libavcodec 57. 24.102 / 57. 24.102
onProgress : libavformat 57. 25.100 / 57. 25.100
onProgress : libavdevice 57. 0.101 / 57. 0.101
onProgress : libavfilter 6. 31.100 / 6. 31.100
onProgress : libswscale 4. 0.100 / 4. 0.100
onProgress : libswresample 2. 0.101 / 2. 0.101
onProgress : libpostproc 54. 0.100 / 54. 0.100
onProgress : [mjpeg @ 0x4228a890] Changing bps to 8
onProgress : Input #0, image2, from ’/storage/emulated/0/test.jpg’ :
onProgress : Duration : 00:00:00.04, start : 0.000000, bitrate : 16530 kb/s
onProgress : Stream #0:0 : Video : mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 900x900 [SAR 300:300 DAR 1:1], 25 tbr, 25 tbn, 25 tbc
onProgress : [swscaler @ 0x422a31b0] deprecated pixel format used, make sure you did set range correctly
onProgress : Output #0, image2, to ’/storage/emulated/0/com.example.mrtai.test_animationtovideo/temporary/3.jpg’ :
onProgress : Metadata :
onProgress : encoder : Lavf57.25.100
onProgress : Stream #0:0 : Video : mjpeg, yuvj420p(pc), 480x480 [SAR 1:1 DAR 1:1], q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc
onProgress : Metadata :
onProgress : encoder : Lavc57.24.102 mjpeg
onProgress : Side data :
onProgress : unknown side data type 10 (24 bytes)
onProgress : Stream mapping :
onProgress : Stream #0:0 -> #0:0 (mjpeg (native) -> mjpeg (native))
onProgress : Press [q] to stop, [?] for help
onProgress : frame= 1 fps=0.0 q=4.6 Lsize=N/A time=00:00:00.04 bitrate=N/A speed=0.533x
onProgress : video:20kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
onSuccess : -------------------------
onFinish :

Help me please.
-
Piwik 1.12, New Features, API Improvements, Stability — The Last Piwik 1.X Release
30 May 2013, by the Piwik team (Development)
We are very excited to announce the immediate availability of Piwik v1.12!
- Download Link
- How to update Piwik?
- List of all tickets closed: Changelog
Piwik v1.12 is a major new release with four big new features, seven smaller new features, several API improvements and altogether 82 tickets fixed. This is also the last major 1.X release, which means that after this release we will be working on Piwik 2.0. It also means that you should upgrade to PHP 5.3 or higher if you haven't already, since Piwik 2.0 will only support PHP 5.3 and above.
Finally, this release contains two breaking changes to the API. If you use the Piwik API, click here or scroll down to see whether you're affected.
New Big Feature – Beta Release Channel
For those of you who want to help test Piwik 2.0-beta releases as soon as they come out, we've made it easier to use our beta releases. Navigate to the Settings > General Settings page and select the "The latest beta release" radio button. You will then be able to upgrade to beta releases.
This isn’t truly a major feature, but we think it’s just as important because it will allow us to create more beta releases and thus catch more bugs before we make a final release. This means more releases and more stability for you.
New Big Feature – Segment Editor
The Segment Editor is a long-awaited new feature that allows you to view, save and edit your segments.
Piwik has supported segmentation (filtering visits and reports by arbitrary criteria, like browser family) for quite some time now, but it has never been possible to visually create and modify them. Nor could they be saved for later recall.
Thanks to the eighty individuals and companies who funded this feature, it is now possible to:
- visually segment your visitors, instead of creating URLs.
- save segments and easily switch between them, instead of remembering URLs.
- get suggestions for segments that might be helpful to view.
- learn more in the Segmenting Analytics Reports user documentation.
New Big Feature – Page Speed Reports
You can now see how long it took your web server to generate and send pages over HTTP, through the new Avg. Generation Time metric.
This metric can be viewed on both the Pages and Page Titles reports:
And the average page generation time for all the pages in your website/webapp is displayed on the Visitors Overview:
You can use this new information to benchmark your webapp and web server.
New Big Feature – Device Detection Reports
Piwik 1.12 also includes a new plugin that provides reports on the device types (tablet, desktop, smartphone, etc.), device brands (Apple, Google, Samsung, etc.) and device models (iPad, Nexus 7, etc.) your visitors use to access your website:
The new plugin also enhances operating system detection (detecting sub-versions of Linux, Windows, and more).
Note: this plugin is not enabled by default, but it will be in Piwik 2.0. If you want to view these reports now, you can activate the plugin in the Installed Plugins admin page. Navigate to Visitors > Devices to see the new reports. You may also use the new (beta) 'Device type'.
The new plugin was developed with the support of Clearcode.cc, our technology partner.
Other improvements
Majestic SEO Metrics
We’ve added two new SEO metrics to the SEO widget, both of which are calculated by MajesticSEO.com. These metrics will tell you the number of external backlinks (the number of links to your site from other sites) and the number of referrer domains (the number of domains that link to your site).
We thank the team at Majestic for their support and hard work in bringing these metrics to your Piwik dashboards!
Real-time Visitor Count Dashboard Widget
There is now a simple new widget you can use to see the number of visitors, visits and actions that occurred in the last couple of minutes. We call it the Real Time Visitor Counter!
New segment parameter: siteSearchKeyword
There is now a new segment parameter you can use to segment your visits: siteSearchKeyword. This parameter lets you select visits that included site searches for a specific keyword.
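As a sketch (not from the release notes; the reporting method and site id below are placeholders for illustration), a segment expression such as siteSearchKeyword==ffmpeg is just another query parameter and must be URL-encoded when you build an API request:

```python
from urllib.parse import urlencode

def build_segment_query(keyword):
    # A segment expression pairs the parameter name and the value with
    # an operator such as "=="; the whole expression is then
    # URL-encoded like any other query parameter.
    params = {
        "module": "API",
        "method": "VisitsSummary.get",  # placeholder method
        "idSite": 1,                    # placeholder site id
        "period": "day",
        "date": "today",
        "segment": f"siteSearchKeyword=={keyword}",
    }
    return urlencode(params)

print(build_segment_query("ffmpeg"))
```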
Ignore URL letter case when importing log files
We've added a new option to the log import script, --force-lowercase-path. When used, the importer changes URL paths to lowercase before tracking them, so http://domain.com/MY/BLOG is treated the same as http://domain.com/my/blog.
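Conceptually (this is an illustrative sketch, not the actual implementation of the import script), the option lowercases only the path component of each URL, leaving the host and query string untouched:

```python
from urllib.parse import urlsplit, urlunsplit

def force_lowercase_path(url):
    # Lowercase only the path; the scheme and host are case-insensitive
    # anyway, and query-string values may be case-sensitive, so they
    # are left as-is.
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path.lower(),
                       parts.query, parts.fragment))

print(force_lowercase_path("http://domain.com/MY/BLOG"))  # http://domain.com/my/blog
```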
Updated ISP Names
We've also modified the Providers report so that prettier, more up-to-date ISP names are displayed.
Customize the background/text/axis color of graphs
It is now possible to change the background color, text color and/or axis color of the graph images generated by the ImageGraph plugin. To access this functionality, use the following URL query parameters when generating an image:
- backgroundColor
- textColor
- axisColor
For example:
http://demo.piwik.org/index.php?module=API&method=ImageGraph.get&idSite=7&apiModule=UserSettings&apiAction=getBrowser&token_auth=anonymous&period=day&date=2013-03-21,2013-04-19&language=en&width=779&height=150&fontSize=9&showMetricTitle=0&aliasedGraph=1&legendAppendMetric=0&backgroundColor=efefef&gridColor=dcdcdc&colors=cb2026
Send your users to a custom URL after they log out.
If you manage a Piwik installation with many users and you want to send them to a custom page or website after they log out of Piwik, you can now specify the URL to redirect users after they log out.
API Changes and Improvements
BREAKING CHANGE – renamed segment parameters.
The following segment parameters have been renamed:
- continent renamed to: continentCode
- browserName renamed to: browserCode
- operatingSystem renamed to: operatingSystemCode
- lat renamed to: latitude
- long renamed to: longitude
- region renamed to: regionCode
- country renamed to: countryCode
If you use one of the old segment parameter names, Piwik will throw an exception, so you will notice quickly if you are still using an old name.
BREAKING CHANGE – changes to the input & output of the Live.getLastVisitsDetails method.
The following changes were made to the Live.getLastVisitsDetails API method :
- The method no longer uses the maxIdVisit query parameter. It has been replaced by the filter_offset parameter.
- Site search keywords are now displayed in a <siteSearchKeyword> element. They were formerly in <pageTitle> elements.
- Custom variables with page scope now have ‘Page’ in their element names when displayed. For example, <customVariablePageName1>, <customVariablePageName2>, etc.
Filter results of MultiSites.getAll by website name.
It is now possible to filter the results of MultiSites.getAll by website name. To do this, set the pattern query parameter to the desired regex pattern.
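To illustrate the semantics (a local sketch for illustration, not Piwik's server-side code), filtering by a regex pattern keeps only the sites whose name matches:

```python
import re

def filter_sites_by_name(sites, pattern):
    # Keep the sites whose "name" field matches the regex,
    # case-insensitively, mimicking the pattern query parameter.
    rx = re.compile(pattern, re.IGNORECASE)
    return [s for s in sites if rx.search(s["name"])]

sites = [{"name": "Main blog"}, {"name": "Shop"}, {"name": "Dev Blog"}]
print(filter_sites_by_name(sites, "blog"))
```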
Get suggested values to use for a segment parameter.
The new API method API.getSuggestedValuesForSegment can now be used to get suggested values for a segment parameter. This method will return a list of the most seen values (in the last 60 days) for a certain segment parameter. So for browserCode, this would return the codes for the browsers most visitors used in the last 60 days.
Use extra tracking query parameters with the JS tracker (such as ‘lat’ & ‘long’).
We've added a new method to the JavaScript tracker named appendToTrackingUrl. You can use this method to add extra query parameters to a tracking request, like so:
_paq.push(['appendToTrackingUrl', 'lat=X&long=Y']);
What we’re working on
As we said above, Piwik v1.12 is the last in the 1.X series of releases. This means we are now officially working on Piwik 2.0.
Piwik 2.0 will be a big release, to be sure, but it’s going to bring you more than just a couple new features and a bag of bug fixes. For Piwik 2.0 we will be revisiting the user needs and the ideals that originally prompted us to create Piwik in order to build our vision of the future of web analytics.
Piwik 2.0 won’t just be a bigger, better web app, but a new platform for observing and analyzing the things that matter to you.
Participate in Piwik
Are you a talented developer or an experienced user interface designer? Or maybe you like writing documentation, or you're a marketing guru?
If you have some free time and if you want to contribute to one of the most awesome open source projects around, please get in touch with the Piwik team, or read this page to learn more…
Summary
For the full list of changes in Piwik 1.12 check out the Changelog.
Thank you to the core developers, all the beta testers and users, our official supporters, the translators and everyone who reported bugs or feature requests. Thanks also to the software and libraries we use.
If you are a company and would like to help an important project like Piwik grow, please get in touch, it means a lot to us. You can also participate in the project —
–> if you like what you read, please tell your friends and colleagues or write on your website, blog, forums, stackoverflow, etc. <–
Peace. Enjoy!
-
Decode mp3 using FFMpeg, Android NDK - What is wrong with my AVFormatContext?
27 February 2020, by michpohl

I am trying to decode an MP3 file to a raw PCM stream using FFmpeg via JNI on Android. I have compiled the latest FFmpeg version (4.2) and added it to my app. This did not cause any problems.
The goal is to be able to use MP3 files from the device's storage for playback with oboe. Since I am relatively inexperienced with both C++ and FFmpeg, my approach is based upon oboe's RhythmGame example.

I have based my FFMpegExtractor class on the one found in the example here. With the help of StackOverflow, the AAssetManager use was removed, and instead a MediaSource helper class now serves as a wrapper for my stream (see here).

But unfortunately, creating the AVFormatContext doesn't work right, and I can't seem to understand why. Since I have only a limited understanding of correct pointer usage and C++ memory management, I suspect it is most likely that I am doing something wrong in that area. But honestly, I have no idea.
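For context on the callback style the code below relies on: avio_alloc_context hands a user pointer back to the read callback as void *opaque. Here is a minimal, FFmpeg-free sketch of that same pattern; MemorySource and readCallback are illustrative stand-ins (not part of the real code), and a real FFmpeg callback would return AVERROR_EOF rather than -1 at end of stream:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Stand-in for the user-defined source FFmpeg hands back as (void *opaque).
struct MemorySource {
    std::vector<uint8_t> data;
    size_t pos = 0;
};

// Same shape as the read callback avio_alloc_context expects:
// copy up to buf_size bytes into buffer, return how many were copied.
int readCallback(void *opaque, uint8_t *buffer, int buf_size) {
    auto *source = static_cast<MemorySource *>(opaque);
    size_t remaining = source->data.size() - source->pos;
    size_t toCopy = remaining < (size_t) buf_size ? remaining : (size_t) buf_size;
    if (toCopy == 0) return -1; // real code would return AVERROR_EOF here
    std::memcpy(buffer, source->data.data() + source->pos, toCopy);
    source->pos += toCopy;
    return (int) toCopy;
}
```

The MediaSource class shown further below plays exactly this role, except that it reads from a std::ifstream instead of a memory buffer.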
This is my FFMpegExtractor.h:

#ifndef MYAPP_FFMPEGEXTRACTOR_H
#define MYAPP_FFMPEGEXTRACTOR_H
extern "C" {
#include <libavformat/avformat.h>
#include <libswresample/swresample.h>
#include <libavutil/opt.h>
}
#include <cstdint>
#include <android/asset_manager.h>
#include <string>
#include <fstream>
#include "MediaSource.cpp"
class FFMpegExtractor {
public:
FFMpegExtractor();
~FFMpegExtractor();
int64_t decode2(char *filepath, uint8_t *targetData, AudioProperties targetProperties);
private:
MediaSource *mSource;
bool createAVFormatContext(AVIOContext *avioContext, AVFormatContext **avFormatContext);
bool openAVFormatContext(AVFormatContext *avFormatContext);
int32_t cleanup(AVIOContext *avioContext, AVFormatContext *avFormatContext);
bool getStreamInfo(AVFormatContext *avFormatContext);
AVStream *getBestAudioStream(AVFormatContext *avFormatContext);
AVCodec *findCodec(AVCodecID id);
void printCodecParameters(AVCodecParameters *params);
bool createAVIOContext2(const std::string &filePath, uint8_t *buffer, uint32_t bufferSize,
AVIOContext **avioContext);
};
#endif //MYAPP_FFMPEGEXTRACTOR_H
This is FFMpegExtractor.cpp:

#include <memory>
#include <oboe/Definitions.h>
#include "FFMpegExtractor.h"
#include "logging.h"
#include <fstream>
FFMpegExtractor::FFMpegExtractor() {
mSource = new MediaSource;
}
FFMpegExtractor::~FFMpegExtractor() {
delete mSource;
}
constexpr int kInternalBufferSize = 1152; // Use MP3 block size. https://wiki.hydrogenaud.io/index.php?title=MP3
/**
* Reads from an IStream into FFmpeg.
*
* @param ptr A pointer to the user-defined IO data structure.
* @param buf A buffer to read into.
* @param buf_size The size of the buffer buff.
*
* @return The number of bytes read into the buffer.
*/
// If FFmpeg needs to read the file, it will call this function.
// We need to fill the buffer with file's data.
int read(void *opaque, uint8_t *buffer, int buf_size) {
MediaSource *source = (MediaSource *) opaque;
return source->read(buffer, buf_size);
}
// If FFmpeg needs to seek in the file, it will call this function.
// We need to change the read pos.
int64_t seek(void *opaque, int64_t offset, int whence) {
MediaSource *source = (MediaSource *) opaque;
return source->seek(offset, whence);
}
// Create and save a MediaSource instance.
bool FFMpegExtractor::createAVIOContext2(const std::string &filepath, uint8_t *buffer, uint32_t bufferSize,
AVIOContext **avioContext) {
mSource = new MediaSource;
mSource->open(filepath);
constexpr int isBufferWriteable = 0;
*avioContext = avio_alloc_context(
buffer, // internal buffer for FFmpeg to use
bufferSize, // For optimal decoding speed this should be the protocol block size
isBufferWriteable,
mSource, // Will be passed to our callback functions as a (void *)
read, // Read callback function
nullptr, // Write callback function (not used)
seek); // Seek callback function
if (*avioContext == nullptr) {
LOGE("Failed to create AVIO context");
return false;
} else {
return true;
}
}
bool
FFMpegExtractor::createAVFormatContext(AVIOContext *avioContext,
AVFormatContext **avFormatContext) {
*avFormatContext = avformat_alloc_context();
(*avFormatContext)->pb = avioContext;
if (*avFormatContext == nullptr) {
LOGE("Failed to create AVFormatContext");
return false;
} else {
LOGD("Successfully created AVFormatContext");
return true;
}
}
bool FFMpegExtractor::openAVFormatContext(AVFormatContext *avFormatContext) {
int result = avformat_open_input(&avFormatContext,
"", /* URL is left empty because we're providing our own I/O */
nullptr /* AVInputFormat *fmt */,
nullptr /* AVDictionary **options */
);
if (result == 0) {
return true;
} else {
LOGE("Failed to open file. Error code %s", av_err2str(result));
return false;
}
}
bool FFMpegExtractor::getStreamInfo(AVFormatContext *avFormatContext) {
int result = avformat_find_stream_info(avFormatContext, nullptr);
if (result == 0) {
return true;
} else {
LOGE("Failed to find stream info. Error code %s", av_err2str(result));
return false;
}
}
AVStream *FFMpegExtractor::getBestAudioStream(AVFormatContext *avFormatContext) {
int streamIndex = av_find_best_stream(avFormatContext, AVMEDIA_TYPE_AUDIO, -1, -1, nullptr, 0);
if (streamIndex < 0) {
LOGE("Could not find stream");
return nullptr;
} else {
return avFormatContext->streams[streamIndex];
}
}
int64_t FFMpegExtractor::decode2(
char* filepath,
uint8_t *targetData,
AudioProperties targetProperties) {
LOGD("Decode SETUP");
int returnValue = -1; // -1 indicates error
// Create a buffer for FFmpeg to use for decoding (freed in the custom deleter below)
auto buffer = reinterpret_cast<uint8_t *>(av_malloc(kInternalBufferSize));
// Create an AVIOContext with a custom deleter
std::unique_ptr<AVIOContext, void (*)(AVIOContext *)> ioContext{
nullptr,
[](AVIOContext *c) {
av_free(c->buffer);
avio_context_free(&c);
}
};
{
AVIOContext *tmp = nullptr;
if (!createAVIOContext2(filepath, buffer, kInternalBufferSize, &tmp)) {
LOGE("Could not create an AVIOContext");
return returnValue;
}
ioContext.reset(tmp);
}
// Create an AVFormatContext using the avformat_free_context as the deleter function
std::unique_ptr<AVFormatContext, decltype(&avformat_free_context)> formatContext{
nullptr,
&avformat_free_context
};
{
AVFormatContext *tmp;
if (!createAVFormatContext(ioContext.get(), &tmp)) return returnValue;
formatContext.reset(tmp);
}
if (!openAVFormatContext(formatContext.get())) return returnValue;
LOGD("172");
if (!getStreamInfo(formatContext.get())) return returnValue;
LOGD("175");
// Obtain the best audio stream to decode
AVStream *stream = getBestAudioStream(formatContext.get());
if (stream == nullptr || stream->codecpar == nullptr) {
LOGE("Could not find a suitable audio stream to decode");
return returnValue;
}
LOGD("183");
printCodecParameters(stream->codecpar);
// Find the codec to decode this stream
AVCodec *codec = avcodec_find_decoder(stream->codecpar->codec_id);
if (!codec) {
LOGE("Could not find codec with ID: %d", stream->codecpar->codec_id);
return returnValue;
}
// Create the codec context, specifying the deleter function
std::unique_ptr<AVCodecContext, void (*)(AVCodecContext *)> codecContext{
nullptr,
[](AVCodecContext *c) { avcodec_free_context(&c); }
};
{
AVCodecContext *tmp = avcodec_alloc_context3(codec);
if (!tmp) {
LOGE("Failed to allocate codec context");
return returnValue;
}
codecContext.reset(tmp);
}
// Copy the codec parameters into the context
if (avcodec_parameters_to_context(codecContext.get(), stream->codecpar) < 0) {
LOGE("Failed to copy codec parameters to codec context");
return returnValue;
}
// Open the codec
if (avcodec_open2(codecContext.get(), codec, nullptr) < 0) {
LOGE("Could not open codec");
return returnValue;
}
// prepare resampler
int32_t outChannelLayout = (1 << targetProperties.channelCount) - 1;
LOGD("Channel layout %d", outChannelLayout);
SwrContext *swr = swr_alloc();
av_opt_set_int(swr, "in_channel_count", stream->codecpar->channels, 0);
av_opt_set_int(swr, "out_channel_count", targetProperties.channelCount, 0);
av_opt_set_int(swr, "in_channel_layout", stream->codecpar->channel_layout, 0);
av_opt_set_int(swr, "out_channel_layout", outChannelLayout, 0);
av_opt_set_int(swr, "in_sample_rate", stream->codecpar->sample_rate, 0);
av_opt_set_int(swr, "out_sample_rate", targetProperties.sampleRate, 0);
av_opt_set_int(swr, "in_sample_fmt", stream->codecpar->format, 0);
av_opt_set_sample_fmt(swr, "out_sample_fmt", AV_SAMPLE_FMT_FLT, 0);
av_opt_set_int(swr, "force_resampling", 1, 0);
// Check that resampler has been inited
int result = swr_init(swr);
if (result != 0) {
LOGE("swr_init failed. Error: %s", av_err2str(result));
return returnValue;
};
if (!swr_is_initialized(swr)) {
LOGE("swr_is_initialized is false\n");
return returnValue;
}
// Prepare to read data
int bytesWritten = 0;
AVPacket avPacket; // Stores compressed audio data
av_init_packet(&avPacket);
AVFrame *decodedFrame = av_frame_alloc(); // Stores raw audio data
int bytesPerSample = av_get_bytes_per_sample((AVSampleFormat) stream->codecpar->format);
LOGD("Bytes per sample %d", bytesPerSample);
// While there is more data to read, read it into the avPacket
while (av_read_frame(formatContext.get(), &avPacket) == 0) {
if (avPacket.stream_index == stream->index) {
while (avPacket.size > 0) {
// Pass our compressed data into the codec
result = avcodec_send_packet(codecContext.get(), &avPacket);
if (result != 0) {
LOGE("avcodec_send_packet error: %s", av_err2str(result));
goto cleanup;
}
// Retrieve our raw data from the codec
result = avcodec_receive_frame(codecContext.get(), decodedFrame);
if (result != 0) {
LOGE("avcodec_receive_frame error: %s", av_err2str(result));
goto cleanup;
}
// DO RESAMPLING
auto dst_nb_samples = (int32_t) av_rescale_rnd(
swr_get_delay(swr, decodedFrame->sample_rate) + decodedFrame->nb_samples,
targetProperties.sampleRate,
decodedFrame->sample_rate,
AV_ROUND_UP);
short *buffer1;
av_samples_alloc(
(uint8_t **) &buffer1,
nullptr,
targetProperties.channelCount,
dst_nb_samples,
AV_SAMPLE_FMT_FLT,
0);
int frame_count = swr_convert(
swr,
(uint8_t **) &buffer1,
dst_nb_samples,
(const uint8_t **) decodedFrame->data,
decodedFrame->nb_samples);
int64_t bytesToWrite = frame_count * sizeof(float) * targetProperties.channelCount;
memcpy(targetData + bytesWritten, buffer1, (size_t) bytesToWrite);
bytesWritten += bytesToWrite;
av_freep(&buffer1);
avPacket.size = 0;
avPacket.data = nullptr;
}
}
}
av_frame_free(&decodedFrame);
returnValue = bytesWritten;
cleanup:
return returnValue;
}
void FFMpegExtractor::printCodecParameters(AVCodecParameters *params) {
LOGD("Stream properties");
LOGD("Channels: %d", params->channels);
LOGD("Channel layout: %"
PRId64, params->channel_layout);
LOGD("Sample rate: %d", params->sample_rate);
LOGD("Format: %s", av_get_sample_fmt_name((AVSampleFormat) params->format));
LOGD("Frame size: %d", params->frame_size);
}
And this is the MediaSource.cpp:

#ifndef MYAPP_MEDIASOURCE_CPP
#define MYAPP_MEDIASOURCE_CPP
extern "C" {
#include <libavformat/avformat.h>
#include <libswresample/swresample.h>
#include <libavutil/opt.h>
}
#include <cstdint>
#include <android/asset_manager.h>
#include <string>
#include <fstream>
#include "logging.h"
// wrapper class for file stream
class MediaSource {
public:
MediaSource() {
}
~MediaSource() {
source.close();
}
void open(const std::string &filePath) {
const char *x = filePath.c_str();
LOGD("Opened %s", x);
source.open(filePath, std::ios::in | std::ios::binary);
}
int read(uint8_t *buffer, int buf_size) {
// read data to buffer
source.read((char *) buffer, buf_size);
// return how many bytes were read
return source.gcount();
}
int64_t seek(int64_t offset, int whence) {
if (whence == AVSEEK_SIZE) {
// FFmpeg needs file size.
int oldPos = source.tellg();
source.seekg(0, std::ios::end);
int64_t length = source.tellg();
// seek to old pos
source.seekg(oldPos);
return length;
} else if (whence == SEEK_SET) {
// set pos to offset
source.seekg(offset);
} else if (whence == SEEK_CUR) {
// add offset to pos
source.seekg(offset, std::ios::cur);
} else {
// do not support other flags, return -1
return -1;
}
// return current pos
return source.tellg();
}
private:
std::ifstream source;
};
#endif //MYAPP_MEDIASOURCE_CPP
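The seek contract the wrapper implements can be exercised without FFmpeg at all. The sketch below assumes kAvSeekSize stands in for FFmpeg's AVSEEK_SIZE flag, and seekStream mirrors MediaSource::seek over a generic std::istream:

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>    // SEEK_SET, SEEK_CUR
#include <ios>
#include <istream>
#include <sstream>

// Stand-in for FFmpeg's AVSEEK_SIZE flag (the real value lives in avio.h).
constexpr int kAvSeekSize = 0x10000;

// Mirrors MediaSource::seek: the size query reports the stream length
// without moving the read position; SEEK_SET/SEEK_CUR move it.
int64_t seekStream(std::istream &source, int64_t offset, int whence) {
    if (whence == kAvSeekSize) {
        std::streampos oldPos = source.tellg();
        source.seekg(0, std::ios::end);
        int64_t length = source.tellg();
        source.seekg(oldPos);          // restore the old position
        return length;
    } else if (whence == SEEK_SET) {
        source.seekg(offset);          // absolute position
    } else if (whence == SEEK_CUR) {
        source.seekg(offset, std::ios::cur); // relative position
    } else {
        return -1;                     // other flags unsupported
    }
    return source.tellg();
}
```

As in MediaSource::seek, the size-query branch must restore the old read position afterwards; otherwise every subsequent read would start at end of stream.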
When the code is executed, I can see that I submit the correct file path, so I assume the resource mp3 is there.

When this code is executed, the app crashes in line 103 of FFMpegExtractor.cpp, at formatContext.reset(tmp);
This is what Android Studio logs when the app crashes:
--------- beginning of crash
2020-02-27 14:31:26.341 9852-9945/com.user.myapp A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x7fffffff0 in tid 9945 (chaelpohl.loopy), pid 9852 (user.myapp)

This is the (sadly very short) output I get with ndk-stack:

********** Crash dump: **********
Build fingerprint: 'samsung/dreamltexx/dreamlte:9/PPR1.180610.011/G950FXXU6DSK9:user/release-keys'
#00 0x0000000000016c50 /data/app/com.user.myapp-D7dBCgHF-vdQNNSald4lWA==/lib/arm64/libavformat.so (avformat_free_context+260)
avformat_free_context
??:0:0
Crash dump is completed

I tested a bit around, and every call to my formatContext crashes the app. So I assume there is something wrong with the input I provide to build it, but I have no clue how to debug this.

Any help is appreciated! (Happy to provide additional resources if something crucial is missing.)
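One detail that may or may not be related to the crash: createAVFormatContext above assigns (*avFormatContext)->pb before checking the allocation result for nullptr. Here is a minimal sketch of the safer ordering, using hypothetical stand-ins (DummyContext, allocContext) rather than the real FFmpeg types:

```cpp
#include <cassert>

// Hypothetical stand-ins for AVFormatContext / avformat_alloc_context().
struct DummyContext { void *pb = nullptr; };

DummyContext *allocContext(bool fail) {
    return fail ? nullptr : new DummyContext();
}

// Check the out-parameter BEFORE dereferencing it; createAVFormatContext
// above does (*ctx)->pb = ... first, which crashes if allocation failed.
bool createContext(void *ioContext, DummyContext **ctx, bool simulateFailure) {
    *ctx = allocContext(simulateFailure);
    if (*ctx == nullptr) {
        return false;          // report failure instead of dereferencing null
    }
    (*ctx)->pb = ioContext;    // safe: allocation is known to have succeeded
    return true;
}
```

This sketch only demonstrates the check-before-dereference ordering; it does not claim to be the cause of the SIGSEGV above.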