Other articles (63)

  • Customising categories

    21 June 2013

    The category creation form
    For those who know SPIP well, a category can be thought of as a "rubrique" (section).
    For a category-type document, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a media-type document, the fields not displayed by default are: Short description ("Descriptif rapide")
    It is also in this configuration area that you can specify the (...)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution   Version name           Version number
    Debian         Squeeze                6.x.x
    Debian         Wheezy                 7.x.x
    Debian         Jessie                 8.x.x
    Ubuntu         The Precise Pangolin   12.04 LTS
    Ubuntu         The Trusty Tahr        14.04

    If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above, or send the necessary fixes to add (...)

  • No talk of markets, clouds, etc.

    10 April 2011

    The vocabulary used on this site tries to avoid any reference to the fads that flourish so merrily
    on the web 2.0 and in the companies that live off them.
    You are therefore invited to banish the use of the terms "Brand", "Cloud", "Market", etc.
    Our motivation is above all to create a simple tool, accessible to everyone, that encourages
    the sharing of creations on the Internet and lets authors keep the greatest possible autonomy.
    No "Gold or Premium contract" is therefore planned, no (...)

On other sites (8551)

  • Linux Media Player Survey Circa 2001

    2 September 2010, by Multimedia Mike — General

    Here’s a document I scavenged from my archives. It was dated September 1, 2001, and I am publishing it now, 9 years later. It serves as a sort of time capsule for the state of media player programs at the time. Looking back on this list, I can’t understand why I couldn’t find MPlayer while I was conducting this survey, especially since MPlayer is the project I eventually started working for a few months after writing this piece.

    For a little context, I had been studying multimedia concepts and tech for a year and was itching to get my hands dirty with practical multimedia coding. But I wanted to tackle what I perceived as unsolved problems, like playback of proprietary codecs. I didn’t want to have to build a new media playback framework just to start working on my problems, so I surveyed the available players to see which ones I could plug into and use as a testbed for implementing new decoders.

    Regarding Real Player, I wrote: “We’re trying to move away from the proprietary, closed-source ‘solutions’.” Heh. Was I really an insufferable open source idealist back in the day?

    Anyway, here’s the text, with some “Where are they now?” commentary [in brackets]:


    Towards an All-Inclusive Media Playing Solution for Linux

    I don’t feel that the media playing solutions for Linux set their sights high enough, even though they do tend to be quite ambitious.

    I want to create a media player for Linux that can open a file, figure out what type of file it is (AVI, MOV, etc.), determine the compression algorithms used to encode the audio and video chunks inside (MPEG, Cinepak, Sorenson, etc.) and replay the file using the best audio, video, and CPU facilities available on the computer.
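    To make the first step concrete, here is a small illustrative sketch of the kind of container sniffing such a player starts with. The RIFF/"AVI " and QuickTime atom magic values are the standard ones; everything else is deliberately simplified, and none of this is code from the players surveyed below.

    #include <stdio.h>
    #include <string.h>

    /* Guess the container from the first bytes of a file. AVI is a RIFF
       file with form type "AVI "; QuickTime files begin with a top-level
       atom such as "moov", "mdat", or (in later files) "ftyp". */
    static const char *guess_container(const unsigned char *buf, size_t len)
    {
        if (len >= 12 && !memcmp(buf, "RIFF", 4) && !memcmp(buf + 8, "AVI ", 4))
            return "AVI";
        if (len >= 8 && (!memcmp(buf + 4, "moov", 4) ||
                         !memcmp(buf + 4, "mdat", 4) ||
                         !memcmp(buf + 4, "ftyp", 4)))
            return "MOV";
        return "unknown";
    }

    int main(int argc, char **argv)
    {
        unsigned char buf[12] = {0};
        FILE *f = argc > 1 ? fopen(argv[1], "rb") : NULL;
        if (!f)
            return 1;
        size_t n = fread(buf, 1, sizeof(buf), f);
        fclose(f);
        printf("%s\n", guess_container(buf, n));
        return 0;
    }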

    Video and audio playback is a solved problem on Linux; I don’t wish to solve that problem again. The problem that isn’t solved is the reliance on proprietary multimedia solutions, through some kind of WINE-like layer, in order to decode compressed multimedia files.

    Survey of Linux solutions for decoding proprietary multimedia
    updated 2001-09-01

    AVI Player for XMMS
    This is based on Avifile. All the same advantages and limitations apply.
    [Top Google hit is a Freshmeat page that doesn’t indicate activity since 2001-2002.]

    Avifile
    This player does a great job at taking apart AVI and ASF files and then feeding the compressed chunks of multimedia data through to the binary Win32 decoders.

    The program is written in C++ and I’m not very good at interpreting that kind of code. But I’m learning all over again. Examining the object hierarchy, it appears that the designers had the foresight to include native support for decoders that are compiled into the program from source code. However, closer examination reveals that there is support for ONE source decoder and that’s the “decoder” for uncompressed data. Still, I tried to manipulate this routine to accept and decode data from other codecs but no dice. It’s really confounding. The program always crashes when I feed non-uncompressed data through the source decoder.
    [Lives at http://avifile.sourceforge.net/ ; not updated since 2006.]

    Real Player
    There’s not much to do with this since it is closed source and proprietary. Even though there is a plugin architecture, that’s not satisfactory. We’re trying to move away from the proprietary, closed-source “solutions”.
    [Still kickin’ with version 11.]

    XAnim
    This is a well-established Unix media player. To his credit, the author does as well as he can with the resources he has. In other words, he supports the non-proprietary video codecs well, and even has support for some proprietary video codecs through binary-only decoders.

    The source code is extremely difficult to work with as the author chose to use the X coding format, which I’ve never seen used anywhere else except for X header files. The infrastructure for extending the program and supporting other codecs and file formats is there, I suppose, but I would have to wrap my head around the coding style. Maybe I can learn to work past that. The other thing that bothers me about this program is the decoding approach: it seems that each video decoder includes routines to decompress the multimedia data into every conceivable RGB and YUV output format. This seems backwards to me; it seems better to have one decoder function that decodes the data into the native format it was compressed in (e.g., YV12 for MPEG data) and then pass that data to another layer of the program that’s in charge of presenting the data and possibly converting it if necessary (see the sketch after this entry). This layer would encompass highly optimized software conversion routines, including special CPU-specific instructions (e.g., MMX and SSE), and eliminate the need to duplicate those routines in every decoder. But I’m getting ahead of myself.
    [This one was pretty much dead before I made this survey, the most recent update being in 1999. Still, we owe it much respect as the granddaddy of Unix multimedia playback programs.]
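    The decoder/presentation split described above maps naturally onto a small interface. The following is an illustrative reconstruction with invented names, not code from XAnim or any other player surveyed here:

    /* Each codec decodes only into the pixel format it natively produces. */
    typedef struct {
        int width, height;
        int pix_fmt;                 /* e.g. an identifier for YV12, for MPEG-family codecs */
        unsigned char *planes[3];    /* Y, U, V planes (or a single RGB plane) */
        int strides[3];
    } RawFrame;

    typedef struct {
        const char *name;
        int (*decode)(void *priv, const unsigned char *chunk, int size, RawFrame *out);
    } VideoDecoder;

    /* The presentation layer owns all colorspace conversion and scaling, so
       optimized routines (MMX, SSE) live in exactly one place instead of
       being duplicated inside every decoder. */
    void present_frame(const RawFrame *frame);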

    Xine
    This seems like a promising program. It was originally designed to play MPEGs from DVDs. It can also play MPEG files on a hard drive and utilizes the Xv extensions for hardware YUV playback. It’s also supposed to play AVI files using the same technique as Avifile but I have never, ever gotten it to work. If an AVI file has both video and sound, the binary video decoder can’t decode any frames. If the AVI file has video and no sound, the program gets confused and crashes, as far as I can tell.

    Still, it’s promising, and I’ve been trying to work around these crashes. It doesn’t yet have the type of modularization I’d like to see. Right now, it is tailored to MPEG playback, and AVI playback is an afterthought. Still, it appears to have a generalized interface for dropping in new file demultiplexers.

    I tried to extend the program to support source decoders by rewriting w32codec.c from scratch. I’m not having a smooth time of it so far. I’m able to perform some manipulations on the output window. However, I can’t get the program to deal with an RGB image format. It has trouble allocating an RGB surface with XvShmCreateImage(). This isn’t surprising given my limited knowledge of X, which is that Xv applies to YUV images, though it might apply to RGB images as well (a probing sketch follows this entry). In any case, the program should be able to fall back on regular RGB pixmaps if that Xv call fails.

    Right now, this program is looking the most promising. It will take some work to extend the underlying infrastructure, but it seems doable since I know C quite well and can understand the flow of this program, as opposed to Avifile and its C++. The C code also compiles about 10 times faster.
    [My home project for many years after a brief flirtation with MPlayer. It is still alive ; its latest release was just a month ago.]
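    As a side note on the Xv complaint above: whether the XvShmCreateImage() path can work for RGB data at all is something a player can probe up front. This is an illustrative sketch, not code from Xine:

    #include <X11/Xlib.h>
    #include <X11/extensions/Xvlib.h>

    /* Returns nonzero if the Xv port advertises at least one RGB image
       format. If it does not, the player must fall back to plain XImage
       output (XPutImage) for RGB frames. */
    static int xv_port_supports_rgb(Display *dpy, XvPortID port)
    {
        int i, count = 0, found = 0;
        XvImageFormatValues *fmts = XvListImageFormats(dpy, port, &count);
        for (i = 0; i < count; i++) {
            if (fmts[i].type == XvRGB) {
                found = 1;
                break;
            }
        }
        if (fmts)
            XFree(fmts);
        return found;
    }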

    XMovie
    This library is a Quicktime movie player. I haven’t looked at it too extensively yet, but I do remember looking at it at one point and reading the documentation that said it doesn’t support key frames. Still, I should examine it again since they released a new version recently.
    [Heroine Virtual still puts out some software but XMovie has not been updated since 2005.]

    XMPS
    This program compiles for me, but doesn’t do much else. It can play an MP3 file. I have been able to get MPEG movies to play through it, but it refuses to show the full video frame, constricting it to a small window (obviously a bug).
    [This project is hosted on SourceForge and is listed with a registration date of 2003, well after this survey was made. So the project obviously lived elsewhere in 2001. Meanwhile, it doesn’t look like any files ever made it to SF for hosting.]

    XTheater
    I can’t even get this program to compile. It’s supposed to be an MPEG player based on SMPEG. As such, it probably doesn’t hold much promise for being easily extended into a general media player.
    [Last updated in 2002.]

    GMerlin
    I can’t get this to compile yet. I have a bug report in to the dev group.
    [Updated consistently in the last 9 years. Last update was in February of this year. I can’t find any record of my bug report, though.]

  • How can I use this JNI code to play video in a GLSurfaceView? I cannot find the way to use it

    2 May 2016, by MrHuang
    /* The forum formatting stripped the header names from several #include
       lines; the ones below are assumed from what this file actually uses. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/time.h>

    #include "AeeVideo.h"
    #include "videorender.h"
    #include "decode.h"

    #include <jni.h>
    #include <android/log.h>
    #define TAG "AeeVideo"
    #define LOGE(format, ...)  __android_log_print(ANDROID_LOG_ERROR, TAG, format, ##__VA_ARGS__)
    #define LOGI(format, ...)  __android_log_print(ANDROID_LOG_INFO,  TAG, format, ##__VA_ARGS__)

    static int g_connectstate = -1;
    static DecodeCtx *g_dec_ctx;

    static int     last_width   = 0;
    static int     last_height  = 0;
    static int     last_picsize = 0;
    static uint8_t last_picture[3 * 1024 * 1024];//save color pic

    JNIEXPORT jint JNICALL Java_com_aee_video_FirstOpenGLProjectJNI_AeePlayerSetScreen(JNIEnv * env, jobject obj,jint width,jint height)
    {
       gl_screen_set(0,  0, width, height);
       LOGI("and opengl set screen size (%d,%d,%d,%d)\n",0,0,width,height);

       gl_initialize();
       Decode_Init();
       return 0;
    }

    JNIEXPORT jint JNICALL Java_com_aee_video_FirstOpenGLProjectJNI_AeePlayerStart(JNIEnv * env, jobject obj, jstring Url)
    {
       const char *pUrl = (*env)->GetStringUTFChars(env, Url, 0);
       LOGI("stream url %s \n", pUrl);

       g_dec_ctx = Decode_OpenStream((char*)pUrl);
       if (!g_dec_ctx) {
           LOGE("openstream %s failed!\n", pUrl);
           (*env)->ReleaseStringUTFChars(env, Url, pUrl);
           return -1;
       }
       /* release the UTF chars to avoid a leak (assumes Decode_OpenStream copies the URL) */
       (*env)->ReleaseStringUTFChars(env, Url, pUrl);
       return 0;
    }

    JNIEXPORT jint JNICALL Java_com_aee_video_FirstOpenGLProjectJNI_AeePlayerRender(JNIEnv * env, jobject obj)
    {
       if (!g_dec_ctx)
           return -1;

       VideoFrame frame;
       int ret = Decode_ReadFrame(g_dec_ctx, &frame);
       if (ret <= 0) {
           if (last_picsize > 0) {
               LOGI("disconnected, rendering last picture\n");
               gl_render_frame((const char *)last_picture, last_width, last_height);
           }
           return ret;
       }
       LOGI("render video frame, pkt w,h:(%d,%d)\n", frame.width, frame.height);
       gl_render_frame(frame.data, frame.width, frame.height);

       if (last_width != frame.width || last_height != frame.height) {
           memset(last_picture, 0x80, sizeof(last_picture));   /* 0x80 chroma reads as gray */
       }
       last_width   = frame.width;
       last_height  = frame.height;
       last_picsize = frame.size;
       /* copy only the luma plane; chroma stays at 0x80, so the saved picture renders gray */
       memcpy(last_picture, frame.data, frame.width * frame.height);
       return 0;
    }


    JNIEXPORT jint JNICALL Java_com_aee_video_FirstOpenGLProjectJNI_AeePlayerStop(JNIEnv * env, jobject obj)
    {
       LOGI("AeePlayer Stop");
       if (g_dec_ctx) {
           Decode_CloseStream(g_dec_ctx);
           g_dec_ctx = NULL;   /* so a late Render call does not use a stale context */
       }
       Decode_Quit();
       gl_uninitialize();
       return 0;
    }

    JNIEXPORT jint JNICALL Java_com_aee_video_FirstOpenGLProjectJNI_AeePlayerSetState(JNIEnv * env, jobject obj, jint state)
    {
       g_connectstate = state;
       LOGI("g_connectstate %d \n",g_connectstate);
       return 0;
    }

    JNIEXPORT jstring JNICALL Java_com_aee_video_FirstOpenGLProjectJNI_AeePlayerGetVersion(JNIEnv * env, jobject obj)
    {
       char v[10]= "1.0.0";
       return (*env)->NewStringUTF(env,v);
    }

    #define _ANDROID_APP_

    /* The forum formatting stripped the header names from the #include lines
       that followed here; the ones below are assumed from what this file uses. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "videorender.h"

    #ifdef _ANDROID_APP_
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>
    #include <sys/time.h>
    #include <android/log.h>
    #define LOGE(format, ...)  __android_log_print(ANDROID_LOG_ERROR, "VideoRender", format, ##__VA_ARGS__)
    #define LOGI(format, ...)  __android_log_print(ANDROID_LOG_INFO,  "VideoRender", format, ##__VA_ARGS__)
    #endif

    #ifdef _IOS_APP_
    #include <OpenGLES/ES2/gl.h>
    #include <OpenGLES/ES2/glext.h>
    #endif

    GLint ATTRIB_VERTEX, ATTRIB_TEXTURE;

    static GLuint g_texYId;
    static GLuint g_texUId;
    static GLuint g_texVId;
    static GLuint simpleProgram;

    static int s_x = 0;
    static int s_y = 0;
    static int s_width = 0;
    static int s_height = 0;

    static int view_x = 0;  
    static int view_y = 0;
    static int view_width = 0;
    static int view_height = 0;

    int g_width = 1280;  
    int g_height = 720;


    static const char* FRAG_SHADER =
       "varying lowp vec2 tc;\n"
       "uniform sampler2D SamplerY;\n"
       "uniform sampler2D SamplerU;\n"
       "uniform sampler2D SamplerV;\n"
       "void main(void)\n"
       "{\n"
           "mediump vec3 yuv;\n"
           "lowp vec3 rgb;\n"
           "yuv.x = texture2D(SamplerY, tc).r;\n"
           "yuv.y = texture2D(SamplerU, tc).r - 0.5;\n"
           "yuv.z = texture2D(SamplerV, tc).r - 0.5;\n"
           "rgb = mat3( 1,   1,   1,\n"
                       "0,       -0.39465,  2.03211,\n"
                       "1.13983,   -0.58060,  0) * yuv;\n"
           "gl_FragColor = vec4(rgb, 1);\n"
       "}\n";

    static const char* VERTEX_SHADER =
         "attribute vec4 vPosition;    \n"
         "attribute vec2 a_texCoord;   \n"
         "varying vec2 tc;     \n"
         "void main()                  \n"
         "{                            \n"
         "   gl_Position = vPosition;  \n"
         "   tc = a_texCoord;  \n"
         "}                            \n";

    static GLuint bindTexture(GLuint texture, const char *buffer, GLuint w , GLuint h)
    {
       glBindTexture ( GL_TEXTURE_2D, texture );
       glTexImage2D ( GL_TEXTURE_2D, 0, GL_LUMINANCE, w, h, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, buffer);
       glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
       glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
       glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
       glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );

       return texture;
    }

    static void renderFrame()
    {
       static GLfloat squareVertices[] = {
           -1.0f, -1.0f,
           1.0f, -1.0f,
           -1.0f,  1.0f,
           1.0f,  1.0f,
       };

       //texture rotate
       /*static GLfloat squareVertices[] = {
           -1.0f, -0.5f,
           0.5f, -1.0f,
           -0.5f,  1.0f,
           1.0f,  0.5f,
       };*/

       static GLfloat coordVertices[] = {
           0.0f, 1.0f,
           1.0f, 1.0f,
           0.0f,  0.0f,
           1.0f,  0.0f,
       };
       //texture half
       /*static GLfloat coordVertices[] = {
           0.0f, 1.0f,
           0.5f, 1.0f,
           0.0f,  0.0f,
           0.5f,  0.0f,
       };*/
       glClearColor(0.0f, 0.0f, 0.0f, 1);
       glClear(GL_COLOR_BUFFER_BIT);

       GLint tex_y = glGetUniformLocation(simpleProgram, "SamplerY");
       GLint tex_u = glGetUniformLocation(simpleProgram, "SamplerU");
       GLint tex_v = glGetUniformLocation(simpleProgram, "SamplerV");
       //LOGI("tex_y:%d,tex_u:%d,tex_v:%d \n",tex_y,tex_u,tex_v);

       ATTRIB_VERTEX = glGetAttribLocation(simpleProgram, "vPosition");
       ATTRIB_TEXTURE = glGetAttribLocation(simpleProgram, "a_texCoord");
       //LOGI("vertex %d,texture %d",ATTRIB_VERTEX,ATTRIB_TEXTURE);
       glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
       glEnableVertexAttribArray(ATTRIB_VERTEX);
       //LOGI("enableVertexAttribArray vertex");

       glVertexAttribPointer(ATTRIB_TEXTURE, 2, GL_FLOAT, 0, 0, coordVertices);
       glEnableVertexAttribArray(ATTRIB_TEXTURE);
       //LOGI("enableVertexAttribArray texture");

       glActiveTexture(GL_TEXTURE0);
       glBindTexture(GL_TEXTURE_2D, g_texYId);
       glUniform1i(tex_y, 0);

       glActiveTexture(GL_TEXTURE1);
       glBindTexture(GL_TEXTURE_2D, g_texUId);
       glUniform1i(tex_u, 1);

       glActiveTexture(GL_TEXTURE2);
       glBindTexture(GL_TEXTURE_2D, g_texVId);
       glUniform1i(tex_v, 2);

       glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
       //glutSwapBuffers();//double buffer
       //glFlush();//single buffer
    }

    static GLuint buildShader(const char* source, GLenum shaderType)
    {
       GLuint shaderHandle = glCreateShader(shaderType);

       if (shaderHandle)
       {
           glShaderSource(shaderHandle, 1, &source, 0);
           glCompileShader(shaderHandle);
           GLint compiled = 0;
           glGetShaderiv(shaderHandle, GL_COMPILE_STATUS, &compiled);
           if (!compiled){
               GLint infoLen = 0;
               glGetShaderiv(shaderHandle, GL_INFO_LOG_LENGTH, &infoLen);
               if (infoLen){
                   char* buf = (char*) malloc(infoLen);
                   if (buf){
                       glGetShaderInfoLog(shaderHandle, infoLen, NULL, buf);
                       LOGE("error::Could not compile shader %d:\n%s\n", shaderType, buf);
                       free(buf);
                   }
               }
               glDeleteShader(shaderHandle);   /* delete the shader even when no info log is available */
               shaderHandle = 0;
           }
       }

       return shaderHandle;
    }

    static GLuint buildProgram(const char* vertexShaderSource,
           const char* fragmentShaderSource)
    {
       GLuint vertexShader = buildShader(vertexShaderSource, GL_VERTEX_SHADER);
       GLuint fragmentShader = buildShader(fragmentShaderSource, GL_FRAGMENT_SHADER);
       GLuint programHandle = glCreateProgram();

       if (programHandle)
       {
           glAttachShader(programHandle, vertexShader);
           glAttachShader(programHandle, fragmentShader);
           glLinkProgram(programHandle);
           GLint linkStatus = GL_FALSE;
           glGetProgramiv(programHandle, GL_LINK_STATUS, &linkStatus);
           if (linkStatus != GL_TRUE) {
               GLint bufLength = 0;
               glGetProgramiv(programHandle, GL_INFO_LOG_LENGTH, &bufLength);
               if (bufLength) {
                   char* buf = (char*) malloc(bufLength);
                   if (buf) {
                       glGetProgramInfoLog(programHandle, bufLength, NULL, buf);
                       LOGE("error::Could not link program:\n%s\n", buf);
                       free(buf);
                   }
               }
               glDeleteProgram(programHandle);
               programHandle = 0;
           }
       }

       return programHandle;
    }

    void gl_initialize()
    {
       LOGI("####gl_initialize###\n");
       simpleProgram = buildProgram(VERTEX_SHADER, FRAG_SHADER);
       if(!simpleProgram){
           LOGE("opengl buildProgram() failed! \n");
           return;
       }
       LOGI("glProgram %d\n",simpleProgram);
       glUseProgram(simpleProgram);
       glGenTextures(1, &g_texYId);
       glGenTextures(1, &g_texUId);
       glGenTextures(1, &g_texVId);
       LOGI("opengl gentextures end");
    }

    void gl_uninitialize()
    {
       LOGI("####gl_uninitialize####");
       glDeleteProgram(simpleProgram);
       glDeleteTextures(1, &g_texYId);
       glDeleteTextures(1, &g_texUId);
       glDeleteTextures(1, &g_texVId);
    }

    void gl_render_frame(const char *buf, int w, int h)
    {
       if (!buf || w <= 0 || h <= 0)
       {
           LOGE("this frame is invalid \n");
           return;
       }
       /* planar YUV420: Y is w*h bytes, then U and V are (w/2)*(h/2) each */
       char *y_buf = (char *)buf;
       char *u_buf = y_buf + w * h;
       char *v_buf = u_buf + w * h / 4;
       gl_viewsize_set(w, h);
       glViewport(view_x, view_y, view_width, view_height);
       //LOGI("glViewport x,y,width,height=[%d,%d,%d,%d]\n",view_x,view_y,view_width,view_height);
       bindTexture(g_texYId, (const char*)y_buf, w, h);
       bindTexture(g_texUId, (const char*)u_buf, w/2, h/2);
       bindTexture(g_texVId, (const char*)v_buf, w/2, h/2);
       renderFrame();
    }

    void gl_screen_set(int screen_x, int screen_y, int screen_width, int screen_height)
    {
       s_x = screen_x;
       s_y = screen_y;
       s_width = screen_width;
       s_height = screen_height;
    }

    void gl_viewsize_set(int frame_width, int frame_height)
    {
       int view_p = (int)((float)frame_height * 100 / frame_width);
       int screen_p = (int)((float)s_height * 100 / s_width);
       if (view_p == screen_p) {
           view_x = s_x;
           view_y = s_y;
           view_width = s_width;
           view_height = s_height;
       }
       else if (view_p > screen_p){
           view_width = (int)(s_height * 100 / view_p);
           view_height = s_height;
           view_x = (int)((s_width - view_width) / 2);
           view_y = s_y;
       }
       else if (view_p < screen_p){
           view_width = s_width;
           view_height = (int)(s_width * view_p / 100) ;
           view_x = s_x;
           view_y = (int)((s_height - view_height) / 2);
       }
    }

    void gl_imagesize_set(int width, int height)
    {
       g_width = width;
       g_height = height;
    }

    public class FirstOpenGLProjectJNI {

        public static native int AeePlayerSetScreen(int width,int height);

        public static native int AeePlayerStart(String url);

        public static native int AeePlayerRender();

        public static native int AeePlayerStop();

    }

    I want to use a GLSurfaceView to play video, but the surface is always black. How can I show the video in the GLSurfaceView using this JNI code?

  • android ffmpeg bad video output

    20 August 2014, by Sujith Manjavana

    I’m following this tutorial to create my first FFmpeg app. I have successfully built the shared libs and compiled the project without any errors. But when I run the app on my Nexus 5, the output is this

    Here is the native code

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>
    #include <libavutil/pixfmt.h>

    /* The forum formatting stripped some header names here; these are
       assumed from what the code uses (wcslen, pthread_create, malloc). */
    #include <stdlib.h>
    #include <wchar.h>
    #include <pthread.h>

    #include <jni.h>
    #include <android/log.h>
    #include <android/bitmap.h>      /* AndroidBitmap_lockPixels/unlockPixels */
    #include <android/native_window.h>
    #include <android/native_window_jni.h>

    #define LOG_TAG "android-ffmpeg-tutorial02"
    #define LOGI(...) __android_log_print(4, LOG_TAG, __VA_ARGS__);
    #define LOGE(...) __android_log_print(6, LOG_TAG, __VA_ARGS__);

    ANativeWindow*      window;
    char                *videoFileName;
    AVFormatContext     *formatCtx = NULL;
    int                 videoStream;
    AVCodecContext      *codecCtx = NULL;
    AVFrame             *decodedFrame = NULL;
    AVFrame             *frameRGBA = NULL;
    jobject             bitmap;
    void*               buffer;
    struct SwsContext   *sws_ctx = NULL;
    int                 width;
    int                 height;
    int                 stop;

    jint naInit(JNIEnv *pEnv, jobject pObj, jstring pFileName) {
       AVCodec         *pCodec = NULL;
       int             i;
       AVDictionary    *optionsDict = NULL;

       videoFileName = (char *)(*pEnv)->GetStringUTFChars(pEnv, pFileName, NULL);
       LOGI("video file name is %s", videoFileName);
       // Register all formats and codecs
       av_register_all();
       // Open video file
       if(avformat_open_input(&formatCtx, videoFileName, NULL, NULL)!=0)
           return -1; // Couldn't open file
       // Retrieve stream information
       if(avformat_find_stream_info(formatCtx, NULL)<0)
           return -1; // Couldn't find stream information
       // Dump information about file onto standard error
       av_dump_format(formatCtx, 0, videoFileName, 0);
       // Find the first video stream
       videoStream=-1;
       for(i=0; i<formatCtx->nb_streams; i++) {
           if(formatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
               videoStream=i;
               break;
           }
       }
       if(videoStream==-1)
           return -1; // Didn't find a video stream
       // Get a pointer to the codec context for the video stream
       codecCtx=formatCtx->streams[videoStream]->codec;
       // Find the decoder for the video stream
       pCodec=avcodec_find_decoder(codecCtx->codec_id);
       if(pCodec==NULL) {
           fprintf(stderr, "Unsupported codec!\n");
           return -1; // Codec not found
       }
       // Open codec
       if(avcodec_open2(codecCtx, pCodec, &optionsDict)<0)
           return -1; // Could not open codec
       // Allocate video frame
       decodedFrame=avcodec_alloc_frame();
       // Allocate an AVFrame structure
       frameRGBA=avcodec_alloc_frame();
       if(frameRGBA==NULL)
           return -1;
       return 0;
    }

    jobject createBitmap(JNIEnv *pEnv, int pWidth, int pHeight) {
       int i;
       //get Bitmap class and createBitmap method ID
       jclass javaBitmapClass = (jclass)(*pEnv)->FindClass(pEnv, "android/graphics/Bitmap");
       jmethodID mid = (*pEnv)->GetStaticMethodID(pEnv, javaBitmapClass, "createBitmap", "(IILandroid/graphics/Bitmap$Config;)Landroid/graphics/Bitmap;");
       //create Bitmap.Config
       //reference: https://forums.oracle.com/thread/1548728
       const wchar_t* configName = L"ARGB_8888";
       int len = wcslen(configName);
       jstring jConfigName;
       if (sizeof(wchar_t) != sizeof(jchar)) {
           //wchar_t is defined as different length than jchar(2 bytes)
           jchar* str = (jchar*)malloc((len+1)*sizeof(jchar));
           for (i = 0; i < len; ++i) {
               str[i] = (jchar)configName[i];
           }
           str[len] = 0;
           jConfigName = (*pEnv)->NewString(pEnv, (const jchar*)str, len);
       } else {
           //wchar_t is defined same length as jchar(2 bytes)
           jConfigName = (*pEnv)->NewString(pEnv, (const jchar*)configName, len);
       }
       jclass bitmapConfigClass = (*pEnv)->FindClass(pEnv, "android/graphics/Bitmap$Config");
       jobject javaBitmapConfig = (*pEnv)->CallStaticObjectMethod(pEnv, bitmapConfigClass,
               (*pEnv)->GetStaticMethodID(pEnv, bitmapConfigClass, "valueOf", "(Ljava/lang/String;)Landroid/graphics/Bitmap$Config;"), jConfigName);
       //create the bitmap
       return (*pEnv)->CallStaticObjectMethod(pEnv, javaBitmapClass, mid, pWidth, pHeight, javaBitmapConfig);
    }

    jintArray naGetVideoRes(JNIEnv *pEnv, jobject pObj) {
       jintArray lRes;
       if (NULL == codecCtx) {
           return NULL;
       }
       lRes = (*pEnv)->NewIntArray(pEnv, 2);
       if (lRes == NULL) {
           LOGI(1, "cannot allocate memory for video size");
           return NULL;
       }
       jint lVideoRes[2];
       lVideoRes[0] = codecCtx->width;
       lVideoRes[1] = codecCtx->height;
       (*pEnv)->SetIntArrayRegion(pEnv, lRes, 0, 2, lVideoRes);
       return lRes;
    }

    void naSetSurface(JNIEnv *pEnv, jobject pObj, jobject pSurface) {
       if (0 != pSurface) {
           // get the native window reference
           window = ANativeWindow_fromSurface(pEnv, pSurface);
           // set format and size of window buffer
           ANativeWindow_setBuffersGeometry(window, 0, 0, WINDOW_FORMAT_RGBA_8888);
       } else {
           // release the native window
           ANativeWindow_release(window);
       }
    }

    jint naSetup(JNIEnv *pEnv, jobject pObj, int pWidth, int pHeight) {
       width = pWidth;
       height = pHeight;
       //create a bitmap as the buffer for frameRGBA
       bitmap = createBitmap(pEnv, pWidth, pHeight);
       if (AndroidBitmap_lockPixels(pEnv, bitmap, &buffer) < 0)
           return -1;
       //get the scaling context
       sws_ctx = sws_getContext (
               codecCtx->width,
               codecCtx->height,
               codecCtx->pix_fmt,
               pWidth,
               pHeight,
               AV_PIX_FMT_RGBA,
               SWS_BILINEAR,
               NULL,
               NULL,
               NULL
       );
       // Assign appropriate parts of bitmap to image planes in pFrameRGBA
       // Note that pFrameRGBA is an AVFrame, but AVFrame is a superset
       // of AVPicture
       avpicture_fill((AVPicture *)frameRGBA, buffer, AV_PIX_FMT_RGBA,
               pWidth, pHeight);
       return 0;
    }

    void finish(JNIEnv *pEnv) {
       //unlock the bitmap
       AndroidBitmap_unlockPixels(pEnv, bitmap);
       // note: the pixel buffer belongs to the Java bitmap, so it must not be av_free'd here
       // Free the RGB image
       av_free(frameRGBA);
       // Free the YUV frame
       av_free(decodedFrame);
       // Close the codec
       avcodec_close(codecCtx);
       // Close the video file
       avformat_close_input(&formatCtx);
    }

    void decodeAndRender(JNIEnv *pEnv) {
       ANativeWindow_Buffer    windowBuffer;
       AVPacket                packet;
       int                     i=0;
       int                     frameFinished;
       int                     lineCnt;
       while(av_read_frame(formatCtx, &packet)>=0 && !stop) {
           // Is this a packet from the video stream?
           if(packet.stream_index==videoStream) {
               // Decode video frame
               avcodec_decode_video2(codecCtx, decodedFrame, &frameFinished,
                  &packet);
               // Did we get a video frame?
               if(frameFinished) {
                   // Convert the image from its native format to RGBA
                   sws_scale
                   (
                       sws_ctx,
                       (uint8_t const * const *)decodedFrame->data,
                       decodedFrame->linesize,
                       0,
                       codecCtx->height,
                       frameRGBA->data,
                       frameRGBA->linesize
                   );
                   // lock the window buffer
                   if (ANativeWindow_lock(window, &windowBuffer, NULL) < 0) {
                       LOGE("cannot lock window");
                   } else {
                       // draw the frame on buffer
                       LOGI("copy buffer %d:%d:%d", width, height, width*height*4);
                       LOGI("window buffer: %d:%d:%d", windowBuffer.width,
                               windowBuffer.height, windowBuffer.stride);
                       // note: this flat copy assumes windowBuffer.stride == width,
                       // which does not hold on every device (see the note at the end)
                       memcpy(windowBuffer.bits, buffer,  width * height * 4);
                       // unlock the window buffer and post it to display
                       ANativeWindow_unlockAndPost(window);
                       // count number of frames
                       ++i;
                   }
               }
           }
           // Free the packet that was allocated by av_read_frame
           av_free_packet(&packet);
       }
       LOGI("total No. of frames decoded and rendered %d", i);
       finish(pEnv);
    }

    /**
    * start the video playback
    */
    void naPlay(JNIEnv *pEnv, jobject pObj) {
       //create a new thread for video decode and render
       pthread_t decodeThread;
       stop = 0;
       // note: the decode thread gets no valid JNIEnv of its own; a correct
       // version would attach it to the VM (AttachCurrentThread) before it
       // touches any JNI object such as the bitmap in finish()
       pthread_create(&decodeThread, NULL, (void *(*)(void *)) decodeAndRender, NULL);
    }

    /**
    * stop the video playback
    */
    void naStop(JNIEnv *pEnv, jobject pObj) {
       stop = 1;
    }

    jint JNI_OnLoad(JavaVM* pVm, void* reserved) {
       JNIEnv* env;
       if ((*pVm)->GetEnv(pVm, (void **)&env, JNI_VERSION_1_6) != JNI_OK) {
            return -1;
       }
       JNINativeMethod nm[8];
       nm[0].name = "naInit";
       nm[0].signature = "(Ljava/lang/String;)I";
       nm[0].fnPtr = (void*)naInit;

       nm[1].name = "naSetSurface";
       nm[1].signature = "(Landroid/view/Surface;)V";
       nm[1].fnPtr = (void*)naSetSurface;

       nm[2].name = "naGetVideoRes";
       nm[2].signature = "()[I";
       nm[2].fnPtr = (void*)naGetVideoRes;

       nm[3].name = "naSetup";
       nm[3].signature = "(II)I";
       nm[3].fnPtr = (void*)naSetup;

       nm[4].name = "naPlay";
       nm[4].signature = "()V";
       nm[4].fnPtr = (void*)naPlay;

       nm[5].name = "naStop";
       nm[5].signature = "()V";
       nm[5].fnPtr = (void*)naStop;

       jclass cls = (*env)->FindClass(env, "roman10/tutorial/android_ffmpeg_tutorial02/MainActivity");
       //Register methods with env->RegisterNatives.
       (*env)->RegisterNatives(env, cls, nm, 6);
       return JNI_VERSION_1_6;
    }

    Here is the build.sh

    #!/bin/bash
    NDK=$HOME/Desktop/adt/android-ndk-r9
    SYSROOT=$NDK/platforms/android-9/arch-arm/
    TOOLCHAIN=$NDK/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64
    function build_one
    {
    ./configure \
       --prefix=$PREFIX \
       --enable-shared \
       --disable-static \
       --disable-doc \
       --disable-ffmpeg \
       --disable-ffplay \
       --disable-ffprobe \
       --disable-ffserver \
       --disable-avdevice \
       --disable-doc \
       --disable-symver \
       --cross-prefix=$TOOLCHAIN/bin/arm-linux-androideabi- \
       --target-os=linux \
       --arch=arm \
       --enable-cross-compile \
       --sysroot=$SYSROOT \
       --extra-cflags="-Os -fpic $ADDI_CFLAGS" \
       --extra-ldflags="$ADDI_LDFLAGS" \
       $ADDITIONAL_CONFIGURE_FLAG
    make clean
    make
    make install
    }
    CPU=arm
    PREFIX=$(pwd)/android/$CPU
    ADDI_CFLAGS="-marm"
    build_one

    It works on the Galaxy Tab 2. What can I do to make it work on all devices? Please help me.
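    For what it's worth, one plausible culprit is the flat memcpy in decodeAndRender(): it assumes windowBuffer.stride == width. ANativeWindow_Buffer.stride is measured in pixels, and on some devices it is larger than the requested width, which shears every row of the output. A minimal stride-aware copy, reusing the variable names from the code above, might look like this:

    // copy row by row, honoring the window's stride (in pixels, 4 bytes per RGBA pixel)
    uint8_t *dst = (uint8_t *) windowBuffer.bits;
    uint8_t *src = (uint8_t *) buffer;
    int row;
    for (row = 0; row < height; row++) {
        memcpy(dst + row * windowBuffer.stride * 4,
               src + row * width * 4,
               width * 4);
    }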