
Other articles (42)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents a few of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First of all, a SPIP article has to be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)
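    The excerpt stops short of the details, and SPIPMotion's own commands are not shown here. As a rough, hedged illustration of the two steps it names (probing the source streams, then extracting a thumbnail frame), a generic command-line sketch with placeholder file names might look like this:

    # Hypothetical sketch only, not SPIPMotion's code; file names are placeholders.
    # 1. Retrieve the technical information about the audio and video streams of the source file:
    ffprobe -v error -show_streams source.mp4

    # 2. Generate a thumbnail by extracting a single frame from the source video:
    ffmpeg -i source.mp4 -vframes 1 -vf scale=320:-1 thumbnail.png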

On other sites (5734)

  • bitmap to yuv, recorded video has only green pixels

    19 January 2016, by UserAx

    I am trying to convert a bitmap to YUV and to record this YUV with the FFmpeg frame recorder...
    I am getting a video output with only green pixels, though when I check the properties of this video it shows the set frame rate and the resolution...

    The YUV encoding part is correct, but I feel I am making a mistake somewhere else, most likely in returning the YUV bytes to the recording part (getByte(byte[] yuv)), because only there is the yuv.length printed in the console 0; all the other methods print a large value in the console...

    Kindly help...
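    Note (a hedged observation, not a confirmed diagnosis): in Java a byte[] argument is a reference passed by value, so a callee can never hand data back by reassigning its parameter; getByte(new byte[]{}) will therefore always see a zero-length buffer unless the NV21 data is returned, or written into an array that is already the right size. A minimal sketch of the idea, reusing the names from the question:

    // Hedged sketch, not the poster's code: take the NV21 bytes from the return
    // value of getNV21() instead of expecting the empty argument to be filled.
    byte[] yuv = getNV21(640, 480, bitmap);  // getNV21 already returns the filled buffer
    System.out.println(yuv.length);          // expected 640 * 480 * 3 / 2 = 460800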

    @Override
    public void onCreate(Bundle savedInstanceState) {
       super.onCreate(savedInstanceState);
       setContentView(R.layout.activity_main);
       directory.mkdirs();

       addListenerOnButton();

       play=(Button)findViewById(R.id.buttonplay);
       stop=(Button)findViewById(R.id.buttonstop);
       record=(Button)findViewById(R.id.buttonstart);

       stop.setEnabled(false);
       play.setEnabled(false);


       record.setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View v) {
               startRecording();
               getByte(new byte[]{});
           }
       });

       stop.setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View v) {
               stopRecording();
           }
       });


       play.setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View v) throws IllegalArgumentException, SecurityException, IllegalStateException {
               Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse(String.valueOf(asmileys)));
               intent.setDataAndType(Uri.parse(String.valueOf(asmileys)), "video/mp4");
               startActivity(intent);
               Toast.makeText(getApplicationContext(), "Playing Video", Toast.LENGTH_LONG).show();
           }
       });

    }

    ......//......



    public void getByte(byte[] yuv) {
       getNV21(640, 480, bitmap);
       System.out.println(yuv.length + " ");
       if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
           startTime = System.currentTimeMillis();
           return;
       }
       if (RECORD_LENGTH > 0) {
           int i = imagesIndex++ % images.length;
           yuvimage = images[i];
           timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
       }
           /* get video data */
       if (yuvimage != null && recording) {
               ((ByteBuffer) yuvimage.image[0].position(0)).put(yuv);

               if (RECORD_LENGTH <= 0) {
                   try {
                       long t = 1000 * (System.currentTimeMillis() - startTime);
                       if (t > recorder.getTimestamp()) {
                           recorder.setTimestamp(t);
                       }
                       recorder.record(yuvimage);
                   } catch (FFmpegFrameRecorder.Exception e) {

                       e.printStackTrace();
                   }
               }
           }
    }

    public byte [] getNV21(int inputWidth, int inputHeight, Bitmap bitmap) {

       int[] argb = new int[inputWidth * inputHeight];

       bitmap.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);

       byte[] yuv = new byte[inputWidth * inputHeight * 3 / 2];
       encodeYUV420SP(yuv, argb, inputWidth, inputHeight);

       bitmap.recycle();
       System.out.println(yuv.length + " ");
       return yuv;

    }

    void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
       final int frameSize = width * height;

       int yIndex = 0;
       int uIndex = frameSize;
       int vIndex = frameSize;
       System.out.println(yuv420sp.length + " " + frameSize);

       int a, R, G, B, Y, U, V;
       int index = 0;
       for (int j = 0; j < height; j++) {
           for (int i = 0; i < width; i++) {

               a = (argb[index] & 0xff000000) >> 24; // a is not used obviously
               R = (argb[index] & 0xff0000) >> 16;
               G = (argb[index] & 0xff00) >> 8;
               B = (argb[index] & 0xff) >> 0;

               // well known RGB to YUV algorithm

               Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
               U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
               V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;

               // NV21 has a plane of Y and interleaved planes of VU each sampled by a factor of 2
               //    meaning for every 4 Y pixels there are 1 V and 1 U.  Note the sampling is every other
               //    pixel AND every other scanline.
               yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
               if (j % 2 == 0 && index % 2 == 0) {
                   yuv420sp[uIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
                   yuv420sp[vIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
               }

               index++;
           }
       }
    }

    .....//.....

    public void addListenerOnButton() {
    image = (ImageView) findViewById(R.id.imageView);
    image.setDrawingCacheEnabled(true);
    image.buildDrawingCache();
    bitmap = image.getDrawingCache();
    System.out.println(bitmap.getByteCount() + " " );

    button = (Button) findViewById(R.id.btn1);
    button.setOnClickListener(new OnClickListener() {
    @Override
    public void onClick(View view){
       image.setImageResource(R.drawable.image1);
     }
    });

    ......//......

    EDIT 1:

    I made a few changes to the above code:

    record.setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View v) {
               startRecording();
               getByte();
           }
       });
    .....//....

    public void getByte() {
       byte[] yuv = getNV21(640, 480, bitmap);

    So now, in the console, I get the same yuv length in this method as the yuv length from the getNV21 method.

    But now I am getting half-black and half-green frames (black above and green below) in the recorded video...

    If I add these lines to the onCreate method:

    image = (ImageView) findViewById(R.id.imageView);
    image.setDrawingCacheEnabled(true);
    image.buildDrawingCache();
    bitmap = image.getDrawingCache();

    I do get distorted frames (each frame is about 1/4th of the displayed image, with colors mixed up here and there) in the video...

    All I am trying to learn is the image processing and the flow of byte[] data from one method to another, but I am still a noob...

    Kindly help..!
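    Note (a hedged guess): frames that come out partly or fully green usually mean part of the YUV buffer was never written, since all-zero bytes decode to green. One thing worth ruling out is a size mismatch: the ImageView's drawing-cache bitmap is unlikely to be exactly 640x480, while getNV21() and the recorder both assume that geometry. A short sketch using the standard Bitmap.createScaledBitmap API; the 640x480 target simply mirrors the question:

    // Hedged sketch: force the bitmap to the geometry the NV21 conversion assumes
    // before converting, so every byte of the 640x480 NV21 buffer gets filled.
    Bitmap source = image.getDrawingCache();
    Bitmap scaled = Bitmap.createScaledBitmap(source, 640, 480, true /* filter */);
    byte[] yuv = getNV21(640, 480, scaled);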

  • ffmpeg "End mismatch 1" warning, jpeg2000 to avi

    11 April 2023, by jklebes

    Trying to convert a directory of jpeg2000 grayscale images to a video with ffmpeg, I get warnings

    [0;36m[jpeg2000 @ 0x55d8fa1b68c0] [0m[0;33mEnd mismatch 1

    (and lots of

    Last message repeated <n> times

    )

    The command was

    ffmpeg -y -r 10 -start_number 1 -i <path>/surface_30///img_000%01d.jp2 -vcodec msmpeg4 -vf scale=1920:-1 -q:v 8 <path>//surface_30///surface_30.avi

    The output is

    ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
      built with gcc 7.3.0 (crosstool-NG 1.23.0.449-a04d0)
      configuration: --prefix=/home/jklebes001/miniconda3 --cc=/tmp/build/80754af9/ffmpeg_1587154242452/_build_env/bin/x86_64-conda_cos6-linux-gnu-cc --disable-doc --enable-avresample --enable-gmp --enable-hardcoded-tables --enable-libfreetype --enable-libvpx --enable-pthreads --enable-libopus --enable-postproc --enable-pic --enable-pthreads --enable-shared --enable-static --enable-version3 --enable-zlib --enable-libmp3lame --disable-nonfree --enable-gpl --enable-gnutls --disable-openssl --enable-libopenh264 --enable-libx264
      libavutil      56. 31.100 / 56. 31.100
      libavcodec     58. 54.100 / 58. 54.100
      libavformat    58. 29.100 / 58. 29.100
      libavdevice    58.  8.100 / 58.  8.100
      libavfilter     7. 57.100 /  7. 57.100
      libavresample   4.  0.  0 /  4.  0.  0
      libswscale      5.  5.100 /  5.  5.100
      libswresample   3.  5.100 /  3.  5.100
      libpostproc    55.  5.100 / 55.  5.100
    [0;36m[jpeg2000 @ 0x55cb44144480] [0m[0;33mEnd mismatch 1
    [0m    Last message repeated 1 times
        Last message repeated 2 times
        Last message repeated 3 times

    ...

        Last message repeated 73 times
    Input #0, image2, from '<path>//surface_30///img_000%01d.jp2':
      Duration: 00:00:00.20, start: 0.000000, bitrate: N/A
        Stream #0:0: Video: jpeg2000, gray, 6737x4869, 25 tbr, 25 tbn, 25 tbc
    Stream mapping:
      Stream #0:0 -> #0:0 (jpeg2000 (native) -> msmpeg4v3 (msmpeg4))
    Press [q] to stop, [?] for help
    [0;36m[jpeg2000 @ 0x55cb4418e200] [0m[0;33mEnd mismatch 1
    [0m[0;36m[jpeg2000 @ 0x55cb441900c0] [0m[0;33mEnd mismatch 1

    ...

    (about 600 lines of "end mismatch" and "last message repeated" cut)

    ...

    [0m[0;36m[jpeg2000 @ 0x55cb4418e8c0] [0m[0;33mEnd mismatch 1
    [0mOutput #0, avi, to '<path>/surface_30///surface_30.avi':
      Metadata:
        ISFT            : Lavf58.29.100
        Stream #0:0: Video: msmpeg4v3 (msmpeg4) (MP43 / 0x3334504D), yuv420p, 1920x1388, q=2-31, 200 kb/s, 10 fps, 10 tbn, 10 tbc
        Metadata:
          encoder         : Lavc58.54.100 msmpeg4
        Side data:
          cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
    frame=    2 fps=0.8 q=8.0 size=       6kB time=00:00:00.20 bitrate= 227.1kbits/s speed=0.0844x
    frame=    5 fps=1.7 q=8.0 size=       6kB time=00:00:00.50 bitrate=  90.8kbits/s speed=0.172x
    frame=    5 fps=1.7 q=8.0 Lsize=     213kB time=00:00:00.50 bitrate=3494.7kbits/s speed=0.172x
    video:208kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 2.732246%

    What is the meaning of characters like [0;33m here?
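    Note: sequences such as [0;33m and [0m are almost certainly ANSI terminal colour escape codes (33 is the yellow FFmpeg uses for warnings, 36 the cyan of the context prefix, 0 a reset), with the leading escape character lost when the log was captured. One way to confirm is to rerun the same command with FFmpeg's coloured logging disabled through the AV_LOG_FORCE_NOCOLOR environment variable:

    AV_LOG_FORCE_NOCOLOR=1 ffmpeg -y -r 10 -start_number 1 -i <path>/surface_30///img_000%01d.jp2 -vcodec msmpeg4 -vf scale=1920:-1 -q:v 8 <path>//surface_30///surface_30.avi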

    I thought it might have something to do with bit depth and color format. Setting -pix_fmt gray had no effect, and indeed the format of the jp2 images is already detected as 8-bit gray.
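    A hedged way to double-check what FFmpeg actually detects for one of the inputs (the file name img_0001.jp2 is only an assumption based on the %01d pattern and -start_number 1):

    ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,pix_fmt,width,height <path>/surface_30///img_0001.jp2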


    The output .avi exists and seems fine.


    The same command line was previously used on jpeg files and works fine there. With jpeg, the output has the lines

    Input #0, image2, from '<path>/surface_30///img_000%01d.jpeg':
      Duration: 00:00:00.16, start: 0.000000, bitrate: N/A
        Stream #0:0: Video: mjpeg (Baseline), gray(bt470bg/unknown/unknown), 6737x4869 [SAR 1:1 DAR 6737:4869], 25 tbr, 25 tbn, 25 tbc
    Stream mapping:
      Stream #0:0 -> #0:0 (mjpeg (native) -> msmpeg4v3 (msmpeg4))
    Press [q] to stop, [?] for help
    Output #0, avi, to '<path>/surface_30///surface_30.avi':
      Metadata:
        ISFT            : Lavf58.29.100
        Stream #0:0: Video: msmpeg4v3 (msmpeg4) (MP43 / 0x3334504D), yuv420p, 6737x4869 [SAR 1:1 DAR 6737:4869], q=2-31, 200 kb/s, 10 fps, 10 tbn, 10 tbc
        Metadata:
          encoder         : Lavc58.54.100 msmpeg4
        Side data:
          cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
    frame=    2 fps=0.0 q=8.0 size=    6662kB time=00:00:00.20 bitrate=272859.9kbits/s speed=0.334x
    frame=    3 fps=2.2 q=10.0 size=   10502kB time=00:00:00.30 bitrate=286764.2kbits/s speed=0.22x
    frame=    4 fps=1.9 q=12.3 size=   13574kB time=00:00:00.40 bitrate=277987.7kbits/s speed=0.19x
    frame=    4 fps=1.4 q=12.3 size=   13574kB time=00:00:00.40 bitrate=277987.7kbits/s speed=0.145x
    frame=    4 fps=1.4 q=12.3 Lsize=   13657kB time=00:00:00.40 bitrate=279702.3kbits/s speed=0.145x
    video:13652kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.041926%

    detecting the mjpeg format and a similar, but more detailed, pixel format: gray(bt470bg/unknown/unknown), 6737x4869 [SAR 1:1 DAR 6737:4869].

    What is the difference when switching the input to jp2?

  • ffmpeg blend=screen makes background look green or foreground green

    24 November 2022, by OneWorld

    When I apply a screen blend of the foreground asset (Pikachu) over the background asset (a white circle on a black background), GIMP and Adobe Photoshop make the circle asset look white and the background asset look RGB, like this:

    [image: the expected screen-blend result]

    which is how it should look.

    However, if we take the input assets:

    [image: input asset]

    and

    [image: input asset]

    and use this ffmpeg command

    ffmpeg -i circle_rgb_50.png -i pikachu_rgb.png -filter_complex "[0:v][1:v]blend=screen" pikachu_screened_over_circle_rgb_just_blend.png

    we get this:

    [image: result of the blend=screen command]

    and if I reverse the blend so that the inputs to the blend function are the other way around, like this:

    ffmpeg -i circle_rgb_50.png -i pikachu_rgb.png -filter_complex "[1:v][0:v]blend=screen" pikachu_screened_over_circle_rgb_just_blend_other_way_around.png

    we get this:

    [image: result with the inputs reversed]

    Why doesn't FFmpeg do blending the same way as GIMP or Adobe Photoshop?

    Or is there another parameter I need to pass so that blends look as they should?
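    Note (a hedged guess, not a confirmed answer): the discrepancy may come from the blend being computed on YUV planes after an automatic pixel-format conversion, whereas GIMP and Photoshop screen-blend in RGB. Forcing a planar RGB format before the blend is one way to test this; the gbrp choice and the output file name below are assumptions, while the input names are taken from the question:

    ffmpeg -i circle_rgb_50.png -i pikachu_rgb.png -filter_complex "[0:v]format=gbrp[bg];[1:v]format=gbrp[fg];[bg][fg]blend=screen,format=rgb24" pikachu_screened_over_circle_rgb_blend_rgb.png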
