Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (10)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • XMP PHP

    13 May 2011, by

    According to Wikipedia, XMP stands for:
    Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography, and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it manages a set of dynamic tags for use within the Semantic Web.
    XMP makes it possible to record information about a file as an XML document: title, author, history (...)

  • About documents

    21 June 2013, by

    What should you do when a document fails processing, or when its rendering does not match expectations?
    Document stuck in the processing queue?
    Here is an ordered, empirical list of actions you can try to unblock the situation: restart the processing of the document that fails; try inserting the document on the MédiaSPIP site again; for video or audio media, rework the produced file with an editor or a transcoder; convert the document to a format (...)

On other sites (3587)

  • "File descriptor in bad state" error while running ffmpeg on android device and selecting an input device

    25 August 2012, by user1545779

    Below is the output of the ffmpeg command ./ffmpeg -y -f s16le -i /dev/snd/pcmC3D0c 1640.wmv, which I ran to create an audio file from a Logitech webcam on an Android device.

    As shown in the output, I received a "File descriptor in bad state" error when referring to the mic input as /dev/snd/pcmC3D0c. I determined the device for the webcam mic by reviewing the contents of /proc/asound: the webcam mic was card3, and its STREAM0 file indicated that the mic uses the S16_LE audio format.

    It was also confirmed that it is a capture device and that its PCM id was pcmC3D0c (C3 being the card number and D0 being the device number). I then checked the /dev/snd/ directory to confirm the proper and full device name; the /dev/snd folder confirmed that the mic was /dev/snd/pcmC3D0c.
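
    For reference, this identification can be reproduced with a few standard ALSA inspection commands (a sketch of ALSA's usual procfs layout, with card3 taken from the description above):

    # list the sound cards ALSA knows about
    cat /proc/asound/cards
    # for a USB device such as a webcam mic, stream0 describes the supported formats (e.g. S16_LE)
    cat /proc/asound/card3/stream0
    # list the raw PCM device nodes
    ls -l /dev/snd/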

    I then checked the permissions and ownership to make sure that I could use that device. So, as far as identifying the correct device goes, I do believe that /dev/snd/pcmC3D0c is the right one. I do believe this error could have something to do with the OS, but after all these checks I still cannot figure out what is producing the bad file descriptor state error.

    Please note that I tested different output formats, among other things, and that did not make any difference. Any leads or suggestions?

    # ./ffmpeg -y -f s16le -i /dev/snd/pcmC3D0c 1640.wmv

    ffmpeg version N-43170-gd84dd35 Copyright (c) 2000-2012 the FFmpeg developers
    built on Aug 24 2012 09:16:05 with gcc 4.4.3 (GCC)
    configuration: --enable-cross-compile --arch=arm --cpu=cortex-a9 --target-os=linux --enable-runtime-cpudetect --prefix=/output --enable-pic --cross-prefix=/home/jasongipsyblues/Desktop/apps/android-ndk-r8b/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86/bin/arm-linux-androideabi- --sysroot=/home/jasongipsyblues/Desktop/apps/android-ndk-r8b/platforms/android-14/arch-arm --enable-version3 --enable-gpl --enable-memalign-hack --disable-doc --enable-yasm --enable-libx264 --enable-zlib --extra-cflags=-I../x264 --extra-ldflags='-L../x264 -lc'

    libavutil 51. 66.100 / 51. 66.100
    libavcodec 54. 48.100 / 54. 48.100
    libavformat 54. 22.100 / 54. 22.100
    libavdevice 54. 2.100 / 54. 2.100
    libavfilter 3. 5.102 / 3. 5.102
    libswscale 2. 1.100 / 2. 1.100
    libswresample 0. 15.100 / 0. 15.100
    libpostproc 52. 0.100 / 52. 0.100

    [s16le @ 0xfd84f0] Invalid sample rate 0 specified using default of 44100
    [s16le @ 0xfd84f0] Estimating duration from bitrate, this may be inaccurate
    Guessed Channel Layout for Input Stream #0.0 : mono
    Input #0, s16le, from '/dev/snd/pcmC3D0c':
    Duration: N/A, bitrate: 705 kb/s
    Stream #0:0: Audio: pcm_s16le, 44100 Hz, mono, s16, 705 kb/s
    Output #0, asf, to '1640.wmv':
    Metadata:
    WM/EncodingSettings: Lavf54.22.100
    Stream #0:0: Audio: wmav2 (a[1][0][0] / 0x0161), 44100 Hz, mono, s16, 128 kb/s
    Stream mapping:
    Stream #0:0 -> #0:0 (pcm_s16le -> wmav2)
    Press [q] to stop, [?] for help

    /dev/snd/pcmC3D0c: File descriptor in bad state

    size= 1kB time=00:00:00.00 bitrate= 0.0kbits/s
    video:0kB audio:0kB subtitle:0 global headers:0kB muxing overhead 5340.000000%
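
    One likely lead, offered as a hedged note rather than a confirmed fix: the raw ALSA PCM nodes under /dev/snd cannot simply be read like ordinary files, since they must be configured through ioctls first, which is consistent with an EBADFD ("File descriptor in bad state") result. ffmpeg is normally pointed at such hardware through its alsa input device instead, addressed by card and device number. A minimal sketch, assuming this Android build was compiled with ALSA input support:

    # capture from card 3, device 0 via the ALSA indev rather than the raw PCM node
    ./ffmpeg -y -f alsa -i hw:3,0 1640.wmv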

  • ffmpeg - Timecode & Fractional Frame Rate (Duplicating Frames)

    29 March 2018, by Nimble

    I record two different frame rates using ffmpeg, 60 and 100. Or at least I thought I was recording 60 and 100; now it seems it's actually 59.94 and 99.98.

    Here is the command I was using:

    ffmpeg -y -thread_queue_size 9999 -guess_layout_max 0 -f dshow -video_size 1920x1080 -rtbufsize 2147.48M -framerate 60 ^
    -pixel_format yuyv422 -i video="Game Capture HD60 S (Video) (#01)":audio="ADAT (5+6) (RME Fireface UC)" -map 0:0,0:1 ^
    -map 0:1 -c:v h264_nvenc -preset: llhp -pix_fmt yuv420p -b:v 40M -minrate 40M -maxrate 40M -bufsize 40M -b:a 384k -ac 2 ^
    -r 60 -af "pan=mono|c0=c0, adelay=84" -vsync 1 -max_muxing_queue_size 9999 -f segment -segment_time 600 ^
    -segment_wrap 9 -reset_timestamps 1 C:\Users\djcim\Videos\PC\Camera\CPC%02d.ts ^
    -thread_queue_size 9999 -f dshow -video_size 3440x1440 -rtbufsize 2147.48M -framerate 100 -pixel_format nv12 ^
    -itsoffset 00:00:00.215 -i video="Video (00 Pro Capture HDMI 4K+)" -thread_queue_size 9999 -guess_layout_max 0 -f dshow ^
    -rtbufsize 2147.48M -i audio="SPDIF/ADAT (1+2) (RME Fireface UC)" -map 1:0,2:0 -map 6:0 -c:v h264_nvenc -preset: llhp ^
    -pix_fmt nv12 -b:v 250M -minrate 250M -maxrate 250M -bufsize 250M -b:a 384k -ac 2 -r 100 -af "adelay=141|141" -vsync 1 ^
    -max_muxing_queue_size 9999 -f segment -segment_time 600 -segment_wrap 9 -reset_timestamps 1 ^
    C:\Users\djcim\Videos\PC\PC\PC%02d.ts

    I thought all was well with my frame rates. Sure, ffmpeg was duplicating frames every once in a while, but I assumed it was just a random occurrence caused by ffmpeg dropping a frame during processing and then needing to duplicate one to make up for it. I didn't think duplicating a few frames would be noticeable in the footage... until I was reviewing some from the first output, which is actually a camera, and noticed very slight stutters, consistently 3 times a minute. This began to bug me; it was very noticeable and I wanted smooth footage. A bit confused, I decided to try the first output by itself and watch ffmpeg to see when frames were being duplicated, and found that it was duplicating a frame every 17 seconds (every 16.66 seconds, to be more precise).

    After doing the math (one duplicated frame every 16.66 seconds means the input falls short by 1/16.66 ≈ 0.06 frames per second, and 60 - 0.06 = 59.94), I realized that the frame rate of that first capture card was actually 59.94. Doing the same thing for the other output, I found that my "100fps" footage is actually 99.98. But what does that really entail?

    Should I change the fps to 59.94 and 99.98? Won't that cause synchronization issues, since 99.98 (100 × 0.9998) isn't scaled by the same factor as 59.94 (60 × 0.999)? Or does that mean I just need to set the second output to 99.9 (100 × 0.999) to match the factor of the first output and drop frames? If that is the case, does it mean that in my editing program, Adobe Premiere, I would need to export the final video at 59.94fps rather than 60fps to avoid duplicated frames? Or is there some method within timecode that remedies this issue?

    I guess I just really don't understand drop-frame and non-drop-frame timecode, or timecode in general. Up until yesterday, when something said 60fps I thought it meant literally 60fps, but I guess 99% of the time it actually means 59.94. I'd really like to avoid the duplicated frames, as they ruin what would otherwise be a smooth experience, but I don't know if I can while keeping everything synchronized.

    Any help or insight would be appreciated; sorry if my question is a bit confusing, I am undoubtedly confused.
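
    One note worth adding here: NTSC-style rates are exact rationals (59.94 is really 60000/1001), and ffmpeg accepts rates in that rational form on both input and output, which avoids the rounding that produces a duplicated frame every 16.66 seconds. A minimal sketch showing only the rate-related flags, reusing the first dshow source from the command above:

    ffmpeg -f dshow -video_size 1920x1080 -framerate 60000/1001 ^
    -i video="Game Capture HD60 S (Video) (#01)" -c:v h264_nvenc -r 60000/1001 out.ts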

  • Recording real-time video from images with FFmpeg

    17 July 2015, by Solarnum

    I am really not sure what else I could be doing to achieve this. I'm trying to record the actions in one of the views in my Android app so that they can be played back later, showing the previous actions in real time. The major problem (among others, because there is no way I'm doing this optimally) is that the video takes at least 4 times longer to make than it does to play back. If I ask FFmpeg to create a 5-second video, the process will run in the background for 20 seconds and then output a greatly accelerated 5-second video.

    My current strategy is to use the -loop 1 parameter on a single image file and continuously overwrite that image file with a new JPEG. (If someone has a better idea for feeding continuously updated image information to FFmpeg, let me know; one alternative is sketched at the end of this post.)

    encodingThread = new Thread(new Runnable() {
        private boolean running = true;

        @Override
        public void run() {
            while (running) {
                try {
                    // Snapshot the current state of the view (helper defined elsewhere).
                    Bitmap bitmap = makeBitmapFromView();

                    // Overwrite the single JPEG that ffmpeg is looping on.
                    String filepath = Environment.getExternalStorageDirectory().getAbsolutePath() + "/test.jpg";
                    FileOutputStream fout = new FileOutputStream(new File(filepath));
                    bitmap.compress(Bitmap.CompressFormat.JPEG, 100, fout);
                    fout.flush();
                    fout.close();

                    // Aim for roughly 20 snapshots per second.
                    Thread.sleep(50);
                } catch (IOException e) {
                    e.printStackTrace();
                } catch (InterruptedException e) {
                    running = false;
                }
            }
        }
    });
    startVideoMaking();
    encodingThread.start();

    The startVideoMaking method is as follows:

    private void startVideoMaking() {
        // Stop any encoder instance that may still be running.
        ffmpeg.killRunningProcesses();

        String path = Environment.getExternalStorageDirectory().getAbsolutePath() + "/test.jpg";
        String output = Environment.getExternalStorageDirectory().getAbsolutePath() + "/testout.mp4";

        // Loop on the single (continuously overwritten) JPEG for 5 seconds of output.
        String command = "-loop 1 -t 5 -re -i " + path + " -c:v libx264 -loglevel verbose -vsync 0 -threads 0 -preset ultrafast -tune zerolatency -y -pix_fmt yuv420p " + output;
        executeFFM(command);
    }

    Just to make it clear, the FFmpeg command that I am executing is

    ffmpeg -loop 1 -re -i /storage/emulated/0/test.jpg -t 5 -c:v libx264 -loglevel verbose -vsync 0 -threads 0 -preset ultrafast -tune zerolatency -y -pix_fmt yuv420p /storage/emulated/0/testout.mp4

    The makeBitmapFromView() method takes about 50 ms to run, and writing the bitmap to the SD card takes around 200 ms, which is not great.

    I'm pretty lost as to what other solutions there would be for creating a video of a single view in Android. I know there is the MediaCodec class, but I couldn't get it to work, and it would also raise my minimum SDK, which is not ideal. I'm also not sure that the MediaCodec class would even solve my problem.

    Is there some way that I can get FFmpeg to create a 5-second video that is equivalent to 5 seconds of real time? I have also tried converting a single image, without updating its content continuously, and had the same results.

    If my question isn’t clear enough let me know.
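
    One alternative worth sketching (an untested idea, which assumes the ffmpeg wrapper in use can write to the ffmpeg process's standard input): instead of overwriting one file on the SD card, stream the JPEG frames over a pipe and declare the rate at which they are produced, so that video time matches wall-clock time:

    ffmpeg -f image2pipe -framerate 20 -i - -c:v libx264 -preset ultrafast -tune zerolatency -pix_fmt yuv420p -t 5 -y /storage/emulated/0/testout.mp4

    Each complete JPEG written to the pipe becomes one frame; at -framerate 20 (matching the 50 ms sleep in the loop above), 100 frames yield exactly 5 seconds of real-time video, and the 200 ms SD-card write is avoided entirely.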