Other articles (53)

  • Improving the base version

    13 September 2013

    A nicer multiple-selection widget
    The Chosen plugin improves the usability of multiple-selection fields; see the two images below for a comparison.
    To use it, enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...) A minimal sketch of the underlying call appears after this list.

  • Custom menus

    14 November 2010

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This lets channel administrators configure those menus in fine detail.
    Menus created when the site is initialized
    By default, three menus are created automatically when the site is initialized: the main menu; identifier: barrenav; this menu usually sits at the top of the page after the header block, and its identifier makes it compatible with templates based on Zpip; (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.
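
    As promised above, here is a minimal sketch of the kind of call Chosen relies on; the snippet, package names, and option values are illustrative assumptions, not MediaSPIP's actual code:

    // Hypothetical sketch: enhance every multiple-selection field with Chosen.
    // Assumes jQuery and the chosen-js package are loaded; the select[multiple]
    // selector mirrors the configuration value quoted above.
    import $ from "jquery";
    import "chosen-js";

    $(() => {
      $("select[multiple]").chosen({
        width: "100%",                 // fill the container width
        no_results_text: "No results", // message when a search matches nothing
      });
    });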

On other sites (5410)

  • #0 does not contain any stream [closed]

    18 July 2024, by Test Dev

    ffmpeg exited with code 1: Output #0, mp4, to 'D:\billion-view\video-compilation-electron-app.billionviews\simple_audio\yt-video-best-short-motivational-speech-video-24-hours-1-minute-motivation-2-fLeJJPxua3E-15-30.mp4':
Output file #0 does not contain any stream

// Assumed imports for this snippet (they are not shown in the original post):
import ytdl, { videoFormat } from "ytdl-core";
import { createWriteStream } from "fs";
import { pipeline } from "stream/promises";

const videoFormats = info.formats.filter(
  (format: videoFormat) =>
    format.container === "mp4" && format.hasVideo && format.videoCodec && format.videoCodec.includes("avc1")
);

videoFormats.sort((a: videoFormat, b: videoFormat) => (b.height || 0) - (a.height || 0));
// Bug fix: the original used "=" (assignment), which overwrote qualityLabel and
// made find() always pick the first format; "===" compares instead.
const bestVideoFormat =
  videoFormats.find((format: videoFormat) => format.qualityLabel === "1080p") || videoFormats[0];

const readableVideo = ytdl.downloadFromInfo(info, {
  format: bestVideoFormat,
  // requestOptions: { agent },
});
const readableAudio = ytdl(media.location, {
  quality: "highestaudio", // Ref: https://github.com/fent/node-ytdl-core#ytdlchooseformatformats-options
  // requestOptions: { agent },
});

readableVideo.on("progress", (_, downloaded: number, total: number) => {
  progressHandler?.({
    eventType: MediaProgressEventType.Download,
    timeStamp: new Date().toISOString(),
    culminationType: MediaProgressCulminationType.Running,
    media,
    percent: (100 * downloaded) / total,
  });
});
readableAudio.on("progress", (_, downloaded: number, total: number) => {
  progressHandler?.({
    eventType: MediaProgressEventType.Download,
    timeStamp: new Date().toISOString(),
    culminationType: MediaProgressCulminationType.Running,
    media,
    percent: (100 * downloaded) / total,
  });
});

const writableVideo = createWriteStream(outputVideoPath);
const writableAudio = createWriteStream(outputAudioPath);

// These listeners must be attached before awaiting pipeline(); in the original
// they were attached afterwards, when the events had already fired.
readableVideo.on("end", () => console.log("Video stream downloaded successfully."));
readableAudio.on("end", () => console.log("Audio stream downloaded successfully."));
writableVideo.on("finish", () => console.log("Video written to file successfully."));
writableAudio.on("finish", () => console.log("Audio written to file successfully."));

await pipeline(readableVideo, writableVideo);
await pipeline(readableAudio, writableAudio);

// pipeline() destroys both ends of each pair on completion or error, so the
// original's manual destroy() calls are unnecessary and were dropped. Destroy
// the proxy agent only after both downloads have finished.
agent?.destroy();
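
    The "Output file #0 does not contain any stream" message comes from the ffmpeg merge step, which is not shown in the post. As a hedged sketch only (the file names and the merge command are my assumptions, not the poster's actual invocation), mapping each input explicitly usually avoids that error by guaranteeing the output receives both streams:

    ffmpeg -i video.mp4 -i audio.webm -map 0:v:0 -map 1:a:0 -c:v copy -c:a aac output.mp4

    If ffmpeg still reports no streams, running ffprobe on each intermediate file will show whether the downloads themselves came out empty.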

  • ffmpeg - Timecode & Fractional Frame Rate (Duplicating Frames)

    29 March 2018, by Nimble

    I record two different frame rates using ffmpeg, 60 and 100. Or at least I thought I was recording 60 and 100; it now seems it's actually 59.94 and 99.98.

    Here is the command I was using:

    ffmpeg -y -thread_queue_size 9999 -guess_layout_max 0 -f dshow -video_size 1920x1080 -rtbufsize 2147.48M -framerate 60 ^
    -pixel_format yuyv422 -i video="Game Capture HD60 S (Video) (#01)":audio="ADAT (5+6) (RME Fireface UC)" -map 0:0,0:1 ^
    -map 0:1 -c:v h264_nvenc -preset: llhp -pix_fmt yuv420p -b:v 40M -minrate 40M -maxrate 40M -bufsize 40M -b:a 384k -ac 2 ^
    -r 60 -af "pan=mono|c0=c0, adelay=84" -vsync 1 -max_muxing_queue_size 9999 -f segment -segment_time 600 ^
    -segment_wrap 9 -reset_timestamps 1 C:\Users\djcim\Videos\PC\Camera\CPC%02d.ts ^
    -thread_queue_size 9999 -f dshow -video_size 3440x1440 -rtbufsize 2147.48M -framerate 100 -pixel_format nv12 ^
    -itsoffset 00:00:00.215 -i video="Video (00 Pro Capture HDMI 4K+)" -thread_queue_size 9999 -guess_layout_max 0 -f dshow ^
    -rtbufsize 2147.48M -i audio="SPDIF/ADAT (1+2) (RME Fireface UC)" -map 1:0,2:0 -map 6:0 -c:v h264_nvenc -preset: llhp ^
    -pix_fmt nv12 -b:v 250M -minrate 250M -maxrate 250M -bufsize 250M -b:a 384k -ac 2 -r 100 -af "adelay=141|141" -vsync 1 ^
    -max_muxing_queue_size 9999 -f segment -segment_time 600 -segment_wrap 9 -reset_timestamps 1 ^
    C:\Users\djcim\Videos\PC\PC\PC%02d.ts
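
    A hedged aside, not from the original post: ffprobe can report the rate actually recorded in each segment, which is one way to confirm the 59.94/99.98 suspicion discussed below (the segment name CPC00.ts is assumed from the output pattern above):

    ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate,avg_frame_rate -of default=nw=1 CPC00.ts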

    I thought all was well with my frame rates. Sure, ffmpeg was duplicating frames every once in a while, but I thought it was just a random occurrence caused by ffmpeg dropping a frame during processing and therefore needing to duplicate one to make it up. I didn't think duplicating a few frames would be noticeable in the footage... until I was reviewing some from the first output, which is actually a camera, and noticed very slight stutters consistently 3 times a minute. This began to bug me; it was very noticeable and I wanted smooth footage. A bit confused, I decided to try the first output by itself and watch ffmpeg to see when frames were being duplicated, and found that it was duplicating frames every 17 seconds (16.66 to be more precise).

    After doing the math (1/16.66 ≈ 0.06, meaning the card is short by 0.06 frames per second) I realized that the frame rate of that first capture card was actually 59.94. Doing the same thing for the other output I found that my "100fps" footage is actually 99.98. But what does that really entail?
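    As a worked check (assuming the capture card actually runs at the NTSC-style rate 60000/1001, which is my assumption):

    \frac{60000}{1001} \approx 59.9401 \text{ fps}, \qquad 60 - 59.9401 \approx 0.0599 \text{ duplicates/s}, \qquad \frac{1}{0.0599} \approx 16.7 \text{ s}

    which matches the observed 16.66-second interval between duplicated frames.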

    Should I change the fps to 59.94 and 99.98? Won't that cause synchronization issues, since 99.98 (100 × (1 − 0.0002)) isn't scaled by the same factor as 59.94 (60 × (1 − 0.001))? Or does that mean I just need to set the second output to 99.9 (100 × (1 − 0.001)) to match the scaling of the first output and drop frames? If that is the case, does this mean that in my editing program, Adobe Premiere, I would need to export the final video as 59.94fps rather than 60fps to avoid duplicated frames? Or is there some method within timecode that remedies this issue?

    I guess I just really don't understand drop-frame and non-drop-frame timecode, or timecode in general. Up until yesterday, when something said 60fps I thought it meant literally 60fps, but I guess 99% of the time it actually means 59.94. I'd really like to avoid the duplication of frames, as it ruins what would otherwise be a smooth experience, but I don't know if I can while keeping everything synchronized.

    Any help or insight would be appreciated. Sorry if my question is a bit confusing; I am undoubtedly confused.

  • FFmpeg - MJPEG decoding - getting different values

    27 December 2016, by ahmadh

    I have a set of JPEG frames which I am muxing into an AVI, which gives me an MJPEG video. This is the command I run on the console:

    ffmpeg -y -start_number 0 -i %06d.JPEG -codec copy vid.avi

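    A quick hedged sanity check (my suggestion, not part of the original post) is to confirm that the copied stream really is MJPEG before demuxing it in C:

    ffprobe -v error -select_streams v:0 -show_entries stream=codec_name -of default=nw=1 vid.avi
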
    When I try to demux the video using the FFmpeg C API, I get frames with slightly different values. The demuxing code looks something like this:

    AVFormatContext* fmt_ctx = NULL;
    AVCodecContext* cdc_ctx = NULL;
    AVCodec* vid_cdc = NULL;
    int ret;
    unsigned int height, width;

    ....
    // read_nframes is the number of frames to read
    output_arr = new unsigned char [height * width * 3 *
                                   sizeof(unsigned char) * read_nframes];

    avcodec_open2(cdc_ctx, vid_cdc, NULL);

    int num_bytes;
    uint8_t* buffer = NULL;
    const AVPixelFormat out_format = AV_PIX_FMT_RGB24;

    num_bytes = av_image_get_buffer_size(out_format, width, height, 1);
    buffer = (uint8_t*)av_malloc(num_bytes * sizeof(uint8_t));

    AVFrame* vid_frame = NULL;
    vid_frame = av_frame_alloc();
    AVFrame* conv_frame = NULL;
    conv_frame = av_frame_alloc();

    av_image_fill_arrays(conv_frame->data, conv_frame->linesize, buffer,
                        out_format, width, height, 1);

    struct SwsContext *sws_ctx = NULL;
    sws_ctx = sws_getContext(width, height, cdc_ctx->pix_fmt,
                            width, height, out_format,
                            SWS_BILINEAR, NULL,NULL,NULL);

    int frame_num = 0;
    AVPacket vid_pckt;
    while (av_read_frame(fmt_ctx, &vid_pckt) >= 0) {
       ret = avcodec_send_packet(cdc_ctx, &vid_pckt);
       av_packet_unref(&vid_pckt); // the decoder keeps its own reference to the data
       if (ret < 0)
           break;

       ret = avcodec_receive_frame(cdc_ctx, vid_frame);
       if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
           break;
       if (ret >= 0) {
           // convert the decoded frame to packed RGB24
           sws_scale(sws_ctx, vid_frame->data,
                     vid_frame->linesize, 0, vid_frame->height,
                     conv_frame->data, conv_frame->linesize);

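           // De-interleave the packed RGB24 rows into separate R, G and B
           // planes inside output_arr.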
           unsigned char* r_ptr = output_arr +
               (height * width * sizeof(unsigned char) * 3 * frame_num);
           unsigned char* g_ptr = r_ptr + (height * width * sizeof(unsigned char));
           unsigned char* b_ptr = g_ptr + (height * width * sizeof(unsigned char));
           unsigned int pxl_i = 0;

           for (unsigned int r = 0; r < height; ++r) {
               uint8_t* avframe_r = conv_frame->data[0] + r*conv_frame->linesize[0];
               for (unsigned int c = 0; c < width; ++c) {
                   r_ptr[pxl_i] = avframe_r[0];
                   g_ptr[pxl_i]   = avframe_r[1];
                   b_ptr[pxl_i]   = avframe_r[2];
                   avframe_r += 3;
                   ++pxl_i;
               }
           }

           ++frame_num;

           if (frame_num >= read_nframes)
               break;
       }
    }

    ...

    In my experience, around two-thirds of the pixel values differ, each by ±1 (in a range of [0,255]). I am wondering whether this is due to some decoding scheme FFmpeg uses for reading JPEG frames? I tried encoding and decoding PNG frames, and that works perfectly fine.
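    One plausible source of such off-by-one differences (my explanation; the post itself does not confirm it) is the YCbCr-to-RGB conversion: JPEG stores YCbCr samples, and the full-range BT.601 equations used by JFIF are

    R = Y + 1.402\,(C_r - 128)
    G = Y - 0.344136\,(C_b - 128) - 0.714136\,(C_r - 128)
    B = Y + 1.772\,(C_b - 128)

    Their non-integer coefficients mean libjpeg (often using fixed-point arithmetic) and FFmpeg's swscale can round to neighbouring integers, which is consistent with pervasive ±1 differences; PNG round-trips exactly because it stays in RGB and never performs this conversion.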

    In short, my goal is to get the same pixel-by-pixel values for each JPEG frame as I would have gotten if I were reading the JPEG images directly. Here is the stand-alone code I used. It includes cmake files to build the code, and a couple of JPEG frames with the converted AVI file to test the problem (pass --filetype png to test the PNG decoding).