
Other articles (10)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or later. If needed, ask the administrator of your MediaSPIP to find out.

  • XMP PHP

    13 May 2011

    According to Wikipedia, XMP stands for:
    Extensible Metadata Platform, or XMP, an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it manages a set of dynamic tags for use in the Semantic Web.
    XMP makes it possible to record, as an XML document, information about a file: title, author, history (...)

  • About documents

    21 June 2013

    What should you do when a document fails processing, or when its rendering does not match expectations?
    Document stuck in the processing queue?
    Here is an ordered, empirical list of actions you can try to unblock the situation: restart the processing of the document that fails; try inserting the document into the MediaSPIP site again; for a video or audio file, rework the media with an editor or a transcoder; convert the document to a format (...)

On other sites (3587)

  • How to encode 3840 nb_samples to a codec that asks for 1024 using ffmpeg

26 July 2018, by Gabulit

FFmpeg has example muxing code at https://ffmpeg.org/doxygen/4.0/muxing_8c-example.html

    This code generates video and audio frame by frame. What I am trying to do is change

    ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, c->channel_layout,
                                          c->sample_rate, nb_samples);

    to

    ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, c->channel_layout,
                                          c->sample_rate, 3840);

so that it generates 3840 samples per channel instead of the 1024 samples that the AAC codec sets as the default for nb_samples.

I tried to combine this with code from https://ffmpeg.org/doxygen/4.0/transcode_aac_8c-example.html, which has an example of buffering the frames.

My resulting program crashes after a couple of frames, while generating audio samples, when assigning a new value to *q++ at the first iteration:

    /* Prepare a 16 bit dummy audio frame of 'frame_size' samples and
    * 'nb_channels' channels. */
    static AVFrame *get_audio_frame(OutputStream *ost)
    {
       AVFrame *frame = ost->tmp_frame;
       int j, i, v;
       int16_t *q = (int16_t*)frame->data[0];
       /* check if we want to generate more frames */
       if (av_compare_ts(ost->next_pts, ost->enc->time_base,
                         STREAM_DURATION, (AVRational){ 1, 1 }) >= 0)
           return NULL;
       for (j = 0; j < frame->nb_samples; j++) {
           v = (int)(sin(ost->t) * 10000);
           for (i = 0; i < ost->enc->channels; i++)
               *q++ = v;
           ost->t     += ost->tincr;
           ost->tincr += ost->tincr2;
       }
       frame->pts = ost->next_pts;
       ost->next_pts  += frame->nb_samples;
       return frame;
    }

    Maybe I don’t get the logic behind encoding.

Here is the full source that I've come up with:

    https://paste.ee/p/b07qf

The reason I am trying to accomplish this is that I have a capture card SDK that outputs 2-channel, 16-bit raw PCM at 48000 Hz, with 3840 samples per channel, and I am trying to encode its output to AAC. So basically, if I get the muxing example to work with 3840 nb_samples, that will help me understand the concept.

I have already looked at How to encode resampled PCM-audio to AAC using ffmpeg-API when input pcm samples count not equal 1024, but that example uses "encodeFrame", which the examples in the FFmpeg documentation don't use, unless I am mistaken.

    Any help is greatly appreciated.
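
    For what it's worth, the transcode_aac example linked above handles exactly this mismatch with an AVAudioFifo: write whatever the capture side delivers (3840 samples here) into the FIFO, then read enc->frame_size chunks (1024 for AAC) out of it. Below is a minimal sketch of that pattern, not a drop-in fix for the muxing example; the function name rebuffer_and_encode and the next_pts counter are mine, the FIFO is assumed to be created once with av_audio_fifo_alloc(enc->sample_fmt, enc->channels, 1), and any resampling to the encoder's sample format still has to happen before the FIFO, as transcode_aac does.

    /* Sketch: queue capture frames of any size, drain encoder-sized frames.
     * Error handling is abbreviated. */
    #include <libavcodec/avcodec.h>
    #include <libavutil/audio_fifo.h>
    #include <libavutil/frame.h>

    static int rebuffer_and_encode(AVCodecContext *enc, AVAudioFifo *fifo,
                                   AVFrame *input, int64_t *next_pts)
    {
        /* Queue the 3840-sample capture frame. */
        if (av_audio_fifo_write(fifo, (void **)input->data,
                                input->nb_samples) < input->nb_samples)
            return AVERROR(ENOMEM);

        /* Pull out encoder-sized frames (1024 samples for AAC). */
        while (av_audio_fifo_size(fifo) >= enc->frame_size) {
            AVFrame *out = av_frame_alloc();
            if (!out)
                return AVERROR(ENOMEM);
            out->nb_samples     = enc->frame_size;
            out->format         = enc->sample_fmt;
            out->channel_layout = enc->channel_layout;
            out->sample_rate    = enc->sample_rate;
            if (av_frame_get_buffer(out, 0) < 0 ||
                av_audio_fifo_read(fifo, (void **)out->data,
                                   enc->frame_size) < enc->frame_size) {
                av_frame_free(&out);
                return AVERROR(EINVAL);
            }
            out->pts   = *next_pts;
            *next_pts += out->nb_samples;
            /* hand `out` to avcodec_send_frame(enc, out) here, then free it */
            av_frame_free(&out);
        }
        return 0;
    }

    With this arrangement get_audio_frame can keep producing 3840-sample frames while the encoder only ever sees 1024-sample frames.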

  • ffmpeg configuration difficulty with filter_complex and hls

4 February 2020, by akc42

I am trying to set up ffmpeg so that it records from a microphone and, at the same time, encodes the result into a .flac file for later syncing with some video I will be making.

The microphone is plugged into a Raspberry Pi (4B) and I am currently trying it with a Blue Yeti mic, but I think I can do the same with a Focusrite Scarlett 2i2 plugged in instead. However, I was puzzling over how to start the server recording, and decided I could do it from a web browser if I made a simple Node.js server that spawned ffmpeg as a child process.

But then I was inspired by this sample ffmpeg command, which displays a volume meter (on my desktop with a graphical interface):

    ffmpeg -hide_banner -i 'http://distribution.bbb3d.renderfarming.net/video/mp4/bbb_sunflower_1080p_30fps_normal.mp4' -filter_complex "showvolume=rate=25:f=0.95:o=v:m=p:dm=3:h=80:w=480:ds=log:s=2" -c:v libx264 -c:a aac -f mpegts - | ffplay -window_title "Peak Volume" -i -

What if I could stream the video produced by the showvolume filter to the web browser that I am using to control the ffmpeg process? (Note: I don't want to send the audio with this.) So I tried to read up on HLS (since the control device will be an iPad; in fact that is what I will record the video on), and came up with this command:

    ffmpeg -hide_banner -f alsa -ac 2 -ar 48k -i hw:CARD=Microphone -filter_complex "asplit=2[main][vol],[vol]showvolume=rate=25:f=0.95:o=v:m=p:dm=3:h=80:w=480:ds=log:s=2[vid]" -map [main] -c:a:0 flac recordings/session_$(date +%a_%d_%b_%Y___%H_%M_%S).flac -map [vid] -preset veryfast -g 25 -an -sc_threshold 0 -c:v:1 libx264 -b:v:1 2000k -maxrate:v:1 2200k -bufsize:v:3000k -f hls -hls_time 4 -hls_flags independent_segments delete_segments -strftime 1 -hls_segment_filename recordings/volume-%Y%m%d-%s.ts recordings/volume.m3u8

The problem is that I am finding the documentation a bit opaque as to what happens once I have generated two streams (the main audio and a video stream), and this command throws both a warning and an error.

    The warning is Guessed Channel Layout for Input Stream #0.0 : stereo

    and the error is

    [NULL @ 0x1baa130] Unable to find a suitable output format for 'hls'
    hls: Invalid argument

What I am trying to do is set up stream labels [main] and [vol] as I split the incoming audio into two parts, then pass [vol] through the "showvolume" filter and end up with stream [vid].

I think I then need to use -map to encode the [main] stream to flac and write it out to a file (the file exists after I run the command, although it has zero length), and use another -map to pass [vid] through to the -f hls section. But I think I have something wrong at this stage.

Can someone help me get this command right?
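
    My reading, offered as a sketch rather than a definitive fix: in -bufsize:v:3000k the value has been fused onto the option, so ffmpeg consumes the following -f as the bufsize value and then treats the bare word hls as an output filename, which is exactly what "Unable to find a suitable output format for 'hls'" complains about. Two smaller issues: multiple -hls_flags values must be joined with +, and each output file numbers its own streams, so -c:a:0 and -c:v:1 should just be -c:a and -c:v. A corrected candidate (untested, filenames kept from the original) would be:

    ffmpeg -hide_banner -f alsa -ac 2 -ar 48k -i hw:CARD=Microphone -filter_complex "asplit=2[main][vol],[vol]showvolume=rate=25:f=0.95:o=v:m=p:dm=3:h=80:w=480:ds=log:s=2[vid]" -map "[main]" -c:a flac recordings/session_$(date +%a_%d_%b_%Y___%H_%M_%S).flac -map "[vid]" -an -c:v libx264 -preset veryfast -g 25 -sc_threshold 0 -b:v 2000k -maxrate 2200k -bufsize 3000k -f hls -hls_time 4 -hls_flags independent_segments+delete_segments -strftime 1 -hls_segment_filename recordings/volume-%Y%m%d-%s.ts recordings/volume.m3u8

    The "Guessed Channel Layout" line is only a warning and can be ignored here.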

  • Unable to read video streams in FFmpeg and send them to the YouTube RTMP server

29 August 2024, by Rahul Bundele

I'm trying to send two video streams from the browser as array buffers (webcam and screen-share video) to the server over WebRTC data channels, and I want FFmpeg to add the webcam as an overlay on the screen-share video and send the result to the YouTube RTMP server. The RTC connections are established and the server does receive the buffers, but I'm getting an error in FFmpeg; the error is at the bottom. Any tips on adding the overlay and sending it to the YouTube RTMP server would be appreciated.

    Client.js

const webCamStream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
const screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });

const webcamRecorder = new MediaRecorder(webCamStream, { mimeType: 'video/webm' });
webcamRecorder.ondataavailable = (event) => {
    if (event.data.size > 0 && webcamDataChannel.readyState === 'open') {
        const reader = new FileReader();
        reader.onload = function () {
            const arrayBuffer = this.result;
            webcamDataChannel.send(arrayBuffer);
        };
        reader.readAsArrayBuffer(event.data);
    }
};
webcamRecorder.start(100);  // Adjust the interval as needed

// Send screen share stream data
const screenRecorder = new MediaRecorder(screenStream, { mimeType: 'video/webm' });
screenRecorder.ondataavailable = (event) => {
    if (event.data.size > 0 && screenDataChannel.readyState === 'open') {
        const reader = new FileReader();
        reader.onload = function () {
            const arrayBuffer = this.result;
            screenDataChannel.send(arrayBuffer);
        };
        reader.readAsArrayBuffer(event.data);
    }
};
screenRecorder.start(100);

    Server.js

const { spawn } = require('child_process');
const { PassThrough } = require('stream');

const youtubeRTMP = 'rtmp://a.rtmp.youtube.com/live2/youtube key';

// Create PassThrough streams for webcam and screen
const webcamStream = new PassThrough();
const screenStream = new PassThrough();

// FFmpeg arguments for processing live streams
const ffmpegArgs = [
  '-re',
  '-i', 'pipe:3',                  // Webcam input via pipe:3
  '-i', 'pipe:4',                  // Screen share input via pipe:4
  '-filter_complex',               // Complex filter for overlay
  '[0:v]scale=320:240[overlay];[1:v][overlay]overlay=10:10[out]',
  '-map', '[out]',                 // Map the output video stream
  '-c:v', 'libx264',               // Use H.264 codec for video
  '-preset', 'ultrafast',          // Use ultrafast preset for low latency
  '-crf', '25',                    // Set CRF for quality/size balance
  '-pix_fmt', 'yuv420p',           // Pixel format for compatibility
  '-c:a', 'aac',                   // Use AAC codec for audio
  '-b:a', '128k',                  // Set audio bitrate
  '-f', 'flv',                     // Output format (FLV for RTMP)
  youtubeRTMP                      // Output to YouTube RTMP server
];

// Spawn the FFmpeg process
const ffmpegProcess = spawn('ffmpeg', ffmpegArgs, {
  stdio: ['pipe', 'pipe', 'pipe', 'pipe', 'pipe']
});

// Pipe the PassThrough streams into FFmpeg
webcamStream.pipe(ffmpegProcess.stdio[3]);
screenStream.pipe(ffmpegProcess.stdio[4]);

ffmpegProcess.on('close', code => {
  console.log(`FFmpeg process exited with code ${code}`);
});

ffmpegProcess.on('error', error => {
  console.error(`FFmpeg error: ${error.message}`);
});

const handleIncomingData = (data, stream) => {
  const buffer = Buffer.from(data);
  stream.write(buffer);
};

    The server receives the video buffers via WebRTC data channels:

pc.ondatachannel = event => {
    const dataChannel = event.channel;
    pc.dc = event.channel;
    pc.dc.onmessage = event => {
        const data = event.data;

        // Route each chunk to the FFmpeg input matching the channel label.
        if (dataChannel.label === 'webcam') {
            handleIncomingData(data, webcamStream);
        } else if (dataChannel.label === 'screen') {
            handleIncomingData(data, screenStream);
        }
    };
    pc.dc.onopen = e => {
        console.log("channel opened!");
    };
};

    I'm getting this error in FFmpeg:

    [in#0 @ 0000020e585a1b40] Error opening input: Bad file descriptor
Error opening input file pipe:3.
Error opening input files: Bad file descriptor
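
    A guess, not something the post confirms: the 0000020e585a1b40 context address looks like a Windows build, and on Windows a spawned ffmpeg generally cannot open Node's extra stdio slots as pipe:3/pipe:4 file descriptors, which would produce exactly this "Bad file descriptor" failure. One workaround sketch is to have ffmpeg read both inputs from local TCP sockets instead; the ports 9001/9002 are made up, and the webcam socket must connect first since ffmpeg opens its inputs in order:

    ffmpeg -re -i "tcp://127.0.0.1:9001?listen" -i "tcp://127.0.0.1:9002?listen" -filter_complex "[0:v]scale=320:240[overlay];[1:v][overlay]overlay=10:10[out]" -map "[out]" -c:v libx264 -preset ultrafast -crf 25 -pix_fmt yuv420p -f flv rtmp://a.rtmp.youtube.com/live2/youtube key

    On the Node side, the PassThrough streams would then pipe into net.connect() sockets for those two ports rather than into ffmpegProcess.stdio[3] and ffmpegProcess.stdio[4].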