Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (101)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is version 0.2 or higher. If necessary, contact the administrator of your MediaSPIP to find out.

  • Apache-specific configuration

    4 February 2011

    Specific modules
    For the Apache configuration, it is advisable to enable certain modules that are not specific to MediaSPIP but that improve performance: mod_deflate and mod_headers, to have Apache compress pages automatically (see this tutorial); mod_expires, to handle hit expiration correctly (see this tutorial).
    It is also advisable to add Apache support for the MIME type of WebM files, as described in this tutorial.
    Creating a (...)

On other sites (9174)

  • I received connection refused error while trying to stream live video through RTMP with FFMPEG

    25 September 2020, by Femzy

    I am working on a Node.js app that can send a camera stream to third-party platforms, i.e. Facebook and YouTube, using the RTMP protocol. It works well on my localhost, but once I deploy to the server it only gives me errors. The error I get is shown below.
Here is my code.

    server.js

    const child_process = require('child_process'); // To be used later for running FFmpeg
const express = require('express');
const http = require('http');
const WebSocketServer = require('ws').Server;

const app = express();
const server = http.createServer(app).listen(4000, () => {
  console.log('Listening...');
});

// Serve static files out of the www directory, where we will put our HTML page
app.use(express.static(__dirname + '/www'));


const wss = new WebSocketServer({
  server: server
});
wss.on('connection', (ws, req) => {
  
  
  
  const rtmpUrl = 'rtmp://a.rtmp.youtube.com/live2/MyStreamId';
  console.log('Target RTMP URL:', rtmpUrl);
  
  // Launch FFmpeg to handle all appropriate transcoding, muxing, and RTMP.
  // If 'ffmpeg' isn't in your path, specify the full path to the ffmpeg binary.
  const ffmpeg = child_process.spawn('ffmpeg', [
    // Facebook requires an audio track, so we create a silent one here.
    // Remove this line, as well as `-shortest`, if you send audio from the browser.
    //'-f', 'lavfi', '-i', 'anullsrc',
    
    // FFmpeg will read input video from STDIN
    '-i', '-',
    
    // Because we're using a generated audio source which never ends,
    // specify that we'll stop at end of other input.  Remove this line if you
    // send audio from the browser.
    //'-shortest',
    
    // If we're encoding H.264 in-browser, we can set the video codec to 'copy'
    // so that we don't waste any CPU and quality with unnecessary transcoding.
    // If the browser doesn't support H.264, set the video codec to 'libx264'
    // or similar to transcode it to H.264 here on the server.
    '-vcodec', 'copy',
    
    // AAC audio is required for Facebook Live.  No browser currently supports
    // encoding AAC, so we must transcode the audio to AAC here on the server.
    '-acodec', 'aac',
    
    // FLV is the container format used in conjunction with RTMP
    '-f', 'flv',
    
    // The output RTMP URL.
    // For debugging, you could set this to a filename like 'test.flv', and play
    // the resulting file with VLC.  Please also read the security considerations
    // later on in this tutorial.
    rtmpUrl 
  ]);
  
  // If FFmpeg stops for any reason, close the WebSocket connection.
  ffmpeg.on('close', (code, signal) => {
    console.log('FFmpeg child process closed, code ' + code + ', signal ' + signal);
    ws.terminate();
  });
  
  // Handle STDIN pipe errors by logging to the console.
  // These errors most commonly occur when FFmpeg closes and there is still
  // data to write.  If left unhandled, the server will crash.
  ffmpeg.stdin.on('error', (e) => {
    console.log('FFmpeg STDIN Error', e);
  });
  
  // FFmpeg outputs all of its messages to STDERR.  Let's log them to the console.
  ffmpeg.stderr.on('data', (data) => {
    console.log('FFmpeg STDERR:', data.toString());
  });

  // When data comes in from the WebSocket, write it to FFmpeg's STDIN.
  ws.on('message', (msg) => {
    console.log('DATA', msg);
    ffmpeg.stdin.write(msg);
  });
  
  // If the client disconnects, stop FFmpeg.
  ws.on('close', (e) => {
    ffmpeg.kill('SIGINT');
  });
  
});

    In the server.js file I create a WebSocket server to receive stream data from the client side, and then use FFmpeg to send the stream data over to YouTube via the RTMP URL.
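    As the comments in the spawn arguments above already hint, one way to separate encoding problems from network problems is to point FFmpeg at a local file instead of the RTMP URL. A minimal sketch of that debugging variant (only the last argument changes; 'test.flv' is just an arbitrary local filename):

// Hypothetical debug variant of the spawn call above: identical arguments,
// but the output goes to a local FLV file instead of the RTMP URL.
// If test.flv plays correctly in VLC, the WebSocket -> FFmpeg pipeline is fine
// and the failure is specific to the outbound RTMP connection.
const ffmpegDebug = child_process.spawn('ffmpeg', [
  '-i', '-',          // read the WebM chunks from STDIN, as before
  '-vcodec', 'copy',  // keep the browser's H.264 video as-is
  '-acodec', 'aac',   // transcode the audio to AAC
  '-f', 'flv',        // same FLV container used for RTMP
  'test.flv'          // local output file instead of rtmpUrl
]);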

    Here is my client.js code

const ws = new WebSocket(
  'wss://my-websocket-server.com'
);

ws.addEventListener('open', (e) => {
  console.log('WebSocket Open', e);
  drawVideosToCanvas();
  mediaStream = getMixedVideoStream(); // 30 FPS
  mediaRecorder = new MediaRecorder(mediaStream, {
    mimeType: 'video/webm;codecs=h264',
    //videoBitsPerSecond : 3000000000
    bitsPerSecond: 6000000
  });

  mediaRecorder.addEventListener('dataavailable', (e) => {
    ws.send(e.data);
  });
  mediaRecorder.onstop = function() {
    ws.close.bind(ws);
    isRecording = false;
    actionBtn.textContent = 'Start Streaming';
    actionBtn.onclick = startRecording;
  }
  mediaRecorder.onstart = function() {
    isRecording = true;
    actionBtn.textContent = 'Stop Streaming';
    actionBtn.onclick = stopRecording;
    screenShareBtn.onclick = startSharing;
    screenShareBtn.disabled = false;
  }
  //mediaRecorder.addEventListener('stop', ws.close.bind(ws));

  mediaRecorder.start(1000); // Start recording, and dump data every second

});

    In my client.js file I capture the user's camera and then open the WebSocket connection to send the data to the server. Everything works fine on localhost, except when I deploy it to the live server.
I am wondering if there is a bad configuration on the server. The server is CentOS 7.8 and the app is running behind Apache.
Here is how I configured the virtual host for the WebSocket domain:

    ServerName my-websocket.com

  RewriteEngine on
  RewriteCond %{HTTP:Upgrade} websocket [NC]
  RewriteCond %{HTTP:Connection} upgrade [NC]
  RewriteRule .* "ws://127.0.0.1:3000/$1" [P,L]

  ProxyPass "/" "http://127.0.0.1:3000/$1"
  ProxyPassReverse "/" "http://127.0.0.1:3000/$1"
  ProxyRequests off

    I don't know much about server configuration, but I thought maybe the configuration has something to do with why FFmpeg cannot open a connection to the RTMP endpoint from the server.
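    One way to narrow that down is to check, from the same server, whether an outbound TCP connection to the RTMP ingest port can be opened at all, independently of FFmpeg. A minimal sketch using Node's built-in net module (the host and port are taken from the error output below; nothing else is assumed):

// Quick outbound-connectivity probe: tries to open a plain TCP connection to
// YouTube's RTMP ingest endpoint on port 1935. If this also fails with
// ECONNREFUSED or ENETUNREACH, the problem is the host's network or firewall
// rules rather than the FFmpeg command itself.
const net = require('net');

const socket = net.connect({ host: 'a.rtmp.youtube.com', port: 1935 }, () => {
  console.log('TCP connection to a.rtmp.youtube.com:1935 succeeded');
  socket.end();
});

socket.setTimeout(5000, () => {
  console.log('Connection attempt timed out');
  socket.destroy();
});

socket.on('error', (err) => {
  console.log('Connection failed:', err.code);
});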

    Here is the error I am getting:

FFmpeg STDERR: Input #0, lavfi, from 'anullsrc':
  Duration: N/A, start: 0.000000, bitrate: 705 kb/s
    Stream #0:0: Audio: pcm_u8, 44100 Hz, stereo, u8, 705 kb/s
DATA <Buffer 1a ...>
DATA <Buffer 45 df a3 42 86 81 01 f7 f2 04 f3 08 82 88 6d 61 74 72 6f 73 6b 61 87 04 42 85 02 18 53 80 67 ff ... 53991 more bytes>
DATA <Buffer 40 c1 81 00 f0 80 7b 83 3e 3b 07 d6 4e 1c 11 b4 7f cb 5e 68 9b d5 2a e3 06 c6 f3 94 ff 29 16 b2 60 04 ac 37 fb 1a 15 ea 39 a0 cd 02 b8 ... 56206 more bytes>
FFmpeg STDERR: Input #1, matroska,webm, from 'pipe:':
  Metadata:
    encoder         : Chrome
  Duration: N/A, start: 0.000000, bitrate: N/A
    Stream #1:0(eng): Audio: opus, 48000 Hz, mono, fltp (default)
    Stream #1:1(eng): Video: h264 (Constrained Baseline), yuv420p(progressive), 1366x768, SAR 1:1 DAR 683:384, 30.30 fps, 30 tbr, 1k tbn, 60 tbc (default)
FFmpeg STDERR: [tcp @ 0xe5fac0] Connection to tcp://a.rtmp.youtube.com:1935 failed (Connection refused), trying next address
[rtmp @ 0xe0fb80] Cannot open connection tcp://a.rtmp.youtube.com:1935
FFmpeg STDERR: rtmp://a.rtmp.youtube.com/live2/mystreamid: Network is unreachable
FFmpeg child process closed, code 1, signal null

    I would really appreciate some insight into what may be causing this issue, or what I can do to solve it. Thanks in advance.


  • Why does my ffmpeg audio sound slower and deeper - sample rate mismatch

    4 September 2020, by yogesh zinzu

    OK, so this is a Discord bot to record voice chat: https://hatebin.com/hgjlazacri
    Now the bot works perfectly fine, but the issue is that the audio sounds a bit deeper and slower than normal. Why does this happen, and how can I make the audio sound 1:1?

const Discord = require('discord.js');
const client = new Discord.Client();
const ffmpegInstaller = require('@ffmpeg-installer/ffmpeg');
const ffmpeg = require('fluent-ffmpeg');
ffmpeg.setFfmpegPath(ffmpegInstaller.path);
const fs = require('fs-extra')
const mergeStream = require('merge-stream');
const config = require('./config.json');
const { getAudioDurationInSeconds } = require('get-audio-duration');
const cp = require('child_process');
const path1 = require('path');
const Enmap = require('enmap');
const UserRecords = require("./models/userrecords.js")
const ServerRecords = require("./models/serverrecords.js")
let prefix = `$`
class Readable extends require('stream').Readable { _read() {} }
let recording = false;
let currently_recording = {};
let mp3Paths = [];
const silence_buffer = new Uint8Array(3840);
const express = require('express')
const app = express()
const port = 3000
const publicIP = require('public-ip')
const { program } = require('commander');
const { path } = require('@ffmpeg-installer/ffmpeg');
const version = '0.0.1'
program.version(version);
let debug = false
let runProd = false
let fqdn = "";
const mongoose = require("mongoose");
const MongoClient = require('mongodb').MongoClient;
mongoose.connect('SECRRET',{
  useNewUrlParser: true
}, function(err){
  if(err){
    console.log(err);
  }else{
    console.log("Database connection initiated");
  }
});
require("dotenv").config()
function bufferToStream(buffer) {
    let stream = new Readable();
    stream.push(buffer);
    return stream;
}

client.commands = new Enmap();

client.on('ready', async () => {
    console.log(`Logged in as ${client.user.tag}`);

    let host = "localhost"

    let ip = await publicIP.v4();

    let protocol = "http";
    if (!runProd) {
        host = "localhost"
    } else {
        host = `35.226.244.186`;
    }
    fqdn = `${protocol}://${host}:${port}`
    app.listen(port, `0.0.0.0`, () => {
        console.log(`Listening on port ${port} for ${host} at fqdn ${fqdn}`)
    })
});
let randomArr = []
let finalArrWithIds = []
let variable = 0
client.on('message', async message => {
    console.log(`fuck`);
    if(message.content === `$record`){
        mp3Paths = []
        finalArrWithIds = []
        let membersToScrape = Array.from(message.member.voice.channel.members.values());
        membersToScrape.forEach((member) => {
            if(member.id === `749250882830598235`) {
                console.log(`botid`);
            }
            else {
                finalArrWithIds.push(member.id)
            }
        })
        const randomNumber = Math.floor(Math.random() * 100)
        randomArr = []
        randomArr.push(randomNumber)
    }

    const generateSilentData = async (silentStream, memberID) => {
        console.log(`recordingnow`)
        while(recording) {
            if (!currently_recording[memberID]) {
                silentStream.push(silence_buffer);
            }
            await new Promise(r => setTimeout(r, 20));
        }
        return "done";
    }
    console.log(generateSilentData, `status`)
    function generateOutputFile(channelID, memberID) {
        const dir = `./recordings/${channelID}/${memberID}`;
        fs.ensureDirSync(dir);
        const fileName = `${dir}/${randomArr[0]}.aac`;
        console.log(`${fileName} ---------------------------`);
        return fs.createWriteStream(fileName);
    }

    if (!fs.existsSync("public")) {
        fs.mkdirSync("public");
    }
    app.use("/public", express.static("./public"));
  if (!message.guild) return;

  if (message.content === config.prefix + config.record_command) {
    if (recording) {
        message.reply("bot is already recording");
        return
    }
    if (message.member.voice.channel) {
        recording = true;
        const connection = await message.member.voice.channel.join();
        const dispatcher = connection.play('./audio.mp3');

        connection.on('speaking', (user, speaking) => {
            if (speaking.has('SPEAKING')) {
                currently_recording[user.id] = true;
            } else {
                currently_recording[user.id] = false;
            }
        })

        let members = Array.from(message.member.voice.channel.members.values());
        members.forEach((member) => {

            if (member.id != client.user.id) {
                let memberStream = connection.receiver.createStream(member, {mode : 'pcm', end : 'manual'})

                let outputFile = generateOutputFile(message.member.voice.channel.id, member.id);
                console.log(outputFile, `outputfile here`);
                mp3Paths.push(outputFile.path);

                silence_stream = bufferToStream(new Uint8Array(0));
                generateSilentData(silence_stream, member.id).then(data => console.log(data));
                let combinedStream = mergeStream(silence_stream, memberStream);

                ffmpeg(combinedStream)
                    .inputFormat('s32le')
                    .audioFrequency(44100)
                    .audioChannels(2)
                    .on('error', (error) => {console.log(error)})
                    .audioCodec('aac')
                    .format('adts')
                    .pipe(outputFile)

            }
        })
    } else {
      message.reply('You need to join a voice channel first!');
    }
  }

  if (message.content === config.prefix + config.stop_command) {

    let date = new Date();
    let dd = String(date.getDate()).padStart(2, '0');
    let mm = String(date.getMonth() + 1).padStart(2, '0');
    let yyyy = date.getFullYear();
    date = mm + '/' + dd + '/' + yyyy;

    let currentVoiceChannel = message.member.voice.channel;
    if (currentVoiceChannel) {
        recording = false;
        await currentVoiceChannel.leave();

        let mergedOutputFolder = './recordings/' + message.member.voice.channel.id + `/${randomArr[0]}/`;
        fs.ensureDirSync(mergedOutputFolder);
        let file_name = `${randomArr[0]}` + '.aac';
        let mergedOutputFile = mergedOutputFolder + file_name;

        let download_path = message.member.voice.channel.id + `/${randomArr[0]}/` + file_name;

        let mixedOutput = new ffmpeg();
        console.log(mp3Paths, `mp3pathshere`);
        mp3Paths.forEach((mp3Path) => {
             mixedOutput.addInput(mp3Path);
        })
        console.log(mp3Paths);
        //mixedOutput.complexFilter('amix=inputs=2:duration=longest');
        mixedOutput.complexFilter('amix=inputs=' + mp3Paths.length + ':duration=longest');

        let processEmbed = new Discord.MessageEmbed().setTitle(`Audio Processing.`)
        processEmbed.addField(`Audio processing starting now..`, `Processing Audio`)
        processEmbed.setThumbnail(`https://media.discordapp.net/attachments/730811581046325348/748610998985818202/speaker.png`)
        processEmbed.setColor(` #00FFFF`)
        const processEmbedMsg = await message.channel.send(processEmbed)
        async function saveMp3(mixedData, outputMixed) {
            console.log(`${mixedData} MIXED `)

            return new Promise((resolve, reject) => {
                mixedData.on('error', reject).on('progress',
                async (progress) => {

                    let processEmbedEdit = new Discord.MessageEmbed().setTitle(`Audio Processing.`)
                    processEmbedEdit.addField(`Processing: ${progress.targetSize} KB converted`, `Processing Audio`)
                    processEmbedEdit.setThumbnail(`https://media.discordapp.net/attachments/730811581046325348/748610998985818202/speaker.png`)
                    processEmbedEdit.setColor(` #00FFFF`)
                    processEmbedMsg.edit(processEmbedEdit)
                    console.log('Processing: ' + progress.targetSize + ' KB converted');
                }).on('end', () => {
                    console.log('Processing finished !');
                    resolve()
                }).saveToFile(outputMixed);
                console.log(`${outputMixed} IT IS HERE`);
            })
        }
        // mixedOutput.saveToFile(mergedOutputFile);
        await saveMp3(mixedOutput, mergedOutputFile);
        console.log(`${mixedOutput} IN HEREEEEEEE`);
        // We saved the recording, now copy the recording
        if (!fs.existsSync(`./public`)) {
            fs.mkdirSync(`./public`);
        }
        let sourceFile = `${__dirname}/recordings/${download_path}`
        console.log(`DOWNLOAD PATH HERE ${download_path}`)
        const guildName = message.guild.id;
        const serveExist = `/public/${guildName}`
        if (!fs.existsSync(`.${serveExist}`)) {
            fs.mkdirSync(`.${serveExist}`)
        }
        let destionationFile = `${__dirname}${serveExist}/${file_name}`

        let errorThrown = false
        try {
            fs.copySync(sourceFile, destionationFile);
        } catch (err) {
            errorThrown = true
            await message.channel.send(`Error: ${err.message}`)
        }
        const usersWithTag = finalArrWithIds.map(user => `\n <@${user}>`);
        let timeSpent = await getAudioDurationInSeconds(`public/${guildName}/${file_name}`)
        let timesSpentRound = Math.floor(timeSpent)
        let finalTimeSpent = timesSpentRound / 60
        let finalTimeForReal = Math.floor(finalTimeSpent)
        if(!errorThrown){
            //--------------------- server recording save START
            class GeneralRecords {
                constructor(generalLink, date, voice, time) {
                  this.generalLink = generalLink;
                  this.date = date;
                  this.note = `no note`;
                  this.voice = voice;
                  this.time = time
                }
              }
              let newGeneralRecordClassObject = new GeneralRecords(`${fqdn}/public/${guildName}/${file_name}`, date, usersWithTag, finalTimeForReal)
              let checkingServerRecord = await ServerRecords.exists({userid: `server`})
              if(checkingServerRecord === true){
                  existingServerRecord = await ServerRecords.findOne({userid: `server`})
                  existingServerRecord.content.push(newGeneralRecordClassObject)
                  await existingServerRecord.save()
              }
              if(checkingServerRecord === false){
                let serverRecord = new ServerRecords()
                serverRecord.userid = `server`
                serverRecord.content.push(newGeneralRecordClassObject)
                await serverRecord.save()
              }
              //--------------------- server recording save STOP
        }

        //--------------------- personal recording section START
        for( member of finalArrWithIds) {

        let personal_download_path = message.member.voice.channel.id + `/${member}/` + file_name;
        let sourceFilePersonal = `${__dirname}/recordings/${personal_download_path}`
        let destionationFilePersonal = `${__dirname}${serveExist}/${member}/${file_name}`
        await fs.copySync(sourceFilePersonal, destionationFilePersonal);
        const user = client.users.cache.get(member);
        console.log(user, `user here`);
        try {
            ffmpeg.setFfmpegPath(ffmpegInstaller.path);

            ffmpeg(`public/${guildName}/${member}/${file_name}`)
             .audioFilters('silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-90dB')
             .output(`public/${guildName}/${member}/personal-${file_name}`)
             .on(`end`, function () {
               console.log(`DONE`);
             })
             .on(`error`, function (error) {
               console.log(`An error occured` + error.message)
             })
             .run();

          }
          catch (error) {
          console.log(error)
          }

        // ----------------- SAVING PERSONAL RECORDING TO DATABASE START
        class PersonalRecords {
            constructor(generalLink, personalLink, date, time) {
              this.generalLink = generalLink;
              this.personalLink = personalLink;
              this.date = date;
              this.note = `no note`;
              this.time = time;
            }
          }
          let timeSpentPersonal = await getAudioDurationInSeconds(`public/${guildName}/${file_name}`)
          let timesSpentRoundPersonal = Math.floor(timeSpentPersonal)
          let finalTimeSpentPersonal = timesSpentRoundPersonal / 60
          let finalTimeForRealPersonal = Math.floor(finalTimeSpentPersonal)
          let newPersonalRecordClassObject = new PersonalRecords(`${fqdn}/public/${guildName}/${file_name}`, `${fqdn}/public/${guildName}/${member}/personal-${file_name}`, date, finalTimeForRealPersonal)

           let checkingUserRecord = await UserRecords.exists({userid: member})
              if(checkingUserRecord === true){
                  existingUserRecord = await UserRecords.findOne({userid: member})
                  existingUserRecord.content.push(newPersonalRecordClassObject)
                  await existingUserRecord.save()
              }
              if(checkingUserRecord === false){
                let newRecord = new UserRecords()
                newRecord.userid = member
                newRecord.content.push(newPersonalRecordClassObject)
                await newRecord.save()
              }

        // ----------------- SAVING PERSONAL RECORDING TO DATABASE END

        const endPersonalEmbed = new Discord.MessageEmbed().setTitle(`Your performance was amazing ! Review it here :D`)
        endPersonalEmbed.setColor('#9400D3')
        endPersonalEmbed.setThumbnail(`https://media.discordapp.net/attachments/730811581046325348/745381641324724294/vinyl.png`)
        endPersonalEmbed.addField(`1
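    For what it's worth, the "slower and deeper" symptom in the title matches the input options passed to fluent-ffmpeg above: as far as I know, the PCM stream produced by discord.js's receiver in 'pcm' mode is 48000 Hz, 16-bit signed little-endian, stereo, while the code above tells FFmpeg to read it as s32le (and at a 44100 Hz rate rather than 48000). A minimal sketch of the same pipeline with the input format declared to match that assumption (not verified against this exact bot):

// Sketch only: the same fluent-ffmpeg pipeline as in the code above, but with
// the raw PCM input explicitly declared as 16-bit / 48 kHz / stereo, which is
// what discord.js's 'pcm' receiver mode is understood to emit. Reading the
// samples with the wrong sample format and rate makes them get reinterpreted
// and resampled, which is what produces slower, deeper audio.
ffmpeg(combinedStream)
    .inputFormat('s16le')                   // 16-bit signed little-endian PCM
    .inputOptions(['-ar 48000', '-ac 2'])   // input is 48 kHz, 2 channels
    .audioCodec('aac')
    .format('adts')
    .on('error', (error) => { console.log(error) })
    .pipe(outputFile)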
  • avconv "select" filter doesn't discard first frames

    1 April 2014, by user2152106

    I'm trying to segment a video using avconv's "select" filter to extract only a specific range of frames from the input file. As an example, imagine I have a 60fps video file called input.mp4, with 3000 frames (i.e. 50 seconds), and I run

    avconv -i input.mp4 -vf "select='lt(n,2000)'" output1.mp4
    avconv -i input.mp4 -vf "select='gte(n,2000)'" output2.mp4

    What I expect is that output1.mp4 has the first 2000 frames of input.mp4 (and lasts 33 seconds), and output2.mp4 has the last 1000 (and lasts 17 seconds).

    I count the frames by running

    avconv -i video.mp4 -vcodec copy -an -f null /dev/null 2>&1 | grep 'frame='

    and checking the value assigned to 'frame'.

    What I actually get is that output1.mp4 has 2000 frames and lasts 33 seconds, but output2.mp4 has 2999 frames and still lasts the full 50 seconds. When I open output2.mp4, I notice that the first 2000 frames of the video are actually just a repetition of the 2000th frame of the input, i.e. the first 2000 frames seem to be correctly filtered out, but replaced by the first of the accepted frames.

    This is not a pts problem. I check the number of packets and their relative pts using avprobe:

    avprobe -show_packets output2.mp4
    echo $(avprobe -show_packets output2.mp4 2>/dev/null | grep PACKET | wc -l)/2 | bc

    I see that there are actually 2999 packets.

    What am I doing wrong?

    Side questions:

    • Assuming I'm doing something wrong, why does output2.mp4 contain 2999 frames rather than the full 3000?
    • The behaviour doesn't change whether I use the "gte" or "gt" function in the filter. Why could that be?