Media (91)

Other articles (40)

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites for publishing documents of all types.
    It creates "media" items, namely: a "media" item is a SPIP article created automatically when a document is uploaded, whether it is audio, video, image or text; only one document can be linked to a so-called "media" article;

  • Support for all types of media

    10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); text content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

On other sites (6349)

  • Send MP3 audio extracted from m3u8 stream to IBM Watson Speech To Text

    20 November 2018, by Link69

    I’m extracting audio in MP3 format from an M3U8 live URL, and the final goal is to send the live audio stream to IBM Watson Speech To Text. The m3u8 is obtained by calling an external script via a Process. Then I use an FFmpeg command to get the audio on stdout. It works if I save the audio to a file, but I don’t want to save the extracted audio; I need to send the data directly to the STT service. So far I have proceeded like this:

    SpeechToTextService speechToTextService = new SpeechToTextService(sttUsername, sttPassword);
    string m3u8Url = "https://something.m3u8";
    char[] buffer = new char[48000];
    Process ffmpeg = new ProcessHelper(@"ffmpeg\ffmpeg.exe", $"-v 0 -i {m3u8Url} -acodec mp3 -ac 2 -ar 48000 -f mp3 -");

    ffmpeg.Start();
    int count;
    while ((count = ffmpeg.StandardOutput.Read(buffer, 0, 48000)) > 0)
    {
       var answer = speechToTextService.RecognizeSessionless(
           audio: buffer.Select(c => (byte)c).ToArray(),
           contentType: "audio/mpeg",
           smartFormatting: true,
           speakerLabels: false,
           model: "en-US_BroadbandModel"
       );
       // Get answer.ResponseJson, deserializing, clean buffer, etc...
    }

    When requesting the transcribed audio, I’m getting this error:

    An unhandled exception of type 'System.AggregateException' occurred in IBM.WatsonDeveloperCloud.SpeechToText.v1.dll: 'One or more errors occurred. (The API query failed with status code BadRequest: Bad Request | x-global-transaction-id: bd6cd203720a70d83b9a03451fe28973 | X-DP-Watson-Tran-ID: bd6cd203720a70d83b9a03451fe28973)'
    Inner exceptions found, see $exception in variables window for more details.
    Innermost exception     IBM.WatsonDeveloperCloud.Http.Exceptions.ServiceResponseException: The API query failed with status code BadRequest: Bad Request | x-global-transaction-id: bd6cd203720a70d83b9a03451fe28973 | X-DP-Watson-Tran-ID: bd6cd203720a70d83b9a03451fe28973
      at IBM.WatsonDeveloperCloud.Http.Filters.ErrorFilter.OnResponse(IResponse response, HttpResponseMessage responseMessage)
      at IBM.WatsonDeveloperCloud.Http.Request.<GetResponse>d__30.MoveNext()
      at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
      at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
      at IBM.WatsonDeveloperCloud.Http.Request.<AsMessage>d__23.MoveNext()
      at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
      at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
      at IBM.WatsonDeveloperCloud.Http.Request.<As>d__24`1.MoveNext()

    ProcessHelper is just a convenience wrapper:

    class ProcessHelper : Process
    {
       private string command;
       private string arguments;
       public ProcessHelper(string command, string arguments, bool redirectStandardOutput = true)
       {
           this.command = command;
           this.arguments = arguments;
           StartInfo = new ProcessStartInfo()
           {
               FileName = this.command,
               Arguments = this.arguments,
               UseShellExecute = false,
               RedirectStandardOutput = redirectStandardOutput,
               CreateNoWindow = true
           };
       }
    }

    I’m pretty sure I’m doing it wrong; I’d love for someone to shed some light on this. Thanks.
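
    One likely issue, independent of the Watson error itself: reading ffmpeg’s MP3 output through a character reader and casting each char back to a byte is likely to corrupt the binary stream. In C#, that would roughly mean reading from ffmpeg.StandardOutput.BaseStream into a byte[] instead. As a language-agnostic sketch, here is the same idea in C++, with a placeholder URL and an arbitrary 48000-byte chunk size; the call that would forward each chunk to the speech-to-text service is left as a hypothetical helper:

    #include <cstdio>
    #include <cstdint>
    #include <vector>

    int main() {
      // Placeholder m3u8 URL; "-f mp3 -" writes the encoded MP3 stream to stdout.
      const char* cmd =
          "ffmpeg -v 0 -i https://something.m3u8 -acodec mp3 -ac 2 -ar 48000 -f mp3 -";
      FILE* pipe = popen(cmd, "r");  // _popen on Windows
      if (pipe == nullptr) return 1;

      std::vector<std::uint8_t> chunk(48000);
      std::size_t n;
      // Read raw bytes once per iteration; each chunk of n bytes would then be
      // handed to the speech-to-text client as audio/mpeg.
      while ((n = fread(chunk.data(), 1, chunk.size(), pipe)) > 0) {
        // send_to_stt(chunk.data(), n);  // hypothetical upload helper
      }
      pclose(pipe);
      return 0;
    }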

  • libav how to change stream codec

    9 November 2018, by Rodolfo Picoreti

    I am trying to reproduce with libav the same thing that the following ffmpeg command does:

    ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video0 -f mpegts -codec:v mpeg1video -s 640x480 -b:v 1000k -bf 0 -muxdelay 0.001 http://localhost:8081/supersecret

    I managed to reproduce most of it. The problem is that when the "mpegts" output context is allocated (the avformat_alloc_output_context2 call in init_format_context), the codec that gets selected is "mpeg2video", but I need it to be "mpeg1video". I tried forcing the format’s video_codec to AV_CODEC_ID_MPEG1VIDEO (the assignment just after that call), and it sort of worked, but I get a lot of artifacts in the image, so I am guessing this is not how you do it. How can I properly change the codec in this case (that is, the "-codec:v mpeg1video" part of the ffmpeg command)? One possible variant is sketched after the code listing below.

    C++ code being used:

    #include <cassert>
    #include <exception>
    #include <stdexcept>
    #include <opencv2/core.hpp>
    #include <opencv2/highgui.hpp>
    #include <opencv2/imgproc.hpp>

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/error.h>
    #include <libavutil/frame.h>
    #include <libavutil/imgutils.h>
    }

    struct AVStreamer {
     AVFormatContext* format_context;
     AVStream* stream;
     AVCodecContext* codec_context;
     AVCodec* codec;
     AVFrame* frame;
     int64_t next_pts;

     void init_format_context(const char* url) {
       avformat_alloc_output_context2(&format_context, nullptr, "mpegts", url);
       if (format_context == nullptr) throw std::runtime_error("Could not create output context");
       format_context->oformat->video_codec = AV_CODEC_ID_MPEG1VIDEO;
     }

     void init_codec() {
       auto codec_id = format_context->oformat->video_codec;
       codec = avcodec_find_encoder(codec_id);
       if (codec == nullptr) throw std::runtime_error("Could not find encoder");
     }

     void init_stream() {
       stream = avformat_new_stream(format_context, nullptr);
       if (stream == nullptr) throw std::runtime_error("Failed to alloc stream");
       stream->id = format_context->nb_streams - 1;
     }

     void init_codec_context() {
       codec_context = avcodec_alloc_context3(codec);
       if (codec_context == nullptr) throw std::runtime_error("Failed to alloc encoding context");

       auto codec_id = format_context->oformat->video_codec;
       codec_context->codec_id = codec_id;
       codec_context->bit_rate = 400000;
       codec_context->width = 640;
       codec_context->height = 480;
       stream->time_base = AVRational{1, 30};
       codec_context->time_base = stream->time_base;
       codec_context->gop_size = 30;  // one intra frame every gop_size
       // codec_context->max_b_frames = 0;  // output delayed by max_b_frames
       codec_context->pix_fmt = AV_PIX_FMT_YUV420P;
       if (codec_context->codec_id == AV_CODEC_ID_MPEG1VIDEO) { codec_context->mb_decision = 2; }
       if (format_context->oformat->flags & AVFMT_GLOBALHEADER) {
         codec_context->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
       }
     }

     void init_frame() {
       frame = av_frame_alloc();
       if (frame == nullptr) throw std::runtime_error("Failed to alloc frame");
       frame->format = codec_context->pix_fmt;
       frame->width = codec_context->width;
       frame->height = codec_context->height;

       auto status = av_frame_get_buffer(frame, 32);
       if (status < 0) { throw std::runtime_error("Could not allocate frame data.\n"); }
     }

     void open_stream(const char* url) {
       int status = avcodec_open2(codec_context, codec, nullptr);
       if (status != 0) throw std::runtime_error("Failed to open codec");

       // copy the stream parameters to the muxer
       status = avcodec_parameters_from_context(stream->codecpar, codec_context);
       if (status < 0) throw std::runtime_error("Could not copy the stream parameters");

       av_dump_format(format_context, 0, url, 1);

       if (!(format_context->oformat->flags & AVFMT_NOFILE)) {
         status = avio_open(&format_context->pb, url, AVIO_FLAG_WRITE);
         if (status < 0) throw std::runtime_error("Could not open output file");
       }

       // Write the stream header, if any.
       status = avformat_write_header(format_context, nullptr);
       if (status < 0) throw std::runtime_error("Error occurred when opening output file");
     }

     AVStreamer(const char* url) : next_pts(0) {
       init_format_context(url);
       init_codec();
       init_stream();
       init_codec_context();
       init_frame();
       open_stream(url);
     }

     virtual ~AVStreamer() {
       avformat_free_context(format_context);
       avcodec_free_context(&codec_context);
       av_frame_free(&frame);
     }

     void send(cv::Mat const& image) {
       cv::cvtColor(image, image, CV_BGR2YUV);
       cv::Mat planes[3];
       cv::split(image, planes);
       cv::pyrDown(planes[1], planes[1]);
       cv::pyrDown(planes[2], planes[2]);

       if (av_frame_make_writable(frame) < 0) {
         throw std::runtime_error("Failed to make frame writable");
       }

       frame->data[0] = planes[0].data;
       frame->linesize[0] = planes[0].step;
       frame->data[1] = planes[1].data;
       frame->linesize[1] = planes[1].step;
       frame->data[2] = planes[2].data;
       frame->linesize[2] = planes[2].step;
       frame->pts = next_pts++;

       AVPacket packet;
       av_init_packet(&packet);

       int status = avcodec_send_frame(codec_context, frame);
       if (status < 0) throw std::runtime_error("Send frame failed");

       status = avcodec_receive_packet(codec_context, &packet);
       if (status == AVERROR(EAGAIN)) { return; }
       if (status < 0) throw std::runtime_error("Receive packet failed");

       av_packet_rescale_ts(&packet, codec_context->time_base, stream->time_base);
       packet.stream_index = stream->index;
       av_interleaved_write_frame(format_context, &packet);
     }
    };

    int main(int argc, char** argv) {
     av_register_all();
     avformat_network_init();

     auto url = argc == 2 ? argv[1] : "http://localhost:8081/supersecret";
     AVStreamer streamer(url);

     cv::VideoCapture video(0);
     assert(video.isOpened() && "Failed to open video");

     for (;;) {
       cv::Mat image;
       video >> image;
       streamer.send(image);
     }
    }
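
    One possible variant, sketched below and untested: instead of overwriting format_context->oformat->video_codec (oformat usually points at a shared, effectively read-only format descriptor), look up the MPEG-1 encoder directly and let the codec context carry the choice. The bit_rate, time_base and max_b_frames values below are taken from the command line above (-b:v 1000k, -framerate 25, -bf 0) rather than from the original code; the mismatch between those settings and the hard-coded 400000 and 1/30 values is another plausible source of the artifacts.

    // Hypothetical replacements for init_codec() and init_codec_context():
    // pick the mpeg1video encoder explicitly and mirror the CLI settings.
    void init_codec() {
      codec = avcodec_find_encoder(AV_CODEC_ID_MPEG1VIDEO);
      if (codec == nullptr) throw std::runtime_error("mpeg1video encoder not found");
    }

    void init_codec_context() {
      codec_context = avcodec_alloc_context3(codec);
      if (codec_context == nullptr) throw std::runtime_error("Failed to alloc encoding context");

      codec_context->codec_id = codec->id;          // AV_CODEC_ID_MPEG1VIDEO
      codec_context->bit_rate = 1000000;            // match -b:v 1000k
      codec_context->width = 640;
      codec_context->height = 480;
      stream->time_base = AVRational{1, 25};        // match -framerate 25
      codec_context->time_base = stream->time_base;
      codec_context->gop_size = 25;                 // one intra frame per second
      codec_context->max_b_frames = 0;              // match -bf 0
      codec_context->pix_fmt = AV_PIX_FMT_YUV420P;
      codec_context->mb_decision = 2;
      if (format_context->oformat->flags & AVFMT_GLOBALHEADER) {
        codec_context->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
      }
    }
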
  • Need Help Making JavaScript Discord Music Bot

    22 October 2018, by Gambino

    I have tried to use this code to make a Discord music bot, but I am getting an error telling me I need FFmpeg. How would I implement it into this code?

    index.js file

    const Discord = require('discord.js');
    const bot = new Discord.Client();
    const config = require("./config.json");
    const ytdl = require('ytdl-core');

    var youtube = require('./youtube.js');


    var ytAudioQueue = [];
    var dispatcher = null;

    bot.on('message', function(message) {
       var messageParts = message.content.split(' ');
       var command = messageParts[0].toLowerCase();
       var parameters = messageParts.splice(1, messageParts.length);

       switch (command) {

           case "-join" :
           message.reply("Attempting to join channel " + parameters[0]);
           JoinCommand(parameters[0], message);
           break;
       
           case "-play" :
           PlayCommand(parameters.join(" "), message);
           break;

           case "-playqueue":
           PlayQueueCommand(message);
           break;
       }
    });


       function PlayCommand(searchTerm) {
           if(bot.voiceConnections.array().length == 0) {
               var defaultVoiceChannel = bot.channels.find(val => val.type === 'voice').name;
               JoinCommand(defaultVoiceChannel);
           }
           youtube.search(searchTerm, QueueYtAudioStream);
       }

       function PlayQueueCommand(message) {
           var queueString = "";

            for(var x = 0; x < ytAudioQueue.length; x++) {
               queueString += ytAudioQueue[x].videoName + ", ";
           }
           queueString = queueString.substring(0, queueString.length - 2);
           message.reply(queueString);
       }

       function JoinCommand(ChannelName) {
           var voiceChannel = GetChannelByName(ChannelName);

           if (voiceChannel) {
               voiceChannel.join();
               console.log("Joined " + voiceChannel.name);
           }
           
           return voiceChannel;
           
       }

       /* Helper Methods */

       function GetChannelByName(name) {
           var channel = bot.channels.find(val => val.name === name);
           return channel;
       }

     

       function QueueYtAudioStream(videoId, videoName) {
           var streamUrl = youtube.watchVideoUrl + videoId;

           if (!ytAudioQueue.length) {
               ytAudioQueue.push(
                   {
                       'streamUrl' : streamUrl,
                       'videoName' : videoName
                   }
               );
               console.log('Queued audio ' + videoName);
               PlayStream(ytAudioQueue[0].streamUrl);
           }
           else {
               ytAudioQueue.push(
                   {
                       'streamUrl' : streamUrl,
                       'videoName' : videoName
                   }
               );
           }
           console.log("Queued audio " + videoName);
       }

       function PlayStream(streamUrl) {
           const streamOptions = {seek: 0, volume: 1};

           if (streamUrl) {
               const stream = ytdl(streamUrl, {filter: 'audioonly'});

               if (dispatcher == null) {
                   var voiceConnection = bot.voiceConnections.first();

                   if(voiceConnection) {
                       console.log("Now Playing " + ytAudioQueue[0].videoname);
                       dispatcher = bot.voiceConnections.first().playStream(stream, streamOptions);

                       dispatcher.on('end', () => {
                           dispatcher = null;
                           PlayNextStreamInQueue();
                       });

                       dispatcher.on('error', (err) => {
                           console.log(err);
                       });
                   }
               } else {
                   dispatcher = bot.voiceConnections.first().playStream(stream, streamOptions);
               }
               
           }
       }

       function PlayNextStreamInQueue() {
           ytAudioQueue.splice(0, 1);

           if (ytAudioQueue.length != 0) {
               console.log("now Playing " + ytAudioQueue[0].videoName);
               PlayStream(ytAudioQueue[0].streamUrl);
           }
       }


    bot.login(config.token);

    youtube.js file

    var request = require('superagent');

    const API_KEY = "My API KEY";
    const WATCH_VIDEO_URL = "https://www.youtube.com/watch?v=";

    exports.watchVideoUrl = WATCH_VIDEO_URL;

    exports.search = function search(searchKeywords, callback) {
      var requestUrl = 'https://www.googleapis.com/youtube/v3/search' + '?part=snippet&q=' + escape(searchKeywords) + '&key=' + API_KEY;

     request(requestUrl, (error, response) => {
        if (!error && response.statusCode == 200) {

         var body = response.body;
         if (body.items.length == 0) {
           console.log("I Could Not Find Anything!");
           return;
         }
         for (var item of body.items) {
           if (item.id.kind == 'youtube#video') {
             callback(item.id.videoId, item.snippet.title);
             return;
           }
         }
       } else {
         console.log("Unexpected error!");
         return;
       }
     });

     return;

    };

    The error I am getting when I try to run the code:

    Joined General

    C:\Discord Bot\node_modules\discord.js\src\client\voice\pcm\FfmpegConverterEngine.js:80
            throw new Error(
            ^

    Error: FFMPEG was not found on your system, so audio cannot be played. Please make
    sure FFMPEG is installed and in your PATH.