Advanced search

Media (0)

Keyword: - Tags -/optimisation

No media matching your criteria is available on the site.

Other articles (21)

  • About documents

    21 June 2013, by

    What should you do when a document fails processing, or when its rendering does not match expectations?
    Document stuck in the processing queue?
    Here is an ordered, empirical list of actions you can try to unblock the situation: restart processing of the document that fails; retry inserting the document on the MediaSPIP site; for a video or audio media item, rework the produced file with an editor or a transcoder; convert the document to a format (...)

  • Changing the publication date

    21 June 2013, by

    How do you change the publication date of a media item?
    You first need to add a "Publication date" field to the relevant form mask:
    Administer > Form mask configuration > Select "A media item"
    In the "Fields to add" section, check "Publication date"
    Click Save at the bottom of the page

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources, in standalone form.
    To get a working installation, all the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)

On other sites (3861)

  • The encoding of ffmpeg does not work on iOS

    25 May 2017, by Deric

    I would like to send a stream encoded with ffmpeg.
    The encode-and-transfer code below does not work.
    Before encoding, the packets play fine in VLC; once encoded, the packets do not play.
    I do not know what’s wrong.
    Please help me.

    AVOutputFormat *ofmt = NULL;
    //Input AVFormatContext and Output AVFormatContext
    AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
    AVPacket pkt;
    //const char *in_filename, *out_filename;
    int ret, i;
    int videoindex=-1;
    int frame_index=0;
    int64_t start_time=0;

    av_register_all();
    //Network
    avformat_network_init();
    //Input
    if ((ret = avformat_open_input(&ifmt_ctx, "rtmp://", 0, 0)) < 0) {
       printf( "Could not open input file.");
    }
    if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
       printf( "Failed to retrieve input stream information");
    }


    AVCodecContext *context = NULL;

    for(i = 0; i < ifmt_ctx->nb_streams; i++) {
       if(ifmt_ctx->streams[i]->codecpar->codec_type==AVMEDIA_TYPE_VIDEO){

           videoindex=i;

           AVCodecParameters *params = ifmt_ctx->streams[i]->codecpar;
           AVCodec *codec = avcodec_find_decoder(params->codec_id);
           if (codec == NULL)  { return; };

           context = avcodec_alloc_context3(codec);

           if (context == NULL) { return; };

           ret = avcodec_parameters_to_context(context, params);
           if(ret < 0){
               avcodec_free_context(&context);
           }

           context->framerate = av_guess_frame_rate(ifmt_ctx, ifmt_ctx->streams[i], NULL);

           ret = avcodec_open2(context, codec, NULL);
           if(ret < 0) {
               NSLog(@"avcodec open2 error");
               avcodec_free_context(&context);
           }

           break;
       }
    }
    av_dump_format(ifmt_ctx, 0, "rtmp://", 0);

    //Output

    avformat_alloc_output_context2(&ofmt_ctx, NULL, "flv", "rtmp://"); //RTMP
    //avformat_alloc_output_context2(&ofmt_ctx, NULL, "mpegts", out_filename);//UDP

    if (!ofmt_ctx) {
       printf( "Could not create output context\n");
       ret = AVERROR_UNKNOWN;
    }
    ofmt = ofmt_ctx->oformat;
    for (i = 0; i < ifmt_ctx->nb_streams; i++) {
       //Create output AVStream according to input AVStream
       AVStream *in_stream = ifmt_ctx->streams[i];
       AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
       if (!out_stream) {
           printf( "Failed allocating output stream\n");
           ret = AVERROR_UNKNOWN;
       }

       out_stream->time_base = in_stream->time_base;

       //Copy the settings of AVCodecContext
       ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
       if (ret < 0) {
           printf( "Failed to copy context from input to output stream codec context\n");
       }

       out_stream->codecpar->codec_tag = 0;
       if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER) {
           out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
       }
    }
    //Dump Format------------------
    av_dump_format(ofmt_ctx, 0, "rtmp://", 1);
    //Open output URL
    if (!(ofmt->flags & AVFMT_NOFILE)) {
       ret = avio_open(&ofmt_ctx->pb, "rtmp://", AVIO_FLAG_WRITE);
       if (ret < 0) {
           printf( "Could not open output URL ");
      }
    }
    //Write file header
    ret = avformat_write_header(ofmt_ctx, NULL);
    if (ret < 0) {
       printf( "Error occurred when opening output URL\n");
    }

    // Encoding
    AVCodec *codec;
    AVCodecContext *c;

    AVStream *video_st = avformat_new_stream(ofmt_ctx, 0);
    if(video_st == NULL){
       NSLog(@"video stream error");
    }
    video_st->time_base.num = 1;
    video_st->time_base.den = 25;


    codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    if(!codec){
       NSLog(@"avcodec find encoder error");
    }

    c = avcodec_alloc_context3(codec);
    if(!c){
       NSLog(@"avcodec alloc context error");
    }


    c->profile = FF_PROFILE_H264_BASELINE;
    c->width = ifmt_ctx->streams[videoindex]->codecpar->width;
    c->height = ifmt_ctx->streams[videoindex]->codecpar->height;
    c->time_base.num = 1;
    c->time_base.den = 25;
    c->bit_rate = 800000;
    //c->time_base = { 1,22 };
    c->pix_fmt = AV_PIX_FMT_YUV420P;
    c->thread_count = 2;
    c->thread_type = 2;

    AVDictionary *param = 0;

    av_dict_set(&param, "preset", "slow", 0);
    av_dict_set(&param, "tune", "zerolatency", 0);

    if (avcodec_open2(c, codec, &param) < 0) {   // pass the "preset"/"tune" options set above
       fprintf(stderr, "Could not open codec\n");
    }



    AVFrame *pFrame = av_frame_alloc();

    start_time=av_gettime();
    while (1) {

       AVPacket encoded_pkt;

       av_init_packet(&encoded_pkt);
       encoded_pkt.data = NULL;
       encoded_pkt.size = 0;

       AVStream *in_stream, *out_stream;
       //Get an AVPacket
       ret = av_read_frame(ifmt_ctx, &pkt);
       if (ret < 0) {
           break;
       }

       //FIX:No PTS (Example: Raw H.264)
       //Simple Write PTS
       if(pkt.pts==AV_NOPTS_VALUE){
           //Write PTS
           AVRational time_base1=ifmt_ctx->streams[videoindex]->time_base;
           //Duration between 2 frames (us)
           int64_t calc_duration=(double)AV_TIME_BASE/av_q2d(ifmt_ctx->streams[videoindex]->r_frame_rate);
           //Parameters
           pkt.pts=(double)(frame_index*calc_duration)/(double)(av_q2d(time_base1)*AV_TIME_BASE);
           pkt.dts=pkt.pts;
           pkt.duration=(double)calc_duration/(double)(av_q2d(time_base1)*AV_TIME_BASE);
       }
       //Important:Delay
       if(pkt.stream_index==videoindex){
           AVRational time_base=ifmt_ctx->streams[videoindex]->time_base;
           AVRational time_base_q={1,AV_TIME_BASE};
           int64_t pts_time = av_rescale_q(pkt.dts, time_base, time_base_q);
           int64_t now_time = av_gettime() - start_time;
           if (pts_time > now_time) {
               av_usleep(pts_time - now_time);
           }

       }

       in_stream  = ifmt_ctx->streams[pkt.stream_index];
       out_stream = ofmt_ctx->streams[pkt.stream_index];
       /* copy packet */
       //Convert PTS/DTS
       //pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
       //pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
       pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
       pkt.pos = -1;

       //Print to Screen
       if(pkt.stream_index==videoindex){
           //printf("Send %8d video frames to output URL\n",frame_index);
           frame_index++;
       }



       // Decode and Encode
       if(pkt.stream_index == videoindex) {

           ret = avcodec_send_packet(context, &pkt);

           if(ret<0){
               NSLog(@"avcode send packet error");
           }

           ret = avcodec_receive_frame(context, pFrame);
           if(ret<0){
               NSLog(@"avcodec receive frame error");
           }

           ret = avcodec_send_frame(c, pFrame);

           if(ret < 0){
               NSLog(@"avcodec send frame - %s", av_err2str(ret));
           }

           ret = avcodec_receive_packet(c, &encoded_pkt);

           if(ret < 0){
               NSLog(@"avcodec receive packet error");
           }

       }

       //ret = av_write_frame(ofmt_ctx, &pkt);

       //encoded_pkt.stream_index = pkt.stream_index;
       av_packet_rescale_ts(&encoded_pkt, c->time_base, ofmt_ctx->streams[videoindex]->time_base);


       ret = av_interleaved_write_frame(ofmt_ctx, &encoded_pkt);

       if (ret < 0) {
           printf( "Error muxing packet\n");
           break;
       }

       av_packet_unref(&encoded_pkt);
       av_packet_unref(&pkt);   // av_free_packet() is deprecated; unref instead

    }
    //Write file trailer
    av_write_trailer(ofmt_ctx);
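
    A note on the decode/encode loop above: with the send/receive API, avcodec_send_packet()/avcodec_receive_frame() and avcodec_send_frame()/avcodec_receive_packet() are not one-in/one-out. Either side can buffer several inputs (returning AVERROR(EAGAIN)) before producing any output, so every available frame and packet has to be drained in a loop rather than fetched exactly once per input. Note also that the loop above calls av_interleaved_write_frame() with encoded_pkt even for non-video packets, for which it was never filled. A minimal, untested sketch of the drain pattern, reusing context, c, pFrame, encoded_pkt, pkt, videoindex and ofmt_ctx from the code above:

    ret = avcodec_send_packet(context, &pkt);
    if (ret < 0)
       NSLog(@"avcodec send packet - %s", av_err2str(ret));

    // Drain every frame the decoder can produce for this packet.
    while (avcodec_receive_frame(context, pFrame) == 0) {
       if (avcodec_send_frame(c, pFrame) < 0)
           break;
       // Drain every packet the encoder can produce for this frame.
       while (avcodec_receive_packet(c, &encoded_pkt) == 0) {
           encoded_pkt.stream_index = videoindex;
           av_packet_rescale_ts(&encoded_pkt, c->time_base,
                                ofmt_ctx->streams[videoindex]->time_base);
           av_interleaved_write_frame(ofmt_ctx, &encoded_pkt);
           av_packet_unref(&encoded_pkt);
       }
    }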

  • An ffmpeg command works in cmd but not in Python using subprocess.call() or os.system()

    6 June 2018, by Starrysky

    I want to convert an .mp3 to a .wav. This is my command:
    ffmpeg -i a.mp3 -ar 16000 -ac 1 -acodec pcm_s16le a.wav

    It worked well in cmd

    C:\Users\starrysky\Documents\GitHub\bing_pic\html>ffmpeg -i a.mp3 -ar 16000 -ac 1 -acodec pcm_s16le a.wav
    ffmpeg version N-86482-gbc40674 Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 7.1.0 (GCC)
     configuration: --enable-gpl --enable-version3 --enable-cuda --enable-cuvid --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-zlib
     libavutil      55. 66.100 / 55. 66.100
     libavcodec     57. 99.100 / 57. 99.100
     libavformat    57. 73.100 / 57. 73.100
     libavdevice    57.  7.100 / 57.  7.100
     libavfilter     6. 92.100 /  6. 92.100
     libswscale      4.  7.101 /  4.  7.101
     libswresample   2.  8.100 /  2.  8.100
     libpostproc    54.  6.100 / 54.  6.100
    Input #0, mp3, from 'a.mp3':
     Metadata:
       encoder         : Lavf54.6.100
     Duration: 00:00:01.87, start: 0.000000, bitrate: 8 kb/s
       Stream #0:0: Audio: mp3, 8000 Hz, mono, s16p, 8 kb/s
    Stream mapping:
     Stream #0:0 -> #0:0 (mp3 (native) -> pcm_s16le (native))
    Press [q] to stop, [?] for help
    Output #0, wav, to 'a.wav':
     Metadata:
       ISFT            : Lavf57.73.100
       Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 16000 Hz, mono, s16, 256 kb/s
       Metadata:
         encoder         : Lavc57.99.100 pcm_s16le
    size=      59kB time=00:00:01.87 bitrate= 256.3kbits/s speed= 187x
    video:0kB audio:58kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.130208%

    but when I moved it into my python program, something strange happened.

    >>> C:\Users\starrysky\Documents\GitHub\bing_pic\html\
    'ffmpeg' is not recognized as an internal or external command,
    operable program or batch file.
    1 Command 'ffmpeg -i a.mp3 -ar 16000 -ac 1 -acodec pcm_s16le a.wav' returned non-zero exit status 1.
    File error, dear
    [WinError 2] The system cannot find the file specified: 'a.wav'

    This is part of my Python code:

    @bot.register(wife, RECORDING)
    def translate_sound(msg):
       msg.get_file(save_path='a.mp3')
       path = os.path.abspath('.')+'\\'
       print(path)
       try:
           subprocess.check_call('ffmpeg -i a.mp3 -ar 16000 -ac 1 -acodec pcm_s16le a.wav', shell=True)
           # ''
       except Exception as e:
           print(1, e)
       wav_to_text('a.wav')
       try:
           os.remove('a.wav')
       except Exception as e:
           print(e)

    # Call the Baidu speech recognition API
    def get_token():
       URL = 'http://openapi.baidu.com/oauth/2.0/token'
       _params = urllib.parse.urlencode({'grant_type': b'client_credentials',
                                         'client_id': b'',
                                         'client_secret': b''})
       _res = urllib.request.Request(URL, _params.encode())
       _response = urllib.request.urlopen(_res)
       _data = _response.read()
       _data = json.loads(_data)
       return _data['access_token']


    def wav_to_text(wav_file):
       try:
           wav_file = open(wav_file, 'rb')
       except IOError:
           print('File error, dear')
           return
       wav_file = wave.open(wav_file)
       n_frames = wav_file.getnframes()
       print('n_frames ', n_frames)
       frame_rate = wav_file.getframerate()
       print("frame_rate ", frame_rate)
       if n_frames == 1 or frame_rate not in (8000, 16000):
           print('Wrong format')
           return
       audio = wav_file.readframes(n_frames)
       seconds = n_frames/frame_rate+1
       minute = int(seconds/60 + 1)
       for i in range(0, minute):
           sub_audio = audio[i*60*frame_rate:(i+1)*60*frame_rate]
           base_data = base64.b64encode(sub_audio)
           data = {"format": "wav",
                   "token": get_token(),
                   "len": len(sub_audio),
                   "rate": frame_rate,
                   "speech": base_data.decode(),
                   "cuid": "B8-AC-6F-2D-7A-94",
                   "channel": 1}
           data = json.dumps(data)
           res = urllib.request.Request('http://vop.baidu.com/server_api',
                                 data.encode(),
                                 {'content-type': 'application/json'})
           response = urllib.request.urlopen(res)
           res_data = json.loads(response.read())
           try:
               print(res_data['result'][0])
           except Exception as e:
               print(e)

    What happened?
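
    Judging from the first lines of the pasted output, Windows cannot find ffmpeg when the command is launched from Python: "'ffmpeg' is not recognized as an internal or external command" means the PATH seen by the Python process does not contain the directory holding ffmpeg.exe, even though the interactive cmd session finds it. A hedged sketch of a common fix is to resolve the binary explicitly before calling it (the install path below is hypothetical):

    import shutil
    import subprocess

    # Resolve ffmpeg explicitly instead of relying on the PATH of the
    # Python process, which can differ from the PATH cmd.exe uses.
    ffmpeg = shutil.which('ffmpeg') or r'C:\ffmpeg\bin\ffmpeg.exe'  # hypothetical path

    # check=True raises CalledProcessError on a non-zero exit status.
    subprocess.run(
        [ffmpeg, '-i', 'a.mp3', '-ar', '16000', '-ac', '1',
         '-acodec', 'pcm_s16le', 'a.wav'],
        check=True,
    )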

  • Best approach to real time http streaming to HTML5 video client

    28 June 2017, by deandob

    I’m really stuck trying to understand the best way to stream real-time output of ffmpeg to an HTML5 client using node.js, as there are a number of variables at play and I don’t have a lot of experience in this space, having spent many hours trying different combinations.

    My use case is:

    1) An IP video camera’s RTSP H.264 stream is picked up by FFMPEG and remuxed into an mp4 container using the following FFMPEG settings in node, output to STDOUT. This is only run on the initial client connection, so that partial content requests don’t try to spawn FFMPEG again.

    liveFFMPEG = child_process.spawn("ffmpeg", [
                   "-i", "rtsp://admin:12345@192.168.1.234:554" , "-vcodec", "copy", "-f",
                   "mp4", "-reset_timestamps", "1", "-movflags", "frag_keyframe+empty_moov",
                   "-"   // output to stdout
                   ],  {detached: false});

    2) I use the node http server to capture the STDOUT and stream that back to the client upon a client request. When the client first connects I spawn the above FFMPEG command line then pipe the STDOUT stream to the HTTP response.

    liveFFMPEG.stdout.pipe(resp);

    I have also used the stream data event to write the FFMPEG data to the HTTP response, but it makes no difference:

    liveFFMPEG.stdout.on("data", function(data) {
           resp.write(data);
    });

    I use the following HTTP headers (which are also used, and work, when streaming pre-recorded files):

    var total = 999999999         // fake a large file
    var partialstart = 0
    var partialend = total - 1

    if (range !== undefined) {
       var parts = range.replace(/bytes=/, "").split("-");
       var partialstart = parts[0];
       var partialend = parts[1];
    }

    var start = parseInt(partialstart, 10);
    var end = partialend ? parseInt(partialend, 10) : total;   // fake a large file if no range request

    var chunksize = (end-start)+1;

    resp.writeHead(206, {
                     'Transfer-Encoding': 'chunked'
                    , 'Content-Type': 'video/mp4'
                    , 'Content-Length': chunksize // large size to fake a file
                    , 'Accept-Ranges': 'bytes ' + start + "-" + end + "/" + total
    });

    3) The client has to use HTML5 video tags.

    I have no problems streaming playback to the HTML5 client of a video file previously recorded with the above FFMPEG command line (but saved to a file instead of STDOUT), using fs.createReadStream with HTTP 206 partial content, so I know the FFMPEG stream is correct, and I can even see the video streaming live in VLC when connecting to the HTTP node server.

    However, trying to stream live from FFMPEG via node HTTP seems to be a lot harder, as the client displays one frame and then stops. I suspect the problem is that I am not setting up the HTTP connection to be compatible with the HTML5 video client. I have tried a variety of things, like using HTTP 206 (partial content) and 200 responses, and putting the data into a buffer and then streaming, with no luck, so I need to go back to first principles to ensure I’m setting this up the right way.

    Here is my understanding of how this should work; please correct me if I’m wrong:

    1) FFMPEG should be set up to fragment the output and use an empty moov (FFMPEG frag_keyframe and empty_moov mov flags). This means the client does not need the moov atom, which normally sits at the end of the file and isn’t available when streaming (there is no end of file); the trade-off is that seeking isn’t possible, which is fine for my use case.

    2) Even though I use MP4 fragments and an empty MOOV, I still have to use HTTP partial content, as the HTML5 player will wait until the entire stream is downloaded before playing, and a live stream never ends, so that is unworkable.

    3) I don’t understand why piping the STDOUT stream to the HTTP response doesn’t work when streaming live, yet if I save to a file I can stream that file easily to HTML5 clients using similar code. Maybe it’s a timing issue, as the FFMPEG spawn takes a second to start, connect to the IP camera and send chunks to node, and the node data events are irregular as well. However, the bytestream should be exactly the same as when saving to a file, and HTTP should be able to cater for delays.

    4) When checking the network log from the HTTP client while streaming an MP4 file created by FFMPEG from the camera, I see there are 3 client requests: a general GET request for the video, to which the HTTP server returns about 40Kb, then a partial content request with a byte range for the last 10K of the file, then a final request for the bits in the middle not yet loaded. Maybe the HTML5 client, once it receives the first response, is asking for the last part of the file in order to load the MP4 MOOV atom? If that is the case it won’t work for streaming, as there is no MOOV atom and no end of file.

    5) When checking the network log while trying to stream live, I get an aborted initial request with only about 200 bytes received, then a re-request, again aborted after 200 bytes, and a third request which is only 2K long. I don’t understand why the HTML5 client would abort the request, as the bytestream is exactly the same as the one I can successfully stream from a recorded file. It also seems node isn’t sending the rest of the FFMPEG stream to the client, yet I can see the FFMPEG data in the .on event routine, so it is getting to the FFMPEG node HTTP server.

    6) Although I think piping the STDOUT stream to the HTTP response buffer should work, do I have to build an intermediate buffer and stream that lets the HTTP partial content client requests work properly, as they do when (successfully) reading a file? I think this is the main reason for my problems, but I’m not exactly sure how best to set that up in Node. And I don’t know how to handle a client request for the data at the end of the file, as there is no end of file.

    7) Am I on the wrong track in trying to handle 206 partial content requests, and should this work with normal 200 HTTP responses? HTTP 200 responses work fine for VLC, so I suspect the HTML5 video client will only work with partial content requests? (A sketch of the 200-only approach follows this list.)
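
    For reference, a minimal sketch of the 200-only approach point 7 asks about: serve the live fragmented MP4 with a plain 200 and no byte ranges, letting node switch to chunked transfer automatically because no Content-Length is set. The camera URL and FFMPEG flags are the ones from the question; the port is hypothetical, and this is a sketch of the idea rather than a tested implementation:

    var http = require('http');
    var child_process = require('child_process');

    http.createServer(function (req, res) {
        // Plain 200 with no Content-Length: a live stream has no known
        // size, so node falls back to chunked transfer encoding.
        res.writeHead(200, { 'Content-Type': 'video/mp4' });

        // Fragmented MP4 so the client never waits for a trailing moov atom.
        var liveFFMPEG = child_process.spawn("ffmpeg", [
            "-i", "rtsp://admin:12345@192.168.1.234:554", "-vcodec", "copy",
            "-f", "mp4", "-reset_timestamps", "1",
            "-movflags", "frag_keyframe+empty_moov",
            "-"   // output to stdout
        ], { detached: false });

        liveFFMPEG.stdout.pipe(res);

        // Stop ffmpeg when the viewer disconnects.
        req.on('close', function () { liveFFMPEG.kill(); });
    }).listen(8080);   // hypothetical port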

    As I’m still learning this stuff it’s difficult to work through the various layers of this problem (FFMPEG, node, streaming, HTTP, HTML5 video), so any pointers will be greatly appreciated. I have spent hours researching on this site and the net, and I have not come across anyone who has been able to do real-time streaming in node, but I can’t be the first, and I think this should be able to work (somehow!).