
Other articles (46)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • Taking part in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to sign up to the translators' mailing list to ask for more information.
    MediaSPIP is currently only available in French and (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (9055)

  • Streaming binary data with FFmpeg

    17 November 2016, by diAblo

    I am using FFmpeg in a C++ library to live stream video and audio. What I want is to stream some binary data as a separate stream.

    If I'm reading correctly, the MP4 container supports "private streams", which can contain any kind of data. However, I can't find any information on how to add such a stream with FFmpeg. My idea is to have a stream of type AVMEDIA_TYPE_DATA that uses the codec AV_CODEC_ID_BIN_DATA.

    What I want is very closely described here, but it wasn’t answered.
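
    For reference, a minimal sketch of how such a stream might be declared and written with the libavformat API follows. The helper names, the millisecond time base and the pre-existing output context fmt_ctx are assumptions, whether the mov/mp4 muxer accepts this codec id depends on the FFmpeg version, and error handling is omitted.

     #include <libavformat/avformat.h>

     // Add a binary data stream to an output context that already has its
     // video/audio streams set up.
     static AVStream *add_data_stream(AVFormatContext *fmt_ctx) {
       AVStream *st = avformat_new_stream(fmt_ctx, NULL);
       if (!st)
         return NULL;
       st->codecpar->codec_type = AVMEDIA_TYPE_DATA;
       st->codecpar->codec_id   = AV_CODEC_ID_BIN_DATA;
       st->time_base = (AVRational){1, 1000};  // millisecond timestamps (assumption)
       return st;
     }

     // Wrap a binary payload in a packet and interleave it with the other streams.
     static int write_data_packet(AVFormatContext *fmt_ctx, AVStream *st,
                                  const uint8_t *buf, int size, int64_t pts_ms) {
       AVPacket pkt;
       av_init_packet(&pkt);
       pkt.data = (uint8_t *) buf;  // the payload itself is not copied here
       pkt.size = size;
       pkt.stream_index = st->index;
       pkt.pts = pkt.dts = av_rescale_q(pts_ms, (AVRational){1, 1000}, st->time_base);
       return av_interleaved_write_frame(fmt_ctx, &pkt);
     }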

  • ffmpeg live stream latency

    22 August 2014, by Alex Fu

    I'm currently working on live streaming video from device A (source) to device B (destination) directly via a local WiFi network.

    I've built FFMPEG to work on the Android platform and I have been able to stream video from A -> B successfully, at the expense of latency (it takes about 20 seconds for a movement or change to appear on screen, as if the video were 20 seconds behind actual events).

    Initial start-up is around 4 seconds. I've been able to trim that initial start-up time by lowering probesize and max_analyze_duration, but the 20-second delay is still there.
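
    For reference, those probing parameters can also be passed as demuxer options when the input is opened. A minimal sketch, with illustrative values, assuming the question's formatCtx variable and a url string for the stream address:

     AVDictionary *opts = NULL;
     av_dict_set(&opts, "probesize", "32768", 0);         // bytes to probe before playback starts
     av_dict_set(&opts, "analyzeduration", "500000", 0);  // microseconds spent analysing streams
     av_dict_set(&opts, "fflags", "nobuffer", 0);         // reduce demuxer-side buffering
     if (avformat_open_input(&formatCtx, url, NULL, &opts) < 0) {
       LOGE("Could not open input");
     }
     av_dict_free(&opts);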

    I've sprinkled some timing events around the code to try and figure out where the most time is being spent...

    • naInit : 0.24575 sec
    • naSetup : 0.043705 sec

    The first video frame isn't obtained until 0.035342 sec after the decodeAndRender function is called. Subsequent decoding times are charted here: http://jsfiddle.net/uff0jdf7/1/ (interactive graph)

    From all the timing data I've recorded, nothing really jumps out at me, unless I'm doing the timing wrong. Some have suggested that I am buffering too much data; however, as far as I can tell, I'm only buffering one image at a time. Is this too much?

    Also, the incoming source video is in the P264 format; apparently it's a custom implementation of H.264.

    jint naSetup(JNIEnv *pEnv, jobject pObj, int pWidth, int pHeight) {
     width = pWidth;
     height = pHeight;

     //create a bitmap as the buffer for frameRGBA
     bitmap = createBitmap(pEnv, pWidth, pHeight);
     if (AndroidBitmap_lockPixels(pEnv, bitmap, &pixel_buffer) < 0) {
       LOGE("Could not lock bitmap pixels");
       return -1;
     }

     //get the scaling context
     sws_ctx = sws_getContext(codecCtx->width, codecCtx->height, codecCtx->pix_fmt,
         pWidth, pHeight, AV_PIX_FMT_RGBA, SWS_BILINEAR, NULL, NULL, NULL);

     // Assign appropriate parts of bitmap to image planes in pFrameRGBA
     // Note that pFrameRGBA is an AVFrame, but AVFrame is a superset
     // of AVPicture
     av_image_fill_arrays(frameRGBA->data, frameRGBA->linesize, pixel_buffer, AV_PIX_FMT_RGBA, pWidth, pHeight, 1);
     return 0;
    }

    void decodeAndRender(JNIEnv *pEnv) {
     ANativeWindow_Buffer windowBuffer;
     AVPacket packet;
     int frame_count = 0;
     int got_frame;

     while (!stop && av_read_frame(formatCtx, &packet) >= 0) {
       // Is this a packet from the video stream?
       if (packet.stream_index == video_stream_index) {

         // Decode video frame
         avcodec_decode_video2(codecCtx, decodedFrame, &got_frame, &packet);

         // Did we get a video frame?
         if (got_frame) {
           // Convert the image from its native format to RGBA
           sws_scale(sws_ctx, (uint8_t const * const *) decodedFrame->data,
               decodedFrame->linesize, 0, codecCtx->height, frameRGBA->data,
               frameRGBA->linesize);

           // lock the window buffer
           if (ANativeWindow_lock(window, &windowBuffer, NULL) < 0) {
             LOGE("Cannot lock window");
           } else {
             // draw the frame on buffer
             int h;
             for (h = 0; h < height; h++) {
               memcpy(windowBuffer.bits + h * windowBuffer.stride * 4,
                      pixel_buffer + h * frameRGBA->linesize[0],
                      width * 4);
             }
             // unlock the window buffer and post it to display
             ANativeWindow_unlockAndPost(window);

             // count number of frames
             ++frame_count;
           }
         }
       }

       // Free the packet that was allocated by av_read_frame
       av_free_packet(&packet);
     }

     LOGI("Total # of frames decoded and rendered %d", frame_count);
    }

  • Live Stream using .m3u8 and .ts files with iPhone as server

    26 February 2015, by Bhumit

    I am trying to live stream from the iPhone camera. I have done some research and found that I can use .m3u8 playlists for streaming live video, and that they should reference .ts (MPEG-2 transport stream) files.

    Now, the file I have on my iPhone is an .mp4 file and it does not work with .m3u8, so I figured I will have to convert the .mp4 to .ts for that, but I have not succeeded in doing so.

    I found that it is possible to convert video with the ffmpeg library, as mentioned in the article here. I have successfully imported the ffmpeg library but am not able to figure out how to use it to convert a video, as I am using it for the first time.

    Another thing: the Apple documentation says

    There are a number of hardware and software encoders that can create
    MPEG-2 transport streams carrying MPEG-4 video and AAC audio in real
    time.

    What is being said here? Is there any other way I can use .mp4 files for live streaming from iOS without converting them?

    Let me know if I am not clear; I can provide more information. Any suggestion is appreciated. I would like to know whether I am on the right path here.

    EDIT

    I am adding more info to my question. Basically, what I am asking is this: we can convert an .mp4 video to .ts using the following command

    ffmpeg -i file.mp4 -acodec libfaac -vcodec libx264 -an -map 0 -f segment -segment_time 10 -segment_list test.m3u8 -segment_format mpegts -vbsf h264_mp4toannexb -flags -global_header stream%05d.ts

    How can I use the ffmpeg library to do on iOS what this command does?
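
    For reference, a rough sketch of how the segmenting part of that command might be set up through the libavformat API is shown below. It uses the segment muxer with the same options, assumes stream copy rather than re-encoding, and the variable names are illustrative; stream setup and error handling are mostly omitted.

     AVFormatContext *out_ctx = NULL;
     avformat_alloc_output_context2(&out_ctx, NULL, "segment", "stream%05d.ts");

     AVDictionary *opts = NULL;
     av_dict_set(&opts, "segment_time", "10", 0);         // 10-second segments
     av_dict_set(&opts, "segment_list", "test.m3u8", 0);  // playlist to generate
     av_dict_set(&opts, "segment_format", "mpegts", 0);   // each segment is a .ts file

     // ... create output streams copied from the input's codec parameters and
     // apply the h264_mp4toannexb bitstream filter to the H.264 packets, then:
     if (avformat_write_header(out_ctx, &opts) < 0) {
       // handle the error
     }
     av_dict_free(&opts);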