Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (3)

  • Encoding and transformation into formats readable on the Internet

    10 April 2011

    MediaSPIP transforms and re-encodes uploaded documents in order to make them readable on the Internet and automatically usable without any intervention from the content creator.
    Videos are automatically encoded into the formats supported by HTML5: MP4, Ogv and WebM. The "MP4" version is also used for the Flash fallback player needed by older browsers.
    Audio documents are likewise re-encoded into the two formats usable with HTML5: MP3 and Ogg. The "MP3" version (...)

  • Media quality after processing

    21 June 2013, by

    Setting up the software that processes the media correctly is important for striking a balance between the parties involved (the host's bandwidth, the media quality for the editor and the visitor, accessibility for the visitor). How should you set the quality of your media?
    The higher the media quality, the more bandwidth is used. A visitor with a low-bandwidth Internet connection will have to wait longer. Conversely, the lower the media quality, the more degraded the media becomes, even (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (3589)

  • DXGI Desktop Duplication: encoding frames to send them over the network

    13 November 2016, by prazuber

    I’m trying to write an app which will capture a video stream of the screen and send it to a remote client. I’ve found out that the best way to capture a screen on Windows is to use DXGI Desktop Duplication API (available since Windows 8). Microsoft provides a neat sample which streams duplicated frames to screen. Now, I’ve been wondering what is the easiest, but still relatively fast way to encode those frames and send them over the network.

    The frames come from AcquireNextFrame with a surface that contains the desktop bitmap, plus metadata describing the dirty and move regions that were updated. From here, I have a couple of options:

    1. Extract a bitmap from the DirectX surface and then use an external library like ffmpeg to encode the series of bitmaps to H.264 and send it over RTSP (a rough sketch of the libavcodec side follows this list). While straightforward, I fear this method will be too slow, as it doesn't take advantage of any native Windows methods. Converting a D3D texture to an ffmpeg-compatible bitmap also seems like unnecessary work.
    2. From this answer: convert the D3D texture to an IMFSample and use Media Foundation's SinkWriter to encode the frame. I found this tutorial on video encoding, but I haven't yet found a way to get each encoded frame immediately and send it, instead of dumping all of them to a video file.
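
    For option 1, here is a minimal sketch (not from the question or from Microsoft's sample; the encoder settings, the hypothetical send_packet() sender and the assumption that frames have already been converted to YUV420P are all assumptions) of what the libavcodec side could look like with the current send/receive API. The captured BGRA surface would still have to be copied to CPU memory and converted, e.g. with libswscale, before being fed in:

    extern "C" {
    #include <libavcodec/avcodec.h>
    }

    // Hypothetical sender; stands in for whatever RTSP/RTP stack ends up being used.
    void send_packet(const uint8_t* data, int size);

    AVCodecContext* open_h264_encoder(int width, int height, int fps) {
      const AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);  // e.g. libx264
      AVCodecContext* ctx = avcodec_alloc_context3(codec);
      ctx->width     = width;
      ctx->height    = height;
      ctx->time_base = {1, fps};
      ctx->framerate = {fps, 1};
      ctx->pix_fmt   = AV_PIX_FMT_YUV420P;  // DXGI gives BGRA; convert before encoding
      ctx->gop_size  = fps;                 // one keyframe per second (arbitrary choice)
      if (avcodec_open2(ctx, codec, nullptr) < 0)
        return nullptr;
      return ctx;
    }

    // Push one frame, then drain every packet the encoder has ready.
    bool encode_and_send(AVCodecContext* ctx, AVFrame* yuv_frame) {
      if (avcodec_send_frame(ctx, yuv_frame) < 0)
        return false;
      AVPacket* pkt = av_packet_alloc();
      while (avcodec_receive_packet(ctx, pkt) == 0) {
        send_packet(pkt->data, pkt->size);  // encoded H.264 bitstream, ready to packetize
        av_packet_unref(pkt);
      }
      av_packet_free(&pkt);
      return true;
    }

    Whether the copy-to-CPU and software encode is fast enough, compared to a GPU-side Media Foundation encoder, is exactly the trade-off the two options raise.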

    Since I haven't done anything like this before, I'm asking whether I'm moving in the right direction. In the end, I want a simple, preferably low-latency, desktop-capture video stream that I can view from a remote device.

    Also, I'm wondering if I can make use of the dirty and move regions provided by Desktop Duplication. Instead of encoding the full frame, I could send those regions over the network and do the processing on the client side, but this means the client has to have DirectX 11.1 or higher available, which is impossible if I want to stream to a mobile platform.
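
    For reference, reading that metadata out of the duplication interface looks roughly like this (a sketch, assuming duplication is an IDXGIOutputDuplication* already set up as in Microsoft's sample; buffer handling and error checking are simplified):

    #include <dxgi1_2.h>
    #include <vector>

    // Acquire one frame and read its dirty/move metadata.
    bool read_frame_metadata(IDXGIOutputDuplication* duplication) {
      DXGI_OUTDUPL_FRAME_INFO info = {};
      IDXGIResource* resource = nullptr;
      if (FAILED(duplication->AcquireNextFrame(500, &info, &resource)))
        return false;

      if (info.TotalMetadataBufferSize > 0) {
        std::vector<BYTE> buffer(info.TotalMetadataBufferSize);

        // Move rectangles first...
        UINT moveBytes = 0;
        duplication->GetFrameMoveRects((UINT)buffer.size(),
            (DXGI_OUTDUPL_MOVE_RECT*)buffer.data(), &moveBytes);
        UINT moveCount = moveBytes / sizeof(DXGI_OUTDUPL_MOVE_RECT);

        // ...then dirty rectangles in the remainder of the buffer.
        UINT dirtyBytes = 0;
        duplication->GetFrameDirtyRects((UINT)buffer.size() - moveBytes,
            (RECT*)(buffer.data() + moveBytes), &dirtyBytes);
        UINT dirtyCount = dirtyBytes / sizeof(RECT);

        // Only these moveCount + dirtyCount regions changed; they could be copied
        // out of the desktop texture and sent instead of a full frame.
      }

      if (resource) resource->Release();
      duplication->ReleaseFrame();  // must be released before the next AcquireNextFrame
      return true;
    }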

  • AVSampleBufferDisplayLayer with VTDecompressionSession

    7 July 2016, by user1784317

    I've been struggling with AVSampleBufferDisplayLayer being really choppy with a lot of motion. When there was motion in my live stream, it would become pixelated and half frozen, with multiple frames displaying at once. However, once I added the following piece of code, everything was solved:

    VTDecodeFrameFlags flags = kVTDecodeFrame_EnableAsynchronousDecompression
                             | kVTDecodeFrame_EnableTemporalProcessing;
    VTDecodeInfoFlags flagOut;
    VTDecompressionSessionDecodeFrame(decompressionSession, sampleBuffer, flags,
                                      (void*)CFBridgingRetain(NULL), &flagOut);

    Note that I did create the decompression session before, but I don't actually do anything in the callback. I am still calling enqueueSampleBuffer: on AVSampleBufferDisplayLayer, and that is how the video is displayed on the screen.

    Do you have to call VTDecompressionSessionDecodeFrame for AVSampleBufferDisplayLayer to display correctly? I thought AVSampleBufferDisplayLayer would use VTDecompressionSessionDecodeFrame internally. Is this due to being on the iOS Simulator?
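
    For context, creating such a session with a do-nothing callback looks roughly like this (a sketch, not the asker's code; formatDescription is assumed to be a CMVideoFormatDescriptionRef built from the stream's parameter sets):

    #include <VideoToolbox/VideoToolbox.h>

    // Do-nothing output callback, matching "I don't actually do anything in the callback".
    static void decompressionOutputCallback(void *decompressionOutputRefCon,
                                            void *sourceFrameRefCon,
                                            OSStatus status,
                                            VTDecodeInfoFlags infoFlags,
                                            CVImageBufferRef imageBuffer,
                                            CMTime presentationTimeStamp,
                                            CMTime presentationDuration) {
        // Decoded pixel buffers arrive here; this sketch ignores them.
    }

    static VTDecompressionSessionRef createSession(CMVideoFormatDescriptionRef formatDescription) {
        VTDecompressionOutputCallbackRecord callback = { decompressionOutputCallback, NULL };
        VTDecompressionSessionRef session = NULL;
        VTDecompressionSessionCreate(kCFAllocatorDefault,
                                     formatDescription,
                                     NULL,   // decoder specification
                                     NULL,   // destination image buffer attributes
                                     &callback,
                                     &session);
        return session;
    }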

  • FFmpeg decoder yuv420p

    4 December 2015, by user2466514

    I'm working on a yuv420p video player with ffmpeg, but it's not working and I can't find out why. I've spent the whole week on it...

    So I have a test which just decodes some frames and reads them back, but the output always differs, and it's really weird.

    I use a video (MP4, yuv420p) in which one more pixel is colored black in each frame:

    To get the video, paste http://sendvid.com/b1sgf8r1 into a site like http://www.telechargerunevideo.com/en/

    VideoContext is just a little struct:

    struct  VideoContext {

    unsigned int      currentFrame;
    std::size_t       size;
    int               width;
    int               height;
    bool              pause;
    AVFormatContext*  formatCtx;
    AVCodecContext*   codecCtxOrig;
    AVCodecContext*   codecCtx;
    int               streamIndex;
    };

    So I have a function to count the number of black pixels:

    std::size_t  checkFrameNb(const AVFrame* frame) {

       std::size_t  nb = 0;

       for (int y = 0; y < frame->height; ++y) {
         for (int x = 0 ; x < frame->width; ++x) {

           if (frame->data[0][(y * frame->linesize[0]) + x] == BLACK_FRAME.y
               && frame->data[1][(y / 2 * frame->linesize[1]) + x / 2] == BLACK_FRAME.u
               && frame->data[2][(y / 2 * frame->linesize[2]) + x / 2] == BLACK_FRAME.v)
             ++nb;
         }
       }
       return nb;
     }

    And this is how I decode one frame:

    const AVFrame*  VideoDecoder::nextFrame(entities::VideoContext& context) {

     int frameFinished;
     AVPacket packet;

     // Allocate video frame
     AVFrame*  frame = av_frame_alloc();
     if(frame == nullptr)
       throw;

     // Initialize frame->linesize
     avpicture_fill((AVPicture*)frame, nullptr, AV_PIX_FMT_YUV420P, context.width, context.height);

     while(av_read_frame(context.formatCtx, &packet) >= 0) {

       // Is this a packet from the video stream?
       if(packet.stream_index == context.streamIndex) {
         // Decode video frame
         avcodec_decode_video2(context.codecCtx, frame, &frameFinished, &packet);

         // Did we get a video frame?
         if(frameFinished) {

           // Free the packet that was allocated by av_read_frame
           av_free_packet(&packet);
           ++context.currentFrame;
           return frame;
         }
       }
     }

     // Free the packet that was allocated by av_read_frame
     av_free_packet(&packet);
     throw core::GlobalException("nextFrame", "Frame decode failed");
    }

    Is there already something wrong?

    Maybe the context initialization will be useful:

    entities::VideoContext  VideoLoader::loadVideoContext(const char* file,
                                                         const int width,
                                                         const int height) {

     entities::VideoContext  context;

     // Register all formats and codecs
     av_register_all();

     context.formatCtx = avformat_alloc_context();


     // Open video file
     if(avformat_open_input(&context.formatCtx, file, nullptr, 0) != 0)
       throw; // Couldn't open file

     // Retrieve stream information
     if(avformat_find_stream_info(context.formatCtx, nullptr) > 0)
       throw; // Couldn't find stream information

     // Dump information about file onto standard error
     //av_dump_format(m_formatCtx, 0, file, 1);


     // Find the first video stream because we don't need more
     for(unsigned int i = 0; i < context.formatCtx->nb_streams; ++i)
       if(context.formatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
         context.streamIndex = i;
         context.codecCtx = context.formatCtx->streams[i]->codec;
         break;
       }
     if(context.codecCtx == nullptr)
       throw; // Didn't find a video stream


     // Find the decoder for the video stream
     AVCodec*  codec = avcodec_find_decoder(context.codecCtx->codec_id);
     if(codec == nullptr)
       throw; // Codec not found
     // Copy context
     if ((context.codecCtxOrig = avcodec_alloc_context3(codec)) == nullptr)
       throw;
     if(avcodec_copy_context(context.codecCtxOrig, context.codecCtx) != 0)
       throw; // Error copying codec context
     // Open codec
     if(avcodec_open2(context.codecCtx, codec, nullptr) < 0)
       throw; // Could not open codec

     context.currentFrame = 0;
     decoder::VideoDecoder::setVideoSize(context);
     context.pause = false;
     context.width = width;
     context.height = height;

     return std::move(context);
    }

    I know it's not a small piece of code; if you have any idea how to make the example more concise, go ahead.

    And if someone has an idea about this issue, here is my output:

    9 - 10 - 12 - 4 - 10 - 14 - 11 - 8 - 9 - 10

    But I want:

    1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 - 10

    PS: the code that gets the fps and the video size is copied and pasted from OpenCV.