
Other articles (46)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or higher. If in doubt, contact the administrator of your MédiaSpip to find out.

  • Accepted formats

    28 January 2010

    The following commands give information about the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    As a first step, we (...)

  • Customizing the display of my Médiaspip

    27 May 2013

    You can modify the template configuration to customize your Médiaspip. See also more information by following this link.
    How do I remove the view count displayed for a media item?
    Administrer > Gestion du squelette > Pages des articles et médias: under "Informations non affichées sur les pages de médias", check the items you do not want displayed.
    How do I remove the title of my Médiaspip from the horizontal banner?
    Administrer > Gestion du squelette > (...)

On other sites (7437)

  • Frames are different when extracted from FFMPEG and Android Tablets (Through TextureView)

    20 December 2019, by Keyang

    I am extracting frames from the same clip with FFMPEG and on Android tablets. The clip is encoded in H.264 with pixel format yuv420p. The frames produced on the two ends are visually different; see below.

    [Image: frame from FFMPEG]

    [Image: frame from Android TextureView]

    Both frames are at 72x72 resolution, but the frame from the Android tablet shows noticeably less detail than the one from FFMPEG (e.g. the lady’s face looks ’smoother’ in the Android frame).

    Frame extraction with FFMPEG looks like this:

    ffmpeg -i sample.mp4 ./frames/%05d.png
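
    If the 72x72 images come from different scalers on each side, forcing a matching algorithm on the FFMPEG side may be worth trying. A hypothetical variant of the command above, assuming the GPU path uses bilinear filtering:

    ffmpeg -i sample.mp4 -vf "scale=72:72:flags=bilinear" ./frames/%05d.png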

    On the Android device, a TextureView is used to render the decoded video onto an OpenGL texture, and TextureView.getBitmap() is called to retrieve the bitmap whenever onSurfaceTextureUpdated fires. So basically:

    // mediaPlayer is a MediaPlayer set up properly
    // textureView is a TextureView component

    mediaPlayer.setSurface(textureViewSurface)

    onSurfaceTextureUpdated -> {
        bmp = textureView.getBitmap()
        bmp.compress(PNG, 100, outputStream)
    }

    FFMPEG works in sRGB internally while OpenGL works in linear RGB. I have tried adjusting gamma, but it does not work quite well.
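
    For reference, a minimal sketch of the standard sRGB transfer functions (IEC 61966-2-1), in case the mismatch really is a gamma issue; this is illustrative C++, not code from either pipeline:

    #include <cmath>

    // decode: gamma-encoded sRGB component (0..1) to linear light
    float srgbToLinear(float c) {
        return (c <= 0.04045f) ? c / 12.92f
                               : std::pow((c + 0.055f) / 1.055f, 2.4f);
    }

    // encode: linear component (0..1) back to gamma-encoded sRGB
    float linearToSrgb(float c) {
        return (c <= 0.0031308f) ? c * 12.92f
                                 : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
    }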

    Does anyone know the reason, and how to resolve the issue so that the frames extracted on both ends look the same?

  • How to fill an AVFrame structure in order to encode a YUY2 video (or UYVY) into H265

    22 April, by Rich Deng

    I want to compress a video stream in YUY2 or UYVY format to, say, H265. If I understand the answers given in this thread correctly, I should be able to use the function av_image_fill_arrays() to fill the data and linesize arrays of an AVFrame object, call avcodec_send_frame(), and then avcodec_receive_packet() to get the encoded data:

    bool VideoEncoder::Init(const AM_MEDIA_TYPE* pMediaType)
{
    // we should have a valid pointer
    if (pMediaType)
    {
        m_mtInput.Empty();
        m_mtInput.Set(*pMediaType);
    }
    else
        return false;

    // find the encoder and allocate its context
    m_pCodec = m_spAVCodecDlls->avcodec_find_encoder(AV_CODEC_ID_HEVC);
    m_pCodecCtx = m_spAVCodecDlls->avcodec_alloc_context3(m_pCodec);
    if (!m_pCodec || !m_pCodecCtx)
    {
        Log.Log(_T("Failed to find or allocate codec context!"));
        return false;
    }

    AVPixelFormat ePixFmtInput = GetInputPixelFormat();
    if (CanConvertInputFormat(ePixFmtInput) == false)
    {
        return false;
    }

    // we are able to convert
    // so continue with setting it up
    int nWidth = m_mtInput.GetWidth();
    int nHeight = m_mtInput.GetHeight();

    // Set encoding parameters

    // Set bitrate (4 Mbps for 1920x1080)
    m_pCodecCtx->bit_rate = (((int64)4000000 * nWidth / 1920) * nHeight / 1080);  

    m_pCodecCtx->width = nWidth;  
    m_pCodecCtx->height = nHeight;


    // use the DirectShow reference time (100 ns units) as time_base
    m_pCodecCtx->time_base.den = 10000000;
    m_pCodecCtx->time_base.num = 1;

    SetAVRational(m_pCodecCtx->framerate, m_mtInput.GetFrameRate());
    //m_pCodecCtx->framerate = (AVRational){ 30, 1 };
    m_pCodecCtx->gop_size = 10;  // GOP size
    m_pCodecCtx->max_b_frames = 1;

    // set pixel format
    m_pCodecCtx->pix_fmt = ePixFmtInput;  // YUV 4:2:0 format or YUV 4:2:2
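    // note (assumption): many HEVC encoders accept only planar formats; the
    // encoder's supported list (m_pCodec->pix_fmts) is worth checking before
    // feeding packed YUY2/UYVY directly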

    // Open the codec
    if (m_spAVCodecDlls->avcodec_open2(m_pCodecCtx, m_pCodec, NULL) < 0)
    {
        return false;
    }

    return true;
}

bool VideoEncoder::AllocateFrame()
{

    m_pFrame = m_spAVCodecDlls->av_frame_alloc();
    if (m_pFrame == NULL)
    {
        Log.Log(_T("Failed to allocate frame object!"));
        return false;
    }

    m_pFrame->format = m_pCodecCtx->pix_fmt;
    m_pFrame->width = m_pCodecCtx->width;
    m_pFrame->height = m_pCodecCtx->height;

    m_pFrame->time_base.den = m_pCodecCtx->time_base.den;
    m_pFrame->time_base.num = m_pCodecCtx->time_base.num;


    return true;
}

bool VideoEncoder::Encode(IMediaSample* pSample)
{
    if (m_pFrame == NULL)
    {
        return false;
    }

    // get the time stamps
    REFERENCE_TIME rtStart, rtEnd;
    HRESULT hr = pSample->GetTime(&rtStart, &rtEnd);
    m_rtInputFrameStart = rtStart;
    m_rtInputFrameEnd = rtEnd;


    // get length
    int nLength = pSample->GetActualDataLength();

    // get pointer to actual sample data
    uint8_t* pData = NULL;
    hr = pSample->GetPointer(&pData);

    if (FAILED(hr) || NULL == pData)
        return false;

    m_pFrame->flags = (S_OK == pSample->IsSyncPoint()) ? (m_pFrame->flags | AV_FRAME_FLAG_KEY) : (m_pFrame->flags & ~AV_FRAME_FLAG_KEY);
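    // note (assumption): encoders generally treat AV_FRAME_FLAG_KEY on input as
    // informational; setting m_pFrame->pict_type = AV_PICTURE_TYPE_I is the
    // usual way to request a keyframe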

    // clear old data
    for (int n = 0; n < AV_NUM_DATA_POINTERS; n++)
    {
        m_pFrame->data[n] = NULL;// (uint8_t*)aryData[n];
        m_pFrame->linesize[n] = 0;// = aryStride[n];
    }


    int nRet = 0;
    int nStride = m_mtInput.GetStride();
    // point the frame's data[]/linesize[] at the packed sample buffer
    nRet = m_spAVCodecDlls->av_image_fill_arrays(m_pFrame->data, m_pFrame->linesize, pData, (AVPixelFormat)m_pFrame->format, m_pFrame->width, m_pFrame->height, 32);
    if (nRet < 0)
    {
        return false;
    }

    m_pFrame->pts = (int64_t) rtStart;
    m_pFrame->duration = rtEnd - rtStart;
    nRet = m_spAVCodecDlls->avcodec_send_frame(m_pCodecCtx, m_pFrame);
    if (nRet == AVERROR(EAGAIN))
    {
        ReceivePacket();
        nRet = m_spAVCodecDlls->avcodec_send_frame(m_pCodecCtx, m_pFrame);
    }

    if (nRet < 0)
    {
        return false;
    }

    // Receive the encoded packets
    ReceivePacket();

    return true;
}

bool VideoEncoder::ReceivePacket()
{
    bool bRet = true;
    AVPacket* pkt = m_spAVCodecDlls->av_packet_alloc();
    while (m_spAVCodecDlls->avcodec_receive_packet(m_pCodecCtx, pkt) == 0)
    {
        // Write pkt->data to output file or stream
        m_pCallback->VideoEncoderWriteEncodedSample(pkt);
        if (m_OutFile.IsOpen())
            m_OutFile.Write(pkt->data, pkt->size);
        m_spAVCodecDlls->av_packet_unref(pkt);
    }
    m_spAVCodecDlls->av_packet_free(&pkt);

    return bRet;
}

    I must have done something wrong: the result is not correct. For example, rather than a video with a person's face in the middle of the screen, I get a mostly green screen with parts of the face showing up in the lower-left and lower-right corners.
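
    For context, a mostly green frame with displaced image fragments is typically a pixel-format or stride mismatch between the packed input and what the encoder expects. Below is a minimal, illustrative sketch (plain FFmpeg C API, none of it taken from the code above) of the usual conversion step with libswscale, assuming the encoder wants planar AV_PIX_FMT_YUV420P and the source is packed AV_PIX_FMT_YUYV422; AV_PIX_FMT_UYVY422 would work the same way:

    extern "C" {
    #include <libavutil/imgutils.h>
    #include <libswscale/swscale.h>
    }

    // hypothetical helper: wrap a packed YUY2 buffer and convert it into a
    // freshly allocated planar YUV420P frame the encoder can consume
    AVFrame* ConvertYuy2ToYuv420p(const uint8_t* pData, int nWidth, int nHeight)
    {
        // YUY2 is a single packed plane, 2 bytes per pixel
        uint8_t* srcData[4] = { nullptr };
        int srcLinesize[4] = { 0 };
        av_image_fill_arrays(srcData, srcLinesize, pData,
                             AV_PIX_FMT_YUYV422, nWidth, nHeight, 1);

        // allocate a writable destination frame in the encoder's format
        AVFrame* pDst = av_frame_alloc();
        if (!pDst)
            return nullptr;
        pDst->format = AV_PIX_FMT_YUV420P;
        pDst->width = nWidth;
        pDst->height = nHeight;
        if (av_frame_get_buffer(pDst, 0) < 0) { av_frame_free(&pDst); return nullptr; }

        // convert packed 4:2:2 -> planar 4:2:0
        SwsContext* pSws = sws_getContext(nWidth, nHeight, AV_PIX_FMT_YUYV422,
                                          nWidth, nHeight, AV_PIX_FMT_YUV420P,
                                          SWS_BILINEAR, nullptr, nullptr, nullptr);
        sws_scale(pSws, srcData, srcLinesize, 0, nHeight,
                  pDst->data, pDst->linesize);
        sws_freeContext(pSws);
        return pDst;
    }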

    Can someone help me?

  • SOLVED - AVI to MP4 - ffmpeg conversion

    26 May 2014, by Emmanuel Brunet

    I’m running a Debian 7.5 machine with ffmpeg 2.2 installed following these instructions.

    Issue

    I’m trying to display an mp4 video inside my browser. The original file has an AVI container. I can successfully convert it to mp4, and the target file plays (video + sound) with the Totem movie player. So I thought everything would be OK when displaying the page below.

    HTML5 web page

    <video width="640" height="480" controls="controls">
      <source src="/path/to/output.mp4" type="video/mp4">
      <h3>Your browser does not support the video tag</h3>
    </video>

    Input probe

    $ ffprobe -show_streams input.avi

     Duration: 00:08:22.90, start: 0.000000, bitrate: 1943 kb/s
       Stream #0:0: Audio: mp3 (U[0][0][0] / 0x0055), 48000 Hz, stereo, s16p, 64 kb/s
       Stream #0:1: Video: mpeg4 (Advanced Simple Profile) (XVID / 0x44495658), yuv420p, 720x540 [SAR 1:1 DAR 4:3], 1870 kb/s, 29.97 fps, 25 tbr, 29.97 tbn, 25 tbc

    Convert

    $ ffmpeg -y -fflags +genpts -i input.avi -acodec copy -vcodec copy output.mp4
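
    Note that -vcodec copy keeps the original MPEG-4 ASP (Xvid) video stream, which most browsers cannot decode even inside an mp4 container; that would explain getting sound without video. A hedged alternative, assuming libx264 is enabled in this ffmpeg build, is to re-encode the video stream:

    $ ffmpeg -i input.avi -vcodec libx264 -pix_fmt yuv420p -acodec copy output.mp4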

    HTML browser

    Opening the above HTML file plays the sound, but no video is displayed.

    When I use other .mp4 files, the video displays successfully, so I’m sure I’m facing a conversion issue.

    Note: I’ve tried a lot of other ffmpeg options, but without success.

    Any idea?

    Thanks in advance.