
Other articles (6)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that resulted in the problem; and a link to the site/page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

  • Selection of projects using MediaSPIP

    2 May 2011, by

    The examples below are representative of specific uses of MediaSPIP for specific projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen associations of its kind. Its members (...)

On other sites (2838)

  • Dividing, processing and merging files with ffmpeg

    9 November 2015, by João Carlos Santos

    I am trying to build an application that will divide an input video file (usually mp4) into chunks so that I can apply some processing to them concurrently and then merge them back into a single file.

    To do this, I have outlined 4 steps:

    1. Forcing keyframes at specific intervals to make sure that each
       chunk can be played on its own. For this I am using the following
       command:

       ffmpeg -i input.mp4 -force_key_frames "expr:gte(t,n_forced*chunk_length)" keyframed.mp4

       where chunk_length is the duration of each chunk.
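    One way to verify that the forced keyframes actually landed at the chunk boundaries is to list packet timestamps and flags with ffprobe; keyframe packets carry a "K" flag. A minimal sketch, not from the original post (the file name is assumed, and the command is echoed as a dry run):

    ```shell
    # Build the ffprobe command that lists packet timestamps and flags;
    # keyframe packets are marked with "K" in the flags column.
    cmd="ffprobe -v error -select_streams v:0 -show_entries packet=pts_time,flags -of csv keyframed.mp4"
    # Echoed as a dry run; run "$cmd" directly against a real keyframed.mp4.
    echo "$cmd"
    ```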

    2. Dividing keyframed.mp4 into multiple chunks.
       Here is where I have my problem. I am using the following command:

       `ffmpeg -i keyframed.mp4 -ss 00:00:00 -t chunk_length -vcodec copy -acodec copy test1.mp4`

       to get the first chunk from my keyframed file, but it isn't capturing
       the output correctly, since it appears to miss the first keyframe.

       On other chunks, the duration of the output is also sometimes
       slightly less than chunk_length, even though I am always using the
       same -t chunk_length option.
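    For what it's worth, with stream copy ffmpeg can only cut on keyframes, and placing -ss before -i seeks the input to the keyframe at or before the requested time instead of dropping leading frames. A sketch of splitting by input-side seeking (the chunk length, chunk count, and file names here are assumed placeholders, and the commands are echoed as a dry run):

    ```shell
    # Split keyframed.mp4 into fixed-length chunks by seeking the INPUT
    # (-ss before -i), which snaps each cut to a keyframe.
    chunk_length=10
    cmds=""
    for i in 0 1 2; do
      start=$((i * chunk_length))
      cmds="$cmds
    ffmpeg -ss $start -i keyframed.mp4 -t $chunk_length -c:v copy -c:a copy chunk$i.mp4"
    done
    # Echoed as a dry run; remove the cmds= wrapper to execute the commands.
    echo "$cmds"
    ```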

    3. Processing each chunk. For this task, I am using the following
       commands:

       ffmpeg -y -i INPUT_FILE -threads 1 -pass 1 -s 1280x720 -preset medium -vprofile baseline -c:v libx264 -level 3.0 -vf "format=yuv420p" -b:v 2000k -maxrate:v 2688k -bufsize:v 2688k -r 25 -g 25 -keyint_min 50 -x264opts "keyint=50:min-keyint=50:no-scenecut" -an -f mp4 -movflags faststart /dev/null
       ffmpeg -y -i INPUT_FILE -threads 1 -pass 2 -s 1280x720 -preset medium -vprofile baseline -c:v libx264 -level 3.0 -vf "format=yuv420p" -b:v 2000k -maxrate:v 2688k -bufsize:v 2688k -r 25 -g 25 -keyint_min 50 -x264opts "keyint=50:min-keyint=50:no-scenecut" -acodec libfaac -ac 2 -ar 48000 -ab 128k -f mp4 -movflags faststart OUTPUT_FILE.mp4

       These commands cannot be modified, since my goal here is
       to parallelize this process.

    4. Finally, to merge the files I am using concat and a list of the
       outputs of the 2nd step, as follows:

       ffmpeg -f concat -i mylist.txt -c copy final.mp4
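    The concat demuxer reads a plain text file with one `file 'name'` line per input, in playback order. A sketch of generating it (the chunk file names are assumed placeholders):

    ```shell
    # Write the list file the concat demuxer reads: one "file 'name'" per input.
    printf "file '%s'\n" chunk0.mp4 chunk1.mp4 chunk2.mp4 > mylist.txt
    cat mylist.txt
    ```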

    In conclusion, I am trying to find a way to solve the problem in step 2, and I would also like to hear opinions on whether there is a better way to do this.

  • Render an IDirect3DSurface9 from DXVA2?

    18 January 2019, by TTGroup

    I got an IDirect3DSurface9 from the DXVA2 video decoder using hardware acceleration.

    I'm trying to render this hardware IDirect3DSurface9 on my window via its handle. The following is my summary code.

    First, I call dxva2_init(AVCodecContext *s, HWND hwnd); where hwnd is the window's handle:

    int dxva2_init(AVCodecContext *s, HWND hwnd)
    {
       InputStream *ist = (InputStream *)s->opaque;
       int loglevel = (ist->hwaccel_id == HWACCEL_AUTO) ? AV_LOG_VERBOSE : AV_LOG_ERROR;
       DXVA2Context *ctx;
       int ret;

       if (!ist->hwaccel_ctx) {
           ret = dxva2_alloc(s);
           if (ret < 0)
               return ret;
       }
       ctx = (DXVA2Context *)ist->hwaccel_ctx;
       ctx->deviceHandle = hwnd;
       if (s->codec_id == AV_CODEC_ID_H264 &&
           (s->profile & ~FF_PROFILE_H264_CONSTRAINED) > FF_PROFILE_H264_HIGH) {
           av_log(NULL, loglevel, "Unsupported H.264 profile for DXVA2 HWAccel: %d\n", s->profile);
           return AVERROR(EINVAL);
       }

       if (ctx->decoder)
           dxva2_destroy_decoder(s);

       ret = dxva2_create_decoder(s);
       if (ret < 0) {
           av_log(NULL, loglevel, "Error creating the DXVA2 decoder\n");
           return ret;
       }

       return 0;
    }

    After decoding successfully, I get an IDirect3DSurface9, and I render it with the following function:

    int dxva2_render(AVCodecContext *s, AVFrame *frame)
    {
       LPDIRECT3DSURFACE9 surface = (LPDIRECT3DSURFACE9)frame->data[3];
       LPDIRECT3DSURFACE9 pBackBuffer = NULL; // was not declared in the original
       InputStream        *ist = (InputStream *)s->opaque;
       DXVA2Context       *ctx = (DXVA2Context *)ist->hwaccel_ctx;
       HRESULT hr;

       // try/finally is not standard C++, so enter/leave the lock explicitly.
       lockRenderCS.Enter();

       hr = ctx->d3d9device->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(255, 0, 0), 1.0f, 0);
       if (hr == D3D_OK)
           hr = ctx->d3d9device->BeginScene();
       if (hr == D3D_OK)
           hr = ctx->d3d9device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &pBackBuffer);
       if (hr == D3D_OK)
           hr = ctx->d3d9device->StretchRect(surface, NULL, pBackBuffer, NULL, D3DTEXF_LINEAR);
       if (hr == D3D_OK)
           hr = ctx->d3d9device->EndScene();
       if (hr == D3D_OK)
           hr = ctx->d3d9device->Present(NULL, NULL, NULL, NULL);

       if (pBackBuffer)
           pBackBuffer->Release(); // release the reference added by GetBackBuffer

       lockRenderCS.Leave();
       return 0;
    }

    Note: all of the D3D calls above (Clear(), BeginScene(), GetBackBuffer(), StretchRect(), EndScene(), Present()) returned successfully, but the frame was not displayed on my window.

    I guess that I am missing some code to tie my window handle to the DXVA2Context. In this code I only assign ctx->deviceHandle = hwnd; in the function dxva2_init().

    I have searched many times, but so far I still cannot find the solution. Can anyone help me?

    Many thanks!

  • Issue combining images with ffmpeg in PHP

    8 May 2017, by 7sega7

    I'm trying to combine images with ffmpeg in PHP.

    First, I save the image and then resize it with this:

    $imagen = file_get_contents($_SESSION['img']);
    file_put_contents('img_prod.jpg', $imagen);
    $img = resize_imagejpg('img_prod.jpg', 650,360);
    imagejpeg($img, 'img_prod.jpg');

    Then I create a PNG containing only a title or some info. At the moment I get the text from a form; later I'll fetch it from the DB.
    For this I'm using imagettftext:

    $titulo = explode("-", $_SESSION['title']);
    $im = imagecreatetruecolor(400, 30);
    $white= imagecolorallocate($im, 255, 255, 255);
    $black= imagecolorallocate($im, 0, 0, 0);
    imagefilledrectangle($im, 0, 0, 399, 29, $white);
    $font= 'Comfortaa.ttf';
    imagettftext($im, 10, 0, 15, 10, $black, $font, $titulo[$i]);
    imagepng($im, "img_con_texto.png");
    imagedestroy($im);

    What I'm doing here is: first I get the title from my form, and with explode, every time it reads a "-" the following text goes on another line. I simplified this in the quoted code for easier reading.
    Then I create the colors, fill the whole rectangle with one color, pick the font, draw the text in that font and color onto the image, and destroy the image resource so the variable can be reused with another text.

    At the end, I use ffmpeg to combine the image saved with "file_get_contents" and the image with the text.

    echo shell_exec('ffmpeg -y -i marco.png -i img_prod.jpg -filter_complex "overlay=20:(main_h-overlay_h)/2" out_img_prod.jpg');

    The "marco.png" it is just a white image with no contents. I open paint, change the resize pixels for 850/480 and saving that as marco.png.

    Then I add the text on the right side over the previous image:

    echo shell_exec('ffmpeg -y -i out_img_prod.jpg -i img_con_texto.png -filter_complex "overlay=500:140" prod_title.jpg');

    The issue is that when I do this, the text has a light gray background color. When I create the image with the text, it has a white background and it is a .png, so I don't know why it gets that gray background.
    This is the image from the last command: image with text

    If I do the same with Paint or GIMP, manually adding the text to the image on the left, the gray background color disappears. So, is it possible to get rid of that light gray color?

    PS: I removed some code for easier reading, so I may have missed removing some variables. Also, even if the image is smaller and the text doesn't collide with it, the light gray color is still there.