
Media (91)

Other articles (29)

  • Media quality after processing

    21 June 2013

    Getting the settings of the media-processing software right matters for striking a balance between the parties involved (the host's bandwidth, media quality for the author and the visitor, accessibility for the visitor). How should you set the quality of your media?
    The higher the media quality, the more bandwidth is used, and a visitor on a slow connection will have to wait longer. Conversely, the lower the quality, the more degraded the media becomes, even (...)

  • Adding notes and captions to images

    7 February 2011

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights to create, modify, and delete notes. By default, only the site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Selection of projects using MediaSPIP

    2 May 2011

    The examples below are representative of specific uses of MediaSPIP for particular projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen such associations. Its members (...)

On other sites (4140)

  • Is there an effective and cheap/free way to host video for a mobile app that must be approved by an admin before going live? [on hold]

    9 August 2013, by user2658775

    We are building a mobile app for the iOS and Android operating systems. The app is to be a communication platform for members within an organization. Content is generated by users and submitted to the admin. Once approved by the admin the content is pushed to the app. One feature of the app is the ability to upload video.

    We are having a tough time attempting to figure out the best way to do this. Because the app will be representing the organization, the organization must have control over the approval process.

    So far we have come up with the following options:

    Option 1: purchase a dedicated server from a hosting service provider. The basic package with Blue host is $150/month, which is fairly expensive.

    Option 2: have the users post to YouTube using their personal accounts. Upon posting to YouTube (via the app), the app would send a notification to the admin that a new video has been posted. The admin would review the video and, if acceptable, would use the URL link to post the video to the app. This option, while free, requires many steps that would bog down the submission process.

    Does anyone know of an effective way to post video to an app that requires approval by an admin?

  • How to force "full range" flag on export

    28 January 2019, by Emanuele Vissani

    I have an ffmpeg command that remaps audio tracks to discrete channels in a ProRes 4444 QuickTime file. Even though the input video is copied to the output, the exported file is interpreted by professional video player software as video range (16-235 values) instead of the original full range (0-255 values), making it look more contrasted.
    The content itself is correct: manually changing the range setting in the player software restores the right light range, so I think the output file just loses some kind of range flag.

    I already tried the following options, without success:

    -colorspace bt709 -movflags +write_colr

    -dst_range 1 -color_range 2

    -vf scale=out_range=full

    -vf scale=in_range=full:out_range=full

    The original command is:

    ffmpeg -i F:\_IMPORT\TST_ProRes4444_4k.mov -map 0:0 -c copy -map 0:1 -c copy -map_channel 0.2.0:0.2 -c:a pcm_s24le F:\_EXPORT\TEST\test.mov

    Thank you for your help.

  • RGB to YUV conversion with libav (ffmpeg) triplicates image

    17 April 2021, by José Tomás Tocino

    I'm building a small program to capture the screen (using the X11 MIT-SHM extension) to video. It works well if I create individual PNG files of the captured frames, but now I'm trying to integrate libav (ffmpeg) to create the video and I'm getting... funny results.

    The furthest I've been able to reach is this. The expected result (a PNG created directly from the RGB data of the XImage) is this:

    [Image: expected result]

    However, the result I'm getting is this:

    [Image: obtained result]

    As you can see, the colors are funky and the image appears cropped three times. I have a loop where I capture the screen; first I generate the individual PNG files (currently commented out in the code below), and then I try to use libswscale to convert from RGB24 to YUV420:

    while (gRunning) {
        printf("Processing frame framecnt=%i \n", framecnt);

        if (!XShmGetImage(display, RootWindow(display, DefaultScreen(display)), img, 0, 0, AllPlanes)) {
            printf("\n Ooops.. Something is wrong.");
            break;
        }

        // PNG generation
        // snprintf(imageName, sizeof(imageName), "salida_%i.png", framecnt);
        // writePngForImage(img, width, height, imageName);

        unsigned long red_mask = img->red_mask;
        unsigned long green_mask = img->green_mask;
        unsigned long blue_mask = img->blue_mask;

        // Write image data
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                unsigned long pixel = XGetPixel(img, x, y);

                unsigned char blue = pixel & blue_mask;
                unsigned char green = (pixel & green_mask) >> 8;
                unsigned char red = (pixel & red_mask) >> 16;

                pixel_rgb_data[y * width + x * 3] = red;
                pixel_rgb_data[y * width + x * 3 + 1] = green;
                pixel_rgb_data[y * width + x * 3 + 2] = blue;
            }
        }

        uint8_t* inData[1] = { pixel_rgb_data };
        int inLinesize[1] = { in_w };

        printf("Scaling frame... \n");
        int sliceHeight = sws_scale(sws_context, inData, inLinesize, 0, height, pFrame->data, pFrame->linesize);

        printf("Obtained slice height: %i \n", sliceHeight);
        pFrame->pts = framecnt * (pVideoStream->time_base.den) / ((pVideoStream->time_base.num) * 25);

        printf("Frame pts: %li \n", pFrame->pts);
        int got_picture = 0;

        printf("Encoding frame... \n");
        int ret = avcodec_encode_video2(pCodecCtx, &pkt, pFrame, &got_picture);

//                int ret = avcodec_send_frame(pCodecCtx, pFrame);

        if (ret != 0) {
            printf("Failed to encode! Error: %i\n", ret);
            return -1;
        }

        printf("Succeed to encode frame: %5d - size: %5d\n", framecnt, pkt.size);

        framecnt++;

        pkt.stream_index = pVideoStream->index;
        ret = av_write_frame(pFormatCtx, &pkt);

        if (ret != 0) {
            printf("Error writing frame! Error: %i \n", ret);
            return -1;
        }

        av_packet_unref(&pkt);
    }


    I've placed the entire code at this gist. This question right here looks pretty similar to mine, but not quite, and the solution did not work for me, although I think this has something to do with the way the line stride is calculated.