
Other articles (29)

  • (De)Activating features (plugins)

    18 February 2011, by

    To manage the addition and removal of extra features (plugins), MediaSPIP uses SVP as of version 0.2.
    SVP makes it easy to activate plugins from the MediaSPIP configuration area.
    To get there, go to the configuration area and then open the "Plugin management" page.
    MediaSPIP ships by default with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work smoothly with each (...)

  • Enabling visitor registration

    12 April 2011, by

    It is also possible to enable visitor registration, which lets anyone open their own account on the channel in question, for example in the context of open projects.
    To do so, go to the site's configuration area and choose the "User management" submenu. The first form shown corresponds to this feature.
    By default, MediaSPIP created at initialisation a menu item in the top menu of the page leading to (...)

  • Diogene: creating custom content-editing form templates

    26 October 2010, by

    Diogene is one of the SPIP plugins activated by default (as an extension) when MediaSPIP is initialised.
    What this plugin is for
    Creating form templates
    The Diogene plugin lets you create custom form templates, per section, for the three core SPIP objects: articles; rubriques (sections); sites.
    It thus lets you define, for a given section, one form template per object, adding or removing fields so as to make the form (...)

On other sites (3812)

  • moviepy black border around png when compositing into an MP4

    27 August 2022, by OneWorld

    Compositing a PNG into an MP4 video creates a black border around the edge.

    This is using moviepy 1.0.0.

    The code below reproduces the MP4 with the attached red-text PNG.

    [image: the attached red-text PNG (txtpng.png)]

import numpy as np
import moviepy.editor as mped

def composite_txtpng_on_colour():
    # Solid green background clip, 400x300, 2 seconds.
    bg_color = mped.ColorClip(size=[400, 300],
                              color=np.array([0, 255, 0]).astype(np.uint8),
                              duration=2).set_position((0, 0))
    # The red-text PNG (with transparency), placed near the top-left corner.
    text_png_position = [5, 5]
    text_png = mped.ImageClip("./txtpng.png", duration=3).set_position(text_png_position)

    # Composite the PNG over the background and write out the MP4.
    canvas_size = bg_color.size
    stacked_clips = mped.CompositeVideoClip([bg_color, text_png],
                                            size=canvas_size).set_duration(2)
    stacked_clips.write_videofile('text_with_black_border_video.mp4', fps=24)

composite_txtpng_on_colour()


    


    The result is an MP4 that can be played in VLC. A screenshot of the black edge can be seen below:

    [screenshot: black border around the composited text]


    


    Any suggestions to remove the black borders would be much appreciated.

    


    Update: it looks like moviepy does a blit instead of alpha compositing.

    


# Excerpt from moviepy's blit helper (np is numpy).
def blit(im1, im2, pos=None, mask=None, ismask=False):
    """Blit an image over another. Blits ``im1`` on ``im2`` at position
    ``pos=(x,y)``, using the ``mask`` if provided. If ``im1`` and ``im2``
    are mask pictures (2D float arrays) then ``ismask`` must be ``True``.
    """
    if pos is None:
        pos = [0, 0]

    # xp1,yp1,xp2,yp2 = blit area on im2
    # x1,y1,x2,y2 = area of im1 to blit on im2
    xp, yp = pos
    x1 = max(0, -xp)
    y1 = max(0, -yp)
    h1, w1 = im1.shape[:2]
    h2, w2 = im2.shape[:2]
    xp2 = min(w2, xp + w1)
    yp2 = min(h2, yp + h1)
    x2 = min(w1, w2 - xp)
    y2 = min(h1, h2 - yp)
    xp1 = max(0, xp)
    yp1 = max(0, yp)

    if (xp1 >= xp2) or (yp1 >= yp2):
        return im2

    blitted = im1[y1:y2, x1:x2]

    new_im2 = +im2  # unary + makes a fresh copy of the numpy array

    if mask is None:
        new_im2[yp1:yp2, xp1:xp2] = blitted
    else:
        mask = mask[y1:y2, x1:x2]
        if len(im1.shape) == 3:
            mask = np.dstack(3 * [mask])
        blit_region = new_im2[yp1:yp2, xp1:xp2]
        # Straight-alpha blend of the pasted pixels against what is underneath.
        new_im2[yp1:yp2, xp1:xp2] = (1.0 * mask * blitted + (1.0 - mask) * blit_region)

    return new_im2.astype('uint8') if (not ismask) else new_im2


    


    And so, Rotem is right:

    new_im2[yp1:yp2, xp1:xp2] = (1.0 * mask * blitted + (1.0 - mask) * blit_region)

    is

    (alpha * img_rgb + (1.0 - alpha) * bg)

    which is how moviepy composites, and why we see black at the edges.
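
    To make the comparison concrete, here is a minimal NumPy sketch of that straight-alpha "over" blend; the helper name and the one-pixel example are mine, not moviepy's. One plausible reading of the black border: if the PNG's RGB values are stored premultiplied by alpha (soft edge pixels already darkened toward black in the file), this blend darkens them a second time.

import numpy as np

def over_blend(fg_rgb, alpha, bg_rgb):
    # Straight-alpha "over" blend: out = a * fg + (1 - a) * bg
    a = alpha.astype(np.float64)[..., None]  # HxW -> HxWx1 for broadcasting
    out = a * fg_rgb.astype(np.float64) + (1.0 - a) * bg_rgb.astype(np.float64)
    return out.astype(np.uint8)

# One soft-edge pixel of red text at 50% alpha over the green background:
fg = np.array([[[255, 0, 0]]], dtype=np.uint8)
bg = np.array([[[0, 255, 0]]], dtype=np.uint8)
a = np.array([[0.5]])
print(over_blend(fg, a, bg))  # [[[127 127 0]]] -- the expected clean blend
# If fg were stored premultiplied (edge pixel [127, 0, 0]), the same call
# would give [[[63 127 0]]] -- pulled toward black, i.e. a dark border.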

    


  • Writing Live-Multimedia-Application using OpenGL & Co. saving output to disc [closed]

    21 January 2013, by user1997286

    I want to write an application that does the following things:

    • Getting commands via ArtNET (DMX over Ethernet, a control protocol) for each object (called a layer)
    • Each layer can be one of the following: live camera stream, movie, image
    • Each layer can be translated, rotated or stretched
    • On each layer I can set filters (like a kaleidoscope effect, blur, color correction, etc.)
    • The resulting video stream lives in 3D space
    • I want to display each part of the image on one projector (up to 3 in total) using a TripleHead2GO (the 3 projectors display different regions of my DVI output); each projector image should have its own soft-edge and keystone parameters
    • The resulting image will also be shown on a preview screen with some information overlay

    I think all of that should be possible with OpenGL and OpenAL (for the movie audio).

    I think I'll use C++, OpenGL for graphics, OpenAL for audio, ffmpeg for video conversion if needed, and Ubuntu/Debian as the OS.

    The software will be used to do multimedia shows at concerts, including cameras and so on.

    All of that should happen live (on a Full HD output) on an i7 3770, a GTX 670 and 16 GB of RAM, for at least 8 layers (4 live images at once plus some overlays like the actor's name and some logos).

    But now comes the question.

    Is it also possible to do the following with that setup:

    • Writing the output image, with all the 3D translations, to a movie file (to master a DVD later) with audio (see the sketch after this list)
    • Mixing audio from different inputs & files (ambience mics, the signal from the sound mixer, playbacks from my own application) into more than one mix (e.g. one mix for the recording, one mix for live)
    • Streaming that output, complete or in parts (e.g. the left part of the image), over the network (for example: projector 1 is near the server, so I connect it using DVI; projectors 2+3 are connected to a computer that receives the streams for those two projectors, with soft edge on each stream; and screen 4 is outside the concert hall and shows the complete live stream)
    • What GUI framework should I use for that?
    • Is it perhaps even performant enough to use Java for that?
    • Is it possible to use that mechanism for rendering only (e.g. I have stored the cut points on disk and saved every single camera stream, so I can fix some errors or cut out some parts later)?
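
    Regarding the first point above (writing the composited output to a movie file): one common pattern is to read the rendered frame back, e.g. with glReadPixels, and pipe the raw pixels into an ffmpeg process. Below is a minimal sketch in Python; the size, frame rate and output file name are placeholders, and the same pattern works from C++ via popen().

import subprocess
import numpy as np

W, H, FPS = 1920, 1080, 30  # placeholder output size and frame rate

# ffmpeg encodes raw RGB frames arriving on its stdin.
ffmpeg = subprocess.Popen(
    ["ffmpeg", "-y",
     "-f", "rawvideo", "-pix_fmt", "rgb24", "-s", f"{W}x{H}", "-r", str(FPS),
     "-i", "-",                                  # raw frames on stdin
     "-c:v", "libx264", "-pix_fmt", "yuv420p", "out.mp4"],
    stdin=subprocess.PIPE)

for _ in range(FPS * 2):                         # two seconds of dummy frames
    frame = np.zeros((H, W, 3), dtype=np.uint8)  # stand-in for a framebuffer grab
    ffmpeg.stdin.write(frame.tobytes())

ffmpeg.stdin.close()
ffmpeg.wait()
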
  • WebRTC Stream Freezes When Picture Complexity Increases

    12 October 2021, by user1259576

    I am developing an application that uses WebRTC to display a live video stream captured from a V4L2 source. The stream originates from a Linux box with a DVI-USB capture card; the video is encoded to H264 by ffmpeg, sent over RTP to a Janus WebRTC server, and viewed through the web interface.

    


    Here is my current ffmpeg command, which is pretty simple:

ffmpeg -f v4l2 -i /dev/video0 -vf "transpose=1,scale=768:1024" -vcodec libx264 -profile:v baseline -pix_fmt yuv420p -f rtp rtp://10.116.80.86:8004
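
    One knob worth ruling out (an assumption on my part, not a confirmed fix) is rate control: with no explicit bitrate or keyframe interval, a jump in picture complexity can produce oversized frames and long gaps between decodable points. A variant of the command with those pinned down, the numbers being placeholders:

ffmpeg -f v4l2 -i /dev/video0 -vf "transpose=1,scale=768:1024" \
  -vcodec libx264 -profile:v baseline -pix_fmt yuv420p \
  -tune zerolatency -b:v 2M -maxrate 2M -bufsize 4M -g 30 \
  -f rtp rtp://10.116.80.86:8004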

    


    I can't go into details, but the DVI source generates a portrait 768x1024 image that initially is a simple image where the only movement is a small clock near the center that increments every second. At this stage, everything appears to work great. The image is high-quality and continuous/smooth in the browser.

    


    Once I interact with the DVI source, a more complex image is generated, with some text/lines in the upper half. Still not very complex - only 2 colors involved and some basic 1px line shapes, and only the little clock is moving. At this point, the video starts to freeze frequently, and only updates once in a while for a few seconds. Bandwidth should not be an issue here, and the bitrate appears to stay high. However, many fewer frames are decoded.

    


    I have also tried scaling the video down to 480x640 from 768x1024 and with that change the issue does not occur. However, I really need the full resolution and, again, there should not be a bandwidth issue here.

    


    I have also tried capturing the output of ffmpeg to a file rather than streaming over RTP, and in the file everything looks good.

    


    Here is a screenshot of the WebRTC internals (in Edge) for this stream. You can clearly see when the video image changes from the simple clock to one with more shapes & text (nothing changed here other than the image from the DVI source):

    


    [screenshot: WebRTC internals plots]

    


    In Firefox, the video just freezes whenever frames are not decoded. In Edge, the video goes black after a moment with no frames decoded.

    


    Any ideas as to what might be causing this?