
Other articles (87)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

  • Customize by adding your logo, banner, or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013, by

    Present changes to your MediaSPIP site, or news about your projects, using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the news creation form.
    News creation form: for a document of type news item, the default fields are: Publication date (customize the publication date) (...)

On other sites (5325)

  • WebRTC predictions for 2016

    17 February 2016, by silvia

    I wrote these predictions in the first week of January and meant to publish them as encouragement to think about where WebRTC still needs some work. I’d like to be able to compare the state of WebRTC in the browser a year from now. Therefore, without further ado, here are my thoughts.

    WebRTC Browser support

    I’m quite optimistic when it comes to browser support for WebRTC. We have seen Edge bring in initial support last year and Apple looking to hire engineers to implement WebRTC. My prediction is that we will see the following developments in 2016:

    • Edge will become interoperable with Chrome and Firefox, i.e. it will publish VP8/VP9 and H.264/H.265 support
    • Firefox of course continues to support both VP8/VP9 and H.264/H.265
    • Chrome will follow the spec and implement H.264/H.265 support (to add to their already existing VP8/VP9 support)
    • Safari will enter the WebRTC space but only with H.264/H.265 support

    Codec Observations

    With Edge and Safari entering the WebRTC space, there will be a larger focus on H.264/H.265. It will help with creating interoperability between the browsers.

    However, since there are so many flavours of H.264/H.265, I expect that when different browsers are used at different endpoints, we will get poor quality video calls because of having to negotiate a common denominator. Certainly, baseline will work interoperably, but better encoding quality and lower bandwidth will only be achieved if all endpoints use the same browser.
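The common-denominator effect described above can be sketched in code. The capability lists below are invented for illustration (they are not real browser data): one endpoint prefers VP9, the other only implements H.264 baseline, so the call falls back to H.264.

```javascript
// Hedged sketch: pick the codecs supported by both endpoints, which is
// effectively what SDP offer/answer negotiation does.
function commonCodecs(offered, answered) {
  const names = new Set(answered.map((c) => c.mimeType.toLowerCase()));
  return offered.filter((c) => names.has(c.mimeType.toLowerCase()));
}

// Illustrative capability lists, not taken from real browsers.
const browserA = [
  { mimeType: 'video/VP9' },
  { mimeType: 'video/VP8' },
  { mimeType: 'video/H264', sdpFmtpLine: 'profile-level-id=42e01f' },
];
const browserB = [
  { mimeType: 'video/H264', sdpFmtpLine: 'profile-level-id=42e01f' },
];

console.log(commonCodecs(browserA, browserB).map((c) => c.mimeType));
// → [ 'video/H264' ]
```

The `profile-level-id=42e01f` line marks H.264 constrained baseline, the flavour most likely to be the shared denominator.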

    Thus, we will get to the funny situation where we buy ourselves interoperability at the cost of video quality and bandwidth. I’d call that a “degree of interoperability” and not the best possible outcome.

    I’m going to go out on a limb and say that at this stage, Google is going to strongly consider improving the case of VP8/VP9 by improving its bandwidth adaptability: I think they will buy themselves some SVC capability and make VP9 the best-quality codec for live video conferencing. Thus, when Safari eventually follows the standard and also implements VP8/VP9 support, the interoperability win of H.264/H.265 will prove only temporary, overshadowed by the vastly better video quality of VP9.

    The Enterprise Boundary

    Like all video conferencing technology, WebRTC is having a hard time dealing with the corporate boundary: firewalls and proxies get in the way of setting up video connections from within an enterprise to people outside.

    The telco world has come up with the concept of the SBC (session border controller). SBCs come packed with functionality to deal with security, signalling protocol translation, Quality of Service policing, regulatory requirements, statistics, billing, and even media services such as transcoding.

    SBCs are total overkill for a world where a large number of Web applications simply want to add a WebRTC feature – probably mostly to provide a video or audio customer support service, but it could be a live training session with call-in, or an interest-group conference call.

    We cannot install a custom SBC solution for every WebRTC service provider in every enterprise. That’s like saying we need a custom Web proxy for every Web server. It doesn’t scale.

    Cloud services thrive on their ability to sell directly to an individual in an organisation on their credit card without that individual having to ask their IT department to put special rules in place. WebRTC will not make progress in the corporate environment unless this is fixed.

    We need a solution that allows all WebRTC services to get through an enterprise firewall and enterprise proxy. I think the WebRTC standards have done pretty well with firewalls, and connecting to a TURN server on port 443 will do the trick most of the time. But enterprise proxies are the next frontier.

    What it takes is some kind of media packet forwarding service that sits on the firewall or in a proxy and allows WebRTC media packets through – maybe with some configuration that is necessary in the browsers or the Web app to add this service as another type of TURN server.
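To make the firewall point concrete, here is a minimal sketch of what pointing a WebRTC app at a TURN relay on port 443 might look like. The hostname and credentials are placeholders, not a real service.

```javascript
// Hedged sketch: an RTCPeerConnection configuration with a TURN server
// on port 443 over TLS ('turns:'), which looks like ordinary HTTPS
// traffic to most enterprise firewalls.
const rtcConfig = {
  iceServers: [
    { urls: 'stun:stun.example.com:443' },
    {
      urls: 'turns:turn.example.com:443?transport=tcp',
      username: 'user',
      credential: 'secret',
    },
  ],
  // Force relaying to demonstrate the TURN path; normally this stays
  // at 'all' and ICE picks the best candidate on its own.
  iceTransportPolicy: 'relay',
};

// In a browser: const pc = new RTCPeerConnection(rtcConfig);
console.log(rtcConfig.iceServers[1].urls);
```

The hypothetical media-forwarding service described above would then simply appear to the app as one more entry in `iceServers`.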

    I don’t have a full understanding of the problems involved, but I think such a solution is vital before WebRTC can go mainstream. I expect that this year we will see some clever people coming up with a solution for this and a new type of product will be born and rolled out to enterprises around the world.

    Summary

    So these are my predictions. In summary, they address the key areas where I think WebRTC still has to make progress: interoperability between browsers, video quality at low bitrates, and the enterprise boundary. I’m really curious to see where we stand with these a year from now.

    It’s worth mentioning Philipp Hancke’s tweet reply to my post:

    — we saw some clever people come up with a solution already. Now it needs to be implemented 🙂

    The post WebRTC predictions for 2016 first appeared on ginger’s thoughts.

  • aarch64: vp9itxfm16: Do a simpler half/quarter idct16/idct32 when possible

    25 February 2017, by Martin Storsjö

    This work is sponsored by, and copyright, Google.

    This avoids loading and calculating coefficients that we know will
    be zero, and avoids filling the temp buffer with zeros in places
    where we know the second pass won’t read.

    This gives a pretty substantial speedup for the smaller subpartitions.

    The code size increases from 21512 bytes to 31400 bytes.

    The idct16/32_end macros are moved above the individual functions; the
    instructions themselves are unchanged, but since new functions are added
    at the same place where the code is moved from, the diff looks rather
    messy.

    Before:
    vp9_inv_dct_dct_16x16_sub1_add_10_neon: 284.6
    vp9_inv_dct_dct_16x16_sub2_add_10_neon: 1902.7
    vp9_inv_dct_dct_16x16_sub4_add_10_neon: 1903.0
    vp9_inv_dct_dct_16x16_sub8_add_10_neon: 2201.1
    vp9_inv_dct_dct_16x16_sub12_add_10_neon: 2510.0
    vp9_inv_dct_dct_16x16_sub16_add_10_neon: 2821.3
    vp9_inv_dct_dct_32x32_sub1_add_10_neon: 1011.6
    vp9_inv_dct_dct_32x32_sub2_add_10_neon: 9716.5
    vp9_inv_dct_dct_32x32_sub4_add_10_neon: 9704.9
    vp9_inv_dct_dct_32x32_sub8_add_10_neon: 10641.7
    vp9_inv_dct_dct_32x32_sub12_add_10_neon: 11555.7
    vp9_inv_dct_dct_32x32_sub16_add_10_neon: 12499.8
    vp9_inv_dct_dct_32x32_sub20_add_10_neon: 13403.7
    vp9_inv_dct_dct_32x32_sub24_add_10_neon: 14335.8
    vp9_inv_dct_dct_32x32_sub28_add_10_neon: 15253.6
    vp9_inv_dct_dct_32x32_sub32_add_10_neon: 16179.5

    After:
    vp9_inv_dct_dct_16x16_sub1_add_10_neon: 282.8
    vp9_inv_dct_dct_16x16_sub2_add_10_neon: 1142.4
    vp9_inv_dct_dct_16x16_sub4_add_10_neon: 1139.0
    vp9_inv_dct_dct_16x16_sub8_add_10_neon: 1772.9
    vp9_inv_dct_dct_16x16_sub12_add_10_neon: 2515.2
    vp9_inv_dct_dct_16x16_sub16_add_10_neon: 2823.5
    vp9_inv_dct_dct_32x32_sub1_add_10_neon: 1012.7
    vp9_inv_dct_dct_32x32_sub2_add_10_neon: 6944.4
    vp9_inv_dct_dct_32x32_sub4_add_10_neon: 6944.2
    vp9_inv_dct_dct_32x32_sub8_add_10_neon: 7609.8
    vp9_inv_dct_dct_32x32_sub12_add_10_neon: 9953.4
    vp9_inv_dct_dct_32x32_sub16_add_10_neon: 10770.1
    vp9_inv_dct_dct_32x32_sub20_add_10_neon: 13418.8
    vp9_inv_dct_dct_32x32_sub24_add_10_neon: 14330.7
    vp9_inv_dct_dct_32x32_sub28_add_10_neon: 15257.1
    vp9_inv_dct_dct_32x32_sub32_add_10_neon: 16190.6

    Signed-off-by: Martin Storsjö <martin@martin.st>

    • [DH] libavcodec/aarch64/vp9itxfm_16bpp_neon.S
  • How do I send a mediaStream from the electron renderer process to a background ffmpeg process?

    26 July 2020, by Samamoma_Vadakopa

    Goal (to avoid the XY problem):

    I'm building a small linux desktop application using webRTC, electron, and create-react-app. The application should receive a mediaStream via a webRTC peer connection, display the stream to the user, create a virtual webcam device, and send the stream to the virtual webcam so it can be selected as the input on most major videoconferencing platforms.


    Problem:


    The individual parts all work: receiving the stream (webRTC), creating the webcam device (v4l2loopback), creating a child process of ffmpeg from within electron, passing the video stream to the ffmpeg process, streaming the video to the virtual device using ffmpeg, and selecting the virtual device and seeing the video stream in a videoconference meeting.


    But I'm currently stuck on tying the parts together. The problem is, the mediaStream object is available inside electron's renderer process (as state in a deeply nested react component, FWIW). As far as I can tell, I can only create a node.js child process of ffmpeg from within electron's main process. That implies that I need to get the mediaStream from the renderer to the main process. To communicate between processes, electron uses an IPC system. Unfortunately, it seems that IPC doesn't support sending a complex object like a video stream.


    What I've tried:

    • Starting an ffmpeg child process (using child_process.spawn) from within the renderer process throws an 'fs.fileexistssync' error. Browsing SO indicates that only the main process can start these background processes.

    • Creating a separate webRTC connection between renderer and main to re-stream the video. I'm using IPC to facilitate the connection, but offer/answer descriptions aren't reaching the other peer over IPC - my guess is this is due to the same limitations on IPC as before.


    My next step is to create a separate node server on app startup which ingests the incoming RTC stream and rebroadcasts it to the app's renderer process, as well as to a background ffmpeg process.

    Before I try that, though, does anyone have suggestions for approaches I should consider ? (this is my first SO question, so any advice on how to improve it is appreciated).
