
Media (1)
-
The conservation of net art in the museum: the strategies at work
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (87)
-
Organising by category
17 May 2013 — In MediaSPIP, a section has two names: category and rubrique.
The various documents stored in MediaSPIP can be filed under different categories. A category can be created by clicking on "publish a category" in the "publish" menu at the top right (after logging in). A category can itself be filed under another category, which makes it possible to build a tree of categories.
When the next document is published, the newly created category will be offered (...) -
Retrieving information from the master site when installing an instance
26 November 2010 — Purpose
On the main site, a mutualised instance is defined by several things: its data in the spip_mutus table; its logo; its principal author (the id_admin in the spip_mutus table, corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the mutualised instance.
It can therefore be quite sensible to want to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...) -
Keeping control of your media in your hands
13 April 2011 — The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)
On other sites (5045)
-
Web-based video editor
10 October 2014, by Danny — We currently have a web-based editor that allows users to build animated web apps. The apps are made up of shapes, text, images, and videos. Except for videos, all other elements can also be animated around the screen. The result of building an animated app is basically a big blob of JSON.
The playback code for the web app is web-based as well. It takes the JSON blob and constructs the HTML, which ends up playing back in some sort of browser environment. The problem is that most of the time this playback occurs on lower-end hardware like televisions and set-top boxes.
These performance issues go away if there is some way to convert a digital sign to video. Then the STB/smart TV simply plays a video, which is much more performant than playing back animations in a web view.
Given a blob of JSON describing each layer, how to draw each type of object, its animation points, etc., how could I take that and convert it to video on the server?
My first attempt at this was to use PhantomJS to load the playback page in a headless browser, take a series of screenshots, and then use ffmpeg to merge those screenshots into a video. That worked great as long as there was no video. But it does not work with video, since there is no HTML5 video tag support in PhantomJS, and even if there were, I would lose any audio.
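For reference, the screenshot-merging step can be done with a single ffmpeg invocation; a minimal sketch, assuming the screenshots are saved as a numbered PNG sequence (frame_00001.png, frame_00002.png, ... — hypothetical names) captured at a known rate:

    # Assumes 30 fps capture and frames named frame_%05d.png (hypothetical).
    # -framerate sets the input rate of the image sequence; -pix_fmt yuv420p
    # keeps the H.264 output playable on most STB/smart-TV decoders.
    ffmpeg -framerate 30 -i frame_%05d.png -c:v libx264 -pix_fmt yuv420p playback.mp4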
The other way I was thinking of doing it would be to load the playback page in PhantomJS again, but turn the video layers off and leave them transparent, then take the screenshots as a series of PNGs with transparency. I would then combine these with the video layers.
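That final compositing step is also something ffmpeg itself can handle via its overlay filter; a minimal sketch, assuming a single background video layer (video.mp4, a hypothetical name) and transparent overlay frames at the same resolution and rate:

    # Assumes overlay frames named overlay_%05d.png (hypothetical) matching the
    # video's resolution and a 30 fps rate; shortest=1 ends the output when the
    # shorter input runs out.
    ffmpeg -i video.mp4 -framerate 30 -i overlay_%05d.png \
           -filter_complex "[0:v][1:v]overlay=shortest=1" \
           -c:v libx264 -pix_fmt yuv420p composited.mp4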
None of this feels very elegant, though. I know there are web-based video editors out there that do basically what I'm trying to accomplish, so how do they do it?
-
avutils/hwcontext: When deriving a hwdevice, search for existing device in both directions
25 November 2021, by Soft Works
The test /libavutil/tests/hwdevice checks that when deriving a device from a source device and then deriving back to the type of the source device, the result matches the original source device, i.e. the derivation mechanism doesn't create a new device in this case.
Previously, this test usually passed, but only due to two different kinds of flaws:
1. The test covers only a single level of derivation (and back)
It derives device Y from device X, then derives Y back to the type of X, and checks whether the result matches X.
What it doesn't check for are longer chains of derivation, like:
CUDA1 > OpenCL2 > CUDA3 and then back to OpenCL4
In that case, the second derivation returns the first device (CUDA3 == CUDA1), but when deriving OpenCL4, hwcontext.c was creating a new OpenCL4 context instead of returning OpenCL2, because there was no link from CUDA1 to OpenCL2 (only backwards, from OpenCL2 to CUDA1).
If the test had checked two levels of derivation, it would have failed.
This patch fixes those (yet untested) cases by introducing forward references (derived_device) in addition to the existing back references (source_device).
2. hwcontext_qsv didn't properly set the source_device
In the case of QSV, hwcontext_qsv creates a source context internally (vaapi, dxva2 or d3d11va) without calling av_hwdevice_ctx_create_derived and without setting source_device.
This way, the hwcontext test ran successfully, but what practically happened was that - for example - deriving vaapi from qsv didn't return the original underlying vaapi device; a new one was created instead: exactly what the test is intended to detect and prevent. It just couldn't do so, because the original device was hidden (i.e. not set as the source_device of the QSV device).
This patch sets source_device properly and fixes all derivation scenarios.
(At a later stage, /libavutil/tests/hwdevice should be extended to check longer derivation chains as well.)
Reviewed-by: Lynne <dev@lynne.ee>
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Tested-by: Wenbin Chen <wenbin.chen@intel.com>
Signed-off-by: softworkz <softworkz@hotmail.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
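For illustration, the scenario described above can be exercised through the public hwdevice API; a minimal sketch (not part of the commit; error handling omitted, and it assumes a build with both CUDA and OpenCL support) of the CUDA1 > OpenCL2 > CUDA3 > OpenCL4 chain, where, with this patch, the last derivation should return a reference to the existing OpenCL2 device instead of creating a new one:

    #include <stdio.h>
    #include <libavutil/hwcontext.h>

    int main(void)
    {
        AVBufferRef *cuda1 = NULL, *ocl2 = NULL, *cuda3 = NULL, *ocl4 = NULL;

        /* Create the initial CUDA device, then walk the derivation chain. */
        av_hwdevice_ctx_create(&cuda1, AV_HWDEVICE_TYPE_CUDA, NULL, NULL, 0);
        av_hwdevice_ctx_create_derived(&ocl2, AV_HWDEVICE_TYPE_OPENCL, cuda1, 0);
        av_hwdevice_ctx_create_derived(&cuda3, AV_HWDEVICE_TYPE_CUDA, ocl2, 0);
        av_hwdevice_ctx_create_derived(&ocl4, AV_HWDEVICE_TYPE_OPENCL, cuda3, 0);

        /* Each derivation hands back a new AVBufferRef, but it should wrap the
         * same underlying device context: cuda3 matches cuda1 via the existing
         * back references, and ocl4 matches ocl2 only with the forward
         * references this patch introduces. */
        printf("cuda3 == cuda1: %s\n", cuda3->data == cuda1->data ? "yes" : "no");
        printf("ocl4  == ocl2:  %s\n", ocl4->data == ocl2->data ? "yes" : "no");

        av_buffer_unref(&ocl4);
        av_buffer_unref(&cuda3);
        av_buffer_unref(&ocl2);
        av_buffer_unref(&cuda1);
        return 0;
    }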