
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
Other articles (89)
-
The plugin: Mutualisation management
2 March 2010, by
The Mutualisation management plugin makes it possible to manage the various MediaSPIP channels from a master site. Its purpose is to provide a pure-SPIP solution to replace this older solution.
Basic installation
Install the SPIP files on the server.
Then add the "mutualisation" plugin at the root of the site, as described here.
Customise the central mes_options.php file as you wish. For example, here is the one from the mediaspip.net platform:
<?php (...)
-
Organising by category
17 May 2013, by
In MédiaSPIP, a section (rubrique) has two names: category and rubrique.
The various documents stored in MédiaSPIP can be filed under different categories. You can create a category by clicking "publish a category" in the publish menu at the top right (after logging in). A category can itself be placed inside another category, so you can build a tree of categories.
The next time a document is published, the newly created category will be offered (...)
-
Installation in farm mode
4 February 2011, by
Farm mode makes it possible to host several MediaSPIP sites while installing the functional core only once.
This is the method we use on this very platform.
Using farm mode requires some knowledge of how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP's usual private area is no longer used.
To begin with, you must have installed the same files as the installation (...)
On other sites (6809)
-
ImageJ / Fiji shows wrong number of frames in video (FFMPEG import)
28 April 2023, by locoric_polska
I am counting the number of animals in an area using Fiji. I import the videos through the FFMPEG plug-in (the videos are MP4 with the MPEG-4 codec). However, I noticed that Fiji reports the wrong number of frames when importing them, and I cannot understand why or how.


An example: I have a video shot at 25 fps that is 1582 s long, so it should contain 39550 frames in total (1582 × 25). When I open it through a computer-vision package in R, I see that the video does indeed contain 39550 frames. However, when it is loaded in Fiji, the number of frames shown is 49511, so Fiji is adding 9961 frames to the video. This happens consistently with videos recorded at 25 fps, but not with videos shot at 24 fps.


Curiously, I found that the ratio between the 'real' number of frames and the number of frames read by Fiji is consistently between 0.79 and 0.80. This makes me think that Fiji expects the video to be 30 fps and (possibly) duplicates frames to fit that assumption.


Unfortunately, I discovered all this after finishing my analysis, while trying to merge this dataset with another one obtained through computer vision. The frame numbers do not match between the datasets, and I am not sure how to solve this.


Any help would be greatly appreciated!


One idea is to multiply all the frame numbers by 0.8 to map them back to the original numbering. This solution assumes that Fiji duplicates frames uniformly throughout the video.
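A minimal sketch of that correction, assuming the duplication really is uniform; rather than hard-coding 0.8, the ratio is measured per video from the true frame count (which ffprobe can report via its -count_frames option). The function and variable names are illustrative, not from any existing tool:

#include <cmath>
#include <cstdio>

// Map a frame index from Fiji's inflated numbering back to the
// original video's numbering, using the ratio measured for this
// video rather than a hard-coded 0.8.
long toOriginalFrame(long fijiFrame, long realFrames, long fijiFrames) {
    double ratio = static_cast<double>(realFrames) / fijiFrames;  // ~0.80 here
    return std::lround(fijiFrame * ratio);
}

int main() {
    // Values from the 25 fps example above: 39550 real frames, 49511 in Fiji.
    std::printf("Fiji frame 49511 -> original frame %ld\n",
                toOriginalFrame(49511, 39550, 49511));
    return 0;
}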


-
Encode and stream from Xbox 360 kinect using ffmpeg
17 June 2015, by user3288346
I want to live-stream content captured from the Kinect onto my internal network.
I have one physical machine, which is my server and runs Ubuntu 14.04 Server; I connect to it remotely. I have installed ffmpeg and ffserver and can encode and stream stored video files from the server. However, I have a few problems when using the Xbox Kinect.
I have an Xbox 360 Kinect attached via USB. I have followed https://bitbucket.org/samirmenon/scl-manips-v2/wiki/vision/kinect, but I couldn't get through the OpenCV part. When I run
$ cmake-gui ..
I get
cmake-gui: cannot connect to X server
I don't have physical access to the machine, so this is probably due to accessing it remotely.
When I do
test@cloud-node-2:~/kinnect$ lsusb
Bus 002 Device 006: ID 045e:02ae Microsoft Corp. Xbox NUI Camera
Bus 002 Device 004: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor
Bus 002 Device 005: ID 045e:02ad Microsoft Corp. Xbox NUI Audio
Bus 002 Device 003: ID 0409:005a NEC Corp. HighSpeed Hub
Bus 002 Device 002: ID 0bda:0181 Realtek Semiconductor Corp.
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
When I do
test@cloud-node-2:~/kinnect$ ls -ltrh /dev/video*
ls: cannot access /dev/video*: No such file or directory
Therefore, I am not able to capture the video using ffmpeg.
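For reference, the Xbox 360 Kinect is not a standard UVC webcam, so no /dev/video* node appears without an extra driver; libfreenect (which the linked guide builds) talks to it directly over USB instead. A hedged sketch of a workaround along those lines: a small capture program that writes raw RGB frames to stdout for ffmpeg to encode. The header path and device index are assumptions about a typical install:

#include <libfreenect/libfreenect.h>
#include <cstdint>
#include <cstdio>

// Called by libfreenect whenever a new RGB frame is ready.
static void video_cb(freenect_device*, void* rgb, uint32_t) {
    // 640x480 RGB24 = 921600 bytes per frame at FREENECT_RESOLUTION_MEDIUM.
    std::fwrite(rgb, 1, 640 * 480 * 3, stdout);
}

int main() {
    freenect_context* ctx = nullptr;
    freenect_device* dev = nullptr;

    if (freenect_init(&ctx, nullptr) < 0) return 1;
    // Only claim the camera, not the motor/audio subdevices.
    freenect_select_subdevices(ctx, FREENECT_DEVICE_CAMERA);
    if (freenect_open_device(ctx, &dev, 0) < 0) return 1;  // first Kinect on the bus

    freenect_set_video_mode(dev,
        freenect_find_video_mode(FREENECT_RESOLUTION_MEDIUM, FREENECT_VIDEO_RGB));
    freenect_set_video_callback(dev, video_cb);
    freenect_start_video(dev);

    // Pump USB events; each completed frame triggers video_cb.
    while (freenect_process_events(ctx) >= 0) {}

    freenect_stop_video(dev);
    freenect_shutdown(ctx);
    return 0;
}

The raw stream could then be piped into ffmpeg, e.g. ./kinect_raw | ffmpeg -f rawvideo -pixel_format rgb24 -video_size 640x480 -framerate 30 -i - out.mp4 (the Kinect's RGB camera runs at 30 fps); no X server is needed for any of this.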
-
How do I create and initialise a DXGI_FORMAT_NV12 resource in DX12 (source is AVFrame)
5 January 2023, by mike
I'm trying to create an NV12 resource as the source for a video encoder in DX12. While I intend to eventually populate the resource from the GPU, what I'm trying to do now is take an ffmpeg AVFrame I already have (in AV_PIX_FMT_YUV420P format) and create a texture in DXGI_FORMAT_NV12 format using that data.

I understand the NV12 format (https://learn.microsoft.com/en-us/windows/win32/medfound/recommended-8-bit-yuv-formats-for-video-rendering#nv12) has U and V interleaved, while AV_PIX_FMT_YUV420P doesn't.

My main question is: what does the D3D12_RESOURCE_DESC look like for an NV12 texture? Do I tell it I need more than one array/mip level to make it planar? Or do I just give it a single memory address with both planes laid out as per the NV12 format, and it figures out the subresources for me based on the format?

I understand that to read the data I define two SRVs, one for Y mapped to the Red channel and a second for U and V, but it's how I initialise it that's confusing me.
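For reference, a sketch of how such a descriptor might look, based on D3D12's documented handling of planar formats: NV12 is described as a single 2D texture (one mip level, one array slice), and the Y and interleaved-UV planes are addressed as subresources through PlaneSlice, not as extra array or mip levels. The width and height parameters are assumed inputs (NV12 requires both to be even):

#include <d3d12.h>

// Sketch: one NV12 texture; Y and UV are subresources (PlaneSlice 0
// and 1), so no extra array slices or mip levels are needed.
inline D3D12_RESOURCE_DESC DescribeNV12(UINT64 width, UINT height) {
    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
    desc.Width            = width;   // must be even for NV12
    desc.Height           = height;  // must be even for NV12
    desc.DepthOrArraySize = 1;
    desc.MipLevels        = 1;
    desc.Format           = DXGI_FORMAT_NV12;
    desc.SampleDesc.Count = 1;
    desc.Layout           = D3D12_TEXTURE_LAYOUT_UNKNOWN;
    desc.Flags            = D3D12_RESOURCE_FLAG_NONE;
    return desc;
}

// The two SRVs then pick one plane each via PlaneSlice.
inline void DescribeNV12Srvs(D3D12_SHADER_RESOURCE_VIEW_DESC& ySrv,
                             D3D12_SHADER_RESOURCE_VIEW_DESC& uvSrv) {
    ySrv = {};
    ySrv.Format                  = DXGI_FORMAT_R8_UNORM;   // Y in .r
    ySrv.ViewDimension           = D3D12_SRV_DIMENSION_TEXTURE2D;
    ySrv.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
    ySrv.Texture2D.MipLevels     = 1;
    ySrv.Texture2D.PlaneSlice    = 0;

    uvSrv = ySrv;
    uvSrv.Format               = DXGI_FORMAT_R8G8_UNORM;   // U in .r, V in .g
    uvSrv.Texture2D.PlaneSlice = 1;
}

Since AV_PIX_FMT_YUV420P keeps U and V in separate planes, the two chroma planes would still need interleaving into a single UV buffer before being copied into plane 1; one option is to let libswscale convert the frame straight to AV_PIX_FMT_NV12, which sidesteps the manual interleave.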