Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (70)

  • Organizing by category

    17 May 2013, by

    In MediaSPIP, a section goes by two names: "category" and "rubrique".
    The various documents stored in MediaSPIP can be filed in different categories. A category can be created by clicking "publish a category" ("publier une catégorie") in the publish menu at the top right (after logging in). A category can also be placed inside another category, which means you can build a whole tree of categories.
    The next time a document is published, the newly created category will be offered (...)

  • Retrieving information from the master site when installing an instance

    26 November 2010, by

    Purpose
    On the main site, a shared (mutualised) instance is defined by several things: the data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table, corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the mutualised instance;
    It can therefore be quite useful to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)

  • Support for all types of media

    10 April 2011

    Unlike many programs and other modern document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or text content, code and other formats (OpenOffice, Microsoft Office (spreadsheets, presentations), web (html, css), LaTeX, Google Earth) (...)

On other sites (2476)

  • c# RTSP audio to FFMPEG to SpeechRecognitionEngine

    18 April 2016, by MrH40XX

    I’m trying to get an audio stream (from any source: a file, another stream, ...) into the Microsoft speech recognition engine.

    So far I’ve got:

    ffmpeg.exe -rtsp_transport tcp -i rtsp://%_return1%/audio -acodec pcm_u16le -f rtp rtp://localhost:2222

    Then, inside my code, I have:

    // _engine is a field on the containing class; feed it the 16 kHz, 16-bit mono stream from the RTP client
    this._engine = new SpeechRecognitionEngine(CultureInfo.CurrentCulture);
    this._engine.SetInputToAudioStream(this._rtpClient.AudioStream,
        new SpeechAudioFormatInfo(16000, AudioBitsPerSample.Sixteen, AudioChannel.Mono));

    Then I have the events registered:

    this._engine.SpeechRecognized += this.SpeechRegocnized;

    this._engine.SpeechDetected += this.EngineOnSpeechDetected;

    I’m not sure about the codec settings... I’ve tried other codecs, but they don’t work.
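
    One thing that might be worth checking (an untested guess, not a confirmed fix): SpeechAudioFormatInfo(16000, AudioBitsPerSample.Sixteen, AudioChannel.Mono) describes signed 16-bit PCM at 16 kHz mono, while the command above sends pcm_u16le, and ffmpeg’s RTP muxer normally carries linear PCM as big-endian L16. A command along these lines may match the engine’s expected format more closely (the receiving _rtpClient would still need to depacketize and byte-swap to little-endian):

    ffmpeg.exe -rtsp_transport tcp -i rtsp://%_return1%/audio -acodec pcm_s16be -ar 16000 -ac 1 -f rtp rtp://localhost:2222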

  • Is there any open source solution to display a remote stream inside a Hololens2 UWP Vuforia application ?

    19 April 2023, by T777

    What do we need?

    We are trying to develop an application for quality management in which we show a hologram on a metal part as an assistance marking (using HoloLens 2 + Vuforia + Model Targets). The employee uses a sensor to follow this assistance marking, and the data is analyzed live by a test device. The results are output on a screen / are visible in a closed-source application from the manufacturer of the test device.

    Capturing of the video output:
    The current plan is to capture the video stream of the test device via a capture card, add a video panel (MRTK2) inside the Vuforia app, and stream the captured video to the HoloLens 2 using OBS or an OpenCV Python script for screen recording.
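
    One possible way to realise this capture-and-stream step (a sketch only, untested; the device name, server address and port are placeholders, and mediamtx is just one of the streaming servers mentioned further down) would be to push the capture card feed to an RTSP server with ffmpeg on Windows instead of OBS:

    ffmpeg -f dshow -i video="Capture Card" -c:v libx264 -preset ultrafast -tune zerolatency -f rtsp rtsp://<server-ip>:8554/test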

    


    What we have tried so far

    


    1) Sending a raw UDP stream
    via RTMP, decoding and converting with a GStreamer server, and writing our own receiving library in Unity.
    Result: temporarily stopped, because receiving the UDP streams needs connection/session management (signalling), frame syncing, and agreement on video size, color format, frame rate, etc., and we have no solution. Implementing any of this ourselves would be highly complex and would consume a lot of time.

    2) Using available protocols that I could find on the web
    There are already some protocols developed for session creation and streaming:

    


      

    • HTTP streaming (HLS) (transport + session)
    • RTMP (transport + session)
    • RTP (transport) + RTSP (session)
    • WebRTC: possible with different protocol stacks, RTP/UDP/TCP (transport) + SDP (a standardized format for video parameters) + ICE (p2p) / WHIP (HTTP, client-server) / WebSocket (client-server) as signaling protocols, and there are some good open source streaming servers (GStreamer, mediamtx and SRS)


    


    When using these, the video will typically be encoded with H.264 (x264) and needs to be decoded on the HoloLens 2. There are APIs to native C/C++ (hardware) decoding libraries like unity-vlc and ffmpeg.NET, which need the ffmpeg media libraries. I could figure out (not tested) that there is a hardware H.264 decoder on the HoloLens 2, but I have no clue how to access it, since I couldn’t discover any information about HoloLens 2 media libraries.

    


    3) Using Unity packages

    


    


    Will be testing other compile options tomorrow..

    


      

    • Mixed Reality WebRTC (https://github.com/microsoft/MixedReality-WebRTC):
      Various protocol support; Microsoft brought WebRTC specifically to HoloLens.
      Deprecated, and as far as I can see it only supports HoloLens 1 and ARM32, so I cannot evaluate whether trying it is worth the effort.


    


    What are the next options?

    • Developing a raw UDP streaming library with Unity directly.
    • Rebuilding the application to be compatible with visionlib (ARM32) and MixedReality-WebRTC (ARM32).
    • Porting ffmpeg + an API to UWP?
    • There also seem to be some efforts to make WebRTC generally available on UWP platforms: https://github.com/microsoft/winrtc


    


    The questions

    • Does Vuforia support ARM32?
    • How to access the hardware decoder of the HoloLens 2 via Unity code?


    


  • Using OpenCV 2.4.4 with FFmpeg in Windows

    22 December 2015, by aardvarkk

    I know there are other questions dealing with FFmpeg usage in OpenCV, but most of them appear to be outdated.

    By opening up the makefiles in CMake, I can verify that I’ve got the WITH_FFMPEG flag on. My output folder for the OpenCV build contains a bin folder, within which are Debug and Release folders, each containing a copy of a .dll file entitled opencv_ffmpeg244.dll. I can step into the source code of OpenCV when I create a VideoWriter and verify that the function pointers to the .dll get filled correctly. That much appears to be working.
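
    (For reference, WITH_FFMPEG is the standard OpenCV CMake option; assuming a typical out-of-source build, it would have been set at configure time with something like the line below, where the source path is a placeholder.)

    cmake -D WITH_FFMPEG=ON <path-to-opencv-source>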

    If I use the FOURCC code of CV_FOURCC_PROMPT, the following codecs work properly:

    • Microsoft Video 1
    • Intel IYUV codec
    • Logitech Video (I420)
    • Cinepak Codec by Radius
    • Full Frames (Uncompressed)

    The following codecs do not work properly (i.e. they produce a 0 KB video file):

    • Microsoft RLE

    If my understanding is correct, using FFMPEG should allow for encoding video using a whole bunch of new codecs (x264, DIVX, XVID, and so on). However, none of these appear in the prompt. Manually setting them by their FOURCC codes using the macro CV_FOURCC(...) also doesn’t work. For instance, using this: CV_FOURCC('X','2','6','4') produces the message:

    Could not find encoder for codec id 28: Encoder not found

    and makes a video file of size 0 KB.

    Using this: CV_FOURCC('X','V','I','D') produces no error message, and makes a video file of 6 KB that will not play in Windows Media Player or VLC.
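
    For what it’s worth, the pattern being attempted corresponds to a minimal 2.4-style snippet like the one below (a sketch only: the file name, frame size, FPS and frame content are arbitrary test values). If this exact pattern only succeeds for VFW codecs, the problem is more likely in the FFmpeg backend than in the calling code:

    #include <opencv2/opencv.hpp>

    int main()
    {
        // Ask for an XVID-encoded writer through whichever backend OpenCV picks
        // (VFW or opencv_ffmpeg244.dll). Size and FPS are arbitrary for the test.
        cv::VideoWriter writer("test_xvid.avi", CV_FOURCC('X', 'V', 'I', 'D'),
                               30.0, cv::Size(640, 480));
        if (!writer.isOpened())
            return 1;  // the requested codec is not available in the active backend

        cv::Mat frame(480, 640, CV_8UC3, cv::Scalar(0, 128, 255));
        for (int i = 0; i < 90; ++i)
            writer << frame;  // about three seconds of a solid-colour clip
        return 0;
    }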

    I tried manually downloading the Xvid codec from Xvid.org. Once that was installed, it appeared under the VFW selection in the prompt, and the encoding worked properly. So it’s close to a solution, but if I try to set the FOURCC code directly, it still fails as above! I have to pick it from the prompt every time. Isn’t FFmpeg supposed to include a whole bunch of codecs? If so, why am I manually downloading the codec instead of using the one built into FFmpeg?

    What am I missing here? Is there a way to check that FFMPEG is "enabled"? It seems like the only codecs available in the prompt are VFW codecs, not the FFMPEG ones. The .dll has been built and is sitting in the same folder as the executable, but it appears it’s not being used in any way.

    Lots of related questions here. Hoping to find somebody knowledgeable about the FFmpeg implementation in OpenCV and with some knowledge of how all of these pieces fit together.