
Other articles (72)
-
Keeping control of your media in your hands
13 April 2011 — The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)
-
Personalise by adding your logo, banner or background image
5 September 2013 — Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
MediaSPIP v0.2
21 June 2013 — MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file available here contains only the MediaSPIP sources, as a standalone version.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)
On other sites (9285)
-
How to "unconcatenate" an MP4 file?
17 April 2023, by arne — I used LosslessCut (which uses ffmpeg) to concatenate a bunch of MP4 files with ffmpeg's concat demuxer. Unfortunately, I didn't check the result before deleting the source clips. The first clip had a different format than the rest. The resulting clip has all the audio and the video from the first clip; the rest of the video gives decoding errors.


I'm looking for a way to "unconcatenate" the file and get the original video back. I believe it's still in the container, just incorrectly labeled as 1920x1080x50 hevc video.


I believe I should export the audio and video streams, cut away the first frames of both streams up to the point where the first clip ends, then change the format of the video stream, and finally put them back into a container.


I'm not sure which tools and commands to use to cut the video stream, or how to force the correct video format on the stream.


I've read the ffmpeg documentation, but it's vast and my use case isn't directly covered. I'm planning to play around with different tools, but I'm new to the subject and thought I'd ask first.
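To make the plan concrete, this is the kind of thing I had in mind; a rough sketch only, where the file names, the 00:02:00 clip boundary, and the assumption that the remainder really is HEVC are all placeholders:

# 1) Stream-copy everything after the first clip into a new container;
#    -c copy avoids re-encoding, make_zero rebases the timestamps.
ffmpeg -ss 00:02:00 -i merged.mp4 -c copy -avoid_negative_ts make_zero rest.mp4

# 2) If the container metadata is still wrong, dump the raw video
#    elementary stream and remux it, letting ffmpeg re-detect its
#    real parameters.
ffmpeg -i rest.mp4 -an -c:v copy -bsf:v hevc_mp4toannexb rest.hevc
ffmpeg -fflags +genpts -i rest.hevc -i rest.mp4 -map 0:v -map 1:a -c copy fixed.mp4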


-
Audio alternative to FFmpeg - Core Audio iOS
9 June 2021, by cs guy — I have been using FFmpeg Android for a music app I'm working on. I built a custom audio engine from scratch with C++ and FFmpeg; it works amazingly well and fulfilled all my needs. However, because FFmpeg is LGPL-licensed, my research suggests it may not be usable under the app stores' policies. I'm not a lawyer, and I don't have the money to hire one for commercial advice, so I am thinking of replacing FFmpeg with another audio decoding and processing library. I plan to feed the custom decoded data to audio devices through Apple's Core Audio library.


Here are my needs:


- Decode Ogg files
- Encode PCM data as AAC files
- Add post-processing FX (such as a low-pass filter) to the decoded data

So what I am asking for is an answer to one of the following:


- Could FFmpeg really not be used on the App Store because of LGPL static-linking issues? (I looked at the most famous apps that use FFmpeg on Android, and none of them use FFmpeg on iOS.)
- If I were to replace FFmpeg with another library, what is the best alternative to work with? Has anyone actually been in the same situation I'm in?

I also tried AudioKit, but it has a critical problem that fails to meet one of my requirements, so I dropped it.


I am looking for advice here. Thanks!


-
Livestream WebVTT with HLS
10 November 2022, by kltye — I've implemented an HLS service with ffmpeg (which pulls a live stream from nginx-rtmp). That all works fine, but now I'm wondering what kind of programming pattern I should use to get live captioning working.


I'm planning on using ffmpeg to output the incoming mp4 stream as multiple WAV chunks (i.e., the same way HLS fMP4 parts are created), and then sending those chunks over to Azure Cognitive Services for speech-to-text recognition. My question is: what do I do when I receive the speech results? Do I dump the VTT file into the same directory as my HLS chunks, and then serve everything up with a single m3u8 file (audio/video tracks along with the text track)?
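For the chunking step I'm imagining something like the following (the RTMP URL, the 4-second chunk length, and the 16 kHz mono PCM format are my assumptions about what the speech service wants, not tested values):

# Split the incoming live audio into fixed-length WAV chunks for the
# speech-to-text service; video is dropped with -vn.
ffmpeg -i rtmp://localhost/live/stream \
       -vn -ac 1 -ar 16000 -c:a pcm_s16le \
       -f segment -segment_time 4 -reset_timestamps 1 \
       chunk_%05d.wav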


Currently ffmpeg is updating the m3u8 playlist for HLS clients; would it be possible for me to create a separate m3u8 playlist just for the VTT files, and serve it concurrently with the "regular" HLS playlist? Also, time synchronization seems difficult, because I'll be sending discrete WAV files over to Azure, so the VTT timestamps are going to be relative to the chunk I'm sending.
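From what I've read, HLS does support this through a subtitles rendition declared in the master playlist, something like the sketch below (the URIs and bandwidth are placeholders):

#EXTM3U
#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="English",DEFAULT=YES,AUTOSELECT=YES,LANGUAGE="en",URI="subs.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=2000000,SUBTITLES="subs"
video.m3u8

Each WebVTT segment referenced by subs.m3u8 would then carry an X-TIMESTAMP-MAP header mapping its chunk-relative cue times onto the stream's timeline, which is presumably where my offset problem would have to be corrected.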


Help! I've done searches online, and I grasp the various issues, but I'm not sure how to plumb them all together.