
Media (2)
-
SPIP - plugins - embed code - Example
2 September 2013
Updated: September 2013
Language: French
Type: Image
-
Publishing an image simply
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (18)
-
MediaSPIP Player: potential problems
22 February 2011
The player does not work on Internet Explorer
On Internet Explorer (at least versions 7 and 8), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module.
If the configuration of this Apache module contains a line that looks like the following, try removing it or commenting it out to see whether the player then works correctly: /** * GeSHi (C) 2004 - 2007 Nigel McNie, (...)
-
Custom menus
14 November 2010
MediaSPIP uses the Menus plugin to manage several configurable menus for navigation.
This gives channel administrators the ability to configure these menus in fine detail.
Menus created when the site is initialised
By default, three menus are created automatically when the site is initialised: the main menu; identifier: barrenav; this menu is generally inserted at the top of the page after the header block, and its identifier makes it compatible with skeletons based on Zpip; (...)
-
XMP PHP
13 May 2011
According to Wikipedia, XMP stands for:
Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being based on XML, it handles a set of dynamic tags for use in the context of the Semantic Web.
XMP makes it possible to record information about a file as an XML document: title, author, history (...)
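
As a rough illustration of the kind of XML an XMP packet contains (a minimal, hand-written example; the title and author values are made up):

<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description rdf:about="" xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title><rdf:Alt><rdf:li xml:lang="x-default">Example title</rdf:li></rdf:Alt></dc:title>
      <dc:creator><rdf:Seq><rdf:li>Example author</rdf:li></rdf:Seq></dc:creator>
    </rdf:Description>
  </rdf:RDF>
</x:xmpmeta>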
On other sites (4826)
-
FFmpeg trim filter fails to use sexagesimal time specification and the output stream is empty. Is it a bug and is there a fix?
29 November 2020, by Link-akro
With ffmpeg, the trim filter (or its audio variant atrim) malfunctions whenever I try to write a sexagesimal time specification for the boundary or duration parameters of the filter: the result is always an empty stream.

According to the documentation I already linked, generated on November 26, 2020 as of this writing, the time duration specification should be supported.

start, end, and duration are expressed as time duration specifications

Quote of the spec:

[-][HH:]MM:SS[.m...]

[-]S+[.m...][s|ms|us]
I am working on Windows 10 64-bit and scripting in its CMD.exe command line, should it matter.


Here is an instance of trimming a video stream out of a media file, with hardcoded values for simplicity. The audio is retained unless I use atrim as well.
We may or may not append the setpts filter, namely setpts=PTS-STARTPTS, as recommended in the documentation, if we want to shift the stream to start at the beginning of the cut range.

ffmpeg -i sample-counter.mp4 -vf "trim=start='1:2':end='1:5'" sample-counter-trimmed.mp4



If I use plain decimal seconds, it works as intended.


ffmpeg -i sample-counter.mp4 -vf "trim=start='2':end='5'" sample-counter-trimmed.mp4
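
For reference, in this notation '1:2' is MM:SS, i.e. 62 seconds, and '1:5' is 65 seconds, so a plain-seconds equivalent of the failing command (a workaround sketch rather than an answer to whether this is a bug) would be:

ffmpeg -i sample-counter.mp4 -vf "trim=start=62:end=65" sample-counter-trimmed.mp4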



This is the banner of my ffmpeg build. As you can see, it is the latest build by gyan.dev at the moment I post this.


ffmpeg version 4.3.1-2020-11-19-full_build-www.gyan.dev Copyright (c) 2000-2020 the FFmpeg developers
 built with gcc 10.2.0 (Rev5, Built by MSYS2 project)
 configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-lzma --enable-libsnappy --enable-zlib --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libdav1d --enable-libzvbi --enable-librav1e --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint



The incorrect stream is always reported with a size of 0kB in the output of ffmpeg. I confirmed with
ffprobe Movie_Countdown-trim-2-5.mov -show_streams -show_entries format=duration
that the incorrect stream has a duration of zero.

Should I report it as a bug? Is there some correction or workaround?


I would prefer a solution with the trim filter itself, but failing that, a CMD batch scripting workaround.
We should not need a different filter such as select, or a seeking option; tutorials and questions/answers for those exist everywhere already. Scripting would be straightforward in a proper *nix shell and is taught everywhere, so it is not worth covering, whereas CMD answers are rare, so a scripting workaround for CMD would have some small worth, since writing one is considerably less straightforward than in a shell with modern arithmetic and parsing abilities built in or packaged.
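
If a CMD-side workaround is acceptable, here is a minimal batch sketch (assuming values of the form M:SS with two-digit seconds; the variable names are invented for illustration) that converts the sexagesimal values to plain seconds before calling trim:

@echo off
setlocal
rem Hypothetical inputs; the 1%%b-100 trick keeps a leading zero from being read as octal by set /a.
set "TRIM_START=1:02"
set "TRIM_END=1:05"
for /f "tokens=1,2 delims=:" %%a in ("%TRIM_START%") do set /a START_S=%%a*60+1%%b-100
for /f "tokens=1,2 delims=:" %%a in ("%TRIM_END%") do set /a END_S=%%a*60+1%%b-100
ffmpeg -i sample-counter.mp4 -vf "trim=start=%START_S%:end=%END_S%,setpts=PTS-STARTPTS" sample-counter-trimmed.mp4

The doubled %% is needed inside a batch file; typed interactively, a single % works.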


-
Discord.Net sending audio with ffmpeg only works once (the first time)
16 January 2023, by Esko
Context:


So I am writing a small bot that should play local audio files on command. I am using Discord.Net, ffmpeg, opus and libsodium.
I have a Speak() function that should, in theory,


Open ffmpeg -> encode the .mp3 -> create a PCM stream -> pump the encoded .mp3 to the output (PCM stream).


That looks like this:


public async Task Speak(IGuild guild, Sound.SoundName soundName)
{
    IAudioClient client;
    if (ConnectedChannels.TryGetValue(guild.Id, out client))
    {
        using (var ffmpeg = CreateStream(sound.Filename))
        using (var stream = client.CreatePCMStream(AudioApplication.Voice, 98304))
        using (var output = ffmpeg.StandardOutput.BaseStream)
        {
            try
            {
                await output.CopyToAsync(stream);
            }
            catch (Exception ex) { Console.WriteLine("Error " + ex.Message); Console.WriteLine($"- {ex.StackTrace}"); }
            finally { await stream.FlushAsync(); }
            Console.WriteLine("Spoken!");
        }
    }
}

private Process CreateStream(string _path)
{
    var process = Process.Start(new ProcessStartInfo
    {
        FileName = "ffmpeg",
        // Decode the input to raw 16-bit stereo PCM at 48 kHz on stdout
        Arguments = $"-hide_banner -loglevel panic -i \"{_path}\" -ac 2 -f s16le -ar 48000 pipe:1",
        UseShellExecute = false,
        RedirectStandardOutput = true,
    });
    return process;
}



And it does this, but only one time. To be more specific: when my bot joins a voice channel, it saves the channel's ID and the IAudioClient in a
private static readonly ConcurrentDictionary<ulong, IAudioClient> ConnectedChannels = new ConcurrentDictionary<ulong, IAudioClient>();
After that it automatically calls the Speak() function to play a hello.mp3 audio file. This process is done like this:

[Command("join", RunMode = RunMode.Async)]
[RequireUserPermission(GuildPermission.MentionEveryone)]
public async Task JoinChannel(IVoiceChannel channel = null)
{
    // Get the voice channel
    channel = channel ?? (Context.User as IGuildUser)?.VoiceChannel;
    if (channel == null) { await Context.Channel.SendMessageAsync("You are not in a VoiceChannel"); return; }

    // Saving the AudioClient
    var audioClient = await channel.ConnectAsync();
    Console.WriteLine($"Connected to channel {channel.Name}");

    ConnectedChannels.TryAdd(channel.Guild.Id, audioClient);

    await Task.Delay(1000);
    await Speak(Context.Guild, Sound.SoundName.hello);
}



This works perfectly fine; the audio plays.


Now that the bot is connected to a voice channel and the IAudioClient is saved to the dictionary, I should be able to call the Speak() function whenever I want and from wherever I want, as long as the bot is in the voice channel, right?


No, it doesn't.


And that brings me to my


Problem:


While the bot is now sitting silently in the voice channel, I call a "speak" command that looks like this in code:


[Command("speak")]
public async Task speakSingle()
{
    await Speak(Context.Guild, Sound.SoundName.Random);
}



But the bot remains silent, even though the speaking indicator in Discord lights up! What am I missing? I don't get it. Is it sending an empty stream?
Even when I disconnect the bot from the voice channel and reconnect it, it still won't send audio. The only thing that helps is reconnecting the bot to the server.
I am pretty new to C#, streams and async programming, so could somebody help me find the problem and fix it?
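
One thing that might be worth ruling out (purely a sketch, under the assumption that the previous run's ffmpeg process is still alive and holding its pipe open; disposing a Process object does not terminate the child process): wait for ffmpeg to exit before flushing and leaving the using block, along these lines:

using (var ffmpeg = CreateStream(sound.Filename))
using (var output = ffmpeg.StandardOutput.BaseStream)
using (var stream = client.CreatePCMStream(AudioApplication.Voice, 98304))
{
    try
    {
        await output.CopyToAsync(stream);
        ffmpeg.WaitForExit(); // ensure ffmpeg has finished before flushing and disposing
    }
    finally
    {
        await stream.FlushAsync(); // push any buffered audio to Discord before the stream is disposed
    }
}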

Errors (command prompt output):


1#
This occurs when I run the "speak" command; I'm getting a NullReferenceException with "Discord.WebSocket.pdb not loaded" shown in Visual Studio, although I couldn't find anything that's null...

17:34:17 Audio #1 System.Exception: WebSocket connection was closed
 ---> Discord.Net.WebSocketClosedException: The server sent close 4008: "Rate limited."
 at Discord.Net.WebSockets.DefaultWebSocketClient.RunAsync(CancellationToken cancelToken)
 --- End of inner exception stack trace ---
 at Discord.ConnectionManager.<>c__DisplayClass29_0.<<StartAsync>b__0>d.MoveNext()


2#
This occurs when the bot has rejoined the voice channel and automatically executes the Speak() function.

Error A task was canceled.
- at Discord.Audio.Streams.BufferedWriteStream.WriteAsync(Byte[] data, Int32 offset, Int32 count, CancellationToken cancelToken)
 at Discord.Audio.Streams.OpusEncodeStream.WriteAsync(Byte[] buffer, Int32 offset, Int32 count, CancellationToken cancelToken)
 at System.IO.Stream.CopyToAsyncInternal(Stream destination, Int32 bufferSize, CancellationToken cancellationToken)
 at Bot.Modules.VoiceChannel.Speak(IGuild guild, SoundName soundName) in Y:\Dokumente\Coding\C#\Vs_Studio\Bot\Bot\Modules\VoiceChannel.cs:line 90



This is my first question on Stack Overflow and I hope I provided enough context; if not, please tell me.


-
avformat/hls: Fixes overwriting existing #EXT-X-PROGRAM-DATE-TIME value in HLS playlist
1 December 2020, by Vignesh Ravichandran
avformat/hls: Fixes overwriting existing #EXT-X-PROGRAM-DATE-TIME value in HLS playlist
fix ticket: 8989
This is due to the following behavior in the current code:
1. The initial_prog_date_time gets set to the current local time
2. The existing playlist (.m3u8) file gets parsed and the segments
present are added to the variant stream
3. The new segment is created and added
4. The existing segments and the new segment are written to the
playlist file. The initial_prog_date_time from point 1 is used
for calculating "#EXT-X-PROGRAM-DATE-TIME" for the segments,
which results in incorrect "#EXT-X-PROGRAM-DATE-TIME" values
for existing segments
The following approach fixes this bug:
1. Add a new variable "discont_program_date_time" of type double
to HLSSegment struct
2. Store the "EXT-X-PROGRAM-DATE-TIME" value from the existing
segments in this variable
3. When writing to playlist file if "discont_program_date_time"
is set, then use that value for "EXT-X-PROGRAM-DATE-TIME" else
 use the value present in vs->initial_prog_date_time
Signed-off-by: Vignesh Ravichandran <vignesh.ravichandran02@gmail.com>
Signed-off-by: liuqi05 <liuqi05@kuaishou.com>
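
A rough illustration of the approach described above (the struct field name follows the commit text; the helper and surrounding code are a simplified sketch, not the actual FFmpeg source):

typedef struct HLSSegment {
    /* ... existing fields ... */
    double discont_program_date_time; /* EXT-X-PROGRAM-DATE-TIME parsed from an existing playlist, 0 if absent */
} HLSSegment;

/* When writing the playlist, prefer the value preserved from the old playlist;
 * otherwise fall back to the time computed when the muxer started. */
static double segment_prog_date_time(const HLSSegment *seg, double initial_prog_date_time)
{
    return seg->discont_program_date_time ? seg->discont_program_date_time
                                          : initial_prog_date_time;
}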