
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (49)
-
Enabling/disabling features (plugins)
18 February 2011, by
To manage the addition and removal of extra features (plugins), MediaSPIP uses SVP as of version 0.2.
SVP makes it easy to activate plugins from the MediaSPIP configuration area.
To access it, go to the configuration area and then to the "Gestion des plugins" (plugin management) page.
By default, MediaSPIP ships with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work perfectly with each (...)
-
Enabling visitor registration
12 April 2011, by
It is also possible to enable visitor registration, which lets anyone open an account on the channel in question, for example for open projects.
To do so, go to the site configuration area and choose the "Gestion des utilisateurs" (user management) submenu. The first form shown corresponds to this feature.
By default, when it was initialized MediaSPIP created a menu item in the top menu of the page leading (...)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or later. If needed, contact the administrator of your MediaSPIP to find out.
On other sites (7850)
-
Error writing trailer of feed1.ffm Broken pipe
28 August 2016, by Bullgod
I'm using ffserver to stream videos, but I have a problem when I create the playlist to stream.
The problem is:
av_interleaved_write_frame(): Broken pipe
Error writing trailer of http://localhost:8090/feed1.ffm: Broken pipe
The command with the problem is:
# ffmpeg -i 16portrait.mp4 http://localhost:8090/feed1.ffm
ffmpeg version git-2016-08-27-dc7e5ad Copyright (c) 2000-2016 the FFmpeg developers
built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-17)
configuration: --prefix=/root/ffmpeg_build --extra-cflags=-I/root/ffmpeg_build/include --extra-ldflags=-L/root/ffmpeg_build/lib --bindir=/root/bin --extra-libs=-ldl --enable-gpl --enable-nonfree --enable-libfdk_aac --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libfreetype --enable-libspeex --enable-libtheora
libavutil 55. 29.100 / 55. 29.100
libavcodec 57. 54.100 / 57. 54.100
libavformat 57. 48.100 / 57. 48.100
libavdevice 57. 0.102 / 57. 0.102
libavfilter 6. 57.100 / 6. 57.100
libswscale 4. 1.100 / 4. 1.100
libswresample 2. 1.100 / 2. 1.100
libpostproc 54. 0.100 / 54. 0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '16portrait.mp4':
Metadata:
major_brand: isom
minor_version: 512
compatible_brands: isomiso2avc1mp41
date: 2013-11-29T13:19:09+0100
encoder: Lavf57.41.100
location-fra: +48.8789+002.3376+48.927994/
location: +48.8789+002.3376+48.927994/
Duration: 00:00:02.12, start: -0.025057, bitrate: 1226 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 320x568, 978 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
Metadata:
handler_name: VideoHandler
Stream #0:1(und): Audio: mp3 (mp4a / 0x6134706D), 44100 Hz, stereo, s16p, 256 kb/s (default)
Metadata:
handler_name: SoundHandler
[tcp @ 0x3f08e20] Connection to tcp://localhost:8090 failed (Connection refused), trying next address
[tcp @ 0x3f29ee0] Connection to tcp://localhost:8090 failed (Connection refused), trying next address
[libvpx @ 0x3f15cc0] v1.6.0
[ffm @ 0x3f8c640] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
Last message repeated 1 times
Output #0, ffm, to 'http://localhost:8090/feed1.ffm':
Metadata:
major_brand: isom
minor_version: 512
compatible_brands: isomiso2avc1mp41
date: 2013-11-29T13:19:09+0100
creation_time: now
location-fra: +48.8789+002.3376+48.927994/
location: +48.8789+002.3376+48.927994/
encoder: Lavf57.48.100
Stream #0:0(und): Audio: vorbis (libvorbis), 22050 Hz, mono, fltp, 64 kb/s (default)
Metadata:
handler_name: SoundHandler
encoder: Lavc57.54.100 libvorbis
Stream #0:1(und): Video: vp8 (libvpx), yuv420p, 720x576, q=10-42, 400 kb/s, 29.97 fps, 1000k tbn, 25 tbc (default)
Metadata:
handler_name: VideoHandler
encoder: Lavc57.54.100 libvpx
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 800000 vbv_delay: -1
Stream mapping:
Stream #0:1 -> #0:0 (mp3 (native) -> vorbis (libvorbis))
Stream #0:0 -> #0:1 (h264 (native) -> vp8 (libvpx))
Press [q] to stop, [?] for help
av_interleaved_write_frame(): Broken pipe
Error writing trailer of http://localhost:8090/feed1.ffm: Broken pipe
frame= 3 fps=0.0 q=0.0 Lsize= 60kB time=00:00:00.08 bitrate=6143.9kbits/s dup=1 drop=0 speed=0.44x
video:50kB audio:0kB subtitle:0kB other streams:0kB global headers:4kB muxing overhead: 19.768417%
Conversion failed!
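The log above also shows the two TCP connections to localhost:8090 being refused just before the broken pipe, which usually means nothing was listening on that port when ffmpeg pushed the feed. One common explanation is that ffserver was not running (or not listening on 8090) at the time; the usual ordering is to start ffserver first with a configuration that defines feed1.ffm, and only then push to it with ffmpeg. A minimal sketch, assuming the configuration lives at /etc/ffserver.conf (that path is an assumption, not taken from the question):
# ffserver -f /etc/ffserver.conf &
# ffmpeg -i 16portrait.mp4 http://localhost:8090/feed1.ffm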
-
VPS as video relay server using ffmpeg and ffserver?
15 July 2016, by Luke E
I'm working on a project where I need to send a video stream from a laptop, through a router I don't have control of, to an Android phone on another network I don't have control of. The project also involves sending a stream of sensor data in the opposite direction, for which I'm using a VPS and TCP hole punching quite successfully.
So, I thought to myself, why not just stream the video from ffmpeg on my laptop to ffmpeg on the VPS, and then relay that via ffserver to my Android phone? That's what I've been trying to do for a good while now, but haven't had much luck. The stream is recognized by the server, but when putting it out over ffserver I get errors. Here is my ffserver.conf:
HTTPPort 9000
HTTPBindAddress
RTSPPort 5454
RTSPBindAddress
<Feed feed1.ffm>
File /tmp/feed1.ffm
FileMaxSize 50M
</Feed>
<stream>
Feed feed1.ffm
Format rtp
VideoSize 960x540
VideoBufferSize 0
VideoFrameRate 30
NoAudio
AVOptionVideo flags +global_header
</stream>
And here's the command I'm using to take the received stream on the VPS and transmit it to ffserver:
./ffmpeg -protocol_whitelist file,udp,rtp -i foo.sdp -an -c copy -f rtp rtsp://:5454/feed1.ffm
When running this command, I get the error "rtsp://:5454/feed1.ffm: Protocol not found".
Any advice?
I also thought about doing this with netcat, just forwarding all data received on the VPS on a certain port 1234 to any client connected to, say, port 1235, but this appears to be much more difficult than I thought.
Thanks again for any help, and hopefully this problem is interesting to others as well.
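For reference, ffserver normally receives its input when ffmpeg pushes to the HTTP feed URL defined in ffserver.conf (HTTPPort 9000 above), while clients pull the resulting stream from the RTSP port; writing directly to the RTSP port with -f rtp is not how the feed gets filled. A minimal sketch of pushing the received stream into the feed instead, keeping the rest of the command unchanged (not verified against this setup, and the feed may require re-encoding rather than -c copy):
./ffmpeg -protocol_whitelist file,udp,rtp -i foo.sdp -an http://localhost:9000/feed1.ffm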
-
Why is my DSharpPlus Slash Command not playing my desired sound using FFMPEG in C#?
19 May 2023, by IngeniousThoughts
I'm having a problem with my ffmpeg setup: the commands work fine, but my play command doesn't play my desired sound.


//The command.
 [SlashCommand("play", "plays a sound in a voice channel.")]
 public async Task HowlCommand(InteractionContext ctx, [Choice("ChoiceName", "C:\\My\\Program\\Directory\\Name\\MySound.mp3")]
 [Option("Sound", "Please select a Sound")] string filepath)
 {
 //Creates a slash command used response.
 //Also removes the error message.
 await ctx.CreateResponseAsync(InteractionResponseType.ChannelMessageWithSource, new DiscordInteractionResponseBuilder()
 .WithContent("Playing sound in voice channel. Please wait just a moment!"));
 
 //Checks if the user is not a bot to send the message.
 if (ctx.Member.IsBot)
 {
 return;
 }
 else
 {
 if(filepath != "C:\\My\\Program\\Directory\\Name\\MySound.mp3")
 {
 var embedmessage = new DiscordMessageBuilder()
 .AddEmbed(new DiscordEmbedBuilder()
 
 .WithAuthor("BotName", null, ctx.Client.CurrentApplication.Icon)
 .WithTitle("Please select the following sound to play:")
 .WithImageUrl(ctx.Client.CurrentApplication.Icon)
 .WithFooter("VoiceChannel Error.", "ImageURL.png")
 .WithTimestamp(DateTime.Now)
 .Build()
 
 );
 
 //Makes the command wait 5 seconds before sending the rest of the command data.
 await Task.Delay(TimeSpan.FromSeconds(5));
 
 //Sends the embed in a message.
 await ctx.Channel.SendMessageAsync(embedmessage);
 }
 else
 {
 //Makes the command wait 5 seconds before sending the rest of the command data.
 await Task.Delay(TimeSpan.FromSeconds(5));
 
 
 var vnext = ctx.Client.GetVoiceNext();
 var vnc = vnext.GetConnection(ctx.Guild);
 
 //if null throws exception.
 if (vnc == null)
 throw new System.InvalidOperationException("Not connected in this guild.");
 
 
 //Gets the mp3 file to use.
 var ffmpeg = Process.Start(new ProcessStartInfo
 {
 FileName = "ffmpeg",
 Arguments = $@"-i ""{filepath}"" -ac 2 -f s16le -ar 48000 pipe:1",
 RedirectStandardOutput = true,
 UseShellExecute = false
 });
 Stream pcm = ffmpeg.StandardOutput.BaseStream;
 
 VoiceTransmitSink transmit = vnc.GetTransmitSink();
 await pcm.CopyToAsync(transmit);
 vnc.GetTransmitSink().VolumeModifier = 5;
 
 //Makes the command wait 10 seconds before sending the rest of the command data.
 await Task.Delay(TimeSpan.FromSeconds(10));
 
 //Disconnects the bot from the voice channel.
 vnc.Disconnect();
 }
 }
 }



//The command.
 [SlashCommand("join", "Joins a voice channel.")]
 public async Task JoinChannel(InteractionContext ctx, [Choice("MyVoiceChannel", "VoiceChannelName")]
 [Option("VoiceChannel", "Please choose a Voice Channel.")] DiscordChannel channel)
 {
 //Creates a slash command used response.
 //Also removes the error message.
 await ctx.CreateResponseAsync(InteractionResponseType.ChannelMessageWithSource, new DiscordInteractionResponseBuilder()
 .WithContent("Joining voice channel. Please wait just a moment!"));
 
 //Checks if the user is not a bot to send the message.
 if (ctx.Member.IsBot)
 {
 return;
 }
 else
 {
 if (channel.Name != "MyVoiceChannelName")
 {
 var embedmessage = new DiscordMessageBuilder()
 .AddEmbed(new DiscordEmbedBuilder()
 
 .WithAuthor("BotName", null, ctx.Client.CurrentApplication.Icon)
 .WithTitle("Please Create The Following Voice Channel:")
 .WithImageUrl(ctx.Client.CurrentApplication.Icon)
 .AddField("VoiceChannel:", "**BotName**" + Environment.NewLine + "Is Case Sensitive: **Yes**")
 .WithFooter("VoiceChannel Error.", "ImageURL.png")
 .WithTimestamp(DateTime.Now)
 .Build()
 
 );
 
 //Makes the command wait 5 seconds before sending the rest of the command data.
 await Task.Delay(TimeSpan.FromSeconds(5));
 
 //Sends the embed in a message.
 await ctx.Channel.SendMessageAsync(embedmessage);
 }
 else
 {
 //Makes the command wait 5 seconds before sending the rest of the command data.
 await Task.Delay(TimeSpan.FromSeconds(5));
 
 
 channel = ctx.Member.VoiceState?.Channel;
 await channel.ConnectAsync();
 
 }
 }
 }
 
 }
}



public sealed class Program
 {
 public static DiscordClient Client { get; private set; }
 public static InteractivityExtension Interactivity { get; private set; }
 public static CommandsNextExtension Commands { get; private set; }
 public static VoiceNextExtension VoiceNext { get; private set; }
 
 
 static async Task Main(string[] args)
 {
 
 //Main Window configs specifying the title name and color.
 Console.BackgroundColor = ConsoleColor.Black;
 Console.ForegroundColor = ConsoleColor.Magenta;
 Console.Title = "BotName";
 
 //1. Get the details of your config.json file by deserialising it
 var configJsonFile = new JSONReader();
 await configJsonFile.ReadJSON();
 
 //2. Setting up the Bot Configuration
 var discordConfig = new DiscordConfiguration()
 {
 Intents = DiscordIntents.All,
 Token = configJsonFile.token,
 TokenType = TokenType.Bot,
 AutoReconnect = true
 };
 
 //3. Apply this config to our DiscordClient
 Client = new DiscordClient(discordConfig);
 
 //4. Set the default timeout for Commands that use interactivity
 Client.UseInteractivity(new InteractivityConfiguration()
 {
 Timeout = TimeSpan.FromMinutes(2)
 });
 
 //5. Set up the Task Handler Ready event
 Client.Ready += OnClientReady;
 
 //6. Set up the Commands Configuration
 var commandsConfig = new CommandsNextConfiguration()
 {
 StringPrefixes = new string[] { configJsonFile.prefix },
 EnableMentionPrefix = true,
 EnableDms = true,
 EnableDefaultHelp = false,
 };
 
 Commands = Client.UseCommandsNext(commandsConfig);
 
 //7. Register your commands
 var slashCommandsConfig = Client.UseSlashCommands();
 slashCommandsConfig.RegisterCommands<mysoundscommand>(MyGuildID);
 
 //8. Allows usage of voice channels.
 var VoiceNext = Client.UseVoiceNext();
 
 //9. Connect to get the Bot online
 await Client.ConnectAsync();
 await Task.Delay(-1);
 }
 
 private static Task OnClientReady(DiscordClient sender, ReadyEventArgs e)
 {
 return Task.CompletedTask;
 }
 }


Source code link:




I tried playing the play slash command, but wasn't expecting it to not play the mp3 file.


Everything else worked fine, except that when it should transmit the sound, it doesn't play it.
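One detail worth noting in the play command above is that the PCM stream is copied into the transmit sink but the sink is never flushed and the ffmpeg process is not awaited, and VolumeModifier is set only after the copy has already completed, so the bot can disconnect before any buffered audio is actually sent. Below is a minimal sketch of the playback portion using the usual DSharpPlus VoiceNext pattern, not a verified fix; it assumes the same filepath variable and an already established VoiceNext connection vnc, and the remaining names are illustrative.

//Starts ffmpeg and decodes the mp3 to raw 48 kHz stereo PCM on its standard output.
var ffmpeg = Process.Start(new ProcessStartInfo
{
    FileName = "ffmpeg",
    Arguments = $@"-i ""{filepath}"" -ac 2 -f s16le -ar 48000 pipe:1",
    RedirectStandardOutput = true,
    UseShellExecute = false
});
Stream pcm = ffmpeg.StandardOutput.BaseStream;

//Gets the transmit sink once and sets the volume before any audio is sent
//(setting it after CopyToAsync has no effect on audio that was already written).
VoiceTransmitSink transmit = vnc.GetTransmitSink();
transmit.VolumeModifier = 2.0;

//Copies the decoded audio into the sink, then flushes so buffered audio is pushed out.
await pcm.CopyToAsync(transmit);
await transmit.FlushAsync();

//Waits until everything queued has actually been played before leaving the channel.
await vnc.WaitForPlaybackFinishAsync();
vnc.Disconnect();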