Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (8)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

  • Encoding and conversion into formats playable on the Internet

    10 April 2011

    MediaSPIP converts and re-encodes uploaded documents to make them playable on the Internet and automatically usable without any intervention from the content creator.
    Videos are automatically encoded into the formats supported by HTML5: MP4, Ogv and WebM. The "MP4" version is also used by the fallback Flash player required for older browsers.
    Audio documents are likewise re-encoded into the two formats usable with HTML5: MP3 and Ogg. The "MP3" version (...)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, no software is ever perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including its exact version; as precise an explanation of the problem as possible; if possible, the steps that led to the problem; and a link to the site/page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

On other sites (2826)

  • MacOS - how to choose audio device from terminal

    9 October 2024, by jon_two

    I've been working on a Python program to create audio and also play back existing sound files. I can spawn multiple processes and have them all play to the laptop speakers, but I was wondering if it was possible to send each signal to a separate sound device. This is so that I can apply effects to some processes without affecting the others.

    


    I'm using a MacBook and Python simpleaudio, which calls AudioToolbox to connect to the output device. I've also got ffmpeg installed, so I could use ffplay if that is easier. The pydub library takes this route: it exports the current wave to a temp file, then uses subprocess and ffplay to play it back.
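    For reference, this fallback path is simple to reproduce by hand: export the audio to a WAV file and hand it to ffplay. A minimal sketch (the file path is just an example; `-nodisp` suppresses the video window and `-autoexit` quits when playback finishes — both are standard ffplay flags):

    ```shell
    # play a WAV file the way pydub's ffplay-based playback does:
    # no display window, exit when the file ends, no banner noise
    ffplay -nodisp -autoexit -hide_banner /tmp/segment.wav
    ```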

    


    I can get a list of devices, but am not sure how to use this list to choose a device.

    


    % ffplay -devices
Devices:
 D. = Demuxing supported
 .E = Muxing supported
 --
  E audiotoolbox    AudioToolbox output device
 D  avfoundation    AVFoundation input device
 D  lavfi           Libavfilter virtual input device
  E sdl,sdl2        SDL2 output device
 D  x11grab         X11 screen capture, using XCB


    


    I did see a post that suggested using ffmpeg to list devices, but again I can't figure out how to use this list.

    


    % ffmpeg -f lavfi -i sine=r=44100 -f audiotoolbox -list_devices true -
Input #0, lavfi, from 'sine=r=44100':
  Duration: N/A, start: 0.000000, bitrate: 705 kb/s
  Stream #0:0: Audio: pcm_s16le, 44100 Hz, mono, s16, 705 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (pcm_s16le (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
[AudioToolbox @ 0x135e3f230] CoreAudio devices:
[AudioToolbox @ 0x135e3f230] [0]               Background Music, (null)
[AudioToolbox @ 0x135e3f230] [1]   Background Music (UI Sounds), BGMDevice_UISounds
[AudioToolbox @ 0x135e3f230] [2]         MacBook Air Microphone, BuiltInMicrophoneDevice
[AudioToolbox @ 0x135e3f230] [3]           MacBook Air Speakers, BuiltInSpeakerDevice
[AudioToolbox @ 0x135e3f230] [4]               Aggregate Device, ~:AMS2_Aggregate:0
Output #0, audiotoolbox, to 'pipe:':
  Metadata:
    encoder         : Lavf59.27.100
  Stream #0:0: Audio: pcm_s16le, 44100 Hz, mono, s16, 705 kb/s
    Metadata:
      encoder         : Lavc59.37.100 pcm_s16le
size=N/A time=00:00:05.06 bitrate=N/A speed=0.984x    
video:0kB audio:436kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Exiting normally, received signal 2.


    


    This does at least give me a recognisable list of devices. If I add more Aggregate Devices, can I play back different files to each device?
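    One avenue worth noting: ffmpeg's audiotoolbox output device accepts an `-audio_device_index` option, so each ffmpeg process can be pointed at a different CoreAudio device from the listing above. A hedged sketch, assuming the device indices from the log and example file names:

    ```shell
    # play one file to the MacBook speakers (index 3 in the listing above)
    ffmpeg -re -i first.wav -f audiotoolbox -audio_device_index 3 - &

    # play a second file to the Aggregate Device (index 4) in parallel
    ffmpeg -re -i second.wav -f audiotoolbox -audio_device_index 4 - &
    wait
    ```

    Whether this composes cleanly with effects applied per-process would still need testing, but it avoids having to route everything through the default output.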

    


  • c# how to capture audio from nvlc and raise Accord.Audio.NewFrameEventArgs

    30 September 2018, by MATRIX81

    I’m working on an application in C# that records video streams from IP cameras.

    I’m using Accord.Video.FFMPEG.VideoFileWriter and the nVLC C# wrapper.
    I have a class that captures audio from the stream using nVLC and should implement the IAudioSource interface, so I’ve used CustomAudioRenderer to capture sound data and then raised the NewFrame event, which carries the Signal object.
    The problem is that when the signal is saved to a video file, the sound is terrible (choppy) when recording from an RTSP stream, but fine when recording from the local mic (the laptop's).
    Here is the code that raises the event:

    public void Start()
       {
           _mFactory = new MediaPlayerFactory();
           _mPlayer = _mFactory.CreatePlayer<IAudioPlayer>();
           _mMedia = _mFactory.CreateMedia<IMedia>(Source);
           _mPlayer.Open(_mMedia);

           var fc = new Func<SoundFormat, SoundFormat>(SoundFormatCallback);
           _mPlayer.CustomAudioRenderer.SetFormatCallback(fc);
           var ac = new AudioCallbacks { SoundCallback = SoundCallback };
           _mPlayer.CustomAudioRenderer.SetCallbacks(ac);

           _mPlayer.Play();
       }

       private void SoundCallback(Sound newSound)
       {

           var data = new byte[newSound.SamplesSize];
           Marshal.Copy(newSound.SamplesData, data, 0, (int)newSound.SamplesSize);

           NewFrame(this, new Accord.Audio.NewFrameEventArgs(new Signal(data,Channels, data.Length, SampleRate, Format)));
       }

       private SoundFormat SoundFormatCallback(SoundFormat arg)
       {


           Channels = arg.Channels;
           SampleRate = arg.Rate;
           BitPerSample = arg.BitsPerSample;

           return arg;

       }

    And here is the code that handles the event:

    private void source_NewFrame(object sender, NewFrameEventArgs eventArgs)
       {
           Signal sig = eventArgs.Signal;

           duration += eventArgs.Signal.Duration;
           if (videoFileWrite == null)
           {


               videoFileWrite = new VideoFileWriter();
               videoFileWrite.AudioBitRate = sig.NumberOfSamples*sig.NumberOfChannels*sig.SampleSize;
               videoFileWrite.SampleRate = sig.SampleRate;
               videoFileWrite.FrameSize = sig.NumberOfSamples/sig.NumberOfFrames;


               videoFileWrite.Open("d:\\output.mp4");
           }
           if (isStartRecord)
           {
               DoneWriting = false;

               using (MemoryStream ms = new MemoryStream())
               {
                   encoder = new WaveEncoder(ms);
                   encoder.Encode(eventArgs.Signal);
                   ms.Seek(0, SeekOrigin.Begin);
                   decoder = new WaveDecoder(ms);
                   Signal s = decoder.Decode();
                   videoFileWrite.WriteAudioFrame(s);

                   encoder.Close();
                   decoder.Close();

               }
               DoneWriting = true;
           }
       }
  • Problem with ffplay from webcam stream using complex filters

    29 May 2022, by efelbar

    I'm trying to stream video from a webcam (at /dev/video2) through ffplay to scale and recolor it, add some text, and then reduce the number of colors with palettes. I don't get any errors, but running the ffplay command:


    ffplay -i /dev/video2 -vf "hflip,\
  colorbalance=\
    rs=0.4:\
    bs=-0.4\
  ,\
  scale=\
    trunc(iw/8):\
    trunc(ih/8)\
  ,\
  drawtext=\
    text=\
      'efelbar':\
      fontcolor=white:\
      fontsize=10:\
      box=1:\
      boxcolor=black:\
      boxborderw=5:\
      x=(w-text_w)/2:\
      y=(h-text_h)/2\
  ,\
  split[s0][s1];\
  [s0]palettegen=\
    max_colors=16\
  [p];\
  [s1][p]paletteuse"


    seems to stall, and fails to produce video output.


    Running the simpler command ffplay -i /dev/video2 -vf "split[s0][s1];[s0]palettegen=max_colors=16[p];[s1][p]paletteuse", which takes a stream from a webcam and (should) reduce the number of colors, results in it just sitting there without showing the actual output stream. This might just be a performance issue because I'm on older hardware, but the output doesn't reflect that.


    The output of that command is as follows :


    ffplay version n5.0 Copyright (c) 2003-2022 the FFmpeg developers
  built with gcc 11.2.0 (GCC)
  configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-amf --enable-avisynth --enable-cuda-llvm --enable-lto --enable-fontconfig --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libdav1d --enable-libdrm --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libiec61883 --enable-libjack --enable-libmfx --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librav1e --enable-librsvg --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxml2 --enable-libxvid --enable-libzimg --enable-nvdec --enable-nvenc --enable-shared --enable-version3
  libavutil      57. 17.100 / 57. 17.100
  libavcodec     59. 18.100 / 59. 18.100
  libavformat    59. 16.100 / 59. 16.100
  libavdevice    59.  4.100 / 59.  4.100
  libavfilter     8. 24.100 /  8. 24.100
  libswscale      6.  4.100 /  6.  4.100
  libswresample   4.  3.100 /  4.  3.100
  libpostproc    56.  3.100 / 56.  3.100
Input #0, video4linux2,v4l2, from '/dev/video2':B sq=    0B f=0/0
  Duration: N/A, start: 254970.739108, bitrate: 147456 kb/s
  Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 640x480, 147456 kb/s, 30 fps, 30 tbr, 1000k tbn


    I'm running this on a ThinkPad T420s, so I definitely wouldn't be surprised if my laptop just can't process video that quickly. If that is the case, suggestions for optimizations would be great!
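    One possible explanation besides performance: palettegen only emits its palette after it has seen the whole input stream, so on a live webcam feed it may simply wait forever. A common workaround (sketched here under that assumption; paths and duration are examples) is to build a palette once from a short capture, then apply it to the live stream:

    ```shell
    # 1) record a few seconds from the webcam and derive a 16-colour palette
    ffmpeg -f v4l2 -t 5 -i /dev/video2 -vf "palettegen=max_colors=16" -y /tmp/palette.png

    # 2) play the live stream through the fixed palette
    #    (the movie filter injects the palette image into the filtergraph)
    ffplay -f v4l2 -i /dev/video2 -vf "movie=/tmp/palette.png[p];[in][p]paletteuse"
    ```

    The palette is then static, which trades some colour fidelity for a graph that can run frame by frame.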
