

Other articles (69)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

        Distribution name   Version name       Version number
        Debian              Squeeze            6.x.x
        Debian              Wheezy             7.x.x
        Debian              Jessie             8.x.x
        Ubuntu              Precise Pangolin   12.04 LTS
        Ubuntu              Trusty Tahr        14.04

    If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above, or send us the fixes needed to add (...)

  • MediaSPIP Core: Configuration

    9 November 2010

    By default, MediaSPIP Core provides three configuration pages (these pages rely on the CFG configuration plugin): a page for the general configuration of the template set (squelettes); a page for configuring the site's home page; a page for configuring the sections (secteurs).
    It also provides an additional page, which only appears when certain plugins are enabled, for controlling their display and specific features (...)

  • User profiles

    12 April 2011

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in to the site.
    Users can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)

On other sites (5049)

  • ffmpeg cache whole video stream and save

    19 July 2016, by Luiz

    I have a security DVR that can stream a video recording over the RTSP protocol. I can record the playback with ffmpeg and save it to a file, but for a 20-minute video I have to either watch the video or wait out the whole playback time.
    I'm using:

    ffmpeg -i "rtsp://mystream" -r 15 -acodec copy -vcodec copy myvideo.mp4

    to do this.

    Is there any way to buffer the whole stream and just save it to a file with ffmpeg, without having to watch or wait through the whole 20-minute playback? I don't need to re-encode anything, since the stream's video format is good enough for me.

    Thanks in advance

  • LibAV - what approach to take for realtime audio and video capture?

    26 July 2012, by pollux

    I'm using libav to encode raw RGB24 frames to h264 and mux them into an flv. This works
    fine and I've streamed for more than 48 hours without any problems! My next step
    is to add audio to the stream. I'll be capturing live audio, and I want to encode it
    in real time using Speex, MP3 or Nellymoser.

    Background info

    I'm new to digital audio, so I might be doing things wrong. Basically, my application receives a "float" buffer with interleaved audio. This "audioIn" function is called by the application framework I'm using. The buffer contains 256 samples per channel,
    and I have 2 channels. Since I might be mixing up terminology, this is how I use the
    data:

    // input = array with interleaved float audio samples
    // bufferSize = 256 (samples per channel)
    // nChannels = 2
    void audioIn(float* input, int bufferSize, int nChannels) {
        // convert from float to S16, keeping the channels interleaved
        short* buf = new short[bufferSize * nChannels];
        for (int i = 0; i < bufferSize; ++i) {  // loop over all sample frames
            int dx = i * nChannels;
            buf[dx + 0] = (short)(input[dx + 0] * numeric_limits<short>::max());  // first channel
            buf[dx + 1] = (short)(input[dx + 1] * numeric_limits<short>::max());  // second channel
        }

        // hand the converted buffer to the libav wrapper
        av.addAudioFrame((unsigned char*)buf, bufferSize, nChannels);

        delete[] buf;
    }
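    The float-to-S16 mapping used in the snippet above can be sanity-checked in isolation. This is a standalone Python illustration (not libav code, and `float_to_s16` is a made-up helper name); note that inputs outside [-1, 1] should be clamped before scaling, or the cast would wrap around:

```python
SHRT_MAX = 32767  # numeric_limits<short>::max() on typical platforms

def float_to_s16(x):
    # clamp to [-1, 1] first; out-of-range floats would otherwise overflow
    x = max(-1.0, min(1.0, x))
    return int(x * SHRT_MAX)

samples = [-1.0, -0.5, 0.0, 0.5, 1.0, 1.5]
print([float_to_s16(s) for s in samples])
# [-32767, -16383, 0, 16383, 32767, 32767]
```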

    Now that I have a buffer where each sample is 16 bits, I pass this short* buffer to my
    wrapper av.addAudioFrame() function. In this function I create a buffer before I encode
    the audio. From what I've read, the AVCodecContext of the audio encoder sets the frame_size. This frame_size must match the number of samples in the buffer when calling avcodec_encode_audio2(). I believe this because of what is documented here.

    In particular, this line:
    "If it is not set, frame->nb_samples must be equal to avctx->frame_size for all frames except the last." (Please correct me if I'm wrong about this.)

    After encoding, I call av_interleaved_write_frame() to actually write the frame.
    When I use MP3 as the codec, my application runs for about 1-2 minutes and then my server, which is receiving the video/audio stream (flv, tcp), disconnects with the message "Frame too large: 14485504". This message is generated because the rtmp server is receiving a frame that is way too big, probably because I'm not interleaving correctly with libav.

    Questions:

    • There are quite a few bits I'm not sure of, even after going through the libav source code, so I'm hoping someone has a working example of encoding audio that comes from a buffer outside libav (i.e. from your own application). How do you create a buffer that is large enough for the encoder? How do you make "realtime" streaming work when you need to wait for this buffer to fill up?

    • As I wrote above, I need to keep track of a buffer before I can encode. Does anyone have code that does this? I'm using AVAudioFifo now. The functions that encode the audio and fill/read the buffer are here too: https://gist.github.com/62f717bbaa69ac7196be

    • I compiled with --enable-debug=3 and disabled optimizations, but I'm not seeing any
      debug information. How can I make libav more verbose?

    Thanks!
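    The frame_size accumulation the question describes (256-sample callbacks feeding an encoder that wants fixed-size frames) can be sketched language-agnostically. This minimal Python illustration is not libav code: `SampleFifo` is a made-up name standing in for AVAudioFifo, and 1152 is MP3's samples-per-frame:

```python
class SampleFifo:
    """Accumulates incoming audio chunks and yields fixed-size encoder frames."""

    def __init__(self, frame_size):
        self.frame_size = frame_size  # e.g. 1152 samples for MP3
        self.pending = []             # samples waiting for a full frame

    def push(self, samples):
        """Add one callback's worth of samples; return every complete frame now available."""
        self.pending.extend(samples)
        frames = []
        while len(self.pending) >= self.frame_size:
            frames.append(self.pending[:self.frame_size])
            del self.pending[:self.frame_size]
        return frames

fifo = SampleFifo(frame_size=1152)
emitted = []
for chunk_no in range(9):            # nine 256-sample callbacks
    emitted += fifo.push([0.0] * 256)
# 9 * 256 = 2304 samples = exactly two 1152-sample frames
```

    The point of the pattern is that the encoder is only called when a full frame_size worth of samples has arrived, which is why realtime capture needs this intermediate buffer.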

  • Change WAV, AIFF or MOV audio sample rate WITHOUT changing the number of samples

    6 March 2013, by John Pilgrim

    I need a very precise way to speed up audio.
    I am preparing films for OpenDCP, an open-source tool for making Digital Cinema Packages, for screening in theaters.
    My source files are usually QuickTime MOV files at 23.976fps with 48.000kHz audio.
    Sometimes my audio is a separate 48.000kHz WAV.
    (FWIW, the video frame rate of the source is actually 24/1.001 frames per second, which is a repeating decimal.)

    The DCP standard is based around a 24.000fps and 48.000kHz program, so both the audio and video of the source need to be sped up.
    The image processing workflow inherently involves converting the MOV to a TIF sequence, frame by frame, which is then assumed to be 24.000fps, so I don't have to get involved in the internals of the QT Video Media Handler.

    But speeding up the audio to match is proving to be difficult. Most audio programs cannot get the number of audio samples to line up with the retimed image frames. A 0.1% speed increase in Audacity results in the wrong number of samples. The only pathway that I have found that works is to use Apple Cinema Tools to conform the 23.976fps/48.000kHz MOV to 24.000fps/48.048kHz (which it does by changing the Quicktime headers) and then using Quicktime Player to export the audio from that file at 48.000kHz, resampling it. This is frame accurate.

    So my question is: are there settings in ffmpeg or sox that will precisely speed up the audio in a MOV, WAV or AIFF? I would like a cross-platform solution, so I am not dependent on Cinema Tools, which is macOS-only.

    I know this is a LOT of background. Feel free to ask clarifying questions!
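    The arithmetic behind the required pull-up can be checked exactly with standard-library Python (a worked example, not part of ffmpeg, sox, or Cinema Tools). It shows that the 23.976→24.000 fps speed-up is exactly a 1001/1000 factor, which is why conforming 48.000 kHz audio to 48.048 kHz and then resampling back to 48.000 kHz lands on a whole number of samples per frame:

```python
from fractions import Fraction

source_fps = Fraction(24000, 1001)      # "23.976" fps, exactly
target_fps = Fraction(24)               # DCP frame rate
speedup = target_fps / source_fps       # pull-up factor

audio_rate = 48000                      # Hz
conformed_rate = audio_rate * speedup   # rate after the pull-up

print(speedup)          # 1001/1000, i.e. a 0.1% speed-up
print(conformed_rate)   # 48048 Hz exactly
# After the pull-up, each 24 fps frame covers exactly
# 48048 / 24 = 2002 conformed samples, which resample to
# 48000 / 24 = 2000 samples per frame at 48.000 kHz.
```

    Because every quantity here is an exact rational, any tool that applies the factor as 1001/1000 (rather than a rounded decimal like 1.001) will keep the sample count frame-aligned.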