Advanced search

Media (91)

Other articles (43)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is intended for sites that publish documents of all types.
    It creates "media", meaning: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a given "media" article;

  • MediaSPIP Init and Diogène: MediaSPIP publication types

    11 November 2010, by

    When a MediaSPIP site is installed, the MediaSPIP Init plugin performs a number of operations, the main one being to create four main sections in the site and five form templates for Diogène.
    These four main sections (also called sectors) are: Medias; Sites; Editos; Actualités;
    For each of these sections, a specific form template of the same name is created. For the "Medias" section, a second "category" template is also created, making it possible to add (...)

On other sites (4682)

  • Custom IO writes only the header; the rest of the frames seem to be omitted

    18 September 2023, by Daniel

    I'm using libavformat to read packets from RTSP and remux them to fragmented MP4.

    


    Video frames are intact, meaning I don't want to transcode/modify/change anything.
Video frames shall be remuxed into MP4 in their original form (i.e. the NALUs shall remain the same).

    


    I have updated libavformat to the latest version (currently 4.4).

    


    Here is my snippet:

    


    //open input, probesize is set to 32, we don't need to decode anything
avformat_open_input

//open output with custom io
avformat_alloc_output_context2(&ofctx,...);
ofctx->pb = avio_alloc_context(buffer, bufsize, 1/*write flag*/, 0, 0, &writeOutput, 0);
ofctx->flags |= AVFMT_FLAG_NOBUFFER | AVFMT_FLAG_FLUSH_PACKETS | AVFMT_FLAG_CUSTOM_IO;

avformat_write_header(...);

//loop
av_read_frame()
LOGPACKET_DETAILS //<- this works, packets are coming
av_write_frame() //<- this doesn't work, my write callback is not called.
                 //   Also tried av_interleaved_write_frame(); doesn't seem to work either.

int writeOutput(void *opaque, uint8_t *buffer, int buffer_size) {
  printf("writeOutput: writing %d bytes\n", buffer_size);
  return buffer_size; // an AVIO write callback must return the number of bytes consumed
}


    


    avformat_write_header works; the callback prints the header correctly.

    


    I'm looking for the reason why my custom IO callback is not called after a frame has been read.

    


    There must be additional flags that should be set to tell avformat not to care about decoding and just write out whatever comes in.

    


    More information:
The input stream is VBR-encoded H.264. It seems av_write_frame calls my write function only for SPS, PPS or IDR frames; non-IDR frames are not passed at all.

    


    Update

    


    I found out that if I request an IDR frame every second (I can ask the encoder for it), writeOutput is called every second.

    


    I created a test: after a client joins, I request the encoder to create IDRs at 1 Hz for 10 seconds. Libav calls writeOutput at 1 Hz for those 10 seconds, but then the encoder reverts to creating an IDR only every 10 seconds, and libav then calls writeOutput only every 10 s, which makes my decoder fail. With 1 Hz IDRs the decoder is fine.
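    A likely explanation for the IDR-dependent behaviour described above: when the mov/mp4 muxer runs in fragmented mode with the frag_keyframe behaviour, it buffers all packets of the current fragment internally and only hands them to the IO context when the next keyframe closes the fragment, so the write callback fires exactly at the IDR cadence; AVFMT_FLAG_FLUSH_PACKETS does not help because the buffering happens inside the muxer, not in AVIO. A hedged sketch of two possible workarounds (the movflags names are real mov muxer options; the surrounding setup is abbreviated and assumed):

```c
// Option 1: cut a fragment on every frame instead of every keyframe
// (more container overhead, but the write callback fires per frame)
av_opt_set(ofctx->priv_data, "movflags",
           "+frag_every_frame+empty_moov+default_base_moof", 0);

// Option 2: keep keyframe-based fragmentation, but flush manually;
// passing NULL to av_write_frame flushes data buffered inside the muxer
av_write_frame(ofctx, pkt);
av_write_frame(ofctx, NULL); // force the current fragment out to the IO callback
```

    Either variant is a sketch under the assumption that the muxer was opened in fragmented mode; which trade-off is acceptable depends on how much per-fragment overhead the receiver tolerates.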

    


  • Run ffmpeg audio under Windows with Flutter

    11 January, by Chris

    I'd like to stream audio coming from my microphone in my Flutter app, in the Windows desktop version.

    


    Since there is no library that seems to do such a thing while supporting Windows desktop apps, I have tried using Process like this:

    


    // Start the FFmpeg process to capture audio
ffmpegProcess = await Process.start(
  'ffmpeg',
  [
    '-f', 'dshow', // Specify DirectShow input for Windows
    '-i', 'audio="$selectedMic"', // Input audio device (selected mic)
    '-f', 'wav', // Set audio format
    '-ar', '44100', // Set audio sample rate
    '-ac', '1', // Mono channel
    '-b:a', '128k', // Set audio bitrate
    'pipe:1', // Output to stdout
  ],
);

// Listen for data on stdout (audio stream)
ffmpegProcess.stdout.listen((data) async {
  // Send audio data to the server as it comes in
  await sendAudioToServer(data);
});


    


    I've tested the command directly in my terminal (not in the Flutter app) and it works fine.

    


    When I run this code in my Flutter app, Task Manager also shows an "ffmpeg" process, but somehow there is no stream output in Flutter, even after ensuring that the selectedMic variable is correct, or even when it's hardcoded.

    


    Since this command runs without issue in my terminal and even in Python, I am wondering why it does not work in Flutter.

    


    Starting VS Code as administrator also doesn't solve the issue (I wanted to check whether it's a permission issue).

    


    Also relevant: when I run the "ffmpeg -version" command, I get an output for the version, meaning that this is not an installation problem (the ffmpeg bin folder is in my PATH environment variable). The problem seems to come from recording from the microphone in Flutter, but I don't get why.

    


    ffmpegProcess = await Process.start('ffmpeg', ['-version']);  // this works


    


    I'd love to get some suggestions on where the problem could come from, or any kind of alternative solution.

    


  • avformat/sccdec : Don't use uninitialized data, fix crash, simplify logic

    1 October 2021, by Andreas Rheinhardt
    

    Up until now, the scc demuxer not only read the line that it intends
    to process, but also the next line, in order to be able to calculate
    the duration of the current line. This approach leads to unnecessary
    complexity and also to bugs: for the last line, the timing of the
    next subtitle is not only logically indeterminate, but also
    uninitialized, and the same applies to the duration of the last packet
    derived from it.* Worse yet, in case of e.g. an empty file, it is not
    only the duration that is uninitialized, but the whole timing as well
    as the line buffer itself.** The latter is used in av_strtok(), which
    could lead to crashes. Furthermore, the current code always outputs
    at least one packet, even for empty files.

    This commit fixes all of this: it stops using two lines at a time;
    instead only the current line is dealt with, and in case there is
    a packet after that, the duration of the last packet is fixed up
    after having already parsed it; consequently the duration of the
    last packet is left in its default state (meaning "unknown/up until
    the next subtitle"). If no further line could be read, processing
    is stopped; in particular, no packet is output for an empty file.

    *: Due to stack reuse it seems to be zero quite often; for the same
    reason Valgrind does not report any errors for a normal input file.
    **: While ff_subtitles_read_line() claims to always zero-terminate
    the buffer like snprintf(), it doesn't do so if it didn't read anything.
    And even if it did, it would not necessarily help here: the current
    code jumps over 12 bytes that it deems to have read even when it
    hasn't.

    Reviewed-by: Paul B Mahol <onemda@gmail.com>
    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

    • [DH] libavformat/sccdec.c