Media (1)

Keyword: - Tags -/biographie

Other articles (15)

  • What is an editorial?

    21 June 2013

    Write your point of view in an article. It will be filed in a section provided for that purpose.
    An editorial is a text-only article. Its purpose is to gather points of view in a dedicated section. A single editorial is featured on the home page; to read the previous ones, browse the dedicated section.
    You can customise the editorial creation form.
    Editorial creation form In the case of a document of the editorial type, the (...)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media-sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Possible deployments

    31 January 2010

    Two types of deployment can be considered, depending on two aspects: the installation method chosen (standalone or as a farm); and the expected number of daily encodings and level of traffic.
    Encoding video is a heavy process that consumes a great deal of system resources (CPU and RAM), so all of this needs to be taken into account. This system is therefore only feasible on one or more dedicated servers.
    Single-server version
    The single-server version consists of using only one (...)

On other sites (2491)

  • Streaming mp4a to localhost using udp and ffmpeg

    2 August 2017, by noswoscar

    I am using the following command to stream a video and its audio to localhost:
    ffmpeg -re -i out.mp4 -map 0:0 -vcodec libx264 -f h264 udp://127.0.0.1:1234 -map 0:1 -acodec libfaac -f mp4a udp://127.0.0.1:2020

    FFmpeg is not recognising my audio codec and my audio format, so I get the following error message:
    [image: Error]

    What audio format and codec do I need to use? The codec information of the video I wish to send is as follows:
    [image: Codecs used]

    When I convert the audio track to mp3 I can run the above command and stream the video and audio properly. However, I don’t want to convert all my video audio-tracks to mp3.

    (I am confused by all the encoders, decoders, and codec names in the FFmpeg documentation.) Is there a way of finding the right encoder to use with the mp4a audio codec other than reading through the whole list of codecs and options?

    Thanks.
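
    As an aside on that last question (this is not part of the original post): on the command line, ffmpeg -encoders lists every available encoder. If one is working at the libavcodec level instead, a minimal C++ sketch that enumerates the AAC encoders FFmpeg knows about could look like the following; it assumes a reasonably recent FFmpeg (4.0 or later, where av_codec_iterate() exists), and the file name is made up for illustration.

    // list_aac_encoders.cpp - hypothetical helper, not from the original post.
    // Build against FFmpeg, e.g.: g++ list_aac_encoders.cpp -lavcodec -lavutil
    extern "C" {
    #include <libavcodec/avcodec.h>
    }
    #include <cstdio>

    int main() {
       void *iter = nullptr;
       const AVCodec *codec = nullptr;

       // Walk the registered codec list and print every AAC encoder.
       while ((codec = av_codec_iterate(&iter)) != nullptr) {
           if (av_codec_is_encoder(codec) && codec->id == AV_CODEC_ID_AAC) {
               std::printf("encoder: %s (%s)\n", codec->name, codec->long_name);
           }
       }

       // The encoder FFmpeg would pick by default for AAC output.
       const AVCodec *dflt = avcodec_find_encoder(AV_CODEC_ID_AAC);
       if (dflt != nullptr) {
           std::printf("default AAC encoder: %s\n", dflt->name);
       }
       return 0;
    }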

  • Your guide to cookies, web analytics, and GDPR compliance

    25 February 2020, by Joselyn Khor — Analytics Tips, Privacy, Security

    It’s been almost two years since the GDPR came into effect and turned the online world on its head. Confusion around cookies, cookie consent and cookie compliance remains to this day. So we’d like to take this chance to talk more about this supposed “big bad” of the internet age.

    Online cookies seem to have a bad reputation, but are they as bad as they seem?

    To start, what are cookies on the internet?

    An internet cookie, a.k.a. an HTTP cookie, is a small piece of data sent from a website and stored on your computer or mobile device when you visit that site.

    Are all cookies bad?

    No. Cookies themselves are usually harmless as they can’t infect computers with malware. 

    They can also be helpful, both for the websites that use them and for the individuals visiting those websites. For example, when you shop online, cookies on ecommerce sites keep track of what you’re shopping for. Without that tracking, your cart would be empty every time you moved away from the site.

    For businesses/websites, cookies can be used for authentication (logins) and for tracking website user experience, for example tracking multiple visits to the same site in order to provide a better experience to returning customers.


    The not-so-sweet types of cookies:

    Cookies that contain personal data

    An example of a bad cookie is one that contains personal data directly in the cookie itself: for instance, when a website stores demographics or your name in a cookie, or stores survey results in a cookie. Using cookies in these ways is considered bad practice nowadays.

    Third-party cookies

    They can be used by websites to learn about your visit and activity across multiple websites. Cookies enter harmful territory when they are employed for “big brother” types of tracking, i.e. when they are used to build a virtual fingerprint of an individual whose activity is followed from website to website. For example, most advertising networks create third-party cookies in your browser when you view an ad, which lets these advertisers track users across websites and lets companies buy more targeted ads.

    Why does Matomo use cookies?


    For accurate reporting of new and returning visitors: Matomo uses cookies to store some information about visitors between visits. We also use cookies to remember whether someone gave consent to tracking or opted out of tracking.

    Types of cookies Matomo uses:

    • Matomo by default uses first-party cookies, set on the domain of your site.
    • Cookies created by Matomo start with: _pk_ref, _pk_cvar, _pk_id, _pk_ses. See a list of all Matomo cookies: https://matomo.org/faq/general/faq_146/

    Cookie-less tracking - disable cookies and ensure cookie compliance:

    It’s possible to disable tracking cookies in Matomo by adding a line to the JavaScript tracking code. When cookies are disabled, Matomo data becomes slightly less accurate. Also, when cookies are disabled, there may still be a few cookies created in specific cases.

    If you disable cookies, Matomo tries to detect unique visitors by a fingerprint based on a few browser attributes: operating system, browser, browser plugins, IP address and browser language.
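
    As a toy illustration of that idea (this is not Matomo’s actual implementation, and the function name is made up), such a fingerprint can be thought of as a hash over the concatenated attributes; visitors who share every attribute collide, which is one reason this is less accurate than a cookie:

    // Hypothetical sketch of a cookie-less visitor key.
    #include <functional>
    #include <iostream>
    #include <string>

    std::size_t visitorFingerprint(const std::string &os,
                                   const std::string &browser,
                                   const std::string &plugins,
                                   const std::string &ip,
                                   const std::string &language) {
       // Concatenate the attributes with a separator and hash the result.
       return std::hash<std::string>{}(os + "|" + browser + "|" + plugins +
                                       "|" + ip + "|" + language);
    }

    int main() {
       std::cout << visitorFingerprint("Android 10", "Firefox 73", "none",
                                       "203.0.113.7", "en-US") << "\n";
       return 0;
    }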

    By disabling tracking cookies, you may also be able to use Matomo without needing to display a cookie consent screen. You can also keep tracking visitors who reject cookie consent, by keeping cookies disabled.

    Cookies and the GDPR

    In some countries, and according to the GDPR, websites need to provide a way for users to opt out of all tracking, in particular tracking cookies.

    The GDPR regulates the use of cookies when they compromise an individual’s privacy. When a cookie can identify an individual, it is considered personal data.


    Cookie compliance and the GDPR

    To be GDPR compliant you must:

    • Receive user consent before using any cookies (except strictly necessary cookies). Read more on cookies that are “clearly exempt from consent”.
    • Provide accurate and specific information about the data each cookie tracks and its purpose in plain language before consent is received.
    • Document and store consent received from users.
    • Allow users to access your service even if they refuse to allow the use of certain cookies.
    • Make it as easy for users to withdraw their consent as it was for them to give their consent in the first place.

    Source: https://gdpr.eu/cookies/

    When does GDPR require cookie consent?

    The purpose of the GDPR is to give individuals control over their personal data. As such this regulation has provisions and requirements which regulate the processing of personal data to protect the privacy of individuals. 

    This means in order to use cookies, you will sometimes need explicit consent from those individuals.

    When does GDPR not require cookie consent?

    There are also many cookies that generally do NOT require consent (Source: https://wikis.ec.europa.eu/display/WEBGUIDE/04.+Cookies).

    These are:

    • user input cookies, for the duration of a session
    • authentication cookies, for the duration of a session
    • user-centric security cookies, used to detect authentication abuses and linked to the functionality explicitly requested by the user, for a limited persistent duration
    • multimedia content player session cookies, such as flash player cookies, for the duration of a session
    • load balancing session cookies and other technical cookies, for the duration of a session
    • user interface customisation cookies, for a browser session or a few hours, when additional information in a prominent location is provided (e.g. “uses cookies” written next to the customisation feature)

    Tracking cookies and consent vs legitimate interest


    User consent is not always required:

    We understand that whenever you collect and process personal data, you need – almost always – to ask for the individual’s consent. However, there are instances where you can process data under “legitimate interests”. The GDPR states that processing of personal data is lawful “if processing is necessary for the purposes of the legitimate interests”. This means that if you have “legitimate interests” you can avoid asking for consent before collecting and processing personal information. Learn more: https://cookieinformation.com/resources/blog/what-is-legitimate-interest-under-the-gdpr

    A lawful basis for processing personal data (proceeding with caution):

    We’ve also written about having a lawful basis for processing personal data under GDPR with Matomo. The caveat here is that you need a strong argument for legitimate interests. If you are processing personal data which may represent a risk to the end user, then getting consent is, for us, still the right lawful basis. If you are not sure, at the time of writing the ICO provides a tool to help you make this decision.

    How is Matomo Analytics GDPR compliant?

    Matomo can be configured to automatically anonymise data so that you don’t process any personal data, which keeps that tracking outside the scope of the GDPR’s requirements. If you decide to process personal data, Matomo provides you with 12 steps to easily comply with the GDPR guidelines.

    New developments on cookies and the GDPR

    In the early days of the GDPR, a spate of cookie management platforms (CMPs) popped up to help websites and people comply with GDPR rules around cookies.

    These have become problematic in recent years. Europe’s highest court has ruled that pre-checked boxes in cookie consent banners do not constitute valid consent.

    As well as that, new research suggests most cookie consent pop-ups in the EU fall short of the GDPR. A study called “Dark Patterns after the GDPR”, from MIT, UCL and Aarhus University, found that the vast majority of websites aren’t following GDPR rules around cookies: most cookie consent pop-ups in the EU undermine the GDPR by using sneaky ways to convince website visitors to click “accept”.

    Disclaimer

    We are not lawyers and don’t claim to be. The information provided here is intended as an introduction to issues you may encounter when dealing with cookies. We encourage every business and website to take data privacy seriously and to discuss these issues with your lawyer if you have any concerns.

    Additional resources:

  • Decode mp3 using FFMpeg, Android NDK - What is wrong with my AVFormatContext?

    27 February 2020, by michpohl

    I am trying to decode an MP3 file to a raw PCM stream using FFmpeg via JNI on Android. I have compiled the latest FFmpeg version (4.2) and added it to my app. This did not cause any problems.
    The goal is to be able to use mp3 files from the device’s storage for playback with oboe.

    Since I am relatively inexperienced with both C++ and FFmpeg, my approach is based upon this:
    oboe’s RhythmGame example

    I have based my FFMpegExtractor class on the one found in the example here. With the help of StackOverflow, the AAssetManager use was removed and a MediaSource helper class now serves as a wrapper for my stream (see here).

    But unfortunately, creating the AVFormatContext doesn’t work right - and I can’t seem to understand why. Since I have very limited understanding of correct pointer usage and C++ memory management, I suspect it’s most likely I’m doing something wrong in that area. But honestly, I have no idea.

    This is my FFMpegExtractor.h:

    #ifndef MYAPP_FFMPEGEXTRACTOR_H
    #define MYAPP_FFMPEGEXTRACTOR_H

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libswresample/swresample.h>
    #include <libavutil/opt.h>
    }

    #include <cstdint>
    #include <android/asset_manager.h>
    #include <string>
    #include <fstream>
    #include "MediaSource.cpp"


    class FFMpegExtractor {
    public:

       FFMpegExtractor();

       ~FFMpegExtractor();

       int64_t decode2(char *filepath, uint8_t *targetData, AudioProperties targetProperties);

    private:
       MediaSource *mSource;

       bool createAVFormatContext(AVIOContext *avioContext, AVFormatContext **avFormatContext);

       bool openAVFormatContext(AVFormatContext *avFormatContext);

       int32_t cleanup(AVIOContext *avioContext, AVFormatContext *avFormatContext);

       bool getStreamInfo(AVFormatContext *avFormatContext);

       AVStream *getBestAudioStream(AVFormatContext *avFormatContext);

       AVCodec *findCodec(AVCodecID id);

       void printCodecParameters(AVCodecParameters *params);

       bool createAVIOContext2(const std::string &filePath, uint8_t *buffer, uint32_t bufferSize,
                               AVIOContext **avioContext);
    };


    #endif //MYAPP_FFMPEGEXTRACTOR_H

    This is FFMpegExtractor.cpp:

    #include <memory>
    #include <oboe/Definitions.h>
    #include "FFMpegExtractor.h"
    #include "logging.h"
    #include <fstream>

    FFMpegExtractor::FFMpegExtractor() {
       mSource = new MediaSource;
    }

    FFMpegExtractor::~FFMpegExtractor() {
       delete mSource;
    }

    constexpr int kInternalBufferSize = 1152; // Use MP3 block size. https://wiki.hydrogenaud.io/index.php?title=MP3

    /**
    * Reads from an IStream into FFmpeg.
    *
    * @param ptr       A pointer to the user-defined IO data structure.
    * @param buf       A buffer to read into.
    * @param buf_size  The size of the buffer buff.
    *
    * @return The number of bytes read into the buffer.
    */


    // If FFmpeg needs to read the file, it will call this function.
    // We need to fill the buffer with file's data.
    int read(void *opaque, uint8_t *buffer, int buf_size) {
       MediaSource *source = (MediaSource *) opaque;
       return source->read(buffer, buf_size);
    }

    // If FFmpeg needs to seek in the file, it will call this function.
    // We need to change the read pos.
    int64_t seek(void *opaque, int64_t offset, int whence) {
       MediaSource *source = (MediaSource *) opaque;
       return source->seek(offset, whence);
    }


    // Create and save a MediaSource instance.
    bool FFMpegExtractor::createAVIOContext2(const std::string &filepath, uint8_t *buffer, uint32_t bufferSize,
                                            AVIOContext **avioContext) {

       mSource = new MediaSource;
       mSource->open(filepath);
       constexpr int isBufferWriteable = 0;

       *avioContext = avio_alloc_context(
               buffer, // internal buffer for FFmpeg to use
               bufferSize, // For optimal decoding speed this should be the protocol block size
               isBufferWriteable,
               mSource, // Will be passed to our callback functions as a (void *)
               read, // Read callback function
               nullptr, // Write callback function (not used)
               seek); // Seek callback function

       if (*avioContext == nullptr) {
           LOGE("Failed to create AVIO context");
           return false;
       } else {
           return true;
       }
    }

    bool
    FFMpegExtractor::createAVFormatContext(AVIOContext *avioContext,
                                          AVFormatContext **avFormatContext) {

       *avFormatContext = avformat_alloc_context();
       (*avFormatContext)->pb = avioContext;

       if (*avFormatContext == nullptr) {
           LOGE("Failed to create AVFormatContext");
           return false;
       } else {
           LOGD("Successfully created AVFormatContext");
           return true;
       }
    }

    bool FFMpegExtractor::openAVFormatContext(AVFormatContext *avFormatContext) {

       int result = avformat_open_input(&avFormatContext,
                                        "", /* URL is left empty because we're providing our own I/O */
                                        nullptr /* AVInputFormat *fmt */,
                                        nullptr /* AVDictionary **options */
       );

       if (result == 0) {
           return true;
       } else {
           LOGE("Failed to open file. Error code %s", av_err2str(result));
           return false;
       }
    }

    bool FFMpegExtractor::getStreamInfo(AVFormatContext *avFormatContext) {

       int result = avformat_find_stream_info(avFormatContext, nullptr);
       if (result == 0) {
           return true;
       } else {
           LOGE("Failed to find stream info. Error code %s", av_err2str(result));
           return false;
       }
    }

    AVStream *FFMpegExtractor::getBestAudioStream(AVFormatContext *avFormatContext) {

       int streamIndex = av_find_best_stream(avFormatContext, AVMEDIA_TYPE_AUDIO, -1, -1, nullptr, 0);

       if (streamIndex < 0) {
           LOGE("Could not find stream");
           return nullptr;
       } else {
           return avFormatContext->streams[streamIndex];
       }
    }

    int64_t FFMpegExtractor::decode2(
           char* filepath,
           uint8_t *targetData,
           AudioProperties targetProperties) {

       LOGD("Decode SETUP");
       int returnValue = -1; // -1 indicates error

       // Create a buffer for FFmpeg to use for decoding (freed in the custom deleter below)
       auto buffer = reinterpret_cast<uint8_t *>(av_malloc(kInternalBufferSize));


       // Create an AVIOContext with a custom deleter
       std::unique_ptr<AVIOContext, void (*)(AVIOContext *)> ioContext{
               nullptr,
               [](AVIOContext *c) {
                   av_free(c->buffer);
                   avio_context_free(&c);
               }
       };
       {
           AVIOContext *tmp = nullptr;
           if (!createAVIOContext2(filepath, buffer, kInternalBufferSize, &tmp)) {
               LOGE("Could not create an AVIOContext");
               return returnValue;
           }
           ioContext.reset(tmp);
       }
       // Create an AVFormatContext using the avformat_free_context as the deleter function
       std::unique_ptr<AVFormatContext, decltype(&avformat_free_context)> formatContext{
               nullptr,
               &avformat_free_context
       };

       {
           AVFormatContext *tmp;
           if (!createAVFormatContext(ioContext.get(), &tmp)) return returnValue;
           formatContext.reset(tmp);
       }
       if (!openAVFormatContext(formatContext.get())) return returnValue;
       LOGD("172");

       if (!getStreamInfo(formatContext.get())) return returnValue;
       LOGD("175");

       // Obtain the best audio stream to decode
       AVStream *stream = getBestAudioStream(formatContext.get());
       if (stream == nullptr || stream->codecpar == nullptr) {
           LOGE("Could not find a suitable audio stream to decode");
           return returnValue;
       }
       LOGD("183");

       printCodecParameters(stream->codecpar);

       // Find the codec to decode this stream
       AVCodec *codec = avcodec_find_decoder(stream->codecpar->codec_id);
       if (!codec) {
           LOGE("Could not find codec with ID: %d", stream->codecpar->codec_id);
           return returnValue;
       }

       // Create the codec context, specifying the deleter function
       std::unique_ptr<AVCodecContext, void (*)(AVCodecContext *)> codecContext{
               nullptr,
               [](AVCodecContext *c) { avcodec_free_context(&c); }
       };
       {
           AVCodecContext *tmp = avcodec_alloc_context3(codec);
           if (!tmp) {
               LOGE("Failed to allocate codec context");
               return returnValue;
           }
           codecContext.reset(tmp);
       }

       // Copy the codec parameters into the context
       if (avcodec_parameters_to_context(codecContext.get(), stream->codecpar) < 0) {
           LOGE("Failed to copy codec parameters to codec context");
           return returnValue;
       }

       // Open the codec
       if (avcodec_open2(codecContext.get(), codec, nullptr) < 0) {
           LOGE("Could not open codec");
           return returnValue;
       }

       // prepare resampler
       int32_t outChannelLayout = (1 << targetProperties.channelCount) - 1;
       LOGD("Channel layout %d", outChannelLayout);

       SwrContext *swr = swr_alloc();
       av_opt_set_int(swr, "in_channel_count", stream->codecpar->channels, 0);
       av_opt_set_int(swr, "out_channel_count", targetProperties.channelCount, 0);
       av_opt_set_int(swr, "in_channel_layout", stream->codecpar->channel_layout, 0);
       av_opt_set_int(swr, "out_channel_layout", outChannelLayout, 0);
       av_opt_set_int(swr, "in_sample_rate", stream->codecpar->sample_rate, 0);
       av_opt_set_int(swr, "out_sample_rate", targetProperties.sampleRate, 0);
       av_opt_set_int(swr, "in_sample_fmt", stream->codecpar->format, 0);
       av_opt_set_sample_fmt(swr, "out_sample_fmt", AV_SAMPLE_FMT_FLT, 0);
       av_opt_set_int(swr, "force_resampling", 1, 0);

       // Check that resampler has been inited
       int result = swr_init(swr);
       if (result != 0) {
           LOGE("swr_init failed. Error: %s", av_err2str(result));
           return returnValue;
       };
       if (!swr_is_initialized(swr)) {
           LOGE("swr_is_initialized is false\n");
           return returnValue;
       }

       // Prepare to read data
       int bytesWritten = 0;
       AVPacket avPacket; // Stores compressed audio data
       av_init_packet(&avPacket);
       AVFrame *decodedFrame = av_frame_alloc(); // Stores raw audio data
       int bytesPerSample = av_get_bytes_per_sample((AVSampleFormat) stream->codecpar->format);

       LOGD("Bytes per sample %d", bytesPerSample);

       // While there is more data to read, read it into the avPacket
       while (av_read_frame(formatContext.get(), &avPacket) == 0) {

           if (avPacket.stream_index == stream->index) {

               while (avPacket.size > 0) {
                   // Pass our compressed data into the codec
                   result = avcodec_send_packet(codecContext.get(), &avPacket);
                   if (result != 0) {
                       LOGE("avcodec_send_packet error: %s", av_err2str(result));
                       goto cleanup;
                   }

                   // Retrieve our raw data from the codec
                   result = avcodec_receive_frame(codecContext.get(), decodedFrame);
                   if (result != 0) {
                       LOGE("avcodec_receive_frame error: %s", av_err2str(result));
                       goto cleanup;
                   }

                   // DO RESAMPLING
                   auto dst_nb_samples = (int32_t) av_rescale_rnd(
                           swr_get_delay(swr, decodedFrame->sample_rate) + decodedFrame->nb_samples,
                           targetProperties.sampleRate,
                           decodedFrame->sample_rate,
                           AV_ROUND_UP);

                   short *buffer1;
                   av_samples_alloc(
                           (uint8_t **) &buffer1,
                           nullptr,
                           targetProperties.channelCount,
                           dst_nb_samples,
                           AV_SAMPLE_FMT_FLT,
                           0);
                   int frame_count = swr_convert(
                           swr,
                           (uint8_t **) &buffer1,
                           dst_nb_samples,
                           (const uint8_t **) decodedFrame->data,
                           decodedFrame->nb_samples);

                   int64_t bytesToWrite = frame_count * sizeof(float) * targetProperties.channelCount;
                   memcpy(targetData + bytesWritten, buffer1, (size_t) bytesToWrite);
                   bytesWritten += bytesToWrite;
                   av_freep(&buffer1);

                   avPacket.size = 0;
                   avPacket.data = nullptr;
               }
           }
       }

       av_frame_free(&decodedFrame);

       returnValue = bytesWritten;

       cleanup:
       return returnValue;
    }

    void FFMpegExtractor::printCodecParameters(AVCodecParameters *params) {

       LOGD("Stream properties");
       LOGD("Channels: %d", params->channels);
       LOGD("Channel layout: %"
                    PRId64, params->channel_layout);
       LOGD("Sample rate: %d", params->sample_rate);
       LOGD("Format: %s", av_get_sample_fmt_name((AVSampleFormat) params->format));
       LOGD("Frame size: %d", params->frame_size);
    }

    And this is MediaSource.cpp:

    #ifndef MYAPP_MEDIASOURCE_CPP
    #define MYAPP_MEDIASOURCE_CPP

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libswresample/swresample.h>
    #include <libavutil/opt.h>
    }

    #include <cstdint>
    #include <android/asset_manager.h>
    #include <string>
    #include <fstream>
    #include "logging.h"

    // wrapper class for file stream
    class MediaSource {
    public:

       MediaSource() {
       }

       ~MediaSource() {
           source.close();
       }

       void open(const std::string &filePath) {
           const char *x = filePath.c_str();
           LOGD("Opened %s", x);
           source.open(filePath, std::ios::in | std::ios::binary);
       }

       int read(uint8_t *buffer, int buf_size) {
           // read data to buffer
           source.read((char *) buffer, buf_size);
           // return how many bytes were read
           return source.gcount();
       }

       int64_t seek(int64_t offset, int whence) {
           if (whence == AVSEEK_SIZE) {
               // FFmpeg needs file size.
               int oldPos = source.tellg();
               source.seekg(0, std::ios::end);
               int64_t length = source.tellg();
               // seek to old pos
               source.seekg(oldPos);
               return length;
           } else if (whence == SEEK_SET) {
               // set pos to offset
               source.seekg(offset);
           } else if (whence == SEEK_CUR) {
               // add offset to pos
               source.seekg(offset, std::ios::cur);
           } else {
               // do not support other flags, return -1
               return -1;
           }
           // return current pos
           return source.tellg();
       }

    private:
       std::ifstream source;
    };

    #endif //MYAPP_MEDIASOURCE_CPP

    When the code is executed, I can see that I submit the correct file path, so I assume the resource mp3 is there.
    When this code is executed, the app crashes at line 103 of FFMpegExtractor.cpp, at formatContext.reset(tmp);

    This is what Android Studio logs when the app crashes:

    --------- beginning of crash
    2020-02-27 14:31:26.341 9852-9945/com.user.myapp A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x7fffffff0 in tid 9945 (chaelpohl.loopy), pid 9852 (user.myapp)

    This is the (sadly very short) output I get with ndk-stack:

    ********** Crash dump: **********
    Build fingerprint: 'samsung/dreamltexx/dreamlte:9/PPR1.180610.011/G950FXXU6DSK9:user/release-keys'
    #00 0x0000000000016c50 /data/app/com.user.myapp-D7dBCgHF-vdQNNSald4lWA==/lib/arm64/libavformat.so (avformat_free_context+260)
                                                                                                            avformat_free_context
                                                                                                            ??:0:0
    Crash dump is completed

    I tested around a bit, and every call on my formatContext crashes the app. So I assume there is something wrong with the input I provide when building it, but I have no clue how to debug this.

    Any help is appreciated! (Happy to provide additional resources if something crucial is missing.)
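
    For reference (this sketch is not from the original question and is not a claim about where this particular crash comes from), the custom-I/O open sequence that the FFmpeg documentation and the oboe RhythmGame sample follow is roughly the one below. The helper name openWithCustomIO() is hypothetical; the points it illustrates are the null-check before touching the freshly allocated context, the AVFMT_FLAG_CUSTOM_IO flag, and the fact that avformat_open_input() frees the context itself on failure, so it must not be freed a second time.

    // Minimal custom-I/O open sequence (sketch only).
    extern "C" {
    #include <libavformat/avformat.h>
    }

    // 'ioContext' is assumed to be an AVIOContext created with
    // avio_alloc_context(), as in the question.
    AVFormatContext *openWithCustomIO(AVIOContext *ioContext) {
       AVFormatContext *ctx = avformat_alloc_context();
       if (ctx == nullptr) {
           return nullptr;                    // check before dereferencing
       }
       ctx->pb = ioContext;                   // attach the custom I/O context
       ctx->flags |= AVFMT_FLAG_CUSTOM_IO;

       // On failure avformat_open_input() frees ctx and sets it to nullptr,
       // so the caller must not free it again afterwards.
       if (avformat_open_input(&ctx, "", nullptr, nullptr) != 0) {
           return nullptr;
       }
       return ctx;  // once opened, close with avformat_close_input(&ctx)
    }

    Once avformat_open_input() has succeeded, a smart pointer managing the context would typically use avformat_close_input() in its deleter rather than avformat_free_context().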