
Media (1)
-
Richard Stallman et le logiciel libre
19 October 2011, by
Updated: May 2013
Language: French
Type: Text
Other articles (15)
-
What is an editorial?
21 June 2013, by
Write your point of view in an article. It will be filed in a section set aside for that purpose.
An editorial is a text-only article. Its purpose is to gather points of view in a dedicated section. Only one editorial is featured on the home page; to read previous ones, see the dedicated section.
You can customise the form used to create an editorial.
Editorial creation form In the case of a document of the editorial type, the (...) -
Keeping control of your media in your hands
13 April 2011, by
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media-sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...) -
Possible deployments
31 January 2010, by
Two types of deployment are possible, depending on two aspects: the intended installation method (standalone or as a farm); and the expected number of daily encodings and the expected traffic.
Encoding video is a heavy process that consumes a great deal of system resources (CPU and RAM), and this must be taken into account. The system is therefore only feasible on one or more dedicated servers.
Single-server version
The single-server version consists of using only one (...)
On other sites (2491)
-
Streaming mp4a to localhost using udp and ffmpeg
2 August 2017, by noswoscar
I am using the following command to stream a video and its audio to localhost:
ffmpeg -re -i out.mp4 -map 0:0 -vcodec libx264 -f h264 udp://127.0.0.1:1234 -map 0:1 -acodec libfaac -f mp4a udp://127.0.0.1:2020
FFmpeg is not recognising my audio codec and my audio format, so I get the following error message:
Error
What audio format and codec do I need to use? The codec information of the video I wish to send is as follows:
Codecs used
When I convert the audio track to mp3 I can run the above command and stream the video and audio properly. However, I don't want to convert all my videos' audio tracks to mp3.
(I am confused by all the encoders, decoders, and codec names in the ffmpeg documentation.) Is there a way of finding the right encoder to use with the mp4a audio codec other than reading the whole list of codecs and options?
Thanks.
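One way to narrow this down, presumably, is to ask ffmpeg itself what the input contains and which encoders it offers; a rough sketch of such commands, assuming ffmpeg and ffprobe are on the PATH and the same out.mp4 as above:
# Show the exact codec of each audio stream in the input
ffprobe -hide_banner -select_streams a -show_streams out.mp4
# List the encoders this ffmpeg build provides and filter for AAC candidates
ffmpeg -hide_banner -encoders | grep -i aac
# Show the options a particular encoder accepts, e.g. the built-in aac encoder
ffmpeg -hide_banner -h encoder=aac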
-
Your guide to cookies, web analytics, and GDPR compliance
-
Decode mp3 using FFMpeg, Android NDK - What is wrong with my AVFormatContext?
27 February 2020, by michpohl
I am trying to decode an MP3 file to a raw PCM stream using FFmpeg via JNI on Android. I have compiled the latest FFmpeg version (4.2) and added it to my app. This did not cause any problems.
The goal is to be able to use MP3 files from the device's storage for playback with oboe. Since I am relatively inexperienced with both C++ and FFmpeg, my approach is based upon oboe's RhythmGame example. I have based my FFMpegExtractor class on the one found in the example here. With the help of StackOverflow, the AAssetManager use was removed and instead a MediaSource helper class now serves as a wrapper for my stream (see here).
But unfortunately, creating the AVFormatContext doesn't work right, and I can't seem to understand why. Since I have a very limited understanding of correct pointer usage and C++ memory management, I suspect it's most likely that I'm doing something wrong in that area. But honestly, I have no idea.
This is my FFMpegExtractor.h:
#ifndef MYAPP_FFMPEGEXTRACTOR_H
#define MYAPP_FFMPEGEXTRACTOR_H
extern "C" {
#include <libavformat/avformat.h>
#include <libswresample/swresample.h>
#include <libavutil/opt.h>
}
#include <cstdint>
#include <android/asset_manager.h>
#include <string>
#include <fstream>
#include "MediaSource.cpp"
class FFMpegExtractor {
public:
FFMpegExtractor();
~FFMpegExtractor();
int64_t decode2(char *filepath, uint8_t *targetData, AudioProperties targetProperties);
private:
MediaSource *mSource;
bool createAVFormatContext(AVIOContext *avioContext, AVFormatContext **avFormatContext);
bool openAVFormatContext(AVFormatContext *avFormatContext);
int32_t cleanup(AVIOContext *avioContext, AVFormatContext *avFormatContext);
bool getStreamInfo(AVFormatContext *avFormatContext);
AVStream *getBestAudioStream(AVFormatContext *avFormatContext);
AVCodec *findCodec(AVCodecID id);
void printCodecParameters(AVCodecParameters *params);
bool createAVIOContext2(const std::string &filePath, uint8_t *buffer, uint32_t bufferSize,
AVIOContext **avioContext);
};
#endif //MYAPP_FFMPEGEXTRACTOR_H
This is FFMpegExtractor.cpp:
#include <memory>
#include <oboe/Definitions.h>
#include "FFMpegExtractor.h"
#include "logging.h"
#include <fstream>
#include <cinttypes>
FFMpegExtractor::FFMpegExtractor() {
mSource = new MediaSource;
}
FFMpegExtractor::~FFMpegExtractor() {
delete mSource;
}
constexpr int kInternalBufferSize = 1152; // Use MP3 block size. https://wiki.hydrogenaud.io/index.php?title=MP3
/**
* Reads from an IStream into FFmpeg.
*
* @param ptr A pointer to the user-defined IO data structure.
* @param buf A buffer to read into.
* @param buf_size The size of the buffer buf.
*
* @return The number of bytes read into the buffer.
*/
// If FFmpeg needs to read the file, it will call this function.
// We need to fill the buffer with file's data.
int read(void *opaque, uint8_t *buffer, int buf_size) {
MediaSource *source = (MediaSource *) opaque;
return source->read(buffer, buf_size);
}
// If FFmpeg needs to seek in the file, it will call this function.
// We need to change the read pos.
int64_t seek(void *opaque, int64_t offset, int whence) {
MediaSource *source = (MediaSource *) opaque;
return source->seek(offset, whence);
}
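// FFmpeg never sees the std::ifstream directly: avio_alloc_context() below stores the
// MediaSource pointer as its opaque field and hands it back as the first argument of
// read() and seek() whenever it needs more data or a new read position.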
// Create and save a MediaSource instance.
bool FFMpegExtractor::createAVIOContext2(const std::string &filepath, uint8_t *buffer, uint32_t bufferSize,
AVIOContext **avioContext) {
mSource = new MediaSource;
mSource->open(filepath);
constexpr int isBufferWriteable = 0;
*avioContext = avio_alloc_context(
buffer, // internal buffer for FFmpeg to use
bufferSize, // For optimal decoding speed this should be the protocol block size
isBufferWriteable,
mSource, // Will be passed to our callback functions as a (void *)
read, // Read callback function
nullptr, // Write callback function (not used)
seek); // Seek callback function
if (*avioContext == nullptr) {
LOGE("Failed to create AVIO context");
return false;
} else {
return true;
}
}
bool
FFMpegExtractor::createAVFormatContext(AVIOContext *avioContext,
AVFormatContext **avFormatContext) {
*avFormatContext = avformat_alloc_context();
(*avFormatContext)->pb = avioContext;
if (*avFormatContext == nullptr) {
LOGE("Failed to create AVFormatContext");
return false;
} else {
LOGD("Successfully created AVFormatContext");
return true;
}
}
bool FFMpegExtractor::openAVFormatContext(AVFormatContext *avFormatContext) {
int result = avformat_open_input(&avFormatContext,
"", /* URL is left empty because we're providing our own I/O */
nullptr /* AVInputFormat *fmt */,
nullptr /* AVDictionary **options */
);
if (result == 0) {
return true;
} else {
LOGE("Failed to open file. Error code %s", av_err2str(result));
return false;
}
}
bool FFMpegExtractor::getStreamInfo(AVFormatContext *avFormatContext) {
int result = avformat_find_stream_info(avFormatContext, nullptr);
if (result == 0) {
return true;
} else {
LOGE("Failed to find stream info. Error code %s", av_err2str(result));
return false;
}
}
AVStream *FFMpegExtractor::getBestAudioStream(AVFormatContext *avFormatContext) {
int streamIndex = av_find_best_stream(avFormatContext, AVMEDIA_TYPE_AUDIO, -1, -1, nullptr, 0);
if (streamIndex < 0) {
LOGE("Could not find stream");
return nullptr;
} else {
return avFormatContext->streams[streamIndex];
}
}
int64_t FFMpegExtractor::decode2(
char* filepath,
uint8_t *targetData,
AudioProperties targetProperties) {
LOGD("Decode SETUP");
int returnValue = -1; // -1 indicates error
// Create a buffer for FFmpeg to use for decoding (freed in the custom deleter below)
auto buffer = reinterpret_cast<uint8_t *>(av_malloc(kInternalBufferSize));
// Create an AVIOContext with a custom deleter
std::unique_ptr<AVIOContext, void (*)(AVIOContext *)> ioContext{
nullptr,
[](AVIOContext *c) {
av_free(c->buffer);
avio_context_free(&c);
}
};
{
AVIOContext *tmp = nullptr;
if (!createAVIOContext2(filepath, buffer, kInternalBufferSize, &tmp)) {
LOGE("Could not create an AVIOContext");
return returnValue;
}
ioContext.reset(tmp);
}
// Create an AVFormatContext using the avformat_free_context as the deleter function
std::unique_ptr<AVFormatContext, decltype(&avformat_free_context)> formatContext{
nullptr,
&avformat_free_context
};
{
AVFormatContext *tmp;
if (!createAVFormatContext(ioContext.get(), &tmp)) return returnValue;
formatContext.reset(tmp);
}
if (!openAVFormatContext(formatContext.get())) return returnValue;
LOGD("172");
if (!getStreamInfo(formatContext.get())) return returnValue;
LOGD("175");
// Obtain the best audio stream to decode
AVStream *stream = getBestAudioStream(formatContext.get());
if (stream == nullptr || stream->codecpar == nullptr) {
LOGE("Could not find a suitable audio stream to decode");
return returnValue;
}
LOGD("183");
printCodecParameters(stream->codecpar);
// Find the codec to decode this stream
AVCodec *codec = avcodec_find_decoder(stream->codecpar->codec_id);
if (!codec) {
LOGE("Could not find codec with ID: %d", stream->codecpar->codec_id);
return returnValue;
}
// Create the codec context, specifying the deleter function
std::unique_ptr<AVCodecContext, void (*)(AVCodecContext *)> codecContext{
nullptr,
[](AVCodecContext *c) { avcodec_free_context(&c); }
};
{
AVCodecContext *tmp = avcodec_alloc_context3(codec);
if (!tmp) {
LOGE("Failed to allocate codec context");
return returnValue;
}
codecContext.reset(tmp);
}
// Copy the codec parameters into the context
if (avcodec_parameters_to_context(codecContext.get(), stream->codecpar) < 0) {
LOGE("Failed to copy codec parameters to codec context");
return returnValue;
}
// Open the codec
if (avcodec_open2(codecContext.get(), codec, nullptr) < 0) {
LOGE("Could not open codec");
return returnValue;
}
// prepare resampler
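// (1 << channelCount) - 1 sets the lowest channelCount bits, e.g. a channel count of 2
// gives 0x3, which is the bitmask FFmpeg uses for a standard stereo layout.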
int32_t outChannelLayout = (1 << targetProperties.channelCount) - 1;
LOGD("Channel layout %d", outChannelLayout);
SwrContext *swr = swr_alloc();
av_opt_set_int(swr, "in_channel_count", stream->codecpar->channels, 0);
av_opt_set_int(swr, "out_channel_count", targetProperties.channelCount, 0);
av_opt_set_int(swr, "in_channel_layout", stream->codecpar->channel_layout, 0);
av_opt_set_int(swr, "out_channel_layout", outChannelLayout, 0);
av_opt_set_int(swr, "in_sample_rate", stream->codecpar->sample_rate, 0);
av_opt_set_int(swr, "out_sample_rate", targetProperties.sampleRate, 0);
av_opt_set_int(swr, "in_sample_fmt", stream->codecpar->format, 0);
av_opt_set_sample_fmt(swr, "out_sample_fmt", AV_SAMPLE_FMT_FLT, 0);
av_opt_set_int(swr, "force_resampling", 1, 0);
// Check that resampler has been inited
int result = swr_init(swr);
if (result != 0) {
LOGE("swr_init failed. Error: %s", av_err2str(result));
return returnValue;
};
if (!swr_is_initialized(swr)) {
LOGE("swr_is_initialized is false\n");
return returnValue;
}
// Prepare to read data
int bytesWritten = 0;
AVPacket avPacket; // Stores compressed audio data
av_init_packet(&avPacket);
AVFrame *decodedFrame = av_frame_alloc(); // Stores raw audio data
int bytesPerSample = av_get_bytes_per_sample((AVSampleFormat) stream->codecpar->format);
LOGD("Bytes per sample %d", bytesPerSample);
// While there is more data to read, read it into the avPacket
while (av_read_frame(formatContext.get(), &avPacket) == 0) {
if (avPacket.stream_index == stream->index) {
while (avPacket.size > 0) {
// Pass our compressed data into the codec
result = avcodec_send_packet(codecContext.get(), &avPacket);
if (result != 0) {
LOGE("avcodec_send_packet error: %s", av_err2str(result));
goto cleanup;
}
// Retrieve our raw data from the codec
result = avcodec_receive_frame(codecContext.get(), decodedFrame);
if (result != 0) {
LOGE("avcodec_receive_frame error: %s", av_err2str(result));
goto cleanup;
}
// DO RESAMPLING
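// Upper bound on the number of output samples this frame can produce: the frame's input
// samples plus whatever is still buffered in the resampler, rescaled from the source
// sample rate to the target sample rate and rounded up.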
auto dst_nb_samples = (int32_t) av_rescale_rnd(
swr_get_delay(swr, decodedFrame->sample_rate) + decodedFrame->nb_samples,
targetProperties.sampleRate,
decodedFrame->sample_rate,
AV_ROUND_UP);
short *buffer1;
av_samples_alloc(
(uint8_t **) &buffer1,
nullptr,
targetProperties.channelCount,
dst_nb_samples,
AV_SAMPLE_FMT_FLT,
0);
int frame_count = swr_convert(
swr,
(uint8_t **) &buffer1,
dst_nb_samples,
(const uint8_t **) decodedFrame->data,
decodedFrame->nb_samples);
int64_t bytesToWrite = frame_count * sizeof(float) * targetProperties.channelCount;
memcpy(targetData + bytesWritten, buffer1, (size_t) bytesToWrite);
bytesWritten += bytesToWrite;
av_freep(&buffer1);
avPacket.size = 0;
avPacket.data = nullptr;
}
}
}
av_frame_free(&decodedFrame);
returnValue = bytesWritten;
cleanup:
return returnValue;
}
void FFMpegExtractor::printCodecParameters(AVCodecParameters *params) {
LOGD("Stream properties");
LOGD("Channels: %d", params->channels);
LOGD("Channel layout: %"
PRId64, params->channel_layout);
LOGD("Sample rate: %d", params->sample_rate);
LOGD("Format: %s", av_get_sample_fmt_name((AVSampleFormat) params->format));
LOGD("Frame size: %d", params->frame_size);
}
And this is the MediaSource.cpp:
#ifndef MYAPP_MEDIASOURCE_CPP
#define MYAPP_MEDIASOURCE_CPP
extern "C" {
#include <libavformat/avformat.h>
#include <libswresample/swresample.h>
#include <libavutil/opt.h>
}
#include <cstdint>
#include <android/asset_manager.h>
#include <string>
#include <fstream>
#include "logging.h"
// wrapper class for file stream
class MediaSource {
public:
MediaSource() {
}
~MediaSource() {
source.close();
}
void open(const std::string &filePath) {
const char *x = filePath.c_str();
LOGD("Opened %s", x);
source.open(filePath, std::ios::in | std::ios::binary);
}
int read(uint8_t *buffer, int buf_size) {
// read data to buffer
source.read((char *) buffer, buf_size);
// return how many bytes were read
return source.gcount();
}
int64_t seek(int64_t offset, int whence) {
if (whence == AVSEEK_SIZE) {
// FFmpeg needs file size.
int oldPos = source.tellg();
source.seekg(0, std::ios::end);
int64_t length = source.tellg();
// seek to old pos
source.seekg(oldPos);
return length;
} else if (whence == SEEK_SET) {
// set pos to offset
source.seekg(offset);
} else if (whence == SEEK_CUR) {
// add offset to pos
source.seekg(offset, std::ios::cur);
} else {
// do not support other flags, return -1
return -1;
}
// return current pos
return source.tellg();
}
private:
std::ifstream source;
};
#endif //MYAPP_MEDIASOURCE_CPP
When the code is executed, I can see that I submit the correct file path, so I assume the resource MP3 is there.
When this code is executed, the app crashes at line 103 of FFMpegExtractor.cpp, at formatContext.reset(tmp).
This is what Android Studio logs when the app crashes:
--------- beginning of crash
2020-02-27 14:31:26.341 9852-9945/com.user.myapp A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x7fffffff0 in tid 9945 (chaelpohl.loopy), pid 9852 (user.myapp)
This is the (sadly very short) output I get with ndk-stack:
********** Crash dump: **********
Build fingerprint: 'samsung/dreamltexx/dreamlte:9/PPR1.180610.011/G950FXXU6DSK9:user/release-keys'
#00 0x0000000000016c50 /data/app/com.user.myapp-D7dBCgHF-vdQNNSald4lWA==/lib/arm64/libavformat.so (avformat_free_context+260)
avformat_free_context
??:0:0
Crash dump is completed
I tested a bit around, and every call to my formatContext crashes the app. So I assume there is something wrong with the input I provide to build it, but I have no clue how to debug this.
Any help is appreciated! (Happy to provide additional resources if something crucial is missing.)
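A minimal standalone sketch that exercises only the custom AVIO and AVFormatContext setup (no JNI, no oboe, no decode loop) may help isolate the crash; it assumes the same MediaSource class and FFmpeg 4.2 headers, and the names probeThroughCustomIo, readCb, seekCb and the buffer size are placeholders rather than anything from the original project:
#include <cstdio>
#include <string>
extern "C" {
#include <libavformat/avformat.h>
}
#include "MediaSource.cpp"
// Same callback shape as in FFMpegExtractor.cpp: forward reads and seeks to MediaSource.
static int readCb(void *opaque, uint8_t *buffer, int bufSize) {
    return static_cast<MediaSource *>(opaque)->read(buffer, bufSize);
}
static int64_t seekCb(void *opaque, int64_t offset, int whence) {
    return static_cast<MediaSource *>(opaque)->seek(offset, whence);
}
// Open a file through the custom I/O, probe it, then tear everything down again.
bool probeThroughCustomIo(const std::string &path) {
    constexpr int kBufferSize = 1152;  // placeholder block size
    auto *ioBuffer = static_cast<uint8_t *>(av_malloc(kBufferSize));
    auto *source = new MediaSource;
    source->open(path);
    AVIOContext *avio = avio_alloc_context(ioBuffer, kBufferSize, 0 /* read-only */,
                                           source, readCb, nullptr, seekCb);
    if (avio == nullptr) {
        av_free(ioBuffer);
        delete source;
        return false;
    }
    AVFormatContext *fmt = avformat_alloc_context();
    fmt->pb = avio;                      // attach the custom I/O
    fmt->flags |= AVFMT_FLAG_CUSTOM_IO;  // tell FFmpeg the caller owns avio
    int err = avformat_open_input(&fmt, "", nullptr, nullptr);
    if (err != 0) {
        std::printf("avformat_open_input failed: %d\n", err);
    } else {
        std::printf("format: %s, streams: %u\n", fmt->iformat->name, fmt->nb_streams);
        avformat_close_input(&fmt);      // also frees the AVFormatContext
    }
    // With AVFMT_FLAG_CUSTOM_IO the caller must free the AVIOContext and its buffer.
    av_freep(&avio->buffer);
    avio_context_free(&avio);
    delete source;
    return err == 0;
}
If even this reduced path crashes on teardown, the fault is in the custom I/O plumbing itself rather than in the decode loop or the smart-pointer deleters.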