
Other articles (100)
-
HTML5 audio and video support
13 April 2011. MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
Libraries and binaries specific to video and audio processing
31 January 2010. The following programs and libraries are used by SPIPmotion in one way or another.
Required binaries: FFMpeg: the main encoder, able to transcode almost any type of video or audio file into formats playable on the Internet (see this tutorial for its installation); Oggz-tools: inspection tools for ogg files; Mediainfo: retrieves information from most video and audio formats;
Additional, optional binaries: flvtool2: (...) -
Automatic backup of SPIP channels
1 April 2010. When setting up an open platform, it is important for hosts to have fairly regular backups available in order to guard against any potential problem.
This task relies on two SPIP plugins: Saveauto, which performs a regular backup of the database as a mysql dump (usable in phpmyadmin), and mes_fichiers_2, which creates a zip archive of the site's important data (documents, the elements (...)
On other sites (6145)
-
Converting a voice recording into an mp3
21 July 2023, by Raphael M. For a vue.js messaging project, I'm using the wavesurfer.js library to record voice messages. However, Google Chrome gives me an audio/webm blob and Safari gives me an audio/mp4 blob.


I'm trying to find a way to transcode the blob to audio/mp3. I've tried several approaches, including ffmpeg. However, ffmpeg gives me an error when compiling with "npm run dev": "Can't resolve '/node_modules/@ffmpeg/core/dist/ffmpeg-core.js'".


"@ffmpeg/core": "^0.11.0",
"@ffmpeg/ffmpeg": "^0.11.6"



I tried to downgrade ffmpeg


"@ffmpeg/core": "^0.9.0",
"@ffmpeg/ffmpeg": "^0.9.8"



I no longer get the error message when compiling, but when I want to convert my audio stream, the console reports a problem with SharedArrayBuffer: "Uncaught (in promise) ReferenceError: SharedArrayBuffer is not defined".
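
"SharedArrayBuffer is not defined" is not an ffmpeg bug as such: the ffmpeg.wasm core requested here relies on SharedArrayBuffer, and browsers only expose SharedArrayBuffer on cross-origin isolated pages, i.e. pages served with the Cross-Origin-Opener-Policy and Cross-Origin-Embedder-Policy headers. A minimal sketch, assuming "npm run dev" starts a Vite dev server (any other server just needs to send the same two headers):

// vite.config.js (sketch; assumes a Vite dev server)
import { defineConfig } from 'vite';

export default defineConfig({
 server: {
 headers: {
 // Required for SharedArrayBuffer and therefore for the ffmpeg.wasm core
 'Cross-Origin-Opener-Policy': 'same-origin',
 'Cross-Origin-Embedder-Policy': 'require-corp',
 },
 },
});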


My complete code is below.
Is there a reliable way of transcoding the audio stream into mp3?


Can you give me an example?


Thanks


<template>
 <div class="left-panel">
 <header class="radial-blue">
 <div class="container">
 <h1 class="mb-30">Posez votre première question à nos thérapeutes</h1>
 <p><b>Attention</b>, vous disposez seulement de 2 messages. Veillez à les utiliser de manière judicieuse !</p>
 <div class="available-messages">
 <div class="item disabled">
 <span>Message 1</span>
 </div>
 <div class="item">
 <span>Message 2</span>
 </div>
 </div>
 </div>
 </header>
 </div>
 <div class="right-panel">
 <div class="messagerie bg-light">
 <messaging ref="messagingComponent"></messaging>
 <footer>
 <button type="button"><img src="http://stackoverflow.com/assets/backoffice/images/record-start.svg" style='max-width: 300px; max-height: 300px' /></button>
 <div class="loading-animation">
 <img src="http://stackoverflow.com/assets/backoffice/images/record-loading.svg" style='max-width: 300px; max-height: 300px' />
 </div>
 <button type="button"><img src="http://stackoverflow.com/assets/backoffice/images/record-stop.svg" style='max-width: 300px; max-height: 300px' /></button>
 <div class="textarea gradient text-dark">
 <textarea placeholder="Posez votre question"></textarea>
 </div>
 <div class="loading-text">Chargement de votre microphone en cours...</div>
 <div class="loading-text">Envoi de votre message en cours...</div>
 <div ref="visualizer"></div>
 <button type="button"><img src="http://stackoverflow.com/assets/backoffice/images/send.svg" style='max-width: 300px; max-height: 300px' /></button>
 <div>
 {{ formatTimer() }}
 </div>
 </footer>
 </div>
 </div>
</template>

<code class="echappe-js"><script>&#xA;import Messaging from "./Messaging.vue";&#xA;import { createFFmpeg, fetchFile } from &#x27;@ffmpeg/ffmpeg&#x27;;&#xA;&#xA;export default {&#xA; data() {&#xA; return {&#xA; isMicrophoneLoading: false,&#xA; isSubmitLoading: false,&#xA; isMobile: false,&#xA; isMessagerie: false,&#xA; isRecording: false,&#xA; audioUrl: &#x27;&#x27;,&#xA; messageText: &#x27;&#x27;,&#xA; message:null,&#xA; wavesurfer: null,&#xA; access:(this.isMobile?&#x27;denied&#x27;:&#x27;granted&#x27;),&#xA; maxMinutes: 5,&#xA; orangeTimer: 3,&#xA; redTimer: 4,&#xA; timer: 0,&#xA; timerInterval: null,&#xA; ffmpeg: null,&#xA; };&#xA; },&#xA; components: {&#xA; Messaging,&#xA; },&#xA; mounted() {&#xA; this.checkScreenSize();&#xA; window.addEventListener(&#x27;resize&#x27;, this.checkScreenSize);&#xA;&#xA; if(!this.isMobile)&#xA; {&#xA; this.$moment.locale(&#x27;fr&#x27;);&#xA; window.addEventListener(&#x27;beforeunload&#x27;, (event) => {&#xA; if (this.isMessagerie) {&#xA; event.preventDefault();&#xA; event.returnValue = &#x27;&#x27;;&#xA; }&#xA; });&#xA;&#xA; this.initializeWaveSurfer();&#xA; }&#xA; },&#xA; beforeUnmount() {&#xA; window.removeEventListener(&#x27;resize&#x27;, this.checkScreenSize);&#xA; },&#xA; methods: {&#xA; checkScreenSize() {&#xA; this.isMobile = window.innerWidth < 1200;&#xA;&#xA; const windowHeight = window.innerHeight;&#xA; const navbarHeight = this.$navbarHeight;&#xA; let padding = parseInt(navbarHeight &#x2B;181);&#xA;&#xA; const messageListHeight = windowHeight - padding;&#xA; this.$refs.messagingComponent.$refs.messageList.style.height = messageListHeight &#x2B; &#x27;px&#x27;;&#xA; },&#xA; showMessagerie() {&#xA; this.isMessagerie = true;&#xA; this.$refs.messagingComponent.scrollToBottom();&#xA; },&#xA; checkMicrophoneAccess() {&#xA; if (navigator.mediaDevices &amp;&amp; navigator.mediaDevices.getUserMedia) {&#xA;&#xA; return navigator.mediaDevices.getUserMedia({audio: true})&#xA; .then(function (stream) {&#xA; stream.getTracks().forEach(function (track) {&#xA; track.stop();&#xA; });&#xA; return true;&#xA; })&#xA; .catch(function (error) {&#xA; console.error(&#x27;Erreur lors de la demande d\&#x27;acc&#xE8;s au microphone:&#x27;, error);&#xA; return false;&#xA; });&#xA; } else {&#xA; console.error(&#x27;getUserMedia n\&#x27;est pas support&#xE9; par votre navigateur.&#x27;);&#xA; return false;&#xA; }&#xA; },&#xA; initializeWaveSurfer() {&#xA; this.wavesurfer = this.$wavesurfer.create({&#xA; container: &#x27;#visualizer&#x27;,&#xA; barWidth: 3,&#xA; barHeight: 1.5,&#xA; height: 46,&#xA; responsive: true,&#xA; waveColor: &#x27;rgba(108,115,202,0.3)&#x27;,&#xA; progressColor: &#x27;rgba(108,115,202,1)&#x27;,&#xA; cursorColor: &#x27;transparent&#x27;&#xA; });&#xA;&#xA; this.record = this.wavesurfer.registerPlugin(this.$recordPlugin.create());&#xA; },&#xA; startRecording() {&#xA; const _this = this;&#xA; this.isMicrophoneLoading = true;&#xA;&#xA; setTimeout(() =>&#xA; {&#xA; _this.checkMicrophoneAccess().then(function (accessible)&#xA; {&#xA; if (accessible) {&#xA; _this.record.startRecording();&#xA;&#xA; _this.record.once(&#x27;startRecording&#x27;, () => {&#xA; _this.isMicrophoneLoading = false;&#xA; _this.isRecording = true;&#xA; _this.updateChildMessage( &#x27;server&#x27;, &#x27;Allez-y ! Vous pouvez enregistrer votre message audio maintenant. 
La dur&#xE9;e maximale autoris&#xE9;e pour votre enregistrement est de 5 minutes.&#x27;, &#x27;text&#x27;, &#x27;&#x27;, &#x27;Message automatique&#x27;);&#xA; _this.startTimer();&#xA; });&#xA; } else {&#xA; _this.isRecording = false;&#xA; _this.isMicrophoneLoading = false;&#xA; _this.$swal.fire({&#xA; title: &#x27;Microphone non d&#xE9;tect&#xE9;&#x27;,&#xA; html: &#x27;<p>Le microphone de votre appareil est inaccessible ou l\&#x27;acc&#xE8;s a &#xE9;t&#xE9; refus&#xE9;.</p><p>Merci de v&#xE9;rifier les param&#xE8;tres de votre navigateur afin de v&#xE9;rifier les autorisations de votre microphone.</p>&#x27;,&#xA; footer: &#x27;<a href='http://stackoverflow.com/contact'>Vous avez besoin d\&#x27;aide ?</a>&#x27;,&#xA; });&#xA; }&#xA; });&#xA; }, 100);&#xA; },&#xA; stopRecording() {&#xA; this.stopTimer();&#xA; this.isRecording = false;&#xA; this.isSubmitLoading = true;&#xA; this.record.stopRecording();&#xA;&#xA; this.record.once(&#x27;stopRecording&#x27;, () => {&#xA; const blobUrl = this.record.getRecordedUrl();&#xA; fetch(blobUrl).then(response => response.blob()).then(blob => {&#xA; this.uploadAudio(blob);&#xA; });&#xA; });&#xA; },&#xA; startTimer() {&#xA; this.timerInterval = setInterval(() => {&#xA; this.timer&#x2B;&#x2B;;&#xA; if (this.timer === this.maxMinutes * 60) {&#xA; this.stopRecording();&#xA; }&#xA; }, 1000);&#xA; },&#xA; stopTimer() {&#xA; clearInterval(this.timerInterval);&#xA; this.timer = 0;&#xA; },&#xA; formatTimer() {&#xA; const minutes = Math.floor(this.timer / 60);&#xA; const seconds = this.timer % 60;&#xA; const formattedMinutes = minutes < 10 ? `0${minutes}` : minutes;&#xA; const formattedSeconds = seconds < 10 ? `0${seconds}` : seconds;&#xA; return `${formattedMinutes}:${formattedSeconds}`;&#xA; },&#xA; async uploadAudio(blob)&#xA; {&#xA; const format = blob.type === &#x27;audio/webm&#x27; ? 
&#x27;webm&#x27; : &#x27;mp4&#x27;;&#xA;&#xA; // Convert the blob to MP3&#xA; const mp3Blob = await this.convertToMp3(blob, format);&#xA;&#xA; const s3 = new this.$AWS.S3({&#xA; accessKeyId: &#x27;xxx&#x27;,&#xA; secretAccessKey: &#x27;xxx&#x27;,&#xA; region: &#x27;eu-west-1&#x27;&#xA; });&#xA;&#xA; var currentDate = new Date();&#xA; var filename = currentDate.getDate().toString() &#x2B; &#x27;-&#x27; &#x2B; currentDate.getMonth().toString() &#x2B; &#x27;-&#x27; &#x2B; currentDate.getFullYear().toString() &#x2B; &#x27;--&#x27; &#x2B; currentDate.getHours().toString() &#x2B; &#x27;-&#x27; &#x2B; currentDate.getMinutes().toString() &#x2B; &#x27;.mp4&#x27;;&#xA;&#xA; const params = {&#xA; Bucket: &#x27;xxx/audio&#x27;,&#xA; Key: filename,&#xA; Body: mp3Blob,&#xA; ACL: &#x27;public-read&#x27;,&#xA; ContentType: &#x27;audio/mp3&#x27;&#xA; }&#xA;&#xA; s3.upload(params, (err, data) => {&#xA; if (err) {&#xA; console.error(&#x27;Error uploading audio:&#x27;, err)&#xA; } else {&#xA; const currentDate = this.$moment();&#xA; const timestamp = currentDate.format(&#x27;dddd DD MMMM YYYY HH:mm&#x27;);&#xA;&#xA; this.updateChildMessage( &#x27;client&#x27;, &#x27;&#x27;, &#x27;audio&#x27;, mp3Blob, timestamp);&#xA; this.isSubmitLoading = false;&#xA; }&#xA; });&#xA; },&#xA; async convertToMp3(blob, format) {&#xA; const ffmpeg = createFFmpeg({ log: true });&#xA; await ffmpeg.load();&#xA;&#xA; const inputPath = &#x27;input.&#x27; &#x2B; format;&#xA; const outputPath = &#x27;output.mp3&#x27;;&#xA;&#xA; ffmpeg.FS(&#x27;writeFile&#x27;, inputPath, await fetchFile(blob));&#xA;&#xA; await ffmpeg.run(&#x27;-i&#x27;, inputPath, &#x27;-acodec&#x27;, &#x27;libmp3lame&#x27;, outputPath);&#xA;&#xA; const mp3Data = ffmpeg.FS(&#x27;readFile&#x27;, outputPath);&#xA; const mp3Blob = new Blob([mp3Data.buffer], { type: &#x27;audio/mp3&#x27; });&#xA;&#xA; ffmpeg.FS(&#x27;unlink&#x27;, inputPath);&#xA; ffmpeg.FS(&#x27;unlink&#x27;, outputPath);&#xA;&#xA; return mp3Blob;&#xA; },&#xA; sendMessage() {&#xA; this.isSubmitLoading = true;&#xA; if (this.messageText.trim() !== &#x27;&#x27;) {&#xA; const emmet = &#x27;client&#x27;;&#xA; const text = this.escapeHTML(this.messageText)&#xA; .replace(/\n/g, &#x27;<br>&#x27;);&#xA;&#xA; const currentDate = this.$moment();&#xA; const timestamp = currentDate.format(&#x27;dddd DD MMMM YYYY HH:mm&#x27;);&#xA;&#xA; this.$nextTick(() => {&#xA; this.messageText = &#x27;&#x27;;&#xA;&#xA; const textarea = document.getElementById(&#x27;messageTextarea&#x27;);&#xA; if (textarea) {&#xA; textarea.scrollTop = 0;&#xA; textarea.scrollLeft = 0;&#xA; }&#xA; });&#xA;&#xA; this.updateChildMessage(emmet, text, &#x27;text&#x27;, &#x27;&#x27;, timestamp);&#xA; this.isSubmitLoading = false;&#xA; }&#xA; },&#xA; escapeHTML(text) {&#xA; const map = {&#xA; &#x27;&amp;&#x27;: &#x27;&amp;amp;&#x27;,&#xA; &#x27;<&#x27;: &#x27;&amp;lt;&#x27;,&#xA; &#x27;>&#x27;: &#x27;&amp;gt;&#x27;,&#xA; &#x27;"&#x27;: &#x27;&amp;quot;&#x27;,&#xA; "&#x27;": &#x27;&amp;#039;&#x27;,&#xA; "`": &#x27;&amp;#x60;&#x27;,&#xA; "/": &#x27;&amp;#x2F;&#x27;&#xA; };&#xA; return text.replace(/[&amp;<>"&#x27;`/]/g, (match) => map[match]);&#xA; },&#xA; updateChildMessage(emmet, text, type, blob, timestamp) {&#xA; const newMessage = {&#xA; id: this.$refs.messagingComponent.lastMessageId &#x2B; 1,&#xA; emmet: emmet,&#xA; text: text,&#xA; type: type,&#xA; blob: blob,&#xA; timestamp: timestamp&#xA; };&#xA;&#xA; this.$refs.messagingComponent.updateMessages(newMessage);&#xA; }&#xA; },&#xA;};&#xA;</script>
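
One note on the convertToMp3 method above: createFFmpeg() plus ffmpeg.load() downloads and instantiates the wasm core on every call, which is slow if the user sends several voice messages. Since the component already declares ffmpeg: null in data(), a sketch (same imports as above; the corePath URL is again only an example) that creates the instance once and reuses it:

// Sketch: reuse a single ffmpeg.wasm instance instead of re-creating it
// for every conversion. Meant to replace convertToMp3 inside methods above.
async getFFmpeg() {
 if (!this.ffmpeg) {
 this.ffmpeg = createFFmpeg({
 log: true,
 corePath: 'https://unpkg.com/@ffmpeg/core@0.11.0/dist/ffmpeg-core.js',
 });
 }
 if (!this.ffmpeg.isLoaded()) {
 await this.ffmpeg.load();
 }
 return this.ffmpeg;
},
async convertToMp3(blob, format) {
 const ffmpeg = await this.getFFmpeg();
 const inputPath = 'input.' + format;
 const outputPath = 'output.mp3';

 ffmpeg.FS('writeFile', inputPath, await fetchFile(blob));
 await ffmpeg.run('-i', inputPath, '-acodec', 'libmp3lame', outputPath);

 const mp3Data = ffmpeg.FS('readFile', outputPath);
 ffmpeg.FS('unlink', inputPath);
 ffmpeg.FS('unlink', outputPath);

 return new Blob([mp3Data.buffer], { type: 'audio/mp3' });
},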



-
When using libav* (ffmpeg) to encode GIF images, an error is reported when compiling the demo
10 August 2023, by yangjinhui2936. Issue: I need to use the GIF encoding feature in FFmpeg to encode ARGB images as GIFs, because the output of the standalone GIF library is not as good as FFmpeg's.
However, libraries such as avcodec are quite bulky, so I trimmed the build down; I only want to keep the GIF encoding functionality.
Below is the script I use to configure and build the trimmed ffmpeg:


#!/bin/sh
# ./configure --prefix=$(pwd)/output --arch=arm --target-os=linux --enable-cross-compile --disable-asm --cross-prefix=arm-linux-gnueabihf- 
./configure --prefix=$(pwd)/output --target-os=linux --disable-asm \
--disable-gpl --enable-nonfree --enable-error-resilience --enable-debug --enable-shared --enable-small --enable-zlib \
--disable-ffmpeg --disable-ffprobe --disable-ffplay --disable-programs --disable-symver\
 --disable-doc --disable-htmlpages --disable-manpages --disable-podpages --disable-decoder=h264 --enable-avformat \
 --disable-txtpages --enable-avcodec --enable-avutil \
 --disable-avresample --disable-avfilter --disable-avdevice --disable-postproc \
 --disable-swscale --enable-decoder=gif --enable-demuxer=gif --enable-muxer=gif --disable-iconv \
 --disable-v4l2-m2m --disable-indevs --disable-outdevs

make clean
make -j8

make install



Then I link the compiled libraries against the gif demo.
Below is the gif demo code (it was automatically generated by ChatGPT, and I want to verify it):


#include <stdio.h> /* for fprintf/printf */
// #include "output/include/imgutils.h"
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libavutil/avutil.h"

int main() {
 AVCodec *enc_codec;
 AVCodecContext *enc_ctx = NULL;
 AVStream *stream = NULL;
 int ret;

 AVFormatContext *fmt_ctx = avformat_alloc_context();
 if (!fmt_ctx) {
 fprintf(stderr, "Could not allocate format context\n");
 return 1;
 }
 
 AVInputFormat *input_fmt = av_find_input_format("image2");
 if ((ret = avformat_open_input(&fmt_ctx, "input.bmp", input_fmt, NULL)) < 0) {
 fprintf(stderr, "Could not open input file %d\n", ret);
 return ret;
 }

 AVCodec *dec_codec = avcodec_find_decoder(AV_CODEC_ID_BMP);
 if (!dec_codec) {
 fprintf(stderr, "Decoder not found\n");
 return 1;
 }
 
 AVCodecContext *dec_ctx = avcodec_alloc_context3(dec_codec);
 if (!dec_ctx) {
 fprintf(stderr, "Could not allocate decoder context\n");
 return 1;
 }

 if ((ret = avcodec_open2(dec_ctx, dec_codec, NULL)) < 0) {
 fprintf(stderr, "Could not open decoder\n");
 return 1;
 }

 AVOutputFormat *out_fmt = av_guess_format("gif", NULL, NULL);
 if (!out_fmt) {
 fprintf(stderr, "Could not find output format\n");
 return 1;
 }

 AVFormatContext *out_ctx = NULL;
 if ((ret = avformat_alloc_output_context2(&out_ctx, out_fmt, NULL, NULL)) < 0) {
 fprintf(stderr, "Could not allocate output context\n");
 return 1;
 }

 stream = avformat_new_stream(out_ctx, NULL);
 if (!stream) {
 fprintf(stderr, "Could not create new stream\n");
 return 1;
 }

 enc_codec = avcodec_find_encoder(AV_CODEC_ID_GIF);
 if (!enc_codec) {
 fprintf(stderr, "Encoder not found\n");
 return 1;
 }

 enc_ctx = avcodec_alloc_context3(enc_codec);
 if (!enc_ctx) {
 fprintf(stderr, "Could not allocate encoder context\n");
 return 1;
 }

 if ((ret = avcodec_parameters_from_context(stream->codecpar, dec_ctx)) < 0) {
 fprintf(stderr, "Could not copy decoder parameters\n");
 return 1;
 }

 if ((ret = avcodec_open2(enc_ctx, enc_codec, NULL)) < 0) {
 fprintf(stderr, "Could not open encoder\n");
 return 1;
 }
 
 enc_ctx->pix_fmt = AV_PIX_FMT_RGB8; 
 enc_ctx->width = dec_ctx->width; 
 enc_ctx->height = dec_ctx->height; 
 enc_ctx->time_base = (AVRational){1, 25}; 

 avformat_init_output(out_ctx, NULL);

 if (!(out_fmt->flags & AVFMT_NOFILE)) {
 if ((ret = avio_open(&out_ctx->pb, "output.gif", AVIO_FLAG_WRITE)) < 0) {
 fprintf(stderr, "Could not open output file\n");
 return ret;
 }
 }

 avformat_write_header(out_ctx, NULL);

 AVFrame *frame = av_frame_alloc();
 AVPacket pkt;
 int frame_count = 0;

 while (av_read_frame(fmt_ctx, &pkt) >= 0) {
 avcodec_send_packet(dec_ctx, &pkt);
 while (avcodec_receive_frame(dec_ctx, frame) == 0) {
 avcodec_send_frame(enc_ctx, frame);
 while (avcodec_receive_packet(enc_ctx, &pkt) == 0) {
 pkt.stream_index = stream->index;
 av_interleaved_write_frame(out_ctx, &pkt);
 av_packet_unref(&pkt);
 }

 frame_count++;
 printf("Encoded frame %d\n", frame_count);
 }
 av_packet_unref(&pkt);
 }

 av_write_trailer(out_ctx);

 avcodec_close(enc_ctx);
 avcodec_free_context(&enc_ctx);
 avcodec_close(dec_ctx);
 avcodec_free_context(&dec_ctx);
 av_frame_free(&frame);

 avformat_close_input(&fmt_ctx);
 avformat_free_context(fmt_ctx);
 avio_close(out_ctx->pb);
 avformat_free_context(out_ctx);

 return 0;
}




Below is the shell compile script for the gif demo:


#!/bin/sh
gcc -o x2gif x2gif.c -L ./output/lib/ -l:libavformat.a -l:libavcodec.a -l:libavutil.a -lz -I ./output/include/



Unfortunately, the build does not link.
How can I troubleshoot and resolve this issue? The full linker output is below, and a possible fix is sketched after it.


./output/lib/libavutil.a(lfg.o): In function `av_bmg_get':
/data1/yang/tool/ffmpeg-3.4.4/libavutil/lfg.c:59: undefined reference to `log'
./output/lib/libavutil.a(hwcontext_cuda.o): In function `cuda_free_functions':
/data1/yang/tool/ffmpeg-3.4.4/./compat/cuda/dynlink_loader.h:175: undefined reference to `dlclose'
./output/lib/libavutil.a(hwcontext_cuda.o): In function `cuda_load_functions':
/data1/yang/tool/ffmpeg-3.4.4/./compat/cuda/dynlink_loader.h:192: undefined reference to `dlopen'
/data1/yang/tool/ffmpeg-3.4.4/./compat/cuda/dynlink_loader.h:194: undefined reference to `dlsym'
/data1/yang/tool/ffmpeg-3.4.4/./compat/cuda/dynlink_loader.h:195: undefined reference to `dlsym'
/data1/yang/tool/ffmpeg-3.4.4/./compat/cuda/dynlink_loader.h:196: undefined reference to `dlsym'
/data1/yang/tool/ffmpeg-3.4.4/./compat/cuda/dynlink_loader.h:197: undefined reference to `dlsym'
/data1/yang/tool/ffmpeg-3.4.4/./compat/cuda/dynlink_loader.h:198: undefined reference to `dlsym'
./output/lib/libavutil.a(hwcontext_cuda.o):/data1/yang/tool/ffmpeg-3.4.4/./compat/cuda/dynlink_loader.h:199: more undefined references to `dlsym' follow
./output/lib/libavutil.a(rational.o): In function `av_d2q':
/data1/yang/tool/ffmpeg-3.4.4/libavutil/rational.c:120: undefined reference to `floor'
./output/lib/libavutil.a(eval.o): In function `eval_expr':
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:184: undefined reference to `exp'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:185: undefined reference to `exp'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:189: undefined reference to `floor'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:190: undefined reference to `ceil'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:191: undefined reference to `trunc'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:192: undefined reference to `round'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:263: undefined reference to `pow'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:300: undefined reference to `floor'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:309: undefined reference to `pow'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:315: undefined reference to `hypot'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:316: undefined reference to `atan2'
./output/lib/libavutil.a(eval.o): In function `ff_exp10':
/data1/yang/tool/ffmpeg-3.4.4/libavutil/ffmath.h:44: undefined reference to `exp2'
./output/lib/libavutil.a(eval.o): In function `parse_primary':
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:417: undefined reference to `sinh'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:418: undefined reference to `cosh'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:419: undefined reference to `tanh'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:420: undefined reference to `sin'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:421: undefined reference to `cos'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:422: undefined reference to `tan'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:423: undefined reference to `atan'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:424: undefined reference to `asin'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:425: undefined reference to `acos'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:426: undefined reference to `exp'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:427: undefined reference to `log'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:428: undefined reference to `fabs'
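
All of the undefined symbols above come from system libraries that the static FFmpeg archives depend on but that the gcc command never links: the C math library (log, floor, exp, pow, sinh, ...) and libdl (dlopen, dlsym, dlclose, pulled in by the CUDA dynlink loader). A sketch of a corrected link line; -lpthread is added speculatively, since static FFmpeg builds often need it as well:

#!/bin/sh
# Sketch: same link command as above, with the missing system libraries
# appended after the FFmpeg archives (-lm for the math symbols, -ldl for
# dlopen/dlsym/dlclose; -lpthread is a guess that is often needed too).
gcc -o x2gif x2gif.c -I ./output/include/ \
 -L ./output/lib/ -l:libavformat.a -l:libavcodec.a -l:libavutil.a \
 -lz -lm -ldl -lpthread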



-
The 7 GDPR Principles: A Guide to Compliance
11 August 2023, by Erin — Analytics Tips, GDPR