
Media (1)
-
The Pirate Bay from Belgium
1 April 2013
Updated: April 2013
Language: French
Type: Image
Other articles (19)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If necessary, contact the administrator of your MediaSPIP to find out.
-
XMP PHP
13 May 2011
As Wikipedia puts it, XMP stands for:
Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being based on XML, it handles a dynamic set of tags for use in the Semantic Web.
XMP makes it possible to record, as an XML document, information about a file: title, author, history (...)
-
Enabling/disabling features (plugins)
18 February 2011
To manage adding and removing extra features (plugins), MediaSPIP has used SVP since version 0.2.
SVP makes it easy to activate plugins from the MediaSPIP configuration area.
To access it, simply go to the configuration area and then to the "Gestion des plugins" (plugin management) page.
By default, MediaSPIP ships with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work perfectly with each (...)
On other sites (4283)
-
avcodec/libzvbi-teletextdec: fix txt_default_region limits
9 June 2020, by Marton Balint
-
libav ffmpeg - streaming from both a mkv and input stream
20 January 2020, by kealist
I am trying to use the ffmpeg libraries in C# with the AutoGen bindings. The overall issue is that I am taking a collection of sources, some of which are streams and some of which are .mkv files containing recordings of a stream. For now, they are all h264 and video-only. For input streams, I am able to adjust the packets and broadcast them, and that works fine, but any time I try to call
av_interleaved_write_frame
with packets from the MKV file, I get the error:
Error occurred: Invalid data found when processing input
Here is the main loop, where the error happens for MKV files. Is there an extra step?
/* read all packets */
while (true)
{
    if ((ret = ffmpeg.av_read_frame(ifmt_ctx, &packet)) < 0)
    {
        Console.WriteLine("Unable to read packet");
        break;
    }
    stream_index = (uint)packet.stream_index;
    type = ifmt_ctx->streams[packet.stream_index]->codecpar->codec_type;
    Console.WriteLine($"Demuxer gave frame of stream_index {stream_index}");

    /* remux this frame without reencoding */
    ffmpeg.av_packet_rescale_ts(&packet,
        ifmt_ctx->streams[stream_index]->time_base,
        ofmt_ctx->streams[stream_index]->time_base);

    if (packet.stream_index < 0)
    {
        Console.WriteLine("Packet stream error");
    }

    ret = ffmpeg.av_write_frame(ofmt_ctx, &packet);
    if (ret < 0)
    {
        goto end;
    }
    else
    {
        ffmpeg.av_packet_unref(&packet);
    }
}

Does anything need to be different for MKV files?
I get some contradictory error output, where it claims the input is already Annex B but also complains that it is not:
[AVBSFContext @ 00000220eb657080] The input looks like it is Annex B already
Automatically inserted bitstream filter 'h264_mp4toannexb'; args=''
[mpegts @ 00000220ebace300] H.264 bitstream malformed, no startcode found, use the video bitstream filter 'h264_mp4toannexb' to fix it ('-bsf:v h264_mp4toannexb' option with ffmpeg)
Verbose output from ffplay for an MKV file:
ffplay version git-2020-01-13-7225479 Copyright (c) 2003-2020 the FFmpeg developers
built with gcc 9.2.1 (GCC) 20200111
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf
libavutil 56. 38.100 / 56. 38.100
libavcodec 58. 65.103 / 58. 65.103
libavformat 58. 35.102 / 58. 35.102
libavdevice 58. 9.103 / 58. 9.103
libavfilter 7. 71.100 / 7. 71.100
libswscale 5. 6.100 / 5. 6.100
libswresample 3. 6.100 / 3. 6.100
libpostproc 55. 6.100 / 55. 6.100
Initialized direct3d renderer.
[h264 @ 00000165ed18d140] Reinit context to 640x480, pix_fmt: yuv444p
Input #0, matroska,webm, from '.\webcam_14_Test1.mkv': 0B f=0/0
Metadata:
ENCODER : Lavf58.12.100
Duration: 00:00:39.30, start: 0.000000, bitrate: 1943 kb/s
Stream #0:0: Video: h264 (High 4:4:4 Predictive), 1 reference frame, yuv444p(progressive, left), 640x480 [SAR 1:1 DAR 4:3], 1k fps, 30 tbr, 1k tbn, 60 tbc (default)
Metadata:
DURATION : 00:00:39.299000000
[h264 @ 00000165f424e200] Reinit context to 640x480, pix_fmt: yuv444p
[ffplay_buffer @ 00000165f52ea840] w:640 h:480 pixfmt:yuv444p tb:1/1000 fr:30/1 sar:1/1
[auto_scaler_0 @ 00000165ed1d2c80] w:iw h:ih flags:'bicubic' interl:0
[ffplay_buffersink @ 00000165f424ef00] auto-inserting filter 'auto_scaler_0' between the filter 'ffplay_buffer' and the filter 'ffplay_buffersink'
[auto_scaler_0 @ 00000165ed1d2c80] w:640 h:480 fmt:yuv444p sar:1/1 -> w:640 h:480 fmt:yuv420p sar:1/1 flags:0x4
Created 640x480 texture with SDL_PIXELFORMAT_IYUV.
[AVIOContext @ 00000165ed179a40] Statistics: 9547965 bytes read, 0 seeks
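The log above shows libavformat auto-inserting h264_mp4toannexb yet still failing to find start codes, so one thing worth trying (a sketch added here for illustration, not part of the original question) is applying that bitstream filter explicitly, initialised from the input stream's codec parameters, before handing packets to the MPEG-TS muxer. The names bsf, bsfCtx and video_index are placeholders, error checks are omitted, and it assumes the FFmpeg.AutoGen bindings expose the usual av_bsf_* functions:

// Hypothetical setup, once per input file, before the read loop:
AVBitStreamFilter* bsf = ffmpeg.av_bsf_get_by_name("h264_mp4toannexb");
AVBSFContext* bsfCtx = null;
ffmpeg.av_bsf_alloc(bsf, &bsfCtx);
ffmpeg.avcodec_parameters_copy(bsfCtx->par_in, ifmt_ctx->streams[video_index]->codecpar);
bsfCtx->time_base_in = ifmt_ctx->streams[video_index]->time_base;
ffmpeg.av_bsf_init(bsfCtx);

// Inside the read loop, run each demuxed packet through the filter
// before rescaling and writing it:
ffmpeg.av_bsf_send_packet(bsfCtx, &packet);
while (ffmpeg.av_bsf_receive_packet(bsfCtx, &packet) == 0)
{
    ffmpeg.av_packet_rescale_ts(&packet,
        ifmt_ctx->streams[stream_index]->time_base,
        ofmt_ctx->streams[stream_index]->time_base);
    ret = ffmpeg.av_interleaved_write_frame(ofmt_ctx, &packet);
    ffmpeg.av_packet_unref(&packet);
}

// After the loop:
ffmpeg.av_bsf_free(&bsfCtx);

-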
Error: Cannot find ffmpeg in firebase cloud function
6 November 2024, by Ahmed Wagdi
I'm trying to compress some files uploaded to Firebase Storage using Firebase Cloud Functions, but it gives me the error
Error: Cannot find ffmpeg


Here is my function:


const functions = require("firebase-functions");
const admin = require("firebase-admin");
const ffmpeg = require("fluent-ffmpeg");
const ffmpegStatic = require("ffmpeg-static");
const axios = require("axios");

// const {onSchedule} = require("firebase-functions/v2/scheduler");

admin.initializeApp();

// Ensure fluent-ffmpeg uses the binary
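// Possible cause of the error (assumption, not from the original post): recent
// ffmpeg-static versions export the binary path string itself, i.e.
// require("ffmpeg-static") is the path, so ffmpegStatic.path may be undefined here.
// fluent-ffmpeg would then look for a system ffmpeg, which the Cloud Functions
// runtime does not provide, matching "Error: Cannot find ffmpeg". ffmpeg.ffprobe()
// below also needs an ffprobe binary (e.g. from ffprobe-static, via setFfprobePath).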
ffmpeg.setFfmpegPath(ffmpegStatic.path);

const db = admin.firestore();
const bucket = admin.storage().bucket();
const fs = require("fs");
const downloadVideo = async (url, outputPath) => {
  const response = await axios.get(url, {responseType: "stream"});
  const writer = fs.createWriteStream(outputPath);
  response.data.pipe(writer);
  return new Promise((resolve, reject) => {
    writer.on("finish", () => resolve(outputPath));
    writer.on("error", reject);
  });
};

const compressVideo = (videoFullPath, outputFileName, targetSize) => {
  return new Promise((resolve, reject) => {
    ffmpeg.ffprobe(videoFullPath, (err, metadata) => {
      if (err) return reject(err);
      const duration = parseFloat(metadata.format.duration);
      const targetTotalBitrate =
        (targetSize * 1024 * 8) / (1.073741824 * duration);

      let audioBitrate =
        metadata.streams.find((s) => s.codec_type === "audio").bit_rate;
      if (10 * audioBitrate > targetTotalBitrate) {
        audioBitrate = targetTotalBitrate / 10;
      }

      const videoBitrate = targetTotalBitrate - audioBitrate;
      ffmpeg(videoFullPath)
          .output(outputFileName)
          .videoCodec("libx264")
          .audioCodec("aac")
          .videoBitrate(videoBitrate)
          .audioBitrate(audioBitrate)
          .on("end", resolve)
          .on("error", reject)
          .run();
    });
  });
};

const uploadVideoWithResumableUpload = (filePath, destinationBlobName) => {
  const blob = bucket.file(destinationBlobName);
  const options = {resumable: true, validation: "crc32c"};
  return blob.createWriteStream(options).end(fs.readFileSync(filePath));
};

exports.processLessonsOnDemand = functions.https.onRequest(
  {timeoutSeconds: 3600, memory: "2GB"},
  async (context) => {
    console.log("Fetching lessons from Firestore...");
    const lessonsRef = db.collection("leassons");
    const lessonsSnapshot = await lessonsRef.get();

    if (lessonsSnapshot.empty) {
      console.log("No lessons found in Firestore.");
      return; // Exit if no lessons are available
    }

    const lessonDoc = lessonsSnapshot.docs[0]; // Get the first document
    const lessonData = lessonDoc.data();

    if (lessonData.shrinked) {
      console.log(
        `Skipping lesson ID ${lessonDoc.id} as it's already shrunk.`,
      );
      return; // Exit if the first lesson is already shrunk
    }

    const videoURL = lessonData.videoURL;
    if (!videoURL) {
      console.log(
        `No video URL for lesson ID: ${lessonDoc.id}. Skipping...`,
      );
      return; // Exit if no video URL is available
    }

    const tempVideoPath = "/tmp/temp_video.mp4";

    try {
      await downloadVideo(videoURL, tempVideoPath);

      const targetSize = (fs.statSync(tempVideoPath).size * 0.30) / 1024;
      const outputCompressedVideo = `/tmp/compressed_${lessonDoc.id}.mp4`;

      await compressVideo(tempVideoPath, outputCompressedVideo, targetSize);

      await uploadVideoWithResumableUpload(
        outputCompressedVideo,
        `compressed_videos/compressed_${lessonDoc.id}.mp4`,
      );

      const newVideoURL = `https://storage.googleapis.com/${bucket.name}/compressed_videos/compressed_${lessonDoc.id}.mp4`;

      const oldVideoPath = videoURL.replace(`https://storage.googleapis.com/${bucket.name}/`, "");
      const oldBlob = bucket.file(oldVideoPath);
      await oldBlob.delete();

      await lessonsRef.doc(lessonDoc.id).update({
        videoURL: newVideoURL,
        shrinked: true,
      });

      console.log(`Processed lesson ID: ${lessonDoc.id}`);
      fs.unlinkSync(tempVideoPath); // Clean up temporary files
      fs.unlinkSync(outputCompressedVideo); // Clean up compressed file
    } catch (error) {
      console.error(`Error processing lesson ID ${lessonDoc.id}:`, error);
    }
  });