
Media (3)
-
The Slip - Artworks
26 September 2011, by
Updated: September 2011
Language: English
Type: Text
-
Podcasting Legal Guide
16 May 2011, by
Updated: May 2011
Language: English
Type: Text
-
Creative Commons informational flyer
16 May 2011, by
Updated: July 2013
Language: English
Type: Text
Other articles (80)
-
MediaSPIP v0.2
21 June 2013, by — MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, announced here.
The zip file provided here contains only the MediaSPIP sources, in standalone form.
As with the previous version, the full set of software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...) -
Media-specific libraries and software
10 December 2010, by — For correct and optimal operation, several things need to be taken into account.
After installing apache2, mysql and php5, it is important to install the other required software, whose installation is described in the related links: a set of multimedia libraries (x264, libtheora, libvpx) used for encoding and decoding video and audio in order to support as many file types as possible (see this tutorial); FFmpeg with the maximum number of decoders and (...) -
Organising by category
17 May 2013, by — In MediaSPIP, a rubrique has two names: category and rubrique.
The documents stored in MediaSPIP can be filed under different categories. You can create a category by clicking "publish a category" in the publish menu at the top right (after logging in). A category can itself be placed inside another category, so you can build a tree of categories.
When you next publish a document, the newly created category will be offered (...)
On other sites (6047)
-
How to mix videos from two participants in a video call layout when the participants' videos are not in sync, using Janus videoroom and FFmpeg?
5 March 2024, by Adrien3001 — I am using Janus Gateway to implement a video call app with the videoroom plugin. If, for example, there are two participants in the call, four MJR files are recorded (one audio and one video MJR per user). I use FFmpeg to first convert each user's audio to Opus and video to WebM, then combine the video with the audio for each user. Finally, I want to mix the two resulting WebM videos into a video call layout where one video overlays the other at the bottom right corner as a smaller screen (picture-in-picture mode). The problem is that one of the WebM videos sometimes has a longer duration than the other, and the final MP4 ends up out of sync. This is the bash script I'm using to process the recordings into the final MP4:


#!/bin/bash
set -x # Enable debugging

# Check if the correct number of arguments are provided
if [ "$#" -ne 3 ]; then
 echo "Usage: $0 videoroomid bossid minionid"
 exit 1
fi

videoroomid=$1
bossid=$2
minionid=$3

# Define paths and tools
MJR_DIR="/opt/janus/recordings-folder"
OUTPUT_DIR="/opt/janus/recordings-folder"
JANUS_PP_REC="/usr/bin/janus-pp-rec"
FFMPEG="/usr/bin/ffmpeg"
THUMBNAIL_DIR="/home/adrienubuntu/okok.spassolab-ubuntu.com/okok-recordings-thumbnail"

# Function to convert MJR to WebM (for video) and Opus (for audio)
convert_mjr() {
    local mjr_file=$1
    local output_file=$2
    local type=$(basename "$mjr_file" | cut -d '-' -f 6)

    echo "Attempting to convert file: $mjr_file"

    if [ ! -f "$mjr_file" ]; then
        echo "MJR file not found: $mjr_file"
        return 1
    fi

    if [ "$type" == "audio" ]; then
        # Convert audio to Opus format
        echo "Converting audio to Opus: $mjr_file"
        $JANUS_PP_REC "$mjr_file" "$output_file"
        if [ $? -ne 0 ]; then
            echo "Conversion failed for file: $mjr_file"
            return 1
        fi
        # Check and adjust audio sample rate
        adjust_audio_sample_rate "$output_file"
    elif [ "$type" == "video" ]; then
        # Convert video to WebM format with VP8 video codec
        echo "Converting video to WebM: $mjr_file"
        $JANUS_PP_REC "$mjr_file" "$output_file"
        if [ $? -ne 0 ]; then
            echo "Conversion failed for file: $mjr_file"
            return 1
        fi
        # Check and convert to constant frame rate
        convert_to_constant_frame_rate "$output_file"
    fi

    echo "Conversion successful: $output_file"
    return 0
}

# Function to merge audio (Opus) and video (WebM) files
merge_audio_video() {
    local audio_file=$1
    local video_file=$2
    local merged_file="${video_file%.*}_merged.webm"

    echo "Merging audio and video files into: $merged_file"

    # Merge audio and video
    $FFMPEG -y -i "$video_file" -i "$audio_file" -c:v copy -c:a libopus -map 0:v:0 -map 1:a:0 "$merged_file"

    if [ $? -eq 0 ]; then
        echo "Merging successful: $merged_file"
        return 0
    else
        echo "Error during merging."
        return 1
    fi
}

# Function to check if MJR files exist
check_mjr_files_exist() {
    local videoroomid=$1
    local bossid=$2
    local minionid=$3

    if ! ls ${MJR_DIR}/videoroom-${videoroomid}-user-${bossid}-*-video-*.mjr &>/dev/null ||
       ! ls ${MJR_DIR}/videoroom-${videoroomid}-user-${minionid}-*-video-*.mjr &>/dev/null; then
        echo "Error: MJR files not found for videoroomid: $videoroomid, bossid: $bossid, minionid: $minionid"
        exit 1
    fi
}

# Function to calculate delay
calculate_delay() {
    local video1=$1
    local video2=$2

    # Get the start time of the first video
    local start_time1=$(ffprobe -v error -show_entries format=start_time -of default=noprint_wrappers=1:nokey=1 "$video1")

    # Get the start time of the second video
    local start_time2=$(ffprobe -v error -show_entries format=start_time -of default=noprint_wrappers=1:nokey=1 "$video2")

    # Calculate the delay (in seconds)
    local delay=$(echo "$start_time2 - $start_time1" | bc)

    # If the delay is negative, make it positive
    if [ $(echo "$delay < 0" | bc) -eq 1 ]; then
        delay=$(echo "-1 * $delay" | bc)
    fi

    echo "$delay"
}

# Function to adjust audio sample rate
adjust_audio_sample_rate() {
    local audio_file=$1
    local desired_sample_rate=48000 # Set the desired sample rate

    # Get the current sample rate of the audio file
    local current_sample_rate=$(ffprobe -v error -show_entries stream=sample_rate -of default=noprint_wrappers=1:nokey=1 "$audio_file")

    # Check if the sample rate needs to be adjusted
    if [ "$current_sample_rate" -ne "$desired_sample_rate" ]; then
        echo "Adjusting audio sample rate from $current_sample_rate to $desired_sample_rate"
        local temp_file="${audio_file%.*}_temp.opus"
        $FFMPEG -y -i "$audio_file" -ar "$desired_sample_rate" "$temp_file"
        mv "$temp_file" "$audio_file"
    fi
}

# Function to convert video to a constant frame rate
convert_to_constant_frame_rate() {
    local video_file=$1
    local desired_frame_rate=30 # Set the desired frame rate

    # Check if the video has a variable frame rate
    local has_vfr=$(ffprobe -v error -select_streams v -show_entries stream=r_frame_rate -of default=noprint_wrappers=1:nokey=1 "$video_file")

    if [ "$has_vfr" == "0/0" ]; then
        echo "Video has a variable frame rate. Converting to a constant frame rate of $desired_frame_rate fps."
        local temp_file="${video_file%.*}_temp.webm"
        $FFMPEG -y -i "$video_file" -r "$desired_frame_rate" -c:v libvpx -b:v 1M -c:a copy "$temp_file"
        mv "$temp_file" "$video_file"
    fi
}

# Main processing function
process_videos() {
    # Check if MJR files exist
    check_mjr_files_exist "$videoroomid" "$bossid" "$minionid"

    # Output a message indicating the start of processing
    echo "Processing started for videoroomid: $videoroomid, bossid: $bossid, minionid: $minionid"

    # Process boss's files
    local boss_audio_files=($(ls ${MJR_DIR}/videoroom-${videoroomid}-user-${bossid}-*-audio-*.mjr))
    local boss_video_files=($(ls ${MJR_DIR}/videoroom-${videoroomid}-user-${bossid}-*-video-*.mjr))
    local boss_merged_files=()

    for i in "${!boss_audio_files[@]}"; do
        local audio_file=${boss_audio_files[$i]}
        local video_file=${boss_video_files[$i]}
        convert_mjr "$audio_file" "${audio_file%.*}.opus"
        convert_mjr "$video_file" "${video_file%.*}.webm"
        if merge_audio_video "${audio_file%.*}.opus" "${video_file%.*}.webm"; then
            boss_merged_files+=("${video_file%.*}_merged.webm")
        fi
    done

    # Concatenate boss's merged files
    if [ ${#boss_merged_files[@]} -gt 0 ]; then
        local boss_concat_list=$(mktemp)
        for file in "${boss_merged_files[@]}"; do
            echo "file '$file'" >> "$boss_concat_list"
        done
        $FFMPEG -y -f concat -safe 0 -i "$boss_concat_list" -c copy "${OUTPUT_DIR}/${bossid}_final.webm"
        rm "$boss_concat_list"
    fi

    # Process minion's files
    local minion_audio_files=($(ls ${MJR_DIR}/videoroom-${videoroomid}-user-${minionid}-*-audio-*.mjr))
    local minion_video_files=($(ls ${MJR_DIR}/videoroom-${videoroomid}-user-${minionid}-*-video-*.mjr))
    local minion_merged_files=()

    for i in "${!minion_audio_files[@]}"; do
        local audio_file=${minion_audio_files[$i]}
        local video_file=${minion_video_files[$i]}
        convert_mjr "$audio_file" "${audio_file%.*}.opus"
        convert_mjr "$video_file" "${video_file%.*}.webm"
        if merge_audio_video "${audio_file%.*}.opus" "${video_file%.*}.webm"; then
            minion_merged_files+=("${video_file%.*}_merged.webm")
        fi
    done

    # Concatenate minion's merged files
    if [ ${#minion_merged_files[@]} -gt 0 ]; then
        local minion_concat_list=$(mktemp)
        for file in "${minion_merged_files[@]}"; do
            echo "file '$file'" >> "$minion_concat_list"
        done
        $FFMPEG -y -f concat -safe 0 -i "$minion_concat_list" -c copy "${OUTPUT_DIR}/${minionid}_final.webm"
        rm "$minion_concat_list"
    fi

    if [ -f "${OUTPUT_DIR}/${bossid}_final.webm" ] && [ -f "${OUTPUT_DIR}/${minionid}_final.webm" ]; then
        final_mp4="${OUTPUT_DIR}/final-output-${videoroomid}-${bossid}-${minionid}.mp4"
        echo "Combining boss and minion videos into: $final_mp4"

        # Calculate the delay between the boss and minion videos
        delay=$(calculate_delay "${OUTPUT_DIR}/${bossid}_final.webm" "${OUTPUT_DIR}/${minionid}_final.webm")

        # Convert the delay to milliseconds for the adelay filter
        delay_ms=$(echo "$delay * 1000" | bc)

        $FFMPEG -i "${OUTPUT_DIR}/${bossid}_final.webm" -i "${OUTPUT_DIR}/${minionid}_final.webm" -filter_complex \
            "[0:v]transpose=1,scale=160:-1[boss_clip]; \
             [0:a]volume=2.0[boss_audio]; \
             [1:a]volume=2.0,adelay=${delay_ms}|${delay_ms}[minion_audio]; \
             [1:v][boss_clip]overlay=W-w-10:H-h-10:shortest=0[output]; \
             [boss_audio][minion_audio]amix=inputs=2:duration=longest[audio]" \
            -map "[output]" -map "[audio]" -c:v libx264 -crf 20 -preset veryfast -c:a aac -strict experimental "$final_mp4"

        if [ $? -ne 0 ]; then
            echo "Error combining boss and minion videos"
            exit 1
        else
            echo "Combining boss and minion videos successful"
            # Generate a thumbnail at 5 seconds into the video
            thumbnail="${OUTPUT_DIR}/$(basename "$final_mp4" .mp4).png"
            echo "Generating thumbnail for: $final_mp4"
            $FFMPEG -ss 00:00:05 -i "$final_mp4" -vframes 1 -q:v 2 "$thumbnail"
            echo "Thumbnail generated: $thumbnail"
            sudo mv -f "$thumbnail" "/home/adrienubuntu/okok.spassolab-ubuntu.com/okok-recordings-thumbnail/"

            sudo mv -f "$final_mp4" "/home/adrienubuntu/okok.spassolab-ubuntu.com/okok-live-recordings/"
            rm -f "${OUTPUT_DIR}"/*.opus "${OUTPUT_DIR}"/*.webm "${OUTPUT_DIR}"/*.mp4
        fi
    else
        echo "Error: One or both final videos are missing"
        exit 1
    fi

    # Output a message indicating the end of processing
    echo "Processing completed for videoroomid: $videoroomid, bossid: $bossid, minionid: $minionid"
}

process_videos



I then test by calling ./name-of-file.sh videoroomid bossid minionid. Is there a way to solve this while keeping the whole process dynamic? Thanks in advance.
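
One possible direction, sketched below, is to equalise the two merged files' durations before the overlay step: probe both with ffprobe and pad the shorter one. This is only a sketch using placeholder filenames; the tpad/apad padding and the VP8/Opus re-encode settings are assumptions, not part of the original script.

#!/bin/bash
# Hypothetical helper: equalise the durations of two merged recordings
# so the later overlay/amix step stays in sync. Filenames are placeholders.
a="boss_final.webm"; b="minion_final.webm"

dur_a=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "$a")
dur_b=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "$b")

# Work out which file is shorter and by how many seconds (via bc)
diff=$(echo "$dur_a - $dur_b" | bc)
if [ "$(echo "$diff < 0" | bc)" -eq 1 ]; then
    short="$a"; pad=$(echo "-1 * $diff" | bc)
else
    short="$b"; pad="$diff"
fi

# Clone the last video frame and pad the audio with silence for $pad seconds.
# The padded file has to be re-encoded (VP8/Opus keeps it in WebM).
ffmpeg -y -i "$short" -filter_complex \
    "[0:v]tpad=stop_mode=clone:stop_duration=${pad}[v]; \
     [0:a]apad=pad_dur=${pad}[a]" \
    -map "[v]" -map "[a]" -c:v libvpx -b:v 1M -c:a libopus "${short%.*}_padded.webm"

Running something like this on the shorter of the two *_final.webm files before the final filter_complex would let overlay and amix see equal-length inputs.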


-
How to use FFmpeg API overlay filter in C / C++
23 February 2021, by yildizmehmet — I have a C++ project that creates a 24/7, WebTV-like RTMP stream and supports runtime operations such as changing the current content, seeking within content, and looping through a playlist built from a JSON array; it also supports replacing the whole playlist at runtime.



Currently I read H264- and AAC-encoded packets from MP4 files and send them to the destination RTMP server after adjusting their PTS and DTS values, without any encoding or decoding.
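
For context, the timestamp adjustment described here usually amounts to rescaling each packet between stream time bases before muxing; below is a minimal sketch of that step, where retime_packet, the stream pointers, and the offset variable are illustrative placeholders rather than the project's actual code.

/* Sketch: rescale a demuxed packet from the input stream's time base to the
 * output stream's before muxing; "offset" shifts playlist items back-to-back.
 * Names here are placeholders, not the project's real API. */
#include <libavformat/avformat.h>

static void retime_packet(AVPacket *pkt, const AVStream *in_stream,
                          const AVStream *out_stream, int64_t offset)
{
    av_packet_rescale_ts(pkt, in_stream->time_base, out_stream->time_base);
    pkt->pts += offset;
    pkt->dts += offset;
    pkt->pos = -1; /* byte position is unknown after remuxing */
}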



But I want to apply overlay images to the raw frames, using FFmpeg's "overlay" filter, after decoding the H264 packets. I looked at the sample that ships with the FFmpeg examples:



#define _XOPEN_SOURCE 600 /* for usleep */
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/opt.h>

const char *filter_descr = "scale=78:24,transpose=cclock";
/* other way:
 scale=78:24 [scl]; [scl] transpose=cclock // assumes "[in]" and "[out]" to be input output pads respectively
 */

static AVFormatContext *fmt_ctx;
static AVCodecContext *dec_ctx;
AVFilterContext *buffersink_ctx;
AVFilterContext *buffersrc_ctx;
AVFilterGraph *filter_graph;
static int video_stream_index = -1;
static int64_t last_pts = AV_NOPTS_VALUE;

static int open_input_file(const char *filename)
{
    int ret;
    AVCodec *dec;

    if ((ret = avformat_open_input(&fmt_ctx, filename, NULL, NULL)) < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
        return ret;
    }

    if ((ret = avformat_find_stream_info(fmt_ctx, NULL)) < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
        return ret;
    }

    /* select the video stream */
    ret = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot find a video stream in the input file\n");
        return ret;
    }
    video_stream_index = ret;

    /* create decoding context */
    dec_ctx = avcodec_alloc_context3(dec);
    if (!dec_ctx)
        return AVERROR(ENOMEM);
    avcodec_parameters_to_context(dec_ctx, fmt_ctx->streams[video_stream_index]->codecpar);

    /* init the video decoder */
    if ((ret = avcodec_open2(dec_ctx, dec, NULL)) < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot open video decoder\n");
        return ret;
    }

    return 0;
}

static int init_filters(const char *filters_descr)
{
    char args[512];
    int ret = 0;
    const AVFilter *buffersrc = avfilter_get_by_name("buffer");
    const AVFilter *buffersink = avfilter_get_by_name("buffersink");
    AVFilterInOut *outputs = avfilter_inout_alloc();
    AVFilterInOut *inputs = avfilter_inout_alloc();
    AVRational time_base = fmt_ctx->streams[video_stream_index]->time_base;
    enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_GRAY8, AV_PIX_FMT_NONE };

    filter_graph = avfilter_graph_alloc();
    if (!outputs || !inputs || !filter_graph) {
        ret = AVERROR(ENOMEM);
        goto end;
    }

    /* buffer video source: the decoded frames from the decoder will be inserted here. */
    snprintf(args, sizeof(args),
            "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
            dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
            time_base.num, time_base.den,
            dec_ctx->sample_aspect_ratio.num, dec_ctx->sample_aspect_ratio.den);

    ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
                                       args, NULL, filter_graph);
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
        goto end;
    }

    /* buffer video sink: to terminate the filter chain. */
    ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
                                       NULL, NULL, filter_graph);
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
        goto end;
    }

    ret = av_opt_set_int_list(buffersink_ctx, "pix_fmts", pix_fmts,
                              AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
        goto end;
    }

    /*
     * Set the endpoints for the filter graph. The filter_graph will
     * be linked to the graph described by filters_descr.
     */

    /*
     * The buffer source output must be connected to the input pad of
     * the first filter described by filters_descr; since the first
     * filter input label is not specified, it is set to "in" by
     * default.
     */
    outputs->name       = av_strdup("in");
    outputs->filter_ctx = buffersrc_ctx;
    outputs->pad_idx    = 0;
    outputs->next       = NULL;

    /*
     * The buffer sink input must be connected to the output pad of
     * the last filter described by filters_descr; since the last
     * filter output label is not specified, it is set to "out" by
     * default.
     */
    inputs->name       = av_strdup("out");
    inputs->filter_ctx = buffersink_ctx;
    inputs->pad_idx    = 0;
    inputs->next       = NULL;

    if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,
                                        &inputs, &outputs, NULL)) < 0)
        goto end;

    if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
        goto end;

end:
    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);

    return ret;
}

static void display_frame(const AVFrame *frame, AVRational time_base)
{
    int x, y;
    uint8_t *p0, *p;
    int64_t delay;

    if (frame->pts != AV_NOPTS_VALUE) {
        if (last_pts != AV_NOPTS_VALUE) {
            /* sleep roughly the right amount of time;
             * usleep is in microseconds, just like AV_TIME_BASE. */
            delay = av_rescale_q(frame->pts - last_pts,
                                 time_base, AV_TIME_BASE_Q);
            if (delay > 0 && delay < 1000000)
                usleep(delay);
        }
        last_pts = frame->pts;
    }

    /* Trivial ASCII grayscale display. */
    p0 = frame->data[0];
    puts("\033c");
    for (y = 0; y < frame->height; y++) {
        p = p0;
        for (x = 0; x < frame->width; x++)
            putchar(" .-+#"[*(p++) / 52]);
        putchar('\n');
        p0 += frame->linesize[0];
    }
    fflush(stdout);
}

int main(int argc, char **argv)
{
    int ret;
    AVPacket packet;
    AVFrame *frame;
    AVFrame *filt_frame;

    if (argc != 2) {
        fprintf(stderr, "Usage: %s file\n", argv[0]);
        exit(1);
    }

    frame = av_frame_alloc();
    filt_frame = av_frame_alloc();
    if (!frame || !filt_frame) {
        perror("Could not allocate frame");
        exit(1);
    }

    if ((ret = open_input_file(argv[1])) < 0)
        goto end;
    if ((ret = init_filters(filter_descr)) < 0)
        goto end;

    /* read all packets */
    while (1) {
        if ((ret = av_read_frame(fmt_ctx, &packet)) < 0)
            break;

        if (packet.stream_index == video_stream_index) {
            ret = avcodec_send_packet(dec_ctx, &packet);
            if (ret < 0) {
                av_log(NULL, AV_LOG_ERROR, "Error while sending a packet to the decoder\n");
                break;
            }

            while (ret >= 0) {
                ret = avcodec_receive_frame(dec_ctx, frame);
                if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                    break;
                } else if (ret < 0) {
                    av_log(NULL, AV_LOG_ERROR, "Error while receiving a frame from the decoder\n");
                    goto end;
                }

                frame->pts = frame->best_effort_timestamp;

                /* push the decoded frame into the filtergraph */
                if (av_buffersrc_add_frame_flags(buffersrc_ctx, frame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0) {
                    av_log(NULL, AV_LOG_ERROR, "Error while feeding the filtergraph\n");
                    break;
                }

                /* pull filtered frames from the filtergraph */
                while (1) {
                    ret = av_buffersink_get_frame(buffersink_ctx, filt_frame);
                    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                        break;
                    if (ret < 0)
                        goto end;
                    display_frame(filt_frame, buffersink_ctx->inputs[0]->time_base);
                    av_frame_unref(filt_frame);
                }
                av_frame_unref(frame);
            }
        }
        av_packet_unref(&packet);
    }
end:
    avfilter_graph_free(&filter_graph);
    avcodec_free_context(&dec_ctx);
    avformat_close_input(&fmt_ctx);
    av_frame_free(&frame);
    av_frame_free(&filt_frame);

    if (ret < 0 && ret != AVERROR_EOF) {
        fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
        exit(1);
    }

    exit(0);
}




That sample uses these filters:





"scale=78:24,transpose=cclock"





I compiled and ran it with a sample video file, but it just prints ASCII art to the console; the code block below is responsible for that:



/* Trivial ASCII grayscale display. */
p0 = frame->data[0];
puts("\033c");
for (y = 0; y < frame->height; y++) {
    p = p0;
    for (x = 0; x < frame->width; x++)
        putchar(" .-+#"[*(p++) / 52]);
    putchar('\n');
    p0 += frame->linesize[0];
}
fflush(stdout);




I have no issues with encoding and decoding; I just don't know how to apply the "overlay" filter. Are there any tutorials that demonstrate how to use the "overlay" filter?
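
For reference, a minimal sketch of how a two-input graph for the "overlay" filter can be wired with the same API the sample uses. The labels "in_main"/"in_ovl", the frame sizes, pixel format and time bases are placeholder assumptions, and error handling is trimmed to keep the sketch short.

/* Sketch: a two-input graph for the "overlay" filter, built with the same
 * API as the sample above. Sizes, pixel formats and time bases are
 * placeholders and would normally come from the decoder contexts. */
#include <libavfilter/avfilter.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/mem.h>

static AVFilterGraph *graph;
static AVFilterContext *src_main, *src_ovl, *sink;

static int init_overlay_graph(void)
{
    /* main video in the background, overlay 10 px from the bottom-right */
    const char *desc = "[in_main][in_ovl]overlay=W-w-10:H-h-10[out]";
    AVFilterInOut *outs, *ins;
    int ret;

    graph = avfilter_graph_alloc();
    if (!graph)
        return AVERROR(ENOMEM);

    /* one "buffer" source per input stream (args use the buffer filter syntax) */
    ret = avfilter_graph_create_filter(&src_main, avfilter_get_by_name("buffer"),
            "in_main",
            "video_size=1280x720:pix_fmt=yuv420p:time_base=1/25:pixel_aspect=1/1",
            NULL, graph);
    if (ret < 0)
        return ret;

    ret = avfilter_graph_create_filter(&src_ovl, avfilter_get_by_name("buffer"),
            "in_ovl",
            "video_size=320x180:pix_fmt=yuv420p:time_base=1/25:pixel_aspect=1/1",
            NULL, graph);
    if (ret < 0)
        return ret;

    ret = avfilter_graph_create_filter(&sink, avfilter_get_by_name("buffersink"),
            "out", NULL, NULL, graph);
    if (ret < 0)
        return ret;

    /* "outs" lists the open output pads of our sources; "ins" the open
     * input pad of our sink. The labels must match the graph description. */
    outs = avfilter_inout_alloc();
    outs->name       = av_strdup("in_main");
    outs->filter_ctx = src_main;
    outs->pad_idx    = 0;
    outs->next       = avfilter_inout_alloc();
    outs->next->name       = av_strdup("in_ovl");
    outs->next->filter_ctx = src_ovl;
    outs->next->pad_idx    = 0;
    outs->next->next       = NULL;

    ins = avfilter_inout_alloc();
    ins->name       = av_strdup("out");
    ins->filter_ctx = sink;
    ins->pad_idx    = 0;
    ins->next       = NULL;

    ret = avfilter_graph_parse_ptr(graph, desc, &ins, &outs, NULL);
    if (ret >= 0)
        ret = avfilter_graph_config(graph, NULL);
    avfilter_inout_free(&ins);
    avfilter_inout_free(&outs);
    return ret;
}

Decoded frames would then be pushed into src_main and src_ovl with av_buffersrc_add_frame_flags() and the composited frames pulled from sink with av_buffersink_get_frame(), exactly as in the single-input sample.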


-
Revision 30078: Switch to a dynamic tag ... Certain addresses can now be excluded from tracking ...
22 July 2009, by kent1@… — Log: Switch to a dynamic tag ...
Certain IP addresses can now be excluded from tracking
Version bumped to 0.4 to mark the change