-
Create Panorama from Non-Sequential Video Frames
6 May 2021, by M.Innat
There is a similar question (not as detailed, and with no exact solution).



I want to create a single panorama image from video frames. For that, I first need to extract a minimal set of non-sequential video frames. A demo video file is uploaded here.


What I Need


A mechanism that produces non-sequential video frames in such a way that they can be used to create a panorama image. A sample is given below. As we can see, to create a panorama image all the input samples must share a minimum overlap region with each other; otherwise it cannot be done.




So, if I have the following order of video frames


A, A, A, B, B, B, B, C, C, A, A, C, C, C, B, B, B ...



To create a panorama image, I need to get something like the following: reduced sequential frames (or adjacent frames) but with minimum overlap. A sketch of the kind of selection mechanism I have in mind follows the diagram below.


[overlap] [overlap] [overlap] [overlap] [overlap]
 A, A,B, B,C, C,A, A,C, C,B, ...
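
Here is an untested sketch of such a selection mechanism, using ORB feature matching (the lo/hi thresholds are guesses and would need tuning per video): keep a frame only when its match count against the last kept frame has dropped enough (the view has moved on) but is still non-trivial (the two frames still overlap).

import cv2

def pick_overlapping_frames(video_path, lo=60, hi=250):
    """Sketch: keep a frame when ORB matches against the last kept
    frame fall below `hi` (the view has moved on) but stay above
    `lo` (it still overlaps). lo/hi must be tuned per video."""
    orb = cv2.ORB_create(nfeatures=1500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    cap = cv2.VideoCapture(video_path)
    kept, ref_des = [], None
    ok, frame = cap.read()
    while ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, des = orb.detectAndCompute(gray, None)
        if des is not None:
            if ref_des is None:
                kept.append(frame)       # always keep the first frame
                ref_des = des
            elif lo < len(matcher.match(ref_des, des)) < hi:
                kept.append(frame)       # drifted, but still overlapping
                ref_des = des
        ok, frame = cap.read()
    cap.release()
    return kept

The kept frames could then be rotated and stitched as described below.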



What I've Tried and Where I'm Stuck


A demo video clip is given above. To get non-sequential video frames, I primarily rely on the ffmpeg software.

Trial 1 Ref.


ffmpeg -i check.mp4 -vf mpdecimate,setpts=N/FRAME_RATE/TB -map 0:v out.mp4



After that, I sliced out.mp4 into individual frames using opencv:


import cv2
from pathlib import Path

vframe_dir = Path("vid_frames/")
vframe_dir.mkdir(parents=True, exist_ok=True)

vidcap = cv2.VideoCapture('out.mp4')
success, image = vidcap.read()
count = 0

# dump every frame of out.mp4 as a numbered JPEG
# (zero-padded so directory listings sort in frame order)
while success:
    cv2.imwrite(f"{vframe_dir}/frame{count:05d}.jpg", image)
    success, image = vidcap.read()
    count += 1
vidcap.release()



Next, I rotated these saved images to horizontal (as my video is shot in a vertical view).


vframe_dir = Path("out/")
vframe_dir.mkdir(parents=True, exist_ok=True)

vframe_dir_rot = Path("vframe_dir_rot/")
vframe_dir_rot.mkdir(parents=True, exist_ok=True)

for i, each_img in tqdm(enumerate(os.listdir(vframe_dir))):
 image = cv2.imread(f"{vframe_dir}/{each_img}")[:, :, ::-1] # Read (with BGRtoRGB)
 
 image = cv2.rotate(image,cv2.cv2.ROTATE_180)
 image = cv2.rotate(image,cv2.ROTATE_90_CLOCKWISE)

 cv2.imwrite(f"{vframe_dir_rot}/{each_img}", image[:, :, ::-1]) # Save (with RGBtoBGR)



The output of this method (with ffmpeg) is OK, but it is inappropriate for creating the panorama image, because it did not give sequentially overlapping frames in the results. Thus the panorama can't be generated.
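
Possibly mpdecimate just needs tuning: its hi, lo and frac options control how different an 8x8 block must be before a frame counts as new rather than a duplicate, so raising them should drop more near-duplicates. The values below are guesses to experiment with, not known-good settings:

ffmpeg -i check.mp4 -vf mpdecimate=hi=1536:lo=640:frac=0.5,setpts=N/FRAME_RATE/TB -map 0:v out.mp4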



Trial 2 - Ref


ffmpeg -i check.mp4 -vf decimate=cycle=2,setpts=N/FRAME_RATE/TB -map 0:v out.mp4



It didn't work at all.


Trial 3


ffmpeg -i check.mp4 -ss 0 -qscale 0 -f image2 -r 1 out/images%5d.png



No luck either. However, I found this last ffmpeg command was the closest by far, but it wasn't enough. Compared to the others, it gave me a small number of non-duplicate frames (good), but the bad thing is that it still included frames I do not need, so I had to manually pick some desired frames before the opencv stitching algorithm would work. So, after picking some frames and rotating them (as mentioned before):

stitcher = cv2.Stitcher.create()
status, pano = stitcher.stitch(images) # images: manually picked video frames -_- 
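
As a side note (an assumption on my part, not something verified on this video): OpenCV's stitcher also has a SCANS mode, which models mostly planar/affine motion instead of a rotating camera, and the returned status is worth checking rather than assuming success:

import cv2

# SCANS assumes roughly planar/affine motion between frames;
# the default PANORAMA mode assumes a camera rotating in place.
stitcher = cv2.Stitcher.create(cv2.Stitcher_SCANS)
status, pano = stitcher.stitch(images)  # images: the picked frames, as above

if status == cv2.Stitcher_OK:
    cv2.imwrite("pano.jpg", pano)
else:
    # e.g. cv2.Stitcher_ERR_NEED_MORE_IMGS when overlap is insufficient
    print(f"Stitching failed, status={status}")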





Update


After some trials, I am tentatively adopting a non-programming solution. But I would love to see an efficient programmatic approach.


On the given demo video, I used Adobe products (Premiere Pro and Photoshop) to do this task, following a video instruction. But the issue was that I extracted all the video frames at first via Premiere (without dropping any frames, which adds computational cost later) and used Photoshop to stitch them (according to the YouTube video instruction). It was too heavy for these editor tools and not a better approach, but the output was better than anything so far, even though I ended up using only a few (400+) of the 1200+ video frames.




Here are the big challenges. The original video clips come with some serious conditions that, unlike the given demo clip, include:


- It's not straightforward, i.e. there is camera shaking
- Lighting conditions, i.e. the same spot can look visually different
- Camera flickering or banding








These conditions are not present in the given demo video, and they bring additional, heavy challenges for creating panorama images from such footage. Even with the non-programming way (using the Adobe tools) I couldn't get a good result.


However, for now, all I'm interested in is getting a panorama image from the given demo video, which is free of the above conditions. But I would love to hear any comments or suggestions on that as well.


-
Image edited and saved in C# can not be read by ffmpeg
13 July 2017, by mmijic
I have a template image (template.jpg) onto which I draw some text (13.07.2017.) and then save it to another location (temp/intro.jpg).
Then I want to convert that and some other images into a video with ffmpeg.
If I run the command
ffmpeg.exe -f concat -safe 0 -i input.txt -c:v libx264 -vf "fps=25,format=yuv420p" out.mp4
the images get concatenated into one video file. The problem is that the image edited through C# is black in the final video.
If I, for example, open that C#-created image in Adobe Fireworks and just save it (Ctrl+S) without changing anything, and re-run the ffmpeg command, everything is fine.
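
For completeness, input.txt is a standard concat-demuxer list (the -safe 0 flag just allows arbitrary paths in it); it looks roughly like this, where the second file name is a placeholder for the other images:

file 'temp/intro.jpg'
duration 3
file 'temp/other_image.jpg'
duration 3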
This is the code I use to add the text to the template image:
// INTRO IMAGE CREATION
Bitmap image = new Bitmap(1280, 720);
image.SetResolution(image.HorizontalResolution, image.VerticalResolution);
Graphics g = Graphics.FromImage(image);
g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
g.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.HighQuality;
g.TextRenderingHint = TextRenderingHint.AntiAliasGridFit;

StringFormat format = new StringFormat()
{
    Alignment = StringAlignment.Near,
    LineAlignment = StringAlignment.Near
};

// template
Bitmap back = new Bitmap(Application.StartupPath + @"\Templates\template.jpg");
g.DrawImage(back, 0, 0, image.Width, image.Height);

// date
string Date = dateIntro.Value.ToString("dd.MM.yyyy.");
var brush = new SolidBrush(Color.FromArgb(255, 206, 33, 39));
Font font = new Font("Ubuntu", 97, FontStyle.Bold, GraphicsUnit.Pixel);
float x = 617;
float y = 530;
g.DrawString(Date, font, brush, x, y, format);
g.Flush();

image.Save(Application.StartupPath + @"\temp\intro.jpg", ImageFormat.Jpeg);
image.Dispose();

An image created this way can be opened and viewed in any program; it only fails when converted to video with ffmpeg.
Is there anything I'm missing while adding text and saving the image in C#?
-
Failed to play MP3 audio with ffmpeg API in Linux
15 January, by wangt13
I am working on an audio player using the FFmpeg library and ALSA.

The following code fails to play back the MP3 media smoothly (it is slower than it should be, and noisy). I checked the FFmpeg code and examples, but I did not find the right solution.

#include <stdio.h>
#include <stdlib.h>
#include <alsa/asoundlib.h>

#include <libswresample/swresample.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>

int init_pcm_play(snd_pcm_t **playback_handle,snd_pcm_uframes_t chunk_size,unsigned int rate,int bits_per_sample,int channels)
{
 snd_pcm_hw_params_t *hw_params;
 snd_pcm_format_t format;

 // 1. Open the default PCM playback device
 if (0 > snd_pcm_open(playback_handle, "default", SND_PCM_STREAM_PLAYBACK, 0))
 {
 printf("snd_pcm_open err\n");
 return -1;
 }
 // 2. Allocate a snd_pcm_hw_params_t structure
 if(0 > snd_pcm_hw_params_malloc (&hw_params))
 {
 printf("snd_pcm_hw_params_malloc err\n");
 return -1;
 }
 // 3. Fill hw_params with a full configuration space for the device
 if(0 > snd_pcm_hw_params_any (*playback_handle, hw_params))
 {
 printf("snd_pcm_hw_params_any err\n");
 return -1;
 }
 // 4. Use interleaved read/write access
 if (0 > snd_pcm_hw_params_set_access (*playback_handle, hw_params, SND_PCM_ACCESS_RW_INTERLEAVED))
 {
 printf("snd_pcm_hw_params_any err\n");
 return -1;
 }

 // 5. Pick the sample format: SND_PCM_FORMAT_U8 for 8-bit, S16_LE for 16-bit
 if(8 == bits_per_sample) {
 format = SND_PCM_FORMAT_U8;
 }
 if(16 == bits_per_sample) {
 format = SND_PCM_FORMAT_S16_LE;
 }
 if (0 > snd_pcm_hw_params_set_format (*playback_handle, hw_params, format))
 {
 printf("snd_pcm_hw_params_set_format err\n");
 return -1;
 }

 // 6. Set the sample rate (or the nearest supported one)
 if (0 > snd_pcm_hw_params_set_rate_near (*playback_handle, hw_params, &rate, 0))
 {
 printf("snd_pcm_hw_params_set_rate_near err\n");
 return -1;
 }
 // 7. Set the channel count (hard-coded to 2)
 if (0 > snd_pcm_hw_params_set_channels(*playback_handle, hw_params, 2))
 {
 printf("snd_pcm_hw_params_set_channels err\n");
 return -1;
 }

 // 8. Apply the hardware parameters to the device
 if (0 > snd_pcm_hw_params (*playback_handle, hw_params))
 {
 printf("snd_pcm_hw_params err\n");
 return -1;
 }

 snd_pcm_hw_params_get_period_size(hw_params, &chunk_size, 0);

 snd_pcm_hw_params_free (hw_params);

 return 0;
}

int main(int argc, char *argv[])
{
 AVFormatContext *pFormatCtx = NULL; //for opening multi-media file
 int audioStream = -1;
 AVCodecContext *pCodecCtx = NULL;
 AVCodec *pCodec = NULL; // the codecer
 AVFrame *pFrame = NULL;
 AVPacket *packet;
 uint8_t *out_buffer;
 struct SwrContext *au_convert_ctx;
 snd_pcm_t *playback_handle;
 int bits_per_sample = 0;

 if (avformat_open_input(&pFormatCtx, argv[1], NULL, NULL) != 0) {
 printf("Failed to open video file!");
 return -1; // Couldn't open file
 }

 if(avformat_find_stream_info(pFormatCtx,NULL)<0)
 {
 printf("Failed to find stream info.\n");
 return -1;
 }

 audioStream = av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_AUDIO, -1, -1, NULL, 0);
 if (audioStream == -1) {
 printf("Din't find a video stream!");
 return -1;// Didn't find a video stream
 }

 av_dump_format(pFormatCtx, audioStream, NULL, false);

 // Find the decoder for the audio stream
 pCodec = avcodec_find_decoder(pFormatCtx->streams[audioStream]->codecpar->codec_id);
 if (pCodec == NULL) {
 printf("Unsupported codec!\n");
 return -1; // Codec not found
 }

 // Copy context
 pCodecCtx = avcodec_alloc_context3(pCodec);
 AVCodecParameters *pCodecParam = pFormatCtx->streams[audioStream]->codecpar;

 if (avcodec_parameters_to_context(pCodecCtx, pCodecParam) < 0) {
 printf("Failed to set codec params\n");
 return -1;
 }
 // Open codec
 if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0) {
 printf("Failed to open decoder!\n");
 return -1; // Could not open codec
 }
 packet = av_packet_alloc();
 pFrame = av_frame_alloc();

 uint64_t iInputLayout = av_get_default_channel_layout(pCodecCtx->channels);
 enum AVSampleFormat eInputSampleFormat = pCodecCtx->sample_fmt;
 int iInputSampleRate = pCodecCtx->sample_rate;


 uint64_t iOutputLayout = av_get_default_channel_layout(pCodecCtx->channels);
 int iOutputChans = pCodecCtx->channels;
 enum AVSampleFormat eOutputSampleFormat = AV_SAMPLE_FMT_S16;
 int iOutputSampleRate = pCodecCtx->sample_rate;

 au_convert_ctx = swr_alloc_set_opts(NULL,iOutputLayout, eOutputSampleFormat, iOutputSampleRate,
 iInputLayout,eInputSampleFormat, iInputSampleRate, 0, NULL);
 swr_init(au_convert_ctx);
 int iConvertLineSize = 0;
 int iConvertBuffSize = av_samples_get_buffer_size(&iConvertLineSize, iOutputChans, pCodecCtx->frame_size, eOutputSampleFormat, 0);
 printf("ochans: %d, ifrmsmp: %d, osfmt: %d, cbufsz: %d\n", iOutputChans, pCodecCtx->frame_size, eOutputSampleFormat, iConvertBuffSize);
 out_buffer = (uint8_t *) av_malloc(iConvertBuffSize);

 if(eOutputSampleFormat == AV_SAMPLE_FMT_S16 )
 {
 bits_per_sample = 16;
 }
 /*** alsa handle ***/
 init_pcm_play(&playback_handle,256, iOutputSampleRate,bits_per_sample,2);

 if (0 > snd_pcm_prepare (playback_handle))
 {
 printf("snd_pcm_prepare err\n");
 return -1;
 }

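 // Demux packets, decode each audio frame, resample to interleaved S16, play via ALSA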
 while (av_read_frame(pFormatCtx, packet) >= 0) {
 if (packet->stream_index == audioStream) {
 avcodec_send_packet(pCodecCtx, packet);
 while (avcodec_receive_frame(pCodecCtx, pFrame) == 0) {
 int outframes = swr_convert(au_convert_ctx, &out_buffer, pCodecCtx->frame_size, (const uint8_t **) pFrame->data, pFrame->nb_samples); // convert/resample the audio to S16
 snd_pcm_writei(playback_handle, out_buffer, outframes);
 av_frame_unref(pFrame);
 }
 }
 av_packet_unref(packet);
 }
 swr_free(&au_convert_ctx);
 snd_pcm_close(playback_handle);
 av_freep(&out_buffer);

 return 0;
}



Running the code shows the following log.


./ap_alsa ./dooralarm.mp3
[mp3 @ 0x1e72020] Estimating duration from bitrate, this may be inaccurate
Input #0, mp3, from '(null)':
 Metadata:
 genre : Blues
 id3v2_priv.XMP : <?xpacket begin="\xef\xbb\xbf" id="W5M0MpCehiHzreSzNTczkc9d"?>\x0a\x0a \x0a s
 Stream #0:0: Audio: mp3, 22050 Hz, mono, fltp, 48 kb/s
ochans: 1, ifrmsmp: 576, osfmt: 1, cbufsz: 1152



I am using FFmpeg 4.4.4 and Linux kernel 5.10.20.