Other articles (112)

  • Customizing by adding your logo, banner, or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013, by

    Present changes to your MédiaSPIP, or news about your projects, using the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the form used to create a news item.
    News item creation form: for a document of type "news item", the default fields are: Publication date (customize the publication date) (...)

  • Publishing on MédiaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MédiaSPIP installation is at version 0.2 or later. If in doubt, contact your MédiaSPIP administrator to find out.

On other sites (6089)

  • Create Panorama from Non-Sequential Video Frames

    6 mai 2021, par M.Innat

    There is a similar question (not that detailed and no exact solution).

    I want to create a single panorama image from video frames. For that, I first need to extract a minimal set of non-sequential video frames. A demo video file is uploaded here.

    What I Need

    A mechanism that produces not only non-sequential video frames, but frames chosen so that they can be used to create a panorama image. A sample is given below. As the sample shows, to create a panorama image all input frames must share a minimum overlapping region with each other; otherwise it cannot be done.

    [screenshot]
    So, suppose the video frames come in the following order

    A, A, A, B, B, B, B, C, C, A, A, C, C, C, B, B, B ...

    To create a panorama image, I need to reduce this to something like the following: far fewer frames, but with adjacent kept frames still overlapping minimally.

         [overlap]  [overlap]  [overlap] [overlap]  [overlap]
 A,    A,B,       B,C,       C,A,       A,C,      C,B,  ...

    What I've Tried and Where I Got Stuck

    A demo video clip is linked above. To get non-sequential video frames, I primarily rely on ffmpeg.

    Trial 1 Ref.

    ffmpeg -i check.mp4 -vf mpdecimate,setpts=N/FRAME_RATE/TB -map 0:v out.mp4
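    For scripting the pipeline end to end, the same command can be driven from Python. This is a convenience sketch of my own: it assumes ffmpeg is on PATH and reuses the file names from the question (I added -y so re-runs overwrite the output).

```python
import subprocess

def mpdecimate_cmd(src: str, dst: str) -> list:
    """Build the Trial-1 command: drop near-duplicate frames with
    mpdecimate and rewrite timestamps so the output plays without gaps."""
    return [
        "ffmpeg", "-y", "-i", src,
        "-vf", "mpdecimate,setpts=N/FRAME_RATE/TB",
        "-map", "0:v", dst,
    ]

def drop_duplicate_frames(src: str = "check.mp4", dst: str = "out.mp4") -> None:
    """Run the command, raising if ffmpeg exits with an error."""
    subprocess.run(mpdecimate_cmd(src, dst), check=True)
```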

    After that, I sliced out.mp4 into individual frames using OpenCV:

import cv2
from pathlib import Path

vframe_dir = Path("vid_frames/")
vframe_dir.mkdir(parents=True, exist_ok=True)

vidcap = cv2.VideoCapture('out.mp4')
success, image = vidcap.read()
count = 0

while success:
    cv2.imwrite(f"{vframe_dir}/frame{count}.jpg", image)
    success, image = vidcap.read()
    count += 1

vidcap.release()

    Next, I rotated the saved images to horizontal (since my video is a vertical recording).

import os
import cv2
from pathlib import Path
from tqdm import tqdm

vframe_dir = Path("out/")
vframe_dir.mkdir(parents=True, exist_ok=True)

vframe_dir_rot = Path("vframe_dir_rot/")
vframe_dir_rot.mkdir(parents=True, exist_ok=True)

for each_img in tqdm(os.listdir(vframe_dir)):
    image = cv2.imread(f"{vframe_dir}/{each_img}")      # BGR, as OpenCV reads it
    image = cv2.rotate(image, cv2.ROTATE_180)
    image = cv2.rotate(image, cv2.ROTATE_90_CLOCKWISE)  # net effect: 90° counter-clockwise
    cv2.imwrite(f"{vframe_dir_rot}/{each_img}", image)  # rotation is channel-order agnostic, no swap needed

    The output of this method (with ffmpeg) is okay in itself, but unsuitable for building the panorama: consecutive kept frames do not reliably overlap, so the panorama cannot be generated.

    Trial 2 Ref.

    ffmpeg -i check.mp4 -vf decimate=cycle=2,setpts=N/FRAME_RATE/TB -map 0:v out.mp4

    This didn't work at all.

    Trial 3

    ffmpeg -i check.mp4 -ss 0 -qscale 0 -f image2 -r 1 out/images%5d.png

    No luck either. However, this last ffmpeg command came closest, though it still wasn't enough. Compared to the others it produced a small set of non-duplicate frames (good), but it still included frames I don't need, so I had to pick the desired frames manually; with those, OpenCV's stitching algorithm works. So, after picking some frames and rotating them (as mentioned before):

stitcher = cv2.Stitcher.create()
status, pano = stitcher.stitch(images)  # images: the manually picked (and rotated) frames

    Update

    After several trials, I have fallen back on a non-programming solution, but I would love to see an efficient programmatic approach.

    On the given demo video, I used Adobe products (Premiere Pro and Photoshop) to do this task, following a video instruction. The issue is that I exported essentially all the video frames from Premiere first (without dropping any, which adds computational cost later) and then used Photoshop to stitch them (per the YouTube instruction). That was too heavy for these editor tools and doesn't feel like the right way, but the output was better than anything I've produced so far, even though I ended up using only a few hundred (400+) of the 1200+ frames.

    [screenshot]
    Here are the bigger challenges. The original video clips come with conditions that are much more serious, unlike the given demo clip:

    • It's not steady; there is camera shake.
    • Changing lighting conditions make the same spot look different.
    • Camera flickering or banding.

    These conditions are not present in the given demo video, and they add heavy additional challenges to creating panorama images from such footage. Even with the non-programming route (the Adobe tools) I couldn't get a good result.

    For now, though, all I'm interested in is getting a panorama image from the given demo video, which is free of the above conditions. Any comments or suggestions are welcome.

  • Image edited and saved in C# can not be read by ffmpeg

    13 juillet 2017, par mmijic

    I have a template image (template.jpg) on which I draw some text (13.07.2017.) and then save it to another location (temp/intro.jpg).

    Then I want to convert that and some other images into a video with ffmpeg.

    If I run command

    ffmpeg.exe -f concat -safe 0 -i input.txt -c:v libx264 -vf "fps=25,format=yuv420p" out.mp4

    The images get concatenated into one video file. The problem is that the image edited through C# comes out black in the final video.

    If, for example, I open that C#-created image in Adobe Fireworks and just save it (Ctrl+S) without changing anything, then re-run the ffmpeg command, everything is fine.

    This is the code I use to add text to the template image:

    //INTRO IMAGE CREATION
    Bitmap image = new Bitmap(1280, 720);
    image.SetResolution(image.HorizontalResolution, image.VerticalResolution);
    Graphics g = Graphics.FromImage(image);
    g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
    g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
    g.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.HighQuality;
    g.TextRenderingHint = TextRenderingHint.AntiAliasGridFit;
    StringFormat format = new StringFormat()
    {
       Alignment = StringAlignment.Near,
       LineAlignment = StringAlignment.Near
    };

    //template
    Bitmap back = new Bitmap(Application.StartupPath + @"\Templates\template.jpg");
    g.DrawImage(back, 0, 0, image.Width, image.Height);

    //date
    string Date = dateIntro.Value.ToString("dd.MM.yyyy.");
    var brush = new SolidBrush(Color.FromArgb(255, 206, 33, 39));
    Font font = new Font("Ubuntu", 97, FontStyle.Bold, GraphicsUnit.Pixel);
    float x = 617;
    float y = 530;

    g.DrawString(Date, font, brush, x, y, format);
    g.Flush();
    image.Save(Application.StartupPath + @"\temp\intro.jpg", ImageFormat.Jpeg);
    image.Dispose();

    An image created this way can be opened and viewed in any program, but cannot be converted to video with ffmpeg.
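    Given that a plain re-save in Fireworks fixes the file, one workaround is to normalize the image with a quick re-encode before the concat step. This is a guess that ffmpeg dislikes something about the GDI+-written JPEG (such as its colorspace tagging) rather than the drawing code, and Pillow is used here purely for illustration:

```python
from PIL import Image

def normalize_jpeg(path):
    """Re-encode a JPEG in place as a plain baseline RGB-sourced JPEG,
    mimicking the manual 'open and re-save' step done in Fireworks."""
    img = Image.open(path).convert("RGB")
    img.save(path, format="JPEG", quality=95)
```

    Running this over temp/intro.jpg before invoking ffmpeg should be equivalent to the manual Fireworks re-save, if the guess about the file encoding is right.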

    Is there anything I'm missing while adding text and saving the image in C#?

  • Failed to play MP3 audio with ffmpeg API in Linux

    15 janvier, par wangt13

    I am working on an audio player using the FFmpeg libraries and ALSA.
    The following code fails to play MP3 media smoothly (playback is slower than normal and noisy). I checked the FFmpeg code and examples, but did not find the right solution.

#include <stdio.h>
#include <stdint.h>
#include <alsa/asoundlib.h>

#include <libswresample/swresample.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>

int init_pcm_play(snd_pcm_t **playback_handle, snd_pcm_uframes_t chunk_size, unsigned int rate, int bits_per_sample, int channels)
{
    snd_pcm_hw_params_t *hw_params;
    snd_pcm_format_t format;

    //1. open PCM
    if (0 > snd_pcm_open(playback_handle, "default", SND_PCM_STREAM_PLAYBACK, 0))
    {
        printf("snd_pcm_open err\n");
        return -1;
    }
    //2. snd_pcm_hw_params_t
    if (0 > snd_pcm_hw_params_malloc(&hw_params))
    {
        printf("snd_pcm_hw_params_malloc err\n");
        return -1;
    }
    //3. hw_params
    if (0 > snd_pcm_hw_params_any(*playback_handle, hw_params))
    {
        printf("snd_pcm_hw_params_any err\n");
        return -1;
    }
    //4.
    if (0 > snd_pcm_hw_params_set_access(*playback_handle, hw_params, SND_PCM_ACCESS_RW_INTERLEAVED))
    {
        printf("snd_pcm_hw_params_any err\n");
        return -1;
    }

    //5. SND_PCM_FORMAT_U8, 8
    if (8 == bits_per_sample) {
        format = SND_PCM_FORMAT_U8;
    }
    if (16 == bits_per_sample) {
        format = SND_PCM_FORMAT_S16_LE;
    }
    if (0 > snd_pcm_hw_params_set_format(*playback_handle, hw_params, format))
    {
        printf("snd_pcm_hw_params_set_format err\n");
        return -1;
    }

    //6.
    if (0 > snd_pcm_hw_params_set_rate_near(*playback_handle, hw_params, &rate, 0))
    {
        printf("snd_pcm_hw_params_set_rate_near err\n");
        return -1;
    }
    //7.
    if (0 > snd_pcm_hw_params_set_channels(*playback_handle, hw_params, 2))
    {
        printf("snd_pcm_hw_params_set_channels err\n");
        return -1;
    }

    //8. set hw_params
    if (0 > snd_pcm_hw_params(*playback_handle, hw_params))
    {
        printf("snd_pcm_hw_params err\n");
        return -1;
    }

    snd_pcm_hw_params_get_period_size(hw_params, &chunk_size, 0);

    snd_pcm_hw_params_free(hw_params);

    return 0;
}

int main(int argc, char *argv[])
{
    AVFormatContext *pFormatCtx = NULL; // for opening the multimedia file
    int audioStream = -1;
    AVCodecContext *pCodecCtx = NULL;
    AVCodec *pCodec = NULL; // the codec
    AVFrame *pFrame = NULL;
    AVPacket *packet;
    uint8_t *out_buffer;
    struct SwrContext *au_convert_ctx;
    snd_pcm_t *playback_handle;
    int bits_per_sample = 0;

    if (avformat_open_input(&pFormatCtx, argv[1], NULL, NULL) != 0) {
        printf("Failed to open video file!");
        return -1; // Couldn't open file
    }

    if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
    {
        printf("Failed to find stream info.\n");
        return -1;
    }

    audioStream = av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_AUDIO, -1, -1, NULL, 0);
    if (audioStream == -1) {
        printf("Didn't find a video stream!");
        return -1; // Didn't find a video stream
    }

    av_dump_format(pFormatCtx, audioStream, NULL, false);

    // Find the decoder for the stream
    pCodec = avcodec_find_decoder(pFormatCtx->streams[audioStream]->codecpar->codec_id);
    if (pCodec == NULL) {
        printf("Unsupported codec!\n");
        return -1; // Codec not found
    }

    // Copy context
    pCodecCtx = avcodec_alloc_context3(pCodec);
    AVCodecParameters *pCodecParam = pFormatCtx->streams[audioStream]->codecpar;

    if (avcodec_parameters_to_context(pCodecCtx, pCodecParam) < 0) {
        printf("Failed to set codec params\n");
        return -1;
    }
    // Open codec
    if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0) {
        printf("Failed to open decoder!\n");
        return -1; // Could not open codec
    }
    packet = av_packet_alloc();
    pFrame = av_frame_alloc();

    uint64_t iInputLayout                   = av_get_default_channel_layout(pCodecCtx->channels);
    enum AVSampleFormat eInputSampleFormat  = pCodecCtx->sample_fmt;
    int iInputSampleRate                    = pCodecCtx->sample_rate;

    uint64_t iOutputLayout                  = av_get_default_channel_layout(pCodecCtx->channels);
    int iOutputChans                        = pCodecCtx->channels;
    enum AVSampleFormat eOutputSampleFormat = AV_SAMPLE_FMT_S16;
    int iOutputSampleRate                   = pCodecCtx->sample_rate;

    au_convert_ctx = swr_alloc_set_opts(NULL, iOutputLayout, eOutputSampleFormat, iOutputSampleRate,
        iInputLayout, eInputSampleFormat, iInputSampleRate, 0, NULL);
    swr_init(au_convert_ctx);
    int iConvertLineSize = 0;
    int iConvertBuffSize = av_samples_get_buffer_size(&iConvertLineSize, iOutputChans, pCodecCtx->frame_size, eOutputSampleFormat, 0);
    printf("ochans: %d, ifrmsmp: %d, osfmt: %d, cbufsz: %d\n", iOutputChans, pCodecCtx->frame_size, eOutputSampleFormat, iConvertBuffSize);
    out_buffer = (uint8_t *) av_malloc(iConvertBuffSize);

    if (eOutputSampleFormat == AV_SAMPLE_FMT_S16)
    {
        bits_per_sample = 16;
    }
    /*** alsa handle ***/
    init_pcm_play(&playback_handle, 256, iOutputSampleRate, bits_per_sample, 2);

    if (0 > snd_pcm_prepare(playback_handle))
    {
        printf("snd_pcm_prepare err\n");
        return -1;
    }

    while (av_read_frame(pFormatCtx, packet) >= 0) {
        if (packet->stream_index == audioStream) {
            avcodec_send_packet(pCodecCtx, packet);
            while (avcodec_receive_frame(pCodecCtx, pFrame) == 0) {
                int outframes = swr_convert(au_convert_ctx, &out_buffer, pCodecCtx->frame_size,
                                            (const uint8_t **) pFrame->data, pFrame->nb_samples); // convert the audio
                snd_pcm_writei(playback_handle, out_buffer, outframes);
                av_frame_unref(pFrame);
            }
        }
        av_packet_unref(packet);
    }
    swr_free(&au_convert_ctx);
    snd_pcm_close(playback_handle);
    av_freep(&out_buffer);

    return 0;
}


    Running the code shows the following logs.


./ap_alsa ./dooralarm.mp3
[mp3 @ 0x1e72020] Estimating duration from bitrate, this may be inaccurate
Input #0, mp3, from '(null)':
  Metadata:
    genre           : Blues
    id3v2_priv.XMP  : <?xpacket begin="\xef\xbb\xbf" id="W5M0MpCehiHzreSzNTczkc9d"?>\x0a\x0a \x0a  s
  Stream #0:0: Audio: mp3, 22050 Hz, mono, fltp, 48 kb/s
ochans: 1, ifrmsmp: 576, osfmt: 1, cbufsz: 1152


    I am using FFmpeg 4.4.4 and Linux kernel 5.10.20.
