Other articles (21)

  • MediaSPIP Core: Configuration

    9 November 2010, by

    MediaSPIP Core provides three different configuration pages by default (these pages rely on the CFG configuration plugin to work): a page for the general configuration of the template; a page for the configuration of the site's home page; a page for the configuration of the sections;
    It also provides an additional page, which only appears when certain plugins are enabled, for controlling their display and specific features (...)

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all types.
    It creates "media", namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a given "media" article;

On other sites (2866)

  • ffmpeg concat .dv without errors or loss of audio sync

    29 March 2022, by Dave Lang

    I'm ripping video from a bunch of ancient MiniDV tapes using, after much trial and error, some almost as ancient Mac hardware and iMovie HD 6.0.5. This is working well except that it will only create a contiguous video clip of about 12.6 GB in size. If the total video is larger than that, it creates a second clip that is usually about 500 MB.

    I want to join these two clips in the "best" way possible - meaning with ffmpeg throwing as few errors as possible, and the audio / video staying in sync.

    I'm currently using the following command line in a bash shell:

    for f in *.dv ; do echo "file '$f'" >> list.txt ; done && ffmpeg -f concat -safe 0 -i list.txt -c copy stitched-video.dv && rm list.txt
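One quoting detail worth checking here: in bash, $f inside single quotes is not expanded, so the echo argument needs to be double-quoted with the single quotes kept literal, since the concat demuxer's file directive expects each name wrapped in single quotes. A quick sanity check of what the loop writes, using hypothetical clip names:

```shell
# Sanity-check what the loop writes (hypothetical clip names).
# "$f" must sit inside double quotes so it expands; the inner single
# quotes are literal and are what the concat demuxer expects.
mkdir -p /tmp/dvdemo && cd /tmp/dvdemo
touch "clip 1.dv" "clip 2.dv"
rm -f list.txt
for f in *.dv ; do echo "file '$f'" >> list.txt ; done
cat list.txt
# file 'clip 1.dv'
# file 'clip 2.dv'
```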

    This seems to be working well, and using the 'eyeball' check, sync seems to be preserved.

    However, I do get the following error message when ffmpeg starts on the second file:

    Non-monotonous DTS in output stream 0:1; previous: 107844491, current: 107843736; changing to 107844492. This may result in incorrect timestamps in the output file.

    Since I know just enough about ffmpeg to be dangerous, I don't understand the significance of this message.

    Can anyone suggest changes to my ffmpeg command that will fix whatever ffmpeg is telling me is going wrong?

    I'm going to be working on HD MiniDV tapes next, and, because they suffer from numerous dropouts, my task is going to become more complex, so I'd like to nail this one.

    Thanks!

    As suggested below, here is the ffprobe output for the two files:

    Input #0, dv, from 'file1.dv':
      Metadata:
        timecode        : 00:00:00;22
      Duration: 00:59:54.79, start: 0.000000, bitrate: 28771 kb/s
      Stream #0:0: Video: dvvideo, yuv411p, 720x480 [SAR 8:9 DAR 4:3], 25000 kb/s, 29.97 fps, 29.97 tbr, 29.97 tbn
      Stream #0:1: Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s

    Input #0, dv, from 'file2.dv':
      Metadata:
        timecode        : 00:15:06;19
      Duration: 00:02:04.09, start: 0.000000, bitrate: 28771 kb/s
      Stream #0:0: Video: dvvideo, yuv411p, 720x480 [SAR 8:9 DAR 4:3], 25000 kb/s, 29.97 fps, 29.97 tbr, 29.97 tbn
      Stream #0:1: Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s

  • Android Encode h264 using libavcodec for ARGB

    12 December 2013, by nmxprime

    I have a stream of buffers, each holding a 480x800 ARGB image [a byte array of size 480*800*4]. I want to encode around 10,000 such images into an H.264 stream at a specified fps (12). This shows how to encode images into video, but it requires the input to be YUV420.

    Now that I have ARGB images, I want to encode them to CODEC_ID_H264.
    "How to convert RGB from YUV420p for ffmpeg encoder?" shows how to do it for RGB24, but how do I do it for RGB32, meaning ARGB image data?

    How do I use libavcodec for this?

    EDIT: I found "How to convert RGB from YUV420p for ffmpeg encoder?", but I don't understand it.

    From the 1st link, I learned that the AVFrame struct contains data[0], data[1], data[2], which are filled with the Y, U and V values.

    In the 2nd link, they show how to use sws_scale to convert RGB24 to YUV420, like this:

    SwsContext *ctx = sws_getContext(imgWidth, imgHeight,
                                     AV_PIX_FMT_RGB24, imgWidth, imgHeight,
                                     AV_PIX_FMT_YUV420P, 0, 0, 0, 0);
    uint8_t *inData[1] = { rgb24Data };    // RGB24 has one plane
    int inLinesize[1] = { 3 * imgWidth };  // RGB stride
    sws_scale(ctx, inData, inLinesize, 0, imgHeight,
              dst_picture.data, dst_picture.linesize);

    Here I assume that rgb24Data is the buffer containing the RGB24 image bytes.

    So how do I use this information for ARGB, which is 32-bit? Do I need to manually strip off the alpha channel, or is there another workaround?

    Thank you

  • pts and dts problems while encoding multiple streams to AVFormatContext with libavcodec and libavformat

    20 November 2022, by WalleyM

    I am trying to encode an mpeg2video stream and a signed 32-bit PCM audio stream to a .mov file using ffmpeg's avcodec and avformat libraries.

    My video stream is set up in almost the exact same way as is described here with my audio stream being set up in a very similar way.

    My time_base for both audio and video is set to 1/fps.

    Here is the overview output from setting up the encoder:

    Output #0, mov, to ' /Recordings/SDI_Video.mov':
      Metadata:
        encoder         : Lavf59.27.100
      Stream #0:0: Video: mpeg2video (m2v1 / 0x3176326D), yuv420p, 1920x1080, q=2-31, 207360 kb/s, 90k tbn
      Stream #0:1: Audio: pcm_s32be (in32 / 0x32336E69), 48000 Hz, stereo, s32, 3072 kb/s

    As I understand it, pts should be when the frame is presented, while dts should be when the frame is decoded. This means audio and video frame pts should be the same, whereas dts should be incremental across them.

    Essentially, this means interleaved audio and video frames should reach the muxer in the following pts and dts order:

    pts 1 1 2 2 3 3
    dts 1 2 3 4 5 6

    I am using this format to set my pts and dts:

videoFrame->pts = frameCounter;

if(avcodec_send_frame(videoContext, videoFrame) < 0)
{
    std::cout << "Failed to send video frame " << frameCounter << std::endl;
    return;
}

AVPacket videoPkt;
av_init_packet(&videoPkt);
videoPkt.data = nullptr;
videoPkt.size = 0;
videoPkt.flags |= AV_PKT_FLAG_KEY;
videoPkt.stream_index = 0;
videoPkt.dts = frameCounter * 2;

if(avcodec_receive_packet(videoContext, &videoPkt) == 0)
{
    av_interleaved_write_frame(outputFormatContext, &videoPkt);
    av_packet_unref(&videoPkt);
}

    With audio, the same except:

audioPkt.stream_index = 1;
audioPkt.dts = frameCounter * 2 + 1;

    However, I still get problems with my dts setting, shown in this output:

    [mov @ 0x7fc1b3667480] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 1 >= 0
    [mov @ 0x7fc1b3667480] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 2 >= 1
    [mov @ 0x7fc1b3667480] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 3 >= 2

    I would like to fix this issue.