Advanced search

Media (0)

Word: - Tags -/interaction

No media matching your criteria is available on this site.

Other articles (10)

  • Custom menus

    14 November 2010, by

    MediaSPIP uses the Menus plugin to manage several configurable menus for navigation.
    This lets channel administrators configure these menus in detail.
    Menus created when the site is initialized
    By default, three menus are created automatically when the site is initialized: The main menu; Identifier: barrenav; This menu is generally inserted at the top of the page after the header block, and its identifier makes it compatible with skeletons based on Zpip; (...)

  • Submit improvements and additional plugins

    10 April 2011

    If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know, and its integration into the official distribution will be considered.
    You can use the development discussion list to let us know, or to ask for help with building the plugin. Since MediaSPIP is based on SPIP, you can also use SPIP's SPIP-zone discussion list to (...)

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

On other sites (4702)

  • How to convert a .m4a file to a .wav file on Android using React Native?

    23 October 2022, by aurelien_morel

    I am trying to convert a .m4a file that I record using expo-audio into a .wav file. The goal is then to use it as a blob and upload it to Google Cloud Storage.
I tried to do this using ffmpeg-kit-react-native:

    


    let uri = recording.getURI();
console.log(uri);

if (Platform.OS === 'android') {
    FFmpegKit.execute(`-i ${uri} temp.wav`).then(async (session) => {
    // const returnCode = await session.getReturnCode();
    uri = 'temp.wav';
    });
}

const response = await fetch(uri);
const blob = await response.blob();


    


    but I have no success (I get the following error):

    


    TypeError: null is not an object (evaluating 'FFmpegKitReactNativeModule.ffmpegSession')

    


    uri has this form:

    


    file:///data/user/0/host.exp.exponent/cache/ExperienceData/%2540aamorel%252Fvoki/Audio/recording-4038abed-f264-48ca-a0cc-861268190874.m4a

    


    I am not sure if I use the FFmpeg toolkit correctly. Do you know how to make this work? Or is there a simpler way to do it?
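    Independently of the native-module error (which, given the host.exp.exponent path, usually means the module is running under Expo Go, where ffmpeg-kit-react-native's native code is unavailable), the snippet has a control-flow problem: FFmpegKit.execute returns a promise that is never awaited, so fetch(uri) runs before the conversion finishes, and uri is a const that cannot be reassigned. A minimal sketch of the corrected flow, with FFmpegKit mocked; the mock's names and its simplified numeric return code are assumptions, not the real API (the real session.getReturnCode() resolves to a ReturnCode object):

```javascript
// FFmpegKit here is a mock standing in for 'ffmpeg-kit-react-native', whose
// native module only exists inside a real app build. The mock simplifies
// getReturnCode() to a plain number.
const FFmpegKit = {
  execute: async (cmd) => ({ getReturnCode: async () => 0 }),
};

async function convertToWav(inputUri, outputPath) {
  // Await the FFmpeg session instead of firing it and reading the output
  // path immediately; also avoid reassigning a `const` binding.
  const session = await FFmpegKit.execute(`-i ${inputUri} ${outputPath}`);
  const code = await session.getReturnCode();
  if (code !== 0) {
    throw new Error(`ffmpeg failed with return code ${code}`);
  }
  return outputPath;
}

convertToWav('recording.m4a', 'temp.wav').then((wav) => {
  console.log(`converted to ${wav}`);
  // ...then fetch(wav) and response.blob() as in the original snippet
});
```

    Note that on a device the output path should live in an app-writable directory (e.g. the cache directory) rather than a bare relative name like temp.wav.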

    


  • How can I make a stream restart automatically after 10 seconds if it cuts out?

    2 March 2023, by Mr_Milky

    I restart the stream using the script below, but sometimes, for some reason, the stream cuts out.

    


    How can I make the stream restart automatically after 10 seconds when it cuts?

    


    #!/bin/bash
while true;do
grep -c "Non-monotonous DTS in output stream" file.txt >nonmonotonus.txt
grep -c "Timestamps are unset in a packet for stream" file.txt >timestamp.txt
grep -c "PES packet size mismatch" file.txt >pespacket.txt
grep -c "Error while decoding stream" file.txt >errordecoding.txt
grep -c "Circular buffer overrun" file.txt >circularbuffer.txt
grep -c "Header missing" file.txt >header.txt
grep -c "Conversion failed" file.txt >conversion.txt

file=nonmonotonus.txt
file1=timestamp.txt
file2=pespacket.txt
file3=errordecoding.txt
file4=circularbuffer.txt
file5=header.txt
file6=conversion.txt

if (($(<"$file")>=3000)) || (($(<"$file1")>=500)) || (($(<"$file2")>=100)) || (($(<"$file3")>=1000)) || (($(<"$file4")>=500)) || (($(<"$file5")>=6)) || (($(<"$file6")>=1)); then
stream1 restart > restart.txt
sleep 1
fi
done
__________________________________________________________________________

ffmpeg -re -threads 3 -c:s webvtt -i "$INPUT_URL?source=null&overrun_nonfatal=1&fifo_size=1000000" \
  -c:v copy \
  -map 0:0 -map 0:1  \
  -c:a aac -b:a 128k -ar 48000 \
  -threads 4 -f hls -hls_time 2 -hls_wrap 15 \
  "manifest.m3u8" \
  > /dev/null 2>file.txt & echo $! > $STREAM_PID_PATH



    


    How can I automatically restart the stream after the .ts output cuts?

    


    Thank you.
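    An alternative to counting log lines with grep is a supervisor loop that simply reruns the stream command whenever it exits, waiting 10 seconds between attempts. A minimal sketch under stated assumptions: STREAM_CMD, RESTART_DELAY, and MAX_RESTARTS are placeholder names, not part of the original script (the real invocation would be the ffmpeg command from the question), and the demo defaults make the loop terminate quickly:

```shell
#!/bin/bash
# Supervisor sketch: STREAM_CMD is a placeholder for the real ffmpeg
# invocation; 'false' stands in for a stream that always cuts.
STREAM_CMD=${STREAM_CMD:-false}
RESTART_DELAY=${RESTART_DELAY:-10}   # seconds to wait before restarting
MAX_RESTARTS=${MAX_RESTARTS:-1}      # cap so this demo terminates

restarts=0
until $STREAM_CMD; do
    restarts=$((restarts + 1))
    echo "stream exited, restart #$restarts in ${RESTART_DELAY}s"
    [ "$restarts" -ge "$MAX_RESTARTS" ] && break
    sleep "$RESTART_DELAY"
done
echo "supervisor stopped after $restarts restart(s)"
```

    In production you would drop the MAX_RESTARTS cap (or set it high) and let the loop run until the stream command exits cleanly.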

    


  • How do I use the FFmpeg libraries to extract every nth frame from a video and save it as a small image file in C++?

    1 November 2022, by Panchs

    After experimenting with the examples in the FFmpeg documentation, I was finally able to create a short program that extracts every nth frame from a video. However, the output files it produces are huge, at over 15 MB per image. How can I change this to produce lower-quality images?

    


    The result I am trying to get is done easily on the command line with:

    


    ffmpeg -i [input video] -vf "select=not(mod(n\,10))" -fps_mode vfr img_%03d.jpg

    


    For a video with about 500 frames, this creates 50 images that are only about 800 KB each; how would I be able to mimic this in my program?

    


    My code consists of opening the input file, decoding the packets, then saving the frames:

    


#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <string>

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/opt.h>
#include <libswscale/swscale.h>
}

static AVFormatContext *fmt_ctx;
static AVCodecContext *dec_ctx;
static int video_stream_index = -1;

// OPEN THE INPUT FILE
static int open_input_file(const char *filename) {
    // INIT VARS AND FFMPEG OBJECTS
    int ret;
    const AVCodec *dec;

    // OPEN INPUT FILE
    if((ret = avformat_open_input(&fmt_ctx, filename, NULL, NULL)) < 0) {
        printf("ERROR: failed to open input file\n");
        return ret;
    }

    // FIND STREAM INFO BASED ON INPUT FILE
    if((ret = avformat_find_stream_info(fmt_ctx, NULL)) < 0) {
        printf("ERROR: failed to find stream information\n");
        return ret;
    }

    // FIND THE BEST VIDEO STREAM FOR THE INPUT FILE
    ret = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);
    if(ret < 0) {
        printf("ERROR: failed to find a video stream in the input file\n");
        return ret;
    }
    video_stream_index = ret;

    // ALLOCATE THE DECODING CONTEXT FOR THE INPUT FILE
    dec_ctx = avcodec_alloc_context3(dec);
    if(!dec_ctx) {
        printf("ERROR: failed to allocate decoding context\n");
        // CAN NOT ALLOCATE MEMORY ERROR
        return AVERROR(ENOMEM);
    }
    avcodec_parameters_to_context(dec_ctx, fmt_ctx->streams[video_stream_index]->codecpar);

    // INIT THE VIDEO DECODER
    if((ret = avcodec_open2(dec_ctx, dec, NULL)) < 0) {
        printf("ERROR: failed to open video decoder\n");
        return ret;
    }

    return 0;
}

// SAVE THE FILE
static void save(unsigned char *buf, int wrap, int x_size, int y_size, char *file_name) {
    // INIT THE EMPTY FILE
    FILE *file;

    // OPEN AND WRITE THE IMAGE FILE
    file = fopen(file_name, "wb");
    fprintf(file, "P6\n%d %d\n%d\n", x_size, y_size, 255);
    for(int i = 0; i < y_size; i++) {
        fwrite(buf + i * wrap, 1, x_size * 3, file);
    }
    fclose(file);
}

// DECODE FRAME AND CONVERT IT TO AN RGB IMAGE
static void decode(AVCodecContext *cxt, AVFrame *frame, AVPacket *pkt,
                   const char *out_file_name, const char *file_ext, int mod=1) {
    // INIT A BLANK CHAR TO HOLD THE FILE NAME AND AN EMPTY INT TO HOLD FUNCTION RETURN VALUES
    char buf[1024];
    int ret;

    // SEND PACKET TO DECODER
    ret = avcodec_send_packet(cxt, pkt);
    if(ret < 0) {
        printf("ERROR: error sending packet for decoding\n");
        exit(1);
    }

    // CREATE A SCALAR CONTEXT FOR CONVERSION
    SwsContext *sws_ctx = sws_getContext(dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt, dec_ctx->width,
                                         dec_ctx->height, AV_PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL);

    // CREATE A NEW RGB FRAME FOR CONVERSION
    AVFrame* rgb_frame = av_frame_alloc();
    rgb_frame->format = AV_PIX_FMT_RGB24;
    rgb_frame->width = dec_ctx->width;
    rgb_frame->height = dec_ctx->height;

    // ALLOCATE A NEW BUFFER FOR THE RGB CONVERSION FRAME
    av_frame_get_buffer(rgb_frame, 0);

    // WHILE RETURN COMES BACK OKAY (FUNCTION RETURNS >= 0)...
    while(ret >= 0) {
        // GET FRAME BACK FROM DECODER
        ret = avcodec_receive_frame(cxt, frame);
        // IF "RESOURCE TEMP NOT AVAILABLE" OR "END OF FILE" ERROR...
        if(ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
            return;
        } else if(ret < 0) {
            printf("ERROR: error during decoding\n");
            exit(1);
        }

        // IF FRAME NUMBER IF THE (MOD)TH FRAME...
        if(cxt->frame_number % mod == 0){
            // OUTPUT WHICH FRAME IS BEING SAVED
            printf("saving frame %03d\n", cxt->frame_number);
            // REMOVES TEMPORARY BUFFERED DATA
            fflush(stdout);

            // SCALE (CONVERT) THE OLD FRAME TO THE NEW RGB FRAME
            sws_scale(sws_ctx, frame->data, frame->linesize, 0, frame->height,
                      rgb_frame->data, rgb_frame->linesize);

            // SET "BUF" TO THE OUTPUT FILE PATH (SAVES TO "out_file_name_###.file_ext")
            snprintf(buf, sizeof(buf), "%s_%03d.%s", out_file_name, cxt->frame_number, file_ext);
            // SAVE THE FRAME
            save(rgb_frame->data[0], rgb_frame->linesize[0], rgb_frame->width, rgb_frame->height, buf);
        }
    }
}

int main() {
    // SIMULATE COMMAND LINE ARGUMENTS
    char argv0[] = "test";
    char argv1[] = "/User/Desktop/frames/test_video.mov";
    char *argv[] = {argv0, argv1, nullptr};

    // INIT VARS AND FFMPEG OBJECTS
    int ret;
    AVPacket *packet;
    AVFrame *frame;

    // ALLOCATE FRAME AND PACKET
    frame = av_frame_alloc();
    packet = av_packet_alloc();
    if (!frame || !packet) {
        fprintf(stderr, "Could not allocate frame or packet\n");
        exit(1);
    }

    // IF FILE DOESN'T OPEN, GO TO THE END
    if((ret = open_input_file(argv[1])) < 0) {
        goto end;
    }

    // READ ALL THE PACKETS - simple
    while(av_read_frame(fmt_ctx, packet) >= 0) {
        // IF PACKET INDEX MATCHES VIDEO INDEX...
        if (packet->stream_index == video_stream_index) {
            // SEND PACKET TO THE DECODER and SAVE
            std::string name = "/User/Desktop/frames/img";
            std::string ext = "bmp";
            decode(dec_ctx, frame, packet, name.c_str(), ext.c_str(), 5);
        }

        // UNREFERENCE THE PACKET
        av_packet_unref(packet);
    }

    // END MARKER
    end:
    avcodec_free_context(&dec_ctx);
    avformat_close_input(&fmt_ctx);
    av_frame_free(&frame);
    av_packet_free(&packet);

    // FINAL ERROR CATCH
    if (ret < 0 && ret != AVERROR_EOF) {
        fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
        exit(1);
    }

    exit(0);
}


    I am not sure how to go about producing images that are much smaller in size like the ones produced on the command line. I have a feeling that this is possible somehow during the conversion to RGB or the saving of the file but I can't seem to figure out how.


    Also, is there any way that I could go about this much more efficiently? On the command line, this finishes very quickly (no more than a second or two for a 9-second movie at 60 fps).
