Media (91)

Other articles (62)

  • Automatic backup of SPIP channels

    1 April 2010, by

    As part of setting up an open platform, it is important for hosting providers to have fairly regular backups available in order to guard against any potential problem.
    To carry out this task, two SPIP plugins are used: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site’s important data (the documents, the elements (...) (A rough sketch of the general idea appears after this list.)

  • Automatic installation script for MediaSPIP

    25 April 2011, by

    To work around installation difficulties caused mainly by server-side software dependencies, an "all-in-one" bash installation script was created to make this step easier on a server running a compatible Linux distribution.
    To use it, you must have SSH access to your server and a "root" account, which will make it possible to install the dependencies. Contact your hosting provider if you do not have these.
    The documentation on using the installation script (...)

  • User profiles

    12 April 2011, by

    Each user has a profile page allowing them to edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only if the visitor is logged in to the site.
    The user can edit their profile from their author page; a navigation link "Modifier votre profil" ("Edit your profile") is (...)
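
As a rough illustration of the backup approach described in the first item of this list (the SPIP plugins themselves are not shown; the database name, paths, and schedule below are placeholders, not part of the original setup), a regular backup boils down to a MySQL dump plus a zip archive of the site’s important files:

    import subprocess
    import zipfile
    from datetime import datetime
    from pathlib import Path

    # Illustrative sketch only: database name and paths are placeholders.
    DB_NAME = "spip"
    DOCS_DIR = Path("/var/www/site/IMG")      # site documents to archive
    BACKUP_DIR = Path("/var/backups/site")

    def backup(db_name, docs_dir, backup_dir):
        backup_dir.mkdir(parents=True, exist_ok=True)
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")

        # 1. Database dump, readable by phpMyAdmin (credentials are taken from
        #    ~/.my.cnf here; pass -u/-p options explicitly if needed).
        dump_path = backup_dir / f"{db_name}-{stamp}.sql"
        with open(dump_path, "wb") as out:
            subprocess.run(["mysqldump", db_name], stdout=out, check=True)

        # 2. Zip archive of the important site data (documents, uploaded files, ...).
        zip_path = backup_dir / f"files-{stamp}.zip"
        with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
            for path in docs_dir.rglob("*"):
                if path.is_file():
                    zf.write(path, path.relative_to(docs_dir))

    if __name__ == "__main__":
        backup(DB_NAME, DOCS_DIR, BACKUP_DIR)

Run from cron (for example once a day), this gives the kind of regular backups described above.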

On other sites (4746)

  • Image to MPEG on Linux works, same code on Android = green video

    5 April 2018, by JScoobyCed

    EDIT
    I have checked the execution and found that the error is not (yet) at the swscale point. My current issue is that the JPG image is not found:
    No such file or directory
    when calling avformat_open_input(&pFormatCtx, imageFileName, NULL, NULL);
    Before you tell me I need to register anything, I can tell you I already did (I updated the code below).
    I also added the Android permission to access external storage (I don’t think it is related to Android, since I can already write to /mnt/sdcard/, where the image is also located).
    END EDIT

    I have been through several tutorials (including the few posted on SO, e.g. http://dranger.com/ffmpeg/, how to compile ffmpeg for Android..., and the dolphin-player source code). Here is what I have:
    • Compiled ffmpeg for Android
    • Ran basic tutorials using the NDK to create a dummy video on my Android device
    • Been able to generate an MPEG2 video from images on Ubuntu, using a modified version of the dummy video code above and a lot of Googling
    • Running the new code on the Android device gives a green-screen video (duration 1 second whatever the number of frames I encode)

    I saw another post about a similar situation on iPhone which mentioned that ARM processor optimization could be the culprit. I tried a few extra ld flags (-arch armv7-a and similar) without success.

    I include at the end the code that loads the image. Is there something different to do on Android compared to Linux? Is my ffmpeg build not correct for Android video encoding?

    // Scale/convert the decoded source frame into the destination frame's size and pixel format.
    void copyFrame(AVCodecContext *destContext, AVFrame* dest,
                   AVCodecContext *srcContext, AVFrame* source) {
        struct SwsContext *swsContext;
        swsContext = sws_getContext(srcContext->width, srcContext->height, srcContext->pix_fmt,
                                    destContext->width, destContext->height, destContext->pix_fmt,
                                    SWS_FAST_BILINEAR, NULL, NULL, NULL);
        sws_scale(swsContext, source->data, source->linesize, 0, srcContext->height, dest->data, dest->linesize);
        sws_freeContext(swsContext);
    }

    int loadFromFile(const char* imageFileName, AVFrame* realPicture, AVCodecContext* videoContext) {
        AVFormatContext *pFormatCtx = NULL;
        avcodec_register_all();
        av_register_all();

        int ret = avformat_open_input(&pFormatCtx, imageFileName, NULL, NULL);
        if (ret != 0) {
            // ERROR happening here
            // Can't open image file. Use strerror(AVERROR(ret)) for details
            return ERR_CANNOT_OPEN_IMAGE;
        }

        AVCodecContext *pCodecCtx;

        pCodecCtx = pFormatCtx->streams[0]->codec;
        pCodecCtx->width = W_VIDEO;
        pCodecCtx->height = H_VIDEO;
        pCodecCtx->pix_fmt = PIX_FMT_YUV420P;

        // Find the decoder for the video stream
        AVCodec *pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
        if (!pCodec) {
            // Codec not found
            return ERR_CODEC_NOT_FOUND;
        }

        // Open codec
        if (avcodec_open(pCodecCtx, pCodec) < 0) {
            // Could not open codec
            return ERR_CANNOT_OPEN_CODEC;
        }

        AVFrame *pFrame;

        pFrame = avcodec_alloc_frame();

        if (!pFrame) {
            // Can't allocate memory for AVFrame
            return ERR_CANNOT_ALLOC_MEM;
        }

        int frameFinished;
        int numBytes;

        // Determine required buffer size and allocate buffer
        numBytes = avpicture_get_size(PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height);
        uint8_t *buffer = (uint8_t *) av_malloc(numBytes * sizeof (uint8_t));

        avpicture_fill((AVPicture *) pFrame, buffer, PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height);
        AVPacket packet;
        int res = 0;
        while (av_read_frame(pFormatCtx, &packet) >= 0) {
            if (packet.stream_index != 0)
                continue;

            ret = avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
            if (ret > 0) {
                // now, load the useful info into realPicture
                copyFrame(videoContext, realPicture, pCodecCtx, pFrame);
                // Free the packet that was allocated by av_read_frame
                av_free_packet(&packet);
                return 0;
            } else {
                // Error decoding frame. Use strerror(AVERROR(ret)) for details
                res = ERR_DECODE_FRAME;
            }
        }
        av_free(pFrame);

        // close codec
        avcodec_close(pCodecCtx);

        // Close the image file
        av_close_input_file(pFormatCtx);

        return res;
    }

    Some ./configure options:
    --extra-cflags="-O3 -fpic -DANDROID -DHAVE_SYS_UIO_H=1 -Dipv6mr_interface=ipv6mr_ifindex -fasm -Wno-psabi -fno-short-enums -fno-strict-aliasing -finline-limit=300 -mfloat-abi=softfp -mfpu=vfp -marm -march=armv7-a -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE"

    --extra-ldflags="-Wl,-rpath-link=$PLATFORM/usr/lib -L$PLATFORM/usr/lib -nostdlib -lc -lm -ldl -llog"

    --arch=armv7-a --enable-armv5te --enable-armv6 --enable-armvfp --enable-memalign-hack

  • Matplotlib: Live animation works fine but displays a blank plot when being saved

    18 July 2017, by Loïc Poncin

    I made a little forest fire animation. My code is at the end of the question.

    Here is some information before I ask my question:

    • No tree: forest[i,j] = 0
    • A tree: forest[i,j] = 1
    • A tree on fire: forest[i,j] = 2

    Basically what happens is that constructforest creates a 2-dimensional array called forest of size n by m, with a probability of tree occupancy p. After that, setfire sets the forest on fire, and while the forest can still burn, spreadfire spreads the fire.

    When I run forestfire from the Python prompt or the IPython prompt I get a nice animation, but when I check the video file that I saved I only see a blank plot.

    I did some research and found many questions about this issue, but none of the advice I read was helpful.

    Can someone tell me what is going on, please?

    forestfire.py

    from random import random

    import numpy as np

    import matplotlib.pylab as plt
    import matplotlib.colors as mcolors
    import matplotlib.animation as animation


    def hazard(p):
       r=random()
       assert p>=0 and p<=1
       return r <= p


    def constructforest(n,m,p):
       forest = np.zeros((n,n))
       for i in xrange(n):
           for j in xrange(m):
               if hazard(p):
                   forest[i,j] = 1
       return forest


    def setfire(forest,i,j):
       forest[i,j] = 2
       return forest


    def spreadfire(forest):    

       n,m=forest.shape
       c = np.copy(forest)

       for i in xrange(n):
           for j in xrange(m):

               if c[i,j] == 1:

                   Y, X = xrange(max(0,i-1),min(n,i+2)), xrange(max(0,j-1),min(m,j+2))

                   for y in Y:
                       for x in X:

                           if c[y,x] == 2:
                               forest[i,j] = 2                        
       return forest


    def canburn(forest):    

       n,m=forest.shape
       c = np.copy(forest)

       for i in xrange(n):
           for j in xrange(m):

               if c[i,j] == 1:

                   Y, X = xrange(max(0,i-1),min(n,i+2)), xrange(max(0,j-1),min(m,j+2))

                   for y in Y:
                       for x in X:

                           if c[y,x] == 2:
                               return True                      
       return False


    def forestfire(forest):

       fig, ax = plt.subplots()

       movie = []    

       # Colormap
       red, green, blue = [(1,0,0,1)], [(0,1,0,1)], [(0,0,1,1)]  

       colors = np.vstack((blue, green, red))
       mycmap = mcolors.LinearSegmentedColormap.from_list('my_colormap', colors)

       # Initialization
       k = 0

       forest = spreadfire(forest)

       im = plt.imshow(forest, animated=True, cmap = mycmap, interpolation="none", origin='lower')
       movie.append([im])

       # Fire propagation
       while canburn(forest):
           k += 1
           print k

           forest = spreadfire(forest)

           im = plt.imshow(forest, animated=True, cmap = mycmap, interpolation="none", origin='lower')
           movie.append([im])

       return animation.ArtistAnimation(fig, movie, blit=True, repeat_delay=100)



    ani = forestfire(setfire(constructforest(101,101,0.4),50,50))

    ani.save("forestfire_test.mp4", writer = 'ffmpeg', fps=5, dpi=500)

    EDIT

    As requested by @Y.Luo and @ImportanceOfBeingErnest in the comments, I downgraded matplotlib to 2.0.0 and changed the framerate of the animation, but forestfire_test.mp4 still displays a blank plot.

    Here are my settings: [screenshots omitted]

  • FFmpeg / python - command works when run from shell but fails when run from python

    4 April 2019, by artembus

    I have a Python script which should run an ffmpeg command with this function:

    def transcode(in_path, out_path):
       cmd = ["ffmpeg", "-y", "-i", in_path, '-vf smartblur=lr=1']
       cmd += ["-an", out_path]
       print("Running:", " ".join(cmd))
       subprocess.run(cmd, stdout=cmdout, stderr=cmdout)

    When I run the Python script it fails with this ffmpeg error:

    Running: ffmpeg -y -i raid/orig/scenes/train/5786088.mp4 -vf smartblur=lr=1 -an raid/4K/scenes/train/5786088.mp4
    ffmpeg version 2.8.15-0ubuntu0.16.04.1 Copyright (c) 2000-2018 the FFmpeg developers
     built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.10) 20160609
     configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv
     libavutil      54. 31.100 / 54. 31.100
     libavcodec     56. 60.100 / 56. 60.100
     libavformat    56. 40.101 / 56. 40.101
     libavdevice    56.  4.100 / 56.  4.100
     libavfilter     5. 40.101 /  5. 40.101
     libavresample   2.  1.  0 /  2.  1.  0
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  2.101 /  1.  2.101
     libpostproc    53.  3.100 / 53.  3.100
    Unrecognized option 'vf smartblur=lr=1'.
    Error splitting the argument list: Option not found

    You can see the command it tries to execute in the first line; when I run it on the command line it works fine. When I run the command in the shell it outputs the same ffmpeg version and parameters as shown in the error above.

    I feel like I missed something simple yet crucial; can anyone point me in the right direction?
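
For comparison, a shell splits the printed command line on whitespace before ffmpeg ever sees it, whereas subprocess.run passes each Python list element through as exactly one argument. A minimal sketch of the difference (the file names below are placeholders, not the poster's paths):

    import shlex

    # How a POSIX shell tokenizes the command line before launching ffmpeg:
    shell_argv = shlex.split("ffmpeg -y -i in.mp4 -vf smartblur=lr=1 -an out.mp4")
    print(shell_argv)
    # ['ffmpeg', '-y', '-i', 'in.mp4', '-vf', 'smartblur=lr=1', '-an', 'out.mp4']

    # The list built in the question keeps '-vf smartblur=lr=1' as one element,
    # so ffmpeg receives a single argument containing a space and reports
    # "Unrecognized option 'vf smartblur=lr=1'":
    question_argv = ["ffmpeg", "-y", "-i", "in.mp4", "-vf smartblur=lr=1", "-an", "out.mp4"]
    print(question_argv)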