
Media (1)

Word: - Tags -/biomaping

Other articles (75)

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed to manage sites for publishing documents of all types.
    It creates "media", meaning: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether it is audio, video, image or text; only one document can be linked to a so-called "media" article;

  • Installation in farm mode

    4 February 2011

    Farm mode makes it possible to host several MediaSPIP sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP’s usual private area is no longer used.
    First of all, you must have installed the same files as for the installation (...)

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (4202)

  • Can I convert a Django video upload from a form using ffmpeg before storing the video?

    5 May 2014, by GetItDone

    I’ve been stuck for weeks trying to use ffmpeg to convert user-uploaded videos to FLV. I use Heroku to host my website, and I store my static and media files on Amazon S3 with s3boto. The initial video file uploads fine; however, when I retrieve the video and run a celery task (in the same view where the initial video file is uploaded), the new file won’t store on S3. I’ve been trying to get this to work for over a month with no luck, and there are really no good resources for learning how to do this, so I figure that if I can get the ffmpeg task to run before storing the video, I may be able to get it to work.

    Unfortunately I’m still not very advanced at Python (or Django), so I don’t even know if or how this is possible. Anyone have any ideas? I am willing to use any solution at this point, no matter how ugly, as long as it successfully takes video uploads and converts them to FLV using ffmpeg, with the resulting file stored on S3. My situation doesn’t seem to be very common, because no matter where I look I cannot find a solution that explains what I should be trying to do, so I will be very appreciative of any guidance. Thanks. My relevant code follows:

    #models.py
    def content_file_name(instance, filename):
       ext = filename.split('.')[-1]
       new_file_name = "remove%s.%s" % (uuid.uuid4(), ext)
       return '/'.join(['videos', instance.teacher.username, new_file_name])

    class BroadcastUpload(models.Model):
       title = models.CharField(max_length=50, verbose_name=_('Title'))
       description = models.TextField(max_length=100, verbose_name=_('Description'))
       teacher = models.ForeignKey(User, null=True, blank=True, related_name='teacher')
       created_date = models.DateTimeField(auto_now_add=True)
       video_upload = models.FileField(upload_to=content_file_name)
       flvfilename = models.CharField(max_length=100, null=True, blank=True)
       videothumbnail = models.CharField(max_length=100, null=True, blank=True)

    #tasks.py
    @task(name='celeryfiles.tasks.convert_flv')
    def convert_flv(video_id):
       video = BroadcastUpload.objects.get(pk=video_id)
       print "ID: %s" % video.id
       id = video.id
       print "VIDEO NAME: %s" % video.video_upload.name
       teacher = video.teacher
       print "TEACHER: %s" % teacher
       filename = video.video_upload
       sourcefile = "%s%s" % (settings.MEDIA_URL, filename)
       vidfilename = "%s_%s.flv" % (teacher, video.id)
       targetfile = "%svideos/flv/%s" % (settings.MEDIA_URL, vidfilename)
       ffmpeg = "ffmpeg -i %s %s" % (sourcefile, vidfilename)
       try:
           ffmpegresult = subprocess.call(ffmpeg)
           #also tried separately with following line:
           #ffmpegresult = commands.getoutput(ffmpeg)
           print "---------------FFMPEG---------------"
           print "FFMPEGRESULT: %s" % ffmpegresult
       except Exception as e:
           ffmpegresult = None
           print("Failed to convert video file %s to %s" % (sourcefile, targetfile))
           print(traceback.format_exc())
       video.flvfilename = vidfilename
       video.save()

    @task(name='celeryfiles.tasks.ffmpeg_image')        
    def ffmpeg_image(video_id):
       video = BroadcastUpload.objects.get(pk=video_id)
       print "ID: %s" %video.id
       id = video.id
       print "VIDEO NAME: %s" % video.video_upload.name
       teacher = video.teacher
       print "TEACHER: %s" % teacher
       filename = video.video_upload
       sourcefile = "%s%s" % (settings.MEDIA_URL, filename)
       imagefilename = "%s_%s.png" % (teacher, video.id)
       thumbnailfilename = "%svideos/flv/%s" % (settings.MEDIA_URL, thumbnailfilename)
       grabimage = "ffmpeg -y -i %s -vframes 1 -ss 00:00:02 -an -vcodec png -f rawvideo -s 320x240 %s" % (sourcefile, thumbnailfilename)
       try:        
            videothumbnail = subprocess.call(grabimage)
            #also tried separately following line:
            #videothumbnail = commands.getoutput(grabimage)
            print "---------------IMAGE---------------"
            print "VIDEOTHUMBNAIL: %s" % videothumbnail
       except Exception as e:
            videothumbnail = None
            print("Failed to convert video file %s to %s" % (sourcefile, thumbnailfilename))
            print(traceback.format_exc())
       video.videothumbnail = imagefilename
       video.save()

    #views.py
    def upload_broadcast(request):
       if request.method == 'POST':
           form = BroadcastUploadForm(request.POST, request.FILES)
           if form.is_valid():
               upload=form.save()
               video_id = upload.id
               image_grab = ffmpeg_image.delay(video_id)
               video_conversion = convert_flv.delay(video_id)
               return HttpResponseRedirect('/current_classes/')
       else:
           form = BroadcastUploadForm(initial={'teacher': request.user,})
       return render_to_response('videos/create_video.html', {'form': form,}, context_instance=RequestContext(request))

    #settings.py
    DEFAULT_FILE_STORAGE = 'myapp.s3utils.MediaRootS3BotoStorage'
    DEFAULT_S3_PATH = "media"
    STATICFILES_STORAGE = 'myapp.s3utils.StaticRootS3BotoStorage'
    STATIC_S3_PATH = "static"
    AWS_STORAGE_BUCKET_NAME = 'my_bucket'
    CLOUDFRONT_DOMAIN = 'domain.cloudfront.net'
    AWS_ACCESS_KEY_ID = 'MY_KEY_ID'
    AWS_SECRET_ACCESS_KEY = 'MY_SECRET_KEY'
    MEDIA_ROOT = '/%s/' % DEFAULT_S3_PATH
    MEDIA_URL = 'http://%s/%s/' % (CLOUDFRONT_DOMAIN, DEFAULT_S3_PATH)
    ...

    #s3utils.py
    from storages.backends.s3boto import S3BotoStorage
    from django.utils.functional import SimpleLazyObject

    StaticRootS3BotoStorage = lambda: S3BotoStorage(location='static')
    MediaRootS3BotoStorage  = lambda: S3BotoStorage(location='media')

    I can add any other info if needed to help me solve my problem.
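
    One way to pursue the “run ffmpeg before storing the video” idea from the question is to do the conversion on a local temporary copy inside the task, and only hand the finished file to the S3-backed storage afterwards. The sketch below is illustrative only and makes several assumptions: convert_to_flv is a made-up helper (it could be called from the existing celery task), the command is passed to subprocess.call as an argument list (a single command string without shell=True will not run), and the result is written back through default_storage, which is the S3BotoStorage backend configured above, rather than to a MEDIA_URL path that ffmpeg cannot upload to.

    #hypothetical_tasks.py -- a sketch, not the original code
    import os
    import subprocess
    import tempfile

    from django.core.files import File
    from django.core.files.storage import default_storage

    def convert_to_flv(video):
       # Copy the uploaded source out of storage onto the worker's local disk.
       ext = os.path.splitext(video.video_upload.name)[1]
       src = tempfile.NamedTemporaryFile(suffix=ext, delete=False)
       video.video_upload.open()
       src.write(video.video_upload.read())
       src.close()

       dst_path = src.name + '.flv'
       # Argument list, not one string: no shell is involved.
       returncode = subprocess.call(['ffmpeg', '-y', '-i', src.name, dst_path])

       if returncode == 0:
           flv_name = 'videos/flv/%s_%s.flv' % (video.teacher, video.id)
           with open(dst_path, 'rb') as converted:
               # default_storage is the S3 backend, so this is the step that
               # actually puts the converted file into the bucket.
               default_storage.save(flv_name, File(converted))
           video.flvfilename = os.path.basename(flv_name)
           video.save()

       os.remove(src.name)
       if os.path.exists(dst_path):
           os.remove(dst_path)

    The thumbnail task could follow the same pattern, with an ffmpeg command that grabs a single frame instead.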

  • FFMPEG and STB_Image Create awful Picture

    9 February 2023, by murage kibicho

    I was learning how to use the FFmpeg C API and I was trying to encode a JPEG into an MPEG file. I load the JPEG into an unsigned char * using the stb_image library. Then I create a uint8_t * and copy my RGB values. Finally, I convert RGB to YUV420 using sws_scale. However, a portion of my image blurs out when I perform the encoding (the question included screenshots of the bad output and of the original image).

    Perhaps I allocate my frame buffer incorrectly?

        ret = av_frame_get_buffer(frame, 0);

    This is my entire program:

    #define STB_IMAGE_IMPLEMENTATION
    #include "stb_image.h"
    #define STB_IMAGE_WRITE_IMPLEMENTATION
    #include "stb_image_write.h"
    #define STB_IMAGE_RESIZE_IMPLEMENTATION
    #include "stb_image_resize.h"
    #include <assert.h>
    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>
    #include <libavutil/imgutils.h>
    #include <libswscale/swscale.h>
    //gcc stack.c -lm -o stack.o `pkg-config --cflags --libs libavformat libavcodec libswresample libswscale libavutil` && ./stack.o

    /*
    int i : pts of current frame
    */
    void PictureToFrame(int i, AVFrame *frame, int height, int width)
    {
        //Use stb image to get rgb values
        char *fileName = "profil.jpeg";
        int imageHeight = 0;
        int imageWidth = 0;
        int colorChannels = 0;
        int arrayLength = 0;
        unsigned char *image = stbi_load(fileName, &imageWidth, &imageHeight, &colorChannels, 0);

        printf("(height: %d, width: %d)\n", imageHeight, imageWidth);
        assert(colorChannels == 3 && imageHeight == height && imageWidth == width);

        //Convert unsigned char * to uint8_t *
        arrayLength = imageHeight * imageWidth * colorChannels;
        uint8_t *rgb = calloc(arrayLength, sizeof(uint8_t));
        int j = arrayLength-1;
        for(int i = 0; i < arrayLength; i++)
        {
            rgb[i] = (uint8_t) image[i];
        }

        //Use SwsContext to scale RGB to YUV420P and write to frame
        const int in_linesize[1] = { 3 * imageWidth };
        struct SwsContext *sws_context = NULL;
        sws_context = sws_getCachedContext(sws_context,
            imageWidth, imageHeight, AV_PIX_FMT_RGB24,
            imageWidth, imageHeight, AV_PIX_FMT_YUV420P,
            0, 0, 0, 0);
        sws_scale(sws_context, (const uint8_t * const *)&rgb, in_linesize, 0,
            imageHeight, frame->data, frame->linesize);
        //Save frame pts
        frame->pts = i;

        //Free alloc'd data
        stbi_image_free(image);
        sws_freeContext(sws_context);
        free(rgb);
    }

    static void encode(AVCodecContext *enc_ctx, AVFrame *frame, AVPacket *pkt, FILE *outfile)
    {
        int returnValue;
        /* send the frame to the encoder */
        if(frame)
        {
            printf("Send frame %3"PRId64"\n", frame->pts);
        }
        returnValue = avcodec_send_frame(enc_ctx, frame);
        if(returnValue < 0)
        {
            printf("Error sending a frame for encoding\n");
            return;
        }
        while(returnValue >= 0)
        {
            returnValue = avcodec_receive_packet(enc_ctx, pkt);
            if(returnValue == AVERROR(EAGAIN) || returnValue == AVERROR_EOF)
            {
                return;
            }
            else if(returnValue < 0)
            {
                printf("Error during encoding\n");
                return;
            }

            printf("Write packet %3"PRId64" (size=%5d)\n", pkt->pts, pkt->size);
            fwrite(pkt->data, 1, pkt->size, outfile);
            av_packet_unref(pkt);
        }
    }

    int main(int argc, char **argv)
    {
        const char *filename, *codec_name;
        const AVCodec *codec;
        AVCodecContext *c = NULL;
        int i, ret, x, y;
        FILE *f;
        AVFrame *frame;
        AVPacket *pkt;
        uint8_t endcode[] = { 0, 0, 1, 0xb7 };

        filename = "outo.mp4";
        codec_name = "mpeg1video";//"mpeg1video";//"libx264";

        /* find the mpeg1video encoder */
        codec = avcodec_find_encoder_by_name(codec_name);
        if(!codec)
        {
            printf("Error finding codec\n");
            return 0;
        }

        c = avcodec_alloc_context3(codec);
        if(!c)
        {
            printf("Error allocating c\n");
            return 0;
        }

        pkt = av_packet_alloc();
        if(!pkt)
        {
            printf("Error allocating pkt\n");
            return 0;
        }

        /* put sample parameters */
        c->bit_rate = 400000;
        /* resolution must be a multiple of two */
        c->width = 800;
        c->height = 800;
        /* frames per second */
        c->time_base = (AVRational){1, 25};
        c->framerate = (AVRational){25, 1};
        c->gop_size = 10;
        c->max_b_frames = 1;
        c->pix_fmt = AV_PIX_FMT_YUV420P;

        if(codec->id == AV_CODEC_ID_H264)
        {
            av_opt_set(c->priv_data, "preset", "slow", 0);
        }

        /* open it */
        ret = avcodec_open2(c, codec, NULL);
        if(ret < 0)
        {
            printf("Error opening codec\n");
            return 0;
        }

        f = fopen(filename, "wb");
        if(!f)
        {
            printf("Error opening file\n");
            return 0;
        }

        frame = av_frame_alloc();
        if(!frame)
        {
            printf("Error allocating frame\n");
            return 0;
        }
        frame->format = c->pix_fmt;
        frame->width  = c->width;
        frame->height = c->height;

        //I suspect this is the problem
        ret = av_frame_get_buffer(frame, 0);
        if(ret < 0)
        {
            fprintf(stderr, "Could not allocate the video frame data\n");
            exit(1);
        }

        /* encode 25 frames */
        for(i = 0; i < 25; i++)
        {
            /* make sure the frame data is writable */
            ret = av_frame_make_writable(frame);
            if(ret < 0)
            {
                return 0;
            }
            //Fill frame with picture data
            PictureToFrame(i, frame, c->height, c->width);

            /* encode the image */
            encode(c, frame, pkt, f);
        }

        /* flush the encoder */
        encode(c, NULL, pkt, f);

        /* add sequence end code to have a real MPEG file */
        if (codec->id == AV_CODEC_ID_MPEG1VIDEO || codec->id == AV_CODEC_ID_MPEG2VIDEO)
            fwrite(endcode, 1, sizeof(endcode), f);
        fclose(f);

        avcodec_free_context(&c);
        av_frame_free(&frame);
        av_packet_free(&pkt);

        return 0;
    }

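    It is hard to tell from this code alone whether the artefacts come from the RGB-to-YUV conversion or from the encoder itself. One hedged way to narrow it down is to round-trip the RGB buffer through YUV420P with libswscale and write the result back out with stb_image_write; the helper below is a debugging sketch, not part of the original program, and roundtrip_check and roundtrip.png are made-up names. If the round-tripped PNG looks clean, the sws_scale call and the linesizes are fine, and the blur more likely comes from the encoder settings; 400 kbit/s is a very low bit rate for 800x800 MPEG-1 at 25 frames per second.

    /* Hypothetical debugging helper: convert RGB24 -> YUV420P -> RGB24 and dump the result. */
    #define STB_IMAGE_WRITE_IMPLEMENTATION /* define in exactly one translation unit */
    #include "stb_image_write.h"
    #include <stdint.h>
    #include <libavutil/imgutils.h>
    #include <libswscale/swscale.h>

    static void roundtrip_check(const uint8_t *rgb, int width, int height)
    {
        uint8_t *yuv[4] = {0}, *back[4] = {0};
        int yuv_linesize[4], back_linesize[4];
        av_image_alloc(yuv, yuv_linesize, width, height, AV_PIX_FMT_YUV420P, 1);
        av_image_alloc(back, back_linesize, width, height, AV_PIX_FMT_RGB24, 1);

        const uint8_t *const src[1] = { rgb };
        const int src_linesize[1] = { 3 * width };

        struct SwsContext *to_yuv = sws_getContext(width, height, AV_PIX_FMT_RGB24,
                                                   width, height, AV_PIX_FMT_YUV420P,
                                                   SWS_BILINEAR, NULL, NULL, NULL);
        struct SwsContext *to_rgb = sws_getContext(width, height, AV_PIX_FMT_YUV420P,
                                                   width, height, AV_PIX_FMT_RGB24,
                                                   SWS_BILINEAR, NULL, NULL, NULL);

        sws_scale(to_yuv, src, src_linesize, 0, height, yuv, yuv_linesize);
        sws_scale(to_rgb, (const uint8_t * const *)yuv, yuv_linesize, 0, height,
                  back, back_linesize);

        /* back_linesize[0] is the stride of the round-tripped RGB buffer */
        stbi_write_png("roundtrip.png", width, height, 3, back[0], back_linesize[0]);

        sws_freeContext(to_yuv);
        sws_freeContext(to_rgb);
        av_freep(&yuv[0]);
        av_freep(&back[0]);
    }

    In the program above, such a check could be called from PictureToFrame right after the rgb buffer is filled.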

  • constructing an ffmpeg script for use in Xcode/swift project

    21 October 2019, by NCrusher

    I’m going back to the drawing board with this post because I’ve been through so much trial and error over the last day with this issue that the information I posted earlier is no longer relevant.

    I’ve only been learning both Swift and FFmpeg for a few weeks, and I’ve just exhausted my ability to troubleshoot this.

    I’m maybe 90% certain this is a problem with my ffmpeg script rather than with the Swift component, but I think it’s complicated by which characters need special formatting in Swift (particularly mathematical operators).

    I started off using a method in Xcode modeled after this post, which Xcode actually managed to guide me through updating for current versions of Swift without breaking. That left me with this:

    func ffmpegConvert(inputPath: String, filters: String, outputPath: String) {
       guard let launchPath = Bundle.main.path(forResource: "ffmpeg", ofType: "") else { return }
       do {
           let convertTask: Process = Process()
           convertTask.launchPath = launchPath
           convertTask.arguments = [
               "-i", inputPath,
               filters,
               outputPath
           ]
           convertTask.standardInput = FileHandle.nullDevice
           convertTask.launch()
           convertTask.waitUntilExit()
       }
    }

    I call this function when I click the "Start Conversion" button on my app. Like I said, that part seems to work fine. The problem is either in the way ffmpeg is being called by the app, or with the construction of the strings in my arguments array.

    The inputFilePath and outputFilePath strings are self-explanatory. Both of them are perfectly acceptably formatted filepath strings.

    The filters string is a little tougher. My app has five conversion options and a different filter set for each one. One is as simple as -c copy, and the most complex is -c:a libmp3lame -ac 1 -ar 22050 -q:a 9 (I’m working with audiobooks, so I don’t need a lot of complexity in my arguments).

    The app appears to be launching ffmpeg perfectly. But the console keeps giving me errors. And the errors keep changing depending on what I try. Here’s what I’ve been through so far:

    var inputFilePath = "/Volumes/CSW External/ffmpeg/diamonds.aac"
    var ffmpegFilters = "-c copy"
    var outputFilePath = "/Volumes/CSW External/ffmpeg/diamonds.m4b"

    Result:

    Unrecognized option 'c copy'.
    Error splitting the argument list: Option not found

    Next attempt, I tried var ffmpegFilters = "--c copy". Result was the same error.

    Then I tried var ffmpegFilters = " -c copy" and it actually read the metadata from my file before throwing a different error at me:

    Unable to find a suitable output format for ' -c copy'
     -c copy: Invalid argument

    I’m assuming that the fact that it read the metadata before throwing a different error at me means I made... some form of progress?

    I spent a few hours researching that particular error and why people might be getting it and couldn’t find a situation that was analogous to what I was trying to do. Mostly people were encountering it from the command line and/or other operating systems. So no help there.

    At that point, since I was just throwing things at the wall to see what might stick, I decided to throw the whole command, inputPath / ffMpegFilters / outputPath, into a single string to see if I could make that work (under the logic that if it did, I could narrow the cause of my trouble down to the way the separate strings are being constructed by Xcode).

    I tried it both with the whitespace in the filepath and with the whitespace escaped out (using a double \ as required by Swift). Either way, the ffmpeg log displayed a perfectly valid command.

    Doing so took me back to the first error I got:

    Unrecognized option '-c copy'.
    Error splitting the argument list: Option not found

    So then I started researching THAT error. Some of the discussions I came across indicated that the problem was that the arguments couldn’t all be in a single string, they needed to be split up and put in an array. Which I could see for a longer argument, but -c copy shouldn’t need that.

    But I decided to give it a go. Formerly, my method for constructing the string of arguments would have looked like this:

    func conversionSelection() {
       if inputFileUrl != nil {
           let conversionChoice = conversionOptionsPopup.indexOfSelectedItem
           switch conversionChoice {
               case 1 :
                   outputExtension = ".mp3"
                   ffmpegFilters = "-c:a libmp3lame -ac 1 -ar 22050 -q:a 9"
               (...case 2, 3, 4, default, etc)
           }
       }
    }

    Now it looks more like this:

    func conversionSelection() {
       if inputFileUrl != nil {
           let conversionChoice = conversionOptionsPopup.indexOfSelectedItem
           switch conversionChoice {
               case 1 :
                   outputExtension = ".mp3"
                   ffmpegCodec = "-c:a libmp3lame"
                   ffmpegChannels = "-ac 1"
                   ffmpegSampling = "-ar 22050"
                   ffmpegBitrate = "-q:a 9"
               (case 2, case 3, case 4, default, etc)
           }
       }
    }

    Unfortunately, this just brought me full circle. If I try to use -c:a libmp3lame or --c:a libmp3lame I get the Error splitting the argument list: Option not found error. Interestingly, however, it now reports the error in relation to the ffmpegSampling argument, which is a slight difference.

    If I put a whitespace in front of it ( -c:a libmp3lame), it will get far enough into the process to read the input file metadata, and then I get this:

    Unable to find a suitable output format for ' -c:a libmp3lame'
     -c:a libmp3lame: Invalid argument

    I’m stumped. I thought this was going to be an easy fix, but I’ve been at it almost a full day with all the trial and error, and nothing is working, and I’ve exhausted my newbie understanding of both Swift and ffmpeg.
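
    The errors quoted above are consistent with ffmpeg receiving "-c copy" (and later "-c:a libmp3lame") as one single argument: Process passes each element of its arguments array straight through as a separate argv entry and never splits a string on whitespace, so every flag and every value needs to be its own element. Below is a hedged sketch of building the argument list element by element; the helper names and the example option values are illustrative, not taken from the original project.

    import Foundation

    // Every flag and every value is a separate array element,
    // so no shell-style quoting or escaping of spaces is needed.
    func ffmpegArguments(inputPath: String, filterArguments: [String], outputPath: String) -> [String] {
        return ["-i", inputPath] + filterArguments + [outputPath]
    }

    func runFFmpeg(launchPath: String, arguments: [String]) {
        let task = Process()
        task.launchPath = launchPath              // e.g. the ffmpeg binary bundled with the app
        task.arguments = arguments                // passed through as argv entries, no splitting
        task.standardInput = FileHandle.nullDevice
        task.launch()
        task.waitUntilExit()
    }

    // The "mp3 audiobook" case from the question, split element by element:
    let mp3Arguments = ffmpegArguments(
        inputPath: "/Volumes/CSW External/ffmpeg/diamonds.aac",
        filterArguments: ["-c:a", "libmp3lame", "-ac", "1", "-ar", "22050", "-q:a", "9"],
        outputPath: "/Volumes/CSW External/ffmpeg/diamonds.mp3"
    )

    With the arguments laid out this way, there is no need to experiment with leading spaces or double dashes in the filter string: ffmpeg sees, for example, "-c" and "copy" as two separate arguments.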