Advanced search

Media (1)

Keyword: - Tags -/musée

Other articles (78)

  • What is a form mask?

    13 June 2013, by

    A form mask is a customization of the publication form for media, sections, news items, editorials, and links to other sites.
    Each object's publication form can therefore be customized.
    To customize the form fields, go to the administration area of your MediaSPIP and select "Configuration des masques de formulaires".
    Then select the form to modify by clicking on its object type. (...)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
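
    As a rough sketch, such a farm boils down to one shared code base plus a data directory per site; the layout below is illustrative, not MediaSPIP's documented structure:

       core/                  # single shared MediaSPIP/SPIP code base on the dedicated server
       sites/
          site-a.example/     # per-site data: configuration, uploads, cache
          site-b.example/
          site-c.example/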

On other sites (7136)

  • Python, Flask, ffmpeg video streaming: Video does not work in Firefox

    12 April 2018, by user3187926

    I am writing the preview portion of a video management system. It works like a charm in Chrome with a standard video tag, but Firefox does not recognize the MIME type for some reason, and it bugs me a lot.

    Here is my stream class:

    import os
    import subprocess

    import ffmpeg  # ffmpeg-python bindings
    from flask import Response  # used by dep_stream() below

    # CameraUtil is a project-specific helper; its import is not shown here.

    class Stream:
       run = False
       FNULL = open(os.devnull, 'w')  # sink for ffmpeg's log output (see the commented stderr below)
       overlay = ffmpeg.input("somelogo.png")

       def __init__(self, camid):
           camUtil = CameraUtil()
           self.camid = camid
           self.streamurl = camUtil.get_stream_from_id(self.camid)['streamURL']
           print(self.streamurl)
           self.args = ffmpeg.input(self.streamurl)
           # vcodec="libvpx",
           # acodec="libvorbis",
           self.args = ffmpeg.output(self.args, "-",
                                     f="matroska",
                                     vcodec="copy",
                                     acodec="copy",
                                     blocksize="1024",
                                     # strftime="1",
                                     # segment_time="60",
                                     # segment_format="matroska"
                                     preset="ultrafast",
                                     metadata="title='test'"
                                     )
           self.args = ffmpeg.get_args(self.args)
           print(self.args)
           # ffmpeg muxes the stream to stdout; dep_stream() reads it from this pipe
           self.pipe = subprocess.Popen(['ffmpeg'] + self.args,
                                        stdout=subprocess.PIPE,)
                                        #stderr=self.FNULL)

       def dep_stream(self):
           def gen():
               try:
                   f = self.pipe.stdout
                   byte = f.read(1024)
                   while byte:
                       yield byte
                       byte = f.read(1024)
               finally:
                   self.pipe.kill()

           return Response(gen(), status=200,
                           mimetype='video/webm',
                           headers={'Access-Control-Allow-Origin': '*',
                                    "Content-Type": "video/webm",
                                    })
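
    For completeness, here is roughly how the class is wired into Flask (a minimal sketch; the app setup is simplified, and the route path mirrors the /stream/{{ camid }} URL in the template below):

       from flask import Flask

       app = Flask(__name__)

       @app.route('/stream/<camid>')
       def stream(camid):
           # each request spawns a dedicated ffmpeg process via Stream.__init__
           return Stream(camid).dep_stream()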

    My HTML playback portion:

       <video preload="auto" autoplay="autoplay" width="1280" height="720">
         <source src="/stream/{{ camid }}" type='video/webm;codecs="vp8, vorbis"'>
         YOUR BROWSER DOES NOT SUPPORT HTML5, WHAT YEAR ARE YOU FROM?!
       </video>

    Firefox says "No video with supported format and MIME type found", and in the console it says

    error : Error Code : NS_ERROR_DOM_MEDIA_METADATA_ERR (0x806e0006)

    Did I do something dumb?! Or am I missing something? Because it works in Google Chrome like a charm.
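
    One thing I should note: the source tag above declares vp8/vorbis, while vcodec="copy" passes the camera's original codecs through unchanged. The actual codecs can be checked with ffprobe (STREAM_URL is a placeholder):

       ffprobe -v error -show_entries stream=codec_type,codec_name -of default=noprint_wrappers=1 STREAM_URL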

    I need fresh eyes.

    Help, please!

  • Using AVMutableVideoComposition to rotate a video, after which ffmpeg can't get the rotation info

    29 January 2018, by ladeng

    Before the video is rotated, I can get the rotation information with an FFmpeg command, like this:

    ffprobe -v quiet -print_format json -show_format -show_streams recordVideo.mp4

    or

    ffmpeg -i recordVideo.mp4
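
    For reference, on the original recording the rotation typically shows up in the video stream's tags in the ffprobe JSON output, something like:

       "tags": {
           "rotate": "90"
       }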

    When I use AVMutableVideoComposition to rotate the video, the exported video loses its rotation information. My rotation sample is RoatetVideoSimpleCode;
    the code is below:

    -(void)performWithAsset:(AVAsset*)asset complateBlock:(void(^)(void))complateBlock{
    AVMutableComposition *mutableComposition;
    AVMutableVideoComposition *mutableVideoComposition;
    cacheRotateVideoURL = [[NSURL alloc] initFileURLWithPath:[NSString pathWithComponents:@[NSTemporaryDirectory(), kCacheCertVideoRotate]]];

    AVMutableVideoCompositionInstruction *instruction = nil;
    AVMutableVideoCompositionLayerInstruction *layerInstruction = nil;
    CGAffineTransform t1;
    CGAffineTransform t2;

    AVAssetTrack *assetVideoTrack = nil;
    AVAssetTrack *assetAudioTrack = nil;
    // Check if the asset contains video and audio tracks
    if ([[asset tracksWithMediaType:AVMediaTypeVideo] count] != 0) {
       assetVideoTrack = [asset tracksWithMediaType:AVMediaTypeVideo][0];
    }
    if ([[asset tracksWithMediaType:AVMediaTypeAudio] count] != 0) {
       assetAudioTrack = [asset tracksWithMediaType:AVMediaTypeAudio][0];
    }

    CMTime insertionPoint = kCMTimeZero;
    NSError *error = nil;

    //    CGAffineTransform rotateTranslate;
    // Step 1
    // Create a composition with the given asset and insert audio and video tracks into it from the asset
    if (!mutableComposition) {

       // Check whether a composition has already been created, i.e, some other tool has already been applied
       // Create a new composition
       mutableComposition = [AVMutableComposition composition];

       // Insert the video and audio tracks from AVAsset
       if (assetVideoTrack != nil) {
           AVMutableCompositionTrack *compositionVideoTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
           [compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:assetVideoTrack atTime:insertionPoint error:&error];

       }
       if (assetAudioTrack != nil) {
           AVMutableCompositionTrack *compositionAudioTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
           [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:assetAudioTrack atTime:insertionPoint error:&error];
       }

    }


    // Step 2
    // Translate the composition to compensate the movement caused by rotation (since rotation would cause it to move out of frame)
    //    t1 = CGAffineTransformMakeTranslation(assetVideoTrack.naturalSize.height, 0.0);
    // Rotate transformation
    //    t2 = CGAffineTransformRotate(t1, degreesToRadians(90.0));

    CGFloat degrees = 90;
    //--
    if (degrees != 0) {
       //        CGAffineTransform mixedTransform;
       if(degrees == 90){
           //90°
           t1 = CGAffineTransformMakeTranslation(assetVideoTrack.naturalSize.height,0.0);
           t2 = CGAffineTransformRotate(t1,M_PI_2);
       }else if(degrees == 180){
           //180°
           t1 = CGAffineTransformMakeTranslation(assetVideoTrack.naturalSize.width, assetVideoTrack.naturalSize.height);
           t2 = CGAffineTransformRotate(t1,M_PI);
       }else if(degrees == 270){
           //270°
           t1 = CGAffineTransformMakeTranslation(0.0, assetVideoTrack.naturalSize.width);
           t2 = CGAffineTransformRotate(t1,M_PI_2*3.0);
       }
    }

    // Step 3
    // Set the appropriate render sizes and rotational transforms
    if (!mutableVideoComposition) {

       // Create a new video composition
       mutableVideoComposition = [AVMutableVideoComposition videoComposition];
       mutableVideoComposition.renderSize = CGSizeMake(assetVideoTrack.naturalSize.height,assetVideoTrack.naturalSize.width);
       mutableVideoComposition.frameDuration = CMTimeMake(1, 30);

       // The rotate transform is set on a layer instruction
       instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
       instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [mutableComposition duration]);
       layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:(mutableComposition.tracks)[0]];
       [layerInstruction setTransform:t2 atTime:kCMTimeZero];
       //           [layerInstruction setTransform:rotateTranslate atTime:kCMTimeZero];

    } else {

       mutableVideoComposition.renderSize = CGSizeMake(mutableVideoComposition.renderSize.height, mutableVideoComposition.renderSize.width);

       // Extract the existing layer instruction on the mutableVideoComposition
       instruction = (mutableVideoComposition.instructions)[0];
       layerInstruction = (instruction.layerInstructions)[0];

       // Check if a transform already exists on this layer instruction, this is done to add the current transform on top of previous edits
       CGAffineTransform existingTransform;

       if (![layerInstruction getTransformRampForTime:[mutableComposition duration] startTransform:&existingTransform endTransform:NULL timeRange:NULL]) {
           [layerInstruction setTransform:t2 atTime:kCMTimeZero];
       } else {
           // Note: the point of origin for rotation is the upper left corner of the composition, t3 is to compensate for origin
           CGAffineTransform t3 = CGAffineTransformMakeTranslation(-1*assetVideoTrack.naturalSize.height/2, 0.0);
           CGAffineTransform newTransform = CGAffineTransformConcat(existingTransform, CGAffineTransformConcat(t2, t3));
           [layerInstruction setTransform:newTransform atTime:kCMTimeZero];
       }

    }


    // Step 4
    // Add the transform instructions to the video composition
    instruction.layerInstructions = @[layerInstruction];
    mutableVideoComposition.instructions = @[instruction];

    //write video
    if ([[NSFileManager defaultManager] fileExistsAtPath:cacheRotateVideoURL.path]) {
       NSError *error = nil;
       BOOL removeFlag = [[NSFileManager defaultManager] removeItemAtURL:cacheRotateVideoURL error:&error];
       SPLog(@"remove rotate file:%@ %@",cacheRotateVideoURL.path,removeFlag?@"Success":@"Failed");
    }
    }

    AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetMediumQuality] ;

    exportSession.outputURL = cacheRotateVideoURL;
    exportSession.outputFileType = AVFileTypeMPEG4;
    exportSession.videoComposition = mutableVideoComposition;
    exportSession.shouldOptimizeForNetworkUse = YES;
    exportSession.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration);

    [exportSession exportAsynchronouslyWithCompletionHandler:^{
       SPLog(@"cache write done");
       AVAsset* asset = [AVURLAsset URLAssetWithURL: cacheRotateVideoURL options:nil];
       SPLog(@"rotate recrod video time: %lf",CMTimeGetSeconds(asset.duration));

       ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
       [library writeVideoAtPathToSavedPhotosAlbum:cacheRotateVideoURL
                                   completionBlock:^(NSURL *assetURL, NSError *error) {
                                       if (error) {
                                           NSLog(@"Save video fail:%@",error);
                                       } else {
                                           NSLog(@"Save video succeed.");
                                       }
                                   }];

       complateBlock();
    }];

    }

    Can anyone tell me why this happens?
    How can I preserve the rotation information when I rotate the video?

  • Is there an efficient way to retrieve frames from a video in Android?

    28 March 2015, by Naveed

    I have an app which requires me to retrieve frames from a video and do some processing on them. However, frame retrieval is so slow that it is unacceptable: sometimes it takes up to 2.5 seconds to retrieve a single frame. I am using MediaMetadataRetriever, as most Stack Overflow questions suggested, but the performance is very bad. Here is what I have:

      // imports needed for this snippet:
      import android.graphics.Bitmap;
      import android.media.MediaMetadataRetriever;
      import android.util.Log;
      import java.util.ArrayList;
      import java.util.List;

      private List<Bitmap> retrieveFrames() {

           MediaMetadataRetriever fmmr = new MediaMetadataRetriever();
           fmmr.setDataSource("/path/to/some/video.mp4");
           String strLength = fmmr.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
           long milliSecs = Long.parseLong(strLength);
           long microSecLength = milliSecs * 1000;

           Log.d("TAG", "length: " + microSecLength);
           long one_sec = 1000000; // one second in microseconds

           ArrayList<Bitmap> frames = new ArrayList<>();
           int j = 0;
           for (int i = 0; i < microSecLength; i += (one_sec / 5)) {
               long time = System.currentTimeMillis();
               Bitmap frame = fmmr.getFrameAtTime(i, MediaMetadataRetriever.OPTION_CLOSEST);
               j++;
               Log.d("TAG", "Frame number: " + j + " Time taken: " + (System.currentTimeMillis() - time));
               // commented out because each frame would be written to disk instead of holding them in memory
               //  frames.add(frame);
           }
           fmmr.release();
           return frames;
       }

    The above logs:

    03-26 21:49:29.781  13213-13239/com.example.naveed.myapplication D/TAG﹕ length: 4949000
    03-26 21:49:30.187  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 1 Time taken: 406
    03-26 21:49:30.779  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 2 Time taken: 592
    03-26 21:49:31.578  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 3 Time taken: 799
    03-26 21:49:32.632  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 4 Time taken: 1054
    03-26 21:49:33.895  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 5 Time taken: 1262
    03-26 21:49:35.382  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 6 Time taken: 1486
    03-26 21:49:37.128  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 7 Time taken: 1746
    03-26 21:49:39.077  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 8 Time taken: 1948
    03-26 21:49:41.287  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 9 Time taken: 2210
    03-26 21:49:43.717  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 10 Time taken: 2429
    03-26 21:49:44.093  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 11 Time taken: 376
    03-26 21:49:44.707  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 12 Time taken: 614
    03-26 21:49:45.539  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 13 Time taken: 831
    03-26 21:49:46.597  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 14 Time taken: 1057
    03-26 21:49:47.875  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 15 Time taken: 1278
    03-26 21:49:49.384  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 16 Time taken: 1508
    03-26 21:49:51.112  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 17 Time taken: 1728
    03-26 21:49:53.096  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 18 Time taken: 1983
    03-26 21:49:55.315  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 19 Time taken: 2218
    03-26 21:49:57.711  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 20 Time taken: 2396
    03-26 21:49:58.065  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 21 Time taken: 354
    03-26 21:49:58.640  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 22 Time taken: 574
    03-26 21:49:59.369  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 23 Time taken: 728
    03-26 21:50:00.112  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 24 Time taken: 742
    03-26 21:50:00.834  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 25 Time taken: 721

    As you can see from the above, it takes about 18 to 25 seconds to retrieve 25 frames from a video less than 5 seconds long.

    I have also tried this library, which uses FFmpeg underneath to do the same thing. I am not sure how well the library is implemented, but it only improves the overall performance by a couple of seconds, meaning it takes about 15 to 20 seconds.

    So my question is: is there a way to do this quicker? My friend has an iOS app where he does something similar; it only takes a couple of seconds, and he is grabbing even more frames. However, he is not sure how to do it on Android.

    Is there anything on Android that would speed up the process? Am I approaching this wrong?

    The end goal is to stitch those frames together into a GIF.
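
    For comparison, the same sampling and stitching can be done with the ffmpeg command line; a minimal sketch (file names are placeholders):

       ffmpeg -i video.mp4 -vf fps=5 frame_%03d.png
       ffmpeg -i video.mp4 -vf "fps=5,scale=320:-1" out.gif

    The first command extracts 5 frames per second as images (the same rate as the loop above); the second builds the GIF directly.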