Advanced search

Media (1)

Keyword: - Tags -/bug

Other articles (36)

  • General document management

    13 May 2011

    MediaSPIP never modifies the original document that is uploaded.
    For each uploaded document, it performs two successive operations: it creates an additional version that can easily be viewed online, while leaving the original available for download in case it cannot be read in a web browser; and it retrieves the original document's metadata to describe the file textually.
    The tables below explain what MediaSPIP can do (...)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (6192)

  • How to build and link FFMPEG to iOS?

    30 June 2015, by Alexander Tkachenko

    Hi all!

    I know there are a lot of questions here about FFmpeg on iOS, but none of the answers fits my case :(
    Something strange happens every time I try to link FFmpeg in my project, so please help me!

    My task is to write a video-chat application for iOS that uses the RTMP protocol to publish and read a video stream to/from a custom Flash Media Server.

    I decided to use rtmplib, a free open-source library for streaming FLV video over RTMP, as it is the only suitable library I found.

    Many problems appeared when I began researching it, but I later understood how it should work.

    Now, with the help of my application, I can read a live FLV video stream (from a URL) and send it back to the channel.

    My trouble now is sending video FROM the camera.
    The basic sequence of operations, as I understand it, should be the following:

    1. Using AVFoundation, via the chain Device -> AVCaptureSession -> AVVideoDataOutput -> AVAssetWriter, I write the captured video to a file (I can describe this flow in more detail if needed, but in the context of the question it is not important; a sketch follows this list). This flow is necessary for hardware-accelerated conversion of the live camera video into the H.264 codec, but the result is in the MOV container format. (This step is completed.)

    2. I read this temporary file as each sample is written and obtain the stream of video-data bytes (H.264-encoded, in the QuickTime container). (This step is already completed.)

    3. I need to convert the video data from the QuickTime container format to FLV, all in real time (packet by packet).

    4. Once I have the video-data packets in the FLV container format, I will be able to send them over RTMP using rtmplib.
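    A minimal sketch of the capture chain from step 1, assuming fileURL and the sample-buffer delegate are supplied by the surrounding code (the names here are illustrative, not the asker's actual code):

    #import <AVFoundation/AVFoundation.h>

    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    AVCaptureDevice *camera =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    [session addInput:[AVCaptureDeviceInput deviceInputWithDevice:camera error:nil]];

    // The data output hands raw sample buffers to the delegate...
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
    [session addOutput:output];

    // ...and AVAssetWriter performs the hardware-accelerated H.264 encode
    // into a QuickTime (MOV) file.
    NSError *error = nil;
    AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:fileURL
                                                     fileType:AVFileTypeQuickTimeMovie
                                                        error:&error];
    NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
                              AVVideoCodecH264, AVVideoCodecKey,
                              [NSNumber numberWithInt:640], AVVideoWidthKey,
                              [NSNumber numberWithInt:480], AVVideoHeightKey, nil];
    AVAssetWriterInput *writerInput =
        [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                           outputSettings:settings];
    [writer addInput:writerInput];
    [writer startWriting];
    [writer startSessionAtSourceTime:kCMTimeZero];
    [session startRunning];
    // In captureOutput:didOutputSampleBuffer:fromConnection: each buffer is
    // appended with [writerInput appendSampleBuffer:sampleBuffer].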

    Now, the most complicated part for me is step 3.

    I think I need to use the ffmpeg libraries (libavformat) for this conversion. I even found source code showing how to extract H.264 data packets from a MOV file (looking into libavformat, I found it is even possible to extract these packets from a byte stream, which suits me better). Once that is done, I will need to wrap the packets into FLV (using ffmpeg, or manually by adding FLV headers to the H.264 packets; that part is easy, if I understand correctly).
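    For step 3, a minimal remuxing sketch with libavformat, copying the H.264 packets from MOV into FLV without re-encoding (paths are illustrative, error handling is trimmed, and the API names follow current FFmpeg releases; older trees use AVStream->codec and avcodec_copy_context() instead of codecpar):

    #include <libavformat/avformat.h>

    int remux_mov_to_flv(const char *in_path, const char *out_path)
    {
        AVFormatContext *in = NULL, *out = NULL;
        AVPacket pkt;
        int ret;

        if ((ret = avformat_open_input(&in, in_path, NULL, NULL)) < 0)
            return ret;
        if ((ret = avformat_find_stream_info(in, NULL)) < 0)
            goto end;

        // Ask libavformat for an FLV muxer context.
        if ((ret = avformat_alloc_output_context2(&out, NULL, "flv", out_path)) < 0)
            goto end;

        // Mirror each input stream; FLV carries H.264 in the same AVCC
        // packet layout as MOV, so the packets can be copied as-is.
        for (unsigned i = 0; i < in->nb_streams; i++) {
            AVStream *os = avformat_new_stream(out, NULL);
            avcodec_parameters_copy(os->codecpar, in->streams[i]->codecpar);
            os->codecpar->codec_tag = 0;
        }

        if ((ret = avio_open(&out->pb, out_path, AVIO_FLAG_WRITE)) < 0)
            goto end;
        if ((ret = avformat_write_header(out, NULL)) < 0)
            goto end;

        // Packet by packet: read from MOV, rescale timestamps, write to FLV.
        while (av_read_frame(in, &pkt) >= 0) {
            av_packet_rescale_ts(&pkt,
                                 in->streams[pkt.stream_index]->time_base,
                                 out->streams[pkt.stream_index]->time_base);
            ret = av_interleaved_write_frame(out, &pkt);
            av_packet_unref(&pkt);
            if (ret < 0)
                break;
        }
        av_write_trailer(out);

    end:
        avformat_close_input(&in);
        if (out && out->pb)
            avio_closep(&out->pb);
        avformat_free_context(out);
        return ret;
    }

    For the real-time, packet-by-packet case described above, the input would be opened through a custom AVIOContext read callback instead of a file path.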

    FFmpeg has great documentation and is a very powerful library, and I don't think using it will be a problem. BUT the problem is that I cannot get it working in an iOS project.

    I have spent 3 days reading the documentation, Stack Overflow and Google results for "How to build FFMPEG for iOS", and I think my PM is going to fire me if I spend one more week trying to compile this library :))

    I tried many different build scripts and configure files, but when I build FFmpeg I get libavformat, libavcodec, etc. for the x86 architecture (even when I specify the armv6 arch in the build script). (I use "lipo -info libavcodec.a" to show the architectures.)

    So I could not build from source, and decided to find a prebuilt FFmpeg compiled for the armv7, armv6 and i386 architectures.

    I downloaded the iOS Comm Lib from MidnightCoders on GitHub; it contains an example of FFmpeg usage together with prebuilt .a files for avcodec, avformat and the other FFmpeg libraries.

    I checked their architectures:

    iMac-2:MediaLibiOS root# lipo -info libavformat.a
    Architectures in the fat file: libavformat.a are: armv6 armv7 i386

    And I found that it suits me!
    When I tried to add these libraries and headers to the Xcode project, it compiled fine (I don't even get warnings like "Library is compiled for another architecture") and I can use the structures from the headers, but when I try to call a C function from libavformat (av_register_all()), the compiler shows the error message "Symbol(s) not found for architecture armv7: av_register_all".

    I thought that maybe there were no symbols in the lib, and tried to list them:

    root# nm -arch armv6 libavformat.a | grep av_register_all
    00000000 T _av_register_all

    Now I am stuck here: I don't understand why Xcode cannot see these symbols, and I cannot move forward.
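    One common cause of exactly this symptom, offered here as an assumption worth ruling out rather than a confirmed diagnosis: if the file calling av_register_all() is compiled as Objective-C++ (a .mm file), the symbol is looked up with C++ name mangling unless the FFmpeg headers are wrapped in extern "C":

    #ifdef __cplusplus
    extern "C" {
    #endif
    #include <libavformat/avformat.h>
    #ifdef __cplusplus
    }
    #endif

    The nm output above shows the plain C symbol _av_register_all is present in the library, and a mangled lookup would fail with precisely a "Symbol(s) not found" error.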

    Please correct me if my understanding of the flow for publishing an RTMP stream from iOS is wrong, and help me build and link FFmpeg for iOS.

    I have the iOS 5.1 SDK and Xcode 4.2.

  • FFmpeg does not decode h264 stream

    5 July 2012, by HAPPY_TIGER

    I am trying to decode an H.264 stream from an RTSP server and render it on the iPhone.

    I found some libraries and read some articles about it.

    The libraries, from Dropcam for iPhone, are called RTSPClient and DecoderWrapper.

    But I cannot decode the frame data with DecoderWrapper, which uses ffmpeg.

    Here is my code.

    VideoViewer.m

    - (void)didReceiveFrame:(NSData*)frameData presentationTime:(NSDate*)presentationTime
    {
       [VideoDecoder staticInitialize];
       mConverter = [[VideoDecoder alloc] initWithCodec:kVCT_H264 colorSpace:kVCS_RGBA32 width:320 height:240 privateData:nil];


       [mConverter decodeFrame:frameData];

       if ([mConverter isFrameReady]) {
           UIImage *imageData =[mConverter getDecodedFrame];
           if (imageData) {
               [mVideoView setImage:imageData];
               NSLog(@"decoded!");
           }
       }
    }

    VideoDecoder.m

    - (id)initWithCodec:(enum VideoCodecType)codecType
            colorSpace:(enum VideoColorSpace)colorSpace
                 width:(int)width
                height:(int)height
           privateData:(NSData*)privateData {
       if(self = [super init]) {

           codec = avcodec_find_decoder(CODEC_ID_H264);
           codecCtx = avcodec_alloc_context();

           // Note: for H.264 RTSP streams, the width and height are usually not specified (width and height are 0).  
           // These fields will become filled in once the first frame is decoded and the SPS is processed.
           codecCtx->width = width;
           codecCtx->height = height;

           codecCtx->extradata = av_malloc([privateData length]);
           codecCtx->extradata_size = [privateData length];
           [privateData getBytes:codecCtx->extradata length:codecCtx->extradata_size];
           codecCtx->pix_fmt = PIX_FMT_RGBA;
    #ifdef SHOW_DEBUG_MV
           codecCtx->debug_mv = 0xFF;
    #endif

           srcFrame = avcodec_alloc_frame();
           dstFrame = avcodec_alloc_frame();

           int res = avcodec_open(codecCtx, codec);
           if (res < 0)
           {
               NSLog(@"Failed to initialize decoder");
           }

       }

       return self;    
    }

    - (void)decodeFrame:(NSData*)frameData {


       AVPacket packet = {0};
       packet.data = (uint8_t*)[frameData bytes];
       packet.size = [frameData length];

       int frameFinished=0;
       NSLog(@"Packet size===>%d",packet.size);
       // Is this a packet from the video stream?
       if(packet.stream_index==0)
       {
           int res = avcodec_decode_video2(codecCtx, srcFrame, &frameFinished, &packet);
           NSLog(@"Res value===>%d",res);
           NSLog(@"frame data===>%d",(int)srcFrame->data);
           if (res < 0)
           {
               NSLog(@"Failed to decode frame");
           }
       }
       else
       {
           NSLog(@"No video stream found");
       }


       // Need to delay initializing the output buffers because we don't know the dimensions until we decode the first frame.
       if (!outputInit) {
           if (codecCtx->width > 0 && codecCtx->height > 0) {
    #ifdef _DEBUG
               NSLog(@"Initializing decoder with frame size of: %dx%d", codecCtx->width, codecCtx->height);
    #endif

               outputBufLen = avpicture_get_size(PIX_FMT_RGBA, codecCtx->width, codecCtx->height);
               outputBuf = av_malloc(outputBufLen);

               avpicture_fill((AVPicture*)dstFrame, outputBuf, PIX_FMT_RGBA, codecCtx->width, codecCtx->height);

               convertCtx = sws_getContext(codecCtx->width, codecCtx->height, codecCtx->pix_fmt,  codecCtx->width,
                                           codecCtx->height, PIX_FMT_RGBA, SWS_FAST_BILINEAR, NULL, NULL, NULL);

               outputInit = YES;
               frameFinished=1;
           }
           else {
               NSLog(@"Could not get video output dimensions");
           }
       }

       if (frameFinished)
           frameReady = YES;

    }

    The console shows the following:

    2011-05-16 20:16:04.223 RTSPTest1[41226:207] Packet size===>359
    [h264 @ 0x5815c00] no frame!
    2011-05-16 20:16:04.223 RTSPTest1[41226:207] Res value===>-1
    2011-05-16 20:16:04.224 RTSPTest1[41226:207] frame data===>101791200
    2011-05-16 20:16:04.224 RTSPTest1[41226:207] Failed to decode frame
    2011-05-16 20:16:04.225 RTSPTest1[41226:207] decoded!
    2011-05-16 20:16:04.226 RTSPTest1[41226:207] Packet size===>424
    [h264 @ 0x5017c00] no frame!
    2011-05-16 20:16:04.226 RTSPTest1[41226:207] Res value===>-1
    2011-05-16 20:16:04.227 RTSPTest1[41226:207] frame data===>81002704
    2011-05-16 20:16:04.227 RTSPTest1[41226:207] Failed to decode frame
    2011-05-16 20:16:04.228 RTSPTest1[41226:207] decoded!
    2011-05-16 20:16:04.229 RTSPTest1[41226:207] Packet size===>424
    [h264 @ 0x581d000] no frame!
    2011-05-16 20:16:04.229 RTSPTest1[41226:207] Res value===>-1
    2011-05-16 20:16:04.230 RTSPTest1[41226:207] frame data===>101791616
    2011-05-16 20:16:04.230 RTSPTest1[41226:207] Failed to decode frame
    2011-05-16 20:16:04.231 RTSPTest1[41226:207] decoded!
    . . . .  .

    But the simulator shows nothing.

    What's wrong with my code?

    Help me solve this problem.

    Thanks for your answers.
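    A hedged note on the log above: avcodec_decode_video2() typically returns -1 with "no frame!" when the decoder has never received the H.264 parameter sets (SPS/PPS); in the code shown they arrive neither via extradata (privateData is nil) nor in-band, and bare NAL units from an RTSP depacketizer also lack Annex B start codes. A minimal sketch of one common fix, with annexb_wrap as a hypothetical helper:

    // Prepend the Annex B start code so libavcodec's H.264 parser can find
    // NAL unit boundaries; the SPS and PPS NALs (e.g. from the RTSP SDP)
    // must be fed through the decoder the same way before the first frame.
    static NSData *annexb_wrap(NSData *nal)
    {
        static const uint8_t start_code[4] = {0x00, 0x00, 0x00, 0x01};
        NSMutableData *out = [NSMutableData dataWithBytes:start_code
                                                   length:sizeof(start_code)];
        [out appendData:nal];
        return out;
    }

    // Usage in didReceiveFrame: (note also that the decoder should be created
    // once, not re-allocated for every incoming frame as in the code above):
    [mConverter decodeFrame:annexb_wrap(frameData)];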

  • Piwik Mobile is now available!

    18 August 2010, by SteveG

    After a few months of development, the Piwik team is proud to present its mobile client. Piwik Mobile is already available in the app markets for phones running iOS (such as iPhone, iPod and iPad) or Android (1.6 or later). Piwik Mobile in the markets: Android: http://www.androidpit.com/en/android/market/apps/app/org.piwik.mobile/Piwik-Mobile-Beta iOS: http://itunes.apple.com/us/app/piwikmobile/id385536442?mt=8 Piwik Mobile was developed using [...]
