Media (0)

No media matching your criteria is available on this site.

Other articles (51)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administrer" section of the site.
    From there, the navigation menu gives access to a "Gestion des langues" section where support for new languages can be enabled.
    Each newly added language can still be disabled as long as no object has been created in that language; once one has, the language is greyed out in the configuration and (...)

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; and creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Helping to translate it

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
    This is done through SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
    At the moment MediaSPIP is only available in French and (...)

On other sites (4310)

  • ffmpeg ffserver - create a mosaic from two 720p webcam feeds

    28 July 2015, by der_felix

    For a project I would like to take the video feeds (no audio) of two Logitech C920 webcams, put them side by side and stream them.
    The C920 can compress the video feed with H.264 itself (if enabled) and delivers 1080p at up to 30 fps.
    The stream is then loaded in an Android app via an ffmpeg library and rendered to the screen.
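
    (For reference, whether a camera actually exposes its built-in H.264 stream can be checked with v4l2-ctl from the v4l-utils package — the device path below is just a placeholder:)

    v4l2-ctl -d /dev/video0 --list-formats-ext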

    What I already know:
    I know that I can take multiple streams or input files and create a mosaic stream via the filter_complex option.
    HTTP and H.264 seem to be good choices for streaming, but other configurations are also welcome if they are faster/better.
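
    (For reference, a standalone, non-ffserver version of such a mosaic command might look roughly like the untested sketch below; the device paths, the -framerate option and the output file are placeholders, and each input gets its own input options before its -i:)

    ffmpeg \
      -f v4l2 -input_format h264 -video_size 1280x720 -framerate 30 -i /dev/video0 \
      -f v4l2 -input_format h264 -video_size 1280x720 -framerate 30 -i /dev/video1 \
      -filter_complex "nullsrc=size=2560x720 [base]; [0:v] setpts=PTS-STARTPTS [left]; [1:v] setpts=PTS-STARTPTS [right]; [base][left] overlay=shortest=1 [tmp1]; [tmp1][right] overlay=shortest=1:x=1280" \
      -c:v libx264 -preset ultrafast -f mpegts mosaic.ts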

    The question:
    How can I start the cameras with v4l2, set the camera resolution and the camera-internal encoding, and use these streams to create the mosaic?
    The mosaic should be unscaled (= 2560x720 px).

    I also very often get the error code 256, but I couldn't find out what it means.

    The system: a laptop with USB 3, Ubuntu 15.04 x64, ffmpeg 2.7.1 and ffserver 2.5.7.

    Thanks for your help.

    ffserver config:

    HTTPPort 8080                
    HTTPBindAddress 0.0.0.0      
    MaxHTTPConnections 2000  
    MaxClients 1000        
    MaxBandwidth 50000
    CustomLog -      
    #NoDaemon      

    <feed>        

    File /tmp/feed1.ffm
    Launch ffmpeg -f v4l2 - input_format h264 -i /dev/video0 -i /dev/video1 -size 1280x720 -r 30 -filter_complex "nullsrc=size=2560x720 [base]; [0:v] setpts=PTS-STARTPTS [left]; [1:v] setpts=PTS-STARTPTS [right]; [base][left] overlay=shortest=1 [tmp1]; [tmp1][right] overlay=shortest=1:x=1280"  -c:v libx264 -f mpegts

    </feed>

    <stream>

    Feed feed1.ffm
    Format mpegts      
    VideoBitRate 1024  
    #VideoBufferSize 1024
    VideoFrameRate 30      
    #VideoSize hd720      
    VideoSize 2560x720
    #VideoIntraOnly        
    #VideoGopSize 12      
    VideoCodec libx264      
    NoAudio            
    VideoQMin 3        
    VideoQMax 31
    NoDefaults

    </stream>

    <stream>        
      Format status
      #Only allow local people to get the status
      ACL allow localhost
      ACL allow 192.168.0.0 192.168.255.255
    </stream>

    output:

    ubuntu@ubuntu:~$ ffserver
    ffserver version 2.5.7-0ubuntu0.15.04.1 Copyright (c) 2000-2015 the FFmpeg developers
     built with gcc 4.9.2 (Ubuntu 4.9.2-10ubuntu13)
     configuration: --prefix=/usr --extra-version=0ubuntu0.15.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --shlibdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --enable-shared --disable-stripping --enable-avresample --enable-avisynth --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libshine --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libwavpack --enable-libwebp --enable-libxvid --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzvbi --enable-libzmq --enable-frei0r --enable-libvpx --enable-libx264 --enable-libsoxr --enable-gnutls --enable-openal --enable-libopencv --enable-librtmp --enable-libx265
     libavutil      54. 15.100 / 54. 15.100
     libavcodec     56. 13.100 / 56. 13.100
     libavformat    56. 15.102 / 56. 15.102
     libavdevice    56.  3.100 / 56.  3.100
     libavfilter     5.  2.103 /  5.  2.103
     libavresample   2.  1.  0 /  2.  1.  0
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  3.100 / 53.  3.100
    Tue Jul 28 10:13:44 2015 FFserver started.
    Tue Jul 28 10:13:44 2015 Launch command line: ffmpeg -f v4l2 - input_format h264 -i /dev/video0 -i /dev/video1 -size 1280x720 -r 30 -filter_complex nullsrc=size=2560x720 [base]; [0:v] setpts=PTS-STARTPTS [left]; [1:v] setpts=PTS-STARTPTS [right]; [base][left] overlay=shortest=1 [tmp1]; [tmp1][right] overlay=shortest=1:x=1280 -c:v libx264 -f mpegts http://127.0.0.1:8080/feed1.ffm
    feed1.ffm: Pid 17388 exited with status 256 after 0 seconds

    Hey guys!

    Here is our plan B for the mosaic stream.

    Alternative config:

    HTTPPort 8080        
    HTTPBindAddress 0.0.0.0  
    MaxHTTPConnections 2000  
    MaxClients 1000        
    MaxBandwidth 50000      
    CustomLog -        

    <feed>
    File /tmp/feedlinks.ffm
    Launch ffmpeg -f v4l2 -input_format h264 -vcodec h264 -i /dev/video0 -video_size 1280x720 -r 30
    </feed>

    <feed>
    File /tmp/feedrechts.ffm
    Launch ffmpeg -f v4l2 -input_format h264 -vcodec h264 -i /dev/video1 -video_size 1280x720 -r 30
    </feed>

    <stream>
    Feed feedlinks.ffm
    Format mpegts
    VideoBitRate 512
    VideoFrameRate 30
    VideoSize hd720
    VideoCodec libx264
    NoAudio
    VideoQMin 3
    VideoQMax 31
    </stream>

    <stream>
    Feed feedrechts.ffm
    Format mpegts
    VideoBitRate 512
    VideoFrameRate 30
    VideoSize hd720
    VideoCodec libx264
    NoAudio
    VideoQMin 3
    VideoQMax 31
    </stream>

    <feed>
    File /tmp/feedmosaic.ffm
    Launch ffmpeg -i http://localhost:8080/testlinks.mpg -i http://localhost:8080/testrechts.mpg -filter_complex "nullsrc=size=2560x720 [base]; [0:v] setpts=PTS-STARTPTS [left]; [1:v] setpts=PTS-STARTPTS [right]; [base][left] overlay=shortest=1 [tmp1]; [tmp1][right] overlay=shortest=1:x=1280" -c:v libx264 -preset ultrafast -f mpegts
    </feed>

    <stream>
    Feed feedmosaic.ffm
    Format mpegts         # Format of the stream
    VideoFrameRate 30      # Number of frames per second
    VideoSize 2560x720
    VideoCodec libx264      # Choose your codecs.
    NoAudio            # Suppress audio
    VideoQMin 3         # Videoquality ranges from 1 - 31 (worst to best)
    VideoQMax 31
    NoDefaults
    </stream>

    <stream>           # Server status URL
      Format status
      # Only allow local people to get the status
      ACL allow localhost
      ACL allow 192.168.0.0 192.168.255.255
      ACL allow 192.168.178.0 192.168.255.255
    </stream>

    And this is the new output:

    ubuntu@ubuntu:~$ ffserver
    ffserver version 2.5.7-0ubuntu0.15.04.1 Copyright (c) 2000-2015 the FFmpeg developers
     built with gcc 4.9.2 (Ubuntu 4.9.2-10ubuntu13)
     configuration: --prefix=/usr --extra-version=0ubuntu0.15.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --shlibdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --enable-shared --disable-stripping --enable-avresample --enable-avisynth --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libshine --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libwavpack --enable-libwebp --enable-libxvid --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzvbi --enable-libzmq --enable-frei0r --enable-libvpx --enable-libx264 --enable-libsoxr --enable-gnutls --enable-openal --enable-libopencv --enable-librtmp --enable-libx265
     libavutil      54. 15.100 / 54. 15.100
     libavcodec     56. 13.100 / 56. 13.100
     libavformat    56. 15.102 / 56. 15.102
     libavdevice    56.  3.100 / 56.  3.100
     libavfilter     5.  2.103 /  5.  2.103
     libavresample   2.  1.  0 /  2.  1.  0
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  3.100 / 53.  3.100
    /etc/ffserver.conf:44: Setting default value for video bit rate tolerance = 128000. Use NoDefaults to disable it.
    /etc/ffserver.conf:44: Setting default value for video rate control equation = tex^qComp. Use NoDefaults to disable it.
    /etc/ffserver.conf:44: Setting default value for video max rate = 1024000. Use NoDefaults to disable it.
    /etc/ffserver.conf:44: Setting default value for video buffer size = 1024000. Use NoDefaults to disable it.
    /etc/ffserver.conf:61: Setting default value for video bit rate tolerance = 128000. Use NoDefaults to disable it.
    /etc/ffserver.conf:61: Setting default value for video rate control equation = tex^qComp. Use NoDefaults to disable it.
    /etc/ffserver.conf:61: Setting default value for video max rate = 1024000. Use NoDefaults to disable it.
    /etc/ffserver.conf:61: Setting default value for video buffer size = 1024000. Use NoDefaults to disable it.
    Tue Jul 28 11:13:01 2015 Codec bitrates do not match for stream 0
    Tue Jul 28 11:13:01 2015 FFserver started.
    Tue Jul 28 11:13:01 2015 Launch command line: ffmpeg -f v4l2 -input_format h264 -vcodec h264 -i /dev/video0 -video_size 1280x720 -r 30 http://127.0.0.1:8080/feedlinks.ffm
    Tue Jul 28 11:13:01 2015 Launch command line: ffmpeg -f v4l2 -input_format h264 -vcodec h264 -i /dev/video1 -video_size 1280x720 -r 30 http://127.0.0.1:8080/feedrechts.ffm
    Tue Jul 28 11:13:01 2015 Launch command line: ffmpeg -i http://localhost:8080/testlinks.mpg -i http://localhost:8080/testrechts.mpg -filter_complex nullsrc=size=2560x720 [base]; [0:v] setpts=PTS-STARTPTS [left]; [1:v] setpts=PTS-STARTPTS [right]; [base][left] overlay=shortest=1 [tmp1]; [tmp1][right] overlay=shortest=1:x=1280 -c:v libx264 -preset ultrafast -f mpegts http://127.0.0.1:8080/feedmosaic.ffm
    Tue Jul 28 11:13:02 2015 127.0.0.1 - - [GET] "/feedlinks.ffm HTTP/1.1" 200 4175
    Tue Jul 28 11:13:02 2015 127.0.0.1 - - [GET] "/feedrechts.ffm HTTP/1.1" 200 4175
    Tue Jul 28 11:13:18 2015 127.0.0.1 - - [POST] "/feedmosaic.ffm HTTP/1.1" 200 4096
    Tue Jul 28 11:13:18 2015 127.0.0.1 - - [GET] "/testlinks.mpg HTTP/1.1" 200 2130291
    Tue Jul 28 11:13:18 2015 127.0.0.1 - - [GET] "/testrechts.mpg HTTP/1.1" 200 1244999
    feedmosaic.ffm: Pid 18775 exited with status 256 after 17 seconds

    Thanks for your help!

  • Compute PTS and DTS correctly to sync audio and video ffmpeg C++

    14 August 2015, by Kaidul Islam

    I am trying to mux H.264-encoded data and G.711 PCM data into a MOV container. I am creating an AVPacket from the encoded data, and initially the PTS and DTS values of the video/audio frames are equal to AV_NOPTS_VALUE, so I calculated the DTS using the current time. My code:

    bool AudioVideoRecorder::WriteVideo(const unsigned char *pData, size_t iDataSize, bool const bIFrame) {
       .....................................
       .....................................
       .....................................
       AVPacket pkt = {0};
       av_init_packet(&pkt);
       int64_t dts = av_gettime();
       dts = av_rescale_q(dts, (AVRational){1, 1000000}, m_pVideoStream->time_base);
       int duration = 90000 / VIDEO_FRAME_RATE;
       if(m_prevVideoDts > 0LL) {
           duration = dts - m_prevVideoDts;
       }
       m_prevVideoDts = dts;

       pkt.pts = AV_NOPTS_VALUE;
       pkt.dts = m_currVideoDts;
       m_currVideoDts += duration;
       pkt.duration = duration;
       if(bIFrame) {
           pkt.flags |= AV_PKT_FLAG_KEY;
       }
       pkt.stream_index = m_pVideoStream->index;
       pkt.data = (uint8_t*) pData;
       pkt.size = iDataSize;

       int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);

       if(ret < 0) {
           LogErr("Writing video frame failed.");
           return false;
       }

       Log("Writing video frame done.");

       av_free_packet(&pkt);
       return true;
    }

    bool AudioVideoRecorder::WriteAudio(const unsigned char *pEncodedData, size_t iDataSize) {
       .................................
       .................................
       .................................
       AVPacket pkt = {0};
       av_init_packet(&pkt);

       int64_t dts = av_gettime();
       dts = av_rescale_q(dts, (AVRational){1, 1000000}, (AVRational){1, 90000});
       int duration = AUDIO_STREAM_DURATION; // 20
       if(m_prevAudioDts > 0LL) {
           duration = dts - m_prevAudioDts;
       }
       m_prevAudioDts = dts;
       pkt.pts = AV_NOPTS_VALUE;
       pkt.dts = m_currAudioDts;
       m_currAudioDts += duration;
       pkt.duration = duration;

       pkt.stream_index = m_pAudioStream->index;
       pkt.flags |= AV_PKT_FLAG_KEY;
       pkt.data = (uint8_t*) pEncodedData;
       pkt.size = iDataSize;

       int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);
       if(ret < 0) {
           LogErr("Writing audio frame failed: %d", ret);
           return false;
       }

       Log("Writing audio frame done.");

       av_free_packet(&pkt);
       return true;
    }

    And I added the streams like this:

    AVStream* AudioVideoRecorder::AddMediaStream(enum AVCodecID codecID) {
       ................................
       .................................  
       pStream = avformat_new_stream(m_pFormatCtx, codec);
       if (!pStream) {
           LogErr("Could not allocate stream.");
           return NULL;
       }
       pStream->id = m_pFormatCtx->nb_streams - 1;
       pCodecCtx = pStream->codec;
       pCodecCtx->codec_id = codecID;

       switch(codec->type) {
       case AVMEDIA_TYPE_VIDEO:
           pCodecCtx->bit_rate = VIDEO_BIT_RATE;
           pCodecCtx->width = PICTURE_WIDTH;
           pCodecCtx->height = PICTURE_HEIGHT;
           pStream->time_base = (AVRational){1, 90000};
           pStream->avg_frame_rate = (AVRational){90000, 1};
           pStream->r_frame_rate = (AVRational){90000, 1}; // though the frame rate is variable and around 15 fps
           pCodecCtx->pix_fmt = STREAM_PIX_FMT;
           m_pVideoStream = pStream;
           break;

       case AVMEDIA_TYPE_AUDIO:
           pCodecCtx->sample_fmt = AV_SAMPLE_FMT_S16;
           pCodecCtx->bit_rate = AUDIO_BIT_RATE;
           pCodecCtx->sample_rate = AUDIO_SAMPLE_RATE;
           pCodecCtx->channels = 1;
           m_pAudioStream = pStream;
           break;

       default:
           break;
       }

       /* Some formats want stream headers to be separate. */
       if (m_pOutputFmt->flags & AVFMT_GLOBALHEADER)
           m_pFormatCtx->flags |= CODEC_FLAG_GLOBAL_HEADER;

       return pStream;
    }

    There are several problems with this calculation:

    1. The video is laggy and falls behind the audio increasingly over time.

    2. Suppose an audio frame arrives (WriteAudio(..)) a little late, say by 3 seconds. The late frame should then start playing with a 3-second delay, but it does not: the delayed frame is played back to back with the previous frame.

    3. Sometimes I record for 40 seconds but the file duration is more like 2 minutes; audio/video plays for only about 40 seconds, the rest of the file contains nothing, and the seek bar jumps to the end right after 40 seconds (tested in VLC).

    EDIT:

    Following Ronald S. Bultje's suggestion, this is what I've understood:

    m_pAudioStream->time_base = (AVRational){1, 9000}; // actually no need to set as 9000 is already default value for audio as you said
    m_pVideoStream->time_base = (AVRational){1, 9000};

    should be set, so that both the audio and video streams are now in the same time-base units.

    And for video:

    ...................
    ...................

    int64_t dts = av_gettime(); // get current time in microseconds
    dts *= 9000;
    dts /= 1000000; // 1 second = 10^6 microseconds
    pkt.pts = AV_NOPTS_VALUE; // is it okay?
    pkt.dts = dts;
    // and no need to set pkt.duration, right?

    And for audio (exactly the same as video, right?):

    ...................
    ...................

    int64_t dts = av_gettime(); // get current time in microseconds
    dts *= 9000;
    dts /= 1000000; // 1 second = 10^6 microseconds
    pkt.pts = AV_NOPTS_VALUE; // is it okay?
    pkt.dts = dts;
    // and no need to set pkt.duration, right?

    And I think they are now effectively sharing the same current DTS, right? Please correct me if I am wrong anywhere or missing anything.
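
    (For reference, the same conversion can also be written with av_rescale_q(), which additionally avoids 64-bit overflow when multiplying large microsecond timestamps — a small sketch, assuming the wall clock from av_gettime() remains the timestamp source:)

    int64_t now_us = av_gettime();                     // wall-clock time in microseconds
    pkt.dts = av_rescale_q(now_us,
                           (AVRational){1, 1000000},   // source units: microseconds
                           m_pVideoStream->time_base); // destination units: the stream's time base
    pkt.pts = AV_NOPTS_VALUE;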

    Also, if I want to use (AVRational){1, frameRate} as the video stream time base and (AVRational){1, sampleRate} as the audio stream time base, what should the correct code look like?

    EDIT 2.0:

    m_pAudioStream->time_base = (AVRational){1, VIDEO_FRAME_RATE};
    m_pVideoStream->time_base = (AVRational){1, VIDEO_FRAME_RATE};

    And

    bool AudioVideoRecorder::WriteAudio(const unsigned char *pEncodedData, size_t iDataSize) {
       ...........................
       ......................
       AVPacket pkt = {0};
       av_init_packet(&pkt);

       int64_t dts = av_gettime() / 1000; // convert into millisecond
       dts = dts * VIDEO_FRAME_RATE;
       if(m_dtsOffset < 0) {
           m_dtsOffset = dts;
       }

       pkt.pts = AV_NOPTS_VALUE;
       pkt.dts = (dts - m_dtsOffset);

       pkt.stream_index = m_pAudioStream->index;
       pkt.flags |= AV_PKT_FLAG_KEY;
       pkt.data = (uint8_t*) pEncodedData;
       pkt.size = iDataSize;

       int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);
       if(ret < 0) {
           LogErr("Writing audio frame failed: %d", ret);
           return false;
       }

       Log("Writing audio frame done.");

       av_free_packet(&pkt);
       return true;
    }

    bool AudioVideoRecorder::WriteVideo(const unsigned char *pData, size_t iDataSize, bool const bIFrame) {
       ........................................
       .................................
       AVPacket pkt = {0};
       av_init_packet(&pkt);
       int64_t dts = av_gettime() / 1000;
       dts = dts * VIDEO_FRAME_RATE;
       if(m_dtsOffset < 0) {
           m_dtsOffset = dts;
       }
       pkt.pts = AV_NOPTS_VALUE;
       pkt.dts = (dts - m_dtsOffset);

       if(bIFrame) {
           pkt.flags |= AV_PKT_FLAG_KEY;
       }
       pkt.stream_index = m_pVideoStream->index;
       pkt.data = (uint8_t*) pData;
       pkt.size = iDataSize;

       int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);

       if(ret < 0) {
           LogErr("Writing video frame failed.");
           return false;
       }

       Log("Writing video frame done.");

       av_free_packet(&pkt);
       return true;
    }

    Is this last change okay? The video and audio seem to be synced now. The only problem is that the audio is played without any delay, regardless of whether the packet arrived late.
    Like this:

    packet arrival: 1 2 3 4 ... (then the next frame arrives after 3 sec) ... 5

    audio played: 1 2 3 4 (no delay) 5

    EDIT 3.0:

    Zeroed audio sample data:

    AVFrame* pSilentData;
    pSilentData = av_frame_alloc();
    memset(&pSilentData->data[0], 0, iDataSize);

    pkt.data = (uint8_t*) pSilentData;
    pkt.size = iDataSize;

    av_freep(&pSilentData->data[0]);
    av_frame_free(&pSilentData);

    Is this okay? But after writing this into the file container, there is a repetitive "dot dot" noise when the media is played. What's the problem?

    EDIT 4.0:

    Well, for µ-law audio the zero value is represented as 0xff. So:

    memset(&pSilentData->data[0], 0xff, iDataSize);

    solves my problem.
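
    (For reference, combining EDIT 3.0 and EDIT 4.0, a plain byte buffer is enough for the silent µ-law packet — a sketch using av_malloc/av_free; the AVFrame is not actually needed for this:)

    // silent G.711 µ-law frame: 0xff encodes zero amplitude
    uint8_t *pSilence = (uint8_t *) av_malloc(iDataSize);
    memset(pSilence, 0xff, iDataSize);

    pkt.data = pSilence;
    pkt.size = iDataSize;
    // ... av_interleaved_write_frame(m_pFormatCtx, &pkt) as before ...

    av_free(pSilence);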

  • ffmpeg + ffserver : "Broken ffmpeg default settings detected"

    18 October 2012, by Chris Nolet

    I'm just trying to connect ffmpeg to ffserver and stream rawvideo.

    I keep getting the error "broken ffmpeg default settings detected" from libx264, and then "Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height" from ffmpeg before it exits.

    I'm launching ffmpeg with the command: ffmpeg -f x11grab -s 320x480 -r 10 -i :0.0 -tune zerolatency http://localhost:8090/feed1.ffm

    My ffserver.conf file looks like this:

    Port 8090
    BindAddress 0.0.0.0
    MaxHTTPConnections 2000
    MaxClients 1000
    MaxBandwidth 1000
    CustomLog -
    NoDaemon

    <feed>
     ACL allow 127.0.0.1
    </feed>

    <stream>
     Feed feed1.ffm
     Format asf

     NoAudio

     VideoBitRate 128
     VideoBufferSize 400
     VideoFrameRate 24
     VideoSize 320x480

     VideoGopSize 12

     VideoQMin 1
     VideoQMax 31

     VideoCodec libx264
    </stream>

    <stream>
     Format status
    </stream>
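
    (For reference, since the encoder settings for an ffm feed are taken from ffserver.conf rather than from the ffmpeg command line, one hedged direction is to spell out the libx264 parameters explicitly in the stream block — a sketch only; the AVOptionVideo directives and the values below are assumptions, not verified against this ffserver build:)

    <stream>
     Feed feed1.ffm
     Format asf

     NoAudio

     VideoCodec libx264
     VideoBitRate 128
     VideoBufferSize 400
     VideoFrameRate 24
     VideoSize 320x480

     AVOptionVideo flags +global_header
     AVOptionVideo qmin 10
     AVOptionVideo qmax 51
     AVOptionVideo me_range 16
    </stream>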

    And the full output is:

    ffmpeg version N-45614-g364c60b Copyright (c) 2000-2012 the FFmpeg developers
     built on Oct 17 2012 04:34:04 with Apple clang version 4.1 (tags/Apple/clang-421.11.65) (based on LLVM 3.1svn)
     configuration: --enable-shared --enable-libx264 --enable-libmp3lame --enable-x11grab --enable-gpl --enable-version3 --enable-nonfree --enable-hardcoded-tables --cc=/usr/bin/clang --host-cflags='-Os -w -pipe -march=native -Qunused-arguments -mmacosx-version-min=10.7' --extra-cflags='-x objective-c' --extra-ldflags='-framework Foundation -framework Cocoa -framework CoreServices -framework ApplicationServices -lobjc'
     libavutil      51. 76.100 / 51. 76.100
     libavcodec     54. 66.100 / 54. 66.100
     libavformat    54. 32.101 / 54. 32.101
     libavdevice    54.  3.100 / 54.  3.100
     libavfilter     3. 19.103 /  3. 19.103
     libswscale      2.  1.101 /  2.  1.101
     libswresample   0. 16.100 /  0. 16.100
     libpostproc    52.  1.100 / 52.  1.100
    [x11grab @ 0x7f87dc01e200] device: :0.0 -> display: :0.0 x: 0 y: 0 width: 320 height: 480
    [x11grab @ 0x7f87dc01e200] Estimating duration from bitrate, this may be inaccurate
    Input #0, x11grab, from ':0.0':
     Duration: N/A, start: 1350517708.386699, bitrate: 49152 kb/s
       Stream #0:0: Video: rawvideo (BGRA / 0x41524742), bgra, 320x480, 49152 kb/s, 10 tbr, 1000k tbn, 10 tbc
    [tcp @ 0x7f87dc804120] TCP connection to localhost:8090 failed: Connection refused
    [tcp @ 0x7f87dc804b20] TCP connection to localhost:8090 failed: Connection refused
    [libx264 @ 0x7f87dd801000] broken ffmpeg default settings detected
    [libx264 @ 0x7f87dd801000] use an encoding preset (e.g. -vpre medium)
    [libx264 @ 0x7f87dd801000] preset usage: -vpre <speed> -vpre <profile>
    [libx264 @ 0x7f87dd801000] speed presets are listed in x264 --help
    [libx264 @ 0x7f87dd801000] profile is optional; x264 defaults to high
    Output #0, ffm, to 'http://localhost:8090/feed1.ffm':
     Metadata:
       creation_time   : now
       Stream #0:0: Video: h264, yuv420p, 160x128, q=2-31, 128 kb/s, 1000k tbn, 10 tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (rawvideo -> libx264)
    Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height

    Any help much appreciated :)