
Other articles (68)

  • User profiles

    12 April 2011

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in on the site.
    The user can access profile editing from their author page; a link in the navigation, "Modifier votre profil" ("Edit your profile"), is (...)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administrer" (administration) section of the site.
    From there, in the navigation menu, you can reach a "Gestion des langues" (language management) section that lets you enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language. In that case, it is greyed out in the configuration and (...)

  • XMP PHP

    13 May 2011

    According to Wikipedia, XMP means:
    Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it manages a set of dynamic tags for use within the Semantic Web.
    XMP makes it possible to record, as an XML document, information about a file: title, author, history (...)

On other sites (6271)

  • Things I Have Learned About Emscripten

    1 September 2015, by Multimedia Mike — Cirrus Retro

    3 years ago, I released my Game Music Appreciation project, a website with a ludicrously uninspired title which allowed users a relatively frictionless method to experience a range of specialized music files related to old video games. However, the site required use of a special Chrome plugin. Ever since that initial release, my #1 most requested feature has been for a pure JavaScript version of the music player.

    “Impossible!” I exclaimed. “There’s no way JS could ever run fast enough to run these CPU emulators and audio synthesizers in real time, and allow for the visualization that I demand!” Well, I’m pleased to report that I have proved me wrong. I recently quietly launched a new site with what I hope is a catchier title, meant to evoke a cloud-based retro-music-as-a-service product: Cirrus Retro. Right now, it’s basically the same as the old site, but without the wonky Chrome-specific technology.

    Along the way, I’ve learned a few things about using Emscripten that I thought might be useful to share with other people who wish to embark on a similar journey. This is geared more towards someone who has a stronger low-level background (such as C/C++) vs. high-level (like JavaScript).

    General Goals
    Do you want to cross-compile an entire desktop application, one that relies on an extensive GUI toolkit? That might be difficult (though I believe there is a path for porting Qt code directly with Emscripten). Your better wager might be to abstract out the core logic and processes of the program and then create a new web UI to access them.

    Do you want to compile a game that basically just paints stuff to a 2D canvas? You’re in luck! Emscripten has a porting path for SDL. Make a version of your C/C++ software that targets SDL (generally not a tall order) and then compile that with Emscripten.

    Do you just want to cross-compile some functionality that lives in a library? That’s what I’ve done with the Cirrus Retro project. For this, plan to compile the library into a JS file that exports some public functions that other, higher-level, native JS (i.e., JS written by a human and not a computer) will invoke.
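    For the library route, the JS side of that handshake can look something like the following minimal sketch. None of this is from the project itself: player is the compiled module object, and get_sample_rate() stands in for whatever hypothetical function your library exports.

    // exported C functions appear on the module object with a leading underscore
    var rate = player._get_sample_rate();

    // or, if cwrap() is exported on the module object, wrap a function once
    // and call it like normal JS
    var getSampleRate = player.cwrap('get_sample_rate', 'number', []);
    var rate2 = getSampleRate();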

    Memory Levels
    When porting C/C++ software to JavaScript using Emscripten, you have to think on 2 different levels. Or perhaps you need to force JavaScript into a low level C lens, especially if you want to write native JS code that will interact with Emscripten-compiled code. This often means somehow allocating chunks of memory via JS and passing them to the Emscripten-compiled functions. And you wouldn’t believe the type of gymnastics you need to execute to get native JS and Emscripten-compiled JS to cooperate.

    “Emscripten: Pointers and Pointers” is the best (and, really, ONLY) explanation I could find for understanding the basic mechanics of this process, at least when I started this journey. However, there’s a mistake in the explanation that left me confused for a little while, and I’m at a loss to contact the author (doesn’t anyone post a simple email address anymore?).

    Per the best of my understanding, Emscripten allocates a large JS array and calls that the memory space that the compiled C/C++ code is allowed to operate in. A pointer in C/C++ code will just be an index into that mighty array. Really, that’s not too far off from how a low-level program process is supposed to view memory: as a flat array.
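    Concretely, a pointer handed back from compiled code is just a number you can use to index the module’s heap views. A tiny hypothetical sketch (player is the compiled module object that appears in the code below; _get_buffer() is an invented exported function):

    var ptr = player._get_buffer();        // a C char* comes back as an integer offset
    var firstByte = player.HEAPU8[ptr];    // the equivalent of *ptr in C
    var slice = player.HEAPU8.subarray(ptr, ptr + 16);  // a view of 16 bytes at ptr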

    Eventually, I just learned to cargo-cult my way through the memory allocation process. Here’s the JS code for allocating an Emscripten-compatible byte buffer, taken from my test harness (more on that later):

    var musicBuffer = fs.readFileSync(testSpec['filename']);
    var musicBufferBytes = new Uint8Array(musicBuffer);
    var bytesMalloc = player._malloc(musicBufferBytes.length);
    var bytes = new Uint8Array(player.HEAPU8.buffer, bytesMalloc, musicBufferBytes.length);
    bytes.set(new Uint8Array(musicBufferBytes.buffer));
    

    So, read the array of bytes from some input source, create a Uint8Array from the bytes, use the Emscripten _malloc() function to allocate enough bytes from the Emscripten memory array for the input bytes, then create a new array… then copy the bytes…

    You know what? It’s late and I can’t remember how it works exactly, but it does. It has been a few months since I touched that code (been fighting with front-end website tech since then). You write that memory allocation code enough times and it begins to make sense, and then you hope you don’t have to write it too many more times.
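    For completeness, the rest of that round trip goes roughly like this. The _malloc()/_free() pair is real Emscripten machinery; _load_music() is a hypothetical exported function invented for illustration:

    // hand the pointer (index) and length to the compiled code
    var result = player._load_music(bytesMalloc, musicBufferBytes.length);

    // the compiled code has its own copy now, so release the staging buffer
    player._free(bytesMalloc);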

    Multithreading
    You can’t port multithreaded code to JS via Emscripten. JavaScript has no notion of threads! If you don’t understand the computer science behind this limitation, a more thorough explanation is beyond the scope of this post. But trust me, I’ve thought about it a lot. In fact, the official Emscripten literature states that you should be able to port most any C/C++ code as long as 1) none of the code is proprietary (i.e., all the raw source is available); and 2) there are no threads.

    Yes, I read about the experimental pthreads support added to Emscripten recently. Don’t get too excited; that won’t be ready and widespread for a long time to come as it relies on a new browser API. In the meantime, figure out how to make your multithreaded C/C++ code run in a single thread if you want it to run in a browser.

    Printing Facility
    Eventually, getting software to work boils down to debugging, and the most primitive tool in many a programmer’s toolbox is the humble print statement. A print statement allows you to inspect a piece of a program’s state at key junctures. Eventually, when you try to cross-compile C/C++ code to JS using Emscripten, something is not going to work correctly in the generated JS “object code” and you need to understand what. You’ll be pleading for a method of just inspecting one variable deep in the original C/C++ code.

    I came up with this simple printf-workalike called emprintf():

    #ifndef EMPRINTF_H
    #define EMPRINTF_H

    #include <stdio.h>
    #include <stdarg.h>
    #include <emscripten.h>

    #define MAX_MSG_LEN 1000

    /* NOTE: Don't pass format strings that contain single quote (')
     * or newline characters. */
    static void emprintf(const char *format, ...)
    {
        char msg[MAX_MSG_LEN];
        char consoleMsg[MAX_MSG_LEN + 16];
        va_list args;

        /* create the string */
        va_start(args, format);
        vsnprintf(msg, MAX_MSG_LEN, format, args);
        va_end(args);

        /* wrap the string in a console.log('') statement */
        snprintf(consoleMsg, MAX_MSG_LEN + 16, "console.log('%s')", msg);

        /* send the final string to the JavaScript console */
        emscripten_run_script(consoleMsg);
    }

    #endif /* EMPRINTF_H */

    Put it in a file called “emprintf.h”. Include it in any C/C++ file where you need debugging visibility, use emprintf() as a replacement for printf(), and the output will magically show up on the browser’s JavaScript debug console. Heed the comments and don’t put any single quotes or newlines in strings, and keep them under 1000 characters. I didn’t say it was perfect, but it has helped me a lot in my Emscripten adventures.

    Optimization Levels
    Remember to turn on optimization when compiling. I have empirically found that optimizing for size (-Os) leads to the best performance all around, in addition to having the smallest size. Just be sure to specify some optimization level. If you don’t, the default is -O0 which offers horrible performance when running in JS.
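    Concretely, that just means adding the flag to your emcc invocation, e.g. something like “emcc mylib.c -Os -o mylib.js” (file names invented for illustration).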

    Static Compression For HTTP Delivery
    JavaScript code compresses pretty efficiently, even after it has been optimized for size using -Os. I routinely see compression ratios between 3.5:1 and 5:1 using gzip.

    Web servers in this day and age are supposed to be smart enough to detect when a requesting web browser can accept gzip-compressed data and do the compression on the fly. They’re even supposed to be smart enough to cache compressed output so the same content is not recompressed for each request. I would have to set up a series of tests to establish whether either of the foregoing assertions is correct, and I can’t be bothered. Instead, I took it into my own hands. The trick is to pre-compress the JS files and then instruct the webserver to serve these files with a ‘Content-Type’ of ‘application/javascript’ and a ‘Content-Encoding’ of ‘gzip’.

    1. Compress your large Emscripten-built JS files with ‘gzip’: ‘gzip compiled-code.js’
    2. Rename them from extension .js.gz to .jsgz
    3. Tell the webserver to deliver .jsgz files with the correct Content-Type and Content-Encoding headers

    To do that last step with Apache, specify these lines:

    AddType application/javascript jsgz
    AddEncoding gzip jsgz
    

    They belong in either a directory’s .htaccess file or in the sitewide configuration (/etc/apache2/mods-available/mime.conf works on my setup).

    Build System and Build Time Optimization
    Oh goodie, build systems! I had a very specific manner in which I wanted to build my JS modules using Emscripten. Can I possibly coerce any of the many popular build systems to do this? It has been a few months since I worked on this problem specifically, but I seem to recall that the build systems I tried to use would freak out at the prospect of compiling stuff to a final binary target of .js.

    I had high hopes for Bazel, which Google released while I was developing Cirrus Retro. Surely, this is software that has been battle-tested in the harshest conditions of one of the most prominent software-developing companies in the world, needing to take into account the most bizarre corner cases and still build efficiently and correctly every time. And I have little doubt that it fulfills the order. Similarly, I’m confident that Google also has a team of no fewer than 100 or so people dedicated to developing and supporting the project within the organization. When you only have, at best, 1-2 hours per night to work on projects like this, you prefer not to fight with such cutting edge technology and after losing 2 or 3 nights trying to make a go of Bazel, I eventually put it aside.

    I also tried to use Autotools. It failed horribly for me, mostly for my own carelessness and lack of early-project source control.

    After that, it was strictly vanilla makefiles with no real dependency management. But you know what helps in these cases? ccache! Or at least, it would if it didn’t fail with Emscripten.

    Quick tip: ccache has trouble with LLVM unless you set the CCACHE_CPP2 environment variable (e.g., “export CCACHE_CPP2=1”). I don’t remember the specifics, but it magically fixes things. Then, the lazy build process becomes “make clean && make”.

    Testing
    If you have never used Node.js, testing Emscripten-compiled JS code might be a good opportunity to start. I was able to use Node.js to great effect for testing the individually-compiled music player modules, wiring up a series of invocations using Python for a broader test suite (wouldn’t want to go too deep down the JS rabbit hole, after all).

    Be advised that Node.js doesn’t enjoy the same kind of JIT optimizations that the browser engines leverage. Thus, in the case of time critical code like, say, an audio synthesis library, the code might not run in real time. But as long as it produces the correct bitwise waveform, that’s good enough for continuous integration.
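    A bitwise check like that takes only a few lines of Node.js. A sketch, with invented file names and an invented renderWaveform() standing in for the compiled module under test:

    var fs = require("fs");

    var expected = fs.readFileSync("reference-waveform.bin");  // known-good output
    var actual = renderWaveform("song.spc");  // hypothetical; returns a Buffer

    if (Buffer.compare(expected, actual) !== 0) {
        console.log("FAIL: waveforms differ");
        process.exit(1);
    }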

    Also, if you have largely been a low-level programmer for your whole career and are generally unfamiliar with the world of single-threaded, event-driven, callback-oriented programming, you might be in for a bit of a shock. When I wanted to learn how to read the contents of a file in Node.js, this is the first tutorial I found on the matter. I thought the code presented was a parody of bad coding style:

    var fs = require("fs");
    var fileName = "foo.txt";

    fs.exists(fileName, function(exists) {
        if (exists) {
            fs.stat(fileName, function(error, stats) {
                fs.open(fileName, "r", function(error, fd) {
                    var buffer = new Buffer(stats.size);

                    fs.read(fd, buffer, 0, buffer.length, null, function(error, bytesRead, buffer) {
                        var data = buffer.toString("utf8", 0, buffer.length);

                        console.log(data);
                        fs.close(fd);
                    });
                });
            });
        }
    });

    Apparently, this kind of thing doesn’t raise an eyebrow in the JS world.

    Now, I understand and respect the JS programming model. But this was seriously frustrating when I first encountered it because a simple script like the one I was trying to write just has an ordered list of tasks to complete. When it asks for bytes from a file, it really has nothing better to do than to wait for the answer.

    Thankfully, it turns out that Node’s fs module includes synchronous versions of the various file access functions. So it’s all good.
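    For comparison, the synchronous version of the tutorial script above collapses to something like this (same foo.txt example, using the real fs.existsSync()/fs.readFileSync() calls):

    var fs = require("fs");
    var fileName = "foo.txt";

    // no callbacks: just ask for the bytes and wait for the answer
    if (fs.existsSync(fileName)) {
        var data = fs.readFileSync(fileName, "utf8");
        console.log(data);
    }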

    Conclusion
    I’m sure I missed or underexplained some things. But if other brave souls are interested in dipping their toes in the waters of Emscripten, I hope these tips will come in handy.

  • Compute PTS and DTS correctly to sync audio and video ffmpeg C++

    14 August 2015, by Kaidul Islam

    I am trying to mux H264 encoded data and G711 PCM data into a mov multimedia container. I am creating AVPacket from the encoded data, and initially the PTS and DTS values of the video/audio frames are AV_NOPTS_VALUE, so I calculated the DTS from the current time information. My code -

    bool AudioVideoRecorder::WriteVideo(const unsigned char *pData, size_t iDataSize, bool const bIFrame) {
       .....................................
       .....................................
       .....................................
       AVPacket pkt = {0};
       av_init_packet(&pkt);
       int64_t dts = av_gettime();
       dts = av_rescale_q(dts, (AVRational){1, 1000000}, m_pVideoStream->time_base);
       int duration = 90000 / VIDEO_FRAME_RATE;
       if(m_prevVideoDts > 0LL) {
           duration = dts - m_prevVideoDts;
       }
       m_prevVideoDts = dts;

       pkt.pts = AV_NOPTS_VALUE;
       pkt.dts = m_currVideoDts;
       m_currVideoDts += duration;
       pkt.duration = duration;
       if(bIFrame) {
           pkt.flags |= AV_PKT_FLAG_KEY;
       }
       pkt.stream_index = m_pVideoStream->index;
       pkt.data = (uint8_t*) pData;
       pkt.size = iDataSize;

       int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);

       if(ret < 0) {
           LogErr("Writing video frame failed.");
           return false;
       }

       Log("Writing video frame done.");

       av_free_packet(&pkt);
       return true;
    }

    bool AudioVideoRecorder::WriteAudio(const unsigned char *pEncodedData, size_t iDataSize) {
       .................................
       .................................
       .................................
       AVPacket pkt = {0};
       av_init_packet(&pkt);

       int64_t dts = av_gettime();
       dts = av_rescale_q(dts, (AVRational){1, 1000000}, (AVRational){1, 90000});
       int duration = AUDIO_STREAM_DURATION; // 20
       if(m_prevAudioDts > 0LL) {
           duration = dts - m_prevAudioDts;
       }
       m_prevAudioDts = dts;
       pkt.pts = AV_NOPTS_VALUE;
       pkt.dts = m_currAudioDts;
       m_currAudioDts += duration;
       pkt.duration = duration;

       pkt.stream_index = m_pAudioStream->index;
       pkt.flags |= AV_PKT_FLAG_KEY;
       pkt.data = (uint8_t*) pEncodedData;
       pkt.size = iDataSize;

       int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);
       if(ret < 0) {
           LogErr("Writing audio frame failed: %d", ret);
           return false;
       }

       Log("Writing audio frame done.");

       av_free_packet(&pkt);
       return true;
    }

    And I added the streams like this -

    AVStream* AudioVideoRecorder::AddMediaStream(enum AVCodecID codecID) {
       ................................
       .................................  
       pStream = avformat_new_stream(m_pFormatCtx, codec);
       if (!pStream) {
           LogErr("Could not allocate stream.");
           return NULL;
       }
       pStream->id = m_pFormatCtx->nb_streams - 1;
       pCodecCtx = pStream->codec;
       pCodecCtx->codec_id = codecID;

       switch(codec->type) {
       case AVMEDIA_TYPE_VIDEO:
           pCodecCtx->bit_rate = VIDEO_BIT_RATE;
           pCodecCtx->width = PICTURE_WIDTH;
           pCodecCtx->height = PICTURE_HEIGHT;
           pStream->time_base = (AVRational){1, 90000};
           pStream->avg_frame_rate = (AVRational){90000, 1};
           pStream->r_frame_rate = (AVRational){90000, 1}; // though the frame rate is variable and around 15 fps
           pCodecCtx->pix_fmt = STREAM_PIX_FMT;
           m_pVideoStream = pStream;
           break;

       case AVMEDIA_TYPE_AUDIO:
           pCodecCtx->sample_fmt = AV_SAMPLE_FMT_S16;
           pCodecCtx->bit_rate = AUDIO_BIT_RATE;
           pCodecCtx->sample_rate = AUDIO_SAMPLE_RATE;
           pCodecCtx->channels = 1;
           m_pAudioStream = pStream;
           break;

       default:
           break;
       }

       /* Some formats want stream headers to be separate. */
       if (m_pOutputFmt->flags & AVFMT_GLOBALHEADER)
           m_pFormatCtx->flags |= CODEC_FLAG_GLOBAL_HEADER;

       return pStream;
    }

    There are several problems with this calculation:

    1. The video is laggy and falls behind the audio more and more over time.

    2. Suppose an audio frame arrives (WriteAudio(..)) a little late, say by 3 seconds. The late frame should start playing with a 3-second delay, but it doesn’t; it is played back-to-back with the previous frame.

    3. Sometimes I record for 40 seconds but the file duration is more like 2 minutes; audio/video plays for only about 40 seconds, the rest of the file contains nothing, and the seek bar jumps to the end immediately after 40 seconds (tested in VLC).

    EDIT:

    According to Ronald S. Bultje’s suggestion, what I’ve understood:

    m_pAudioStream->time_base = (AVRational){1, 9000}; // actually no need to set as 9000 is already default value for audio as you said
    m_pVideoStream->time_base = (AVRational){1, 9000};

    should be set, since both the audio and video streams are then in the same time-base units.

    And for video:

    ...................
    ...................

    int64_t dts = av_gettime(); // get current time in microseconds
    dts *= 9000;
    dts /= 1000000; // 1 second = 10^6 microseconds
    pkt.pts = AV_NOPTS_VALUE; // is it okay?
    pkt.dts = dts;
    // and no need to set pkt.duration, right?

    And for audio (exactly the same as video, right?):

    ...................
    ...................

    int64_t dts = av_gettime(); // get current time in microseconds
    dts *= 9000;
    dts /= 1000000; // 1 second = 10^6 microseconds
    pkt.pts = AV_NOPTS_VALUE; // is it okay?
    pkt.dts = dts;
    // and no need to set pkt.duration, right?

    And I think they are now sharing the same currDts, right? Please correct me if I am wrong anywhere or missing anything.

    Also, if I want to use a video stream time base of (AVRational){1, frameRate} and an audio stream time base of (AVRational){1, sampleRate}, what should the correct code look like?

    EDIT 2.0:

    m_pAudioStream->time_base = (AVRational){1, VIDEO_FRAME_RATE};
    m_pVideoStream->time_base = (AVRational){1, VIDEO_FRAME_RATE};

    And

    bool AudioVideoRecorder::WriteAudio(const unsigned char *pEncodedData, size_t iDataSize) {
       ...........................
       ......................
       AVPacket pkt = {0};
       av_init_packet(&pkt);

       int64_t dts = av_gettime() / 1000; // convert to milliseconds
       dts = dts * VIDEO_FRAME_RATE;
       if(m_dtsOffset < 0) {
           m_dtsOffset = dts;
       }

       pkt.pts = AV_NOPTS_VALUE;
       pkt.dts = (dts - m_dtsOffset);

       pkt.stream_index = m_pAudioStream->index;
       pkt.flags |= AV_PKT_FLAG_KEY;
       pkt.data = (uint8_t*) pEncodedData;
       pkt.size = iDataSize;

       int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);
       if(ret < 0) {
           LogErr("Writing audio frame failed: %d", ret);
           return false;
       }

       Log("Writing audio frame done.");

       av_free_packet(&pkt);
       return true;
    }

    bool AudioVideoRecorder::WriteVideo(const unsigned char *pData, size_t iDataSize, bool const bIFrame) {
       ........................................
       .................................
       AVPacket pkt = {0};
       av_init_packet(&pkt);
       int64_t dts = av_gettime() / 1000;
       dts = dts * VIDEO_FRAME_RATE;
       if(m_dtsOffset < 0) {
           m_dtsOffset = dts;
       }
       pkt.pts = AV_NOPTS_VALUE;
       pkt.dts = (dts - m_dtsOffset);

       if(bIFrame) {
           pkt.flags |= AV_PKT_FLAG_KEY;
       }
       pkt.stream_index = m_pVideoStream->index;
       pkt.data = (uint8_t*) pData;
       pkt.size = iDataSize;

       int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);

       if(ret < 0) {
           LogErr("Writing video frame failed.");
           return false;
       }

       Log("Writing video frame done.");

       av_free_packet(&pkt);
       return true;
    }

    Is the last change okay? The video and audio seem to be in sync. The only problem is that the audio is played without delay even when a packet arrives late.
    Like this -

    packet arrival: 1 2 3 4... (then next frame arrived after 3 sec) .. 5

    audio played: 1 2 3 4 (no delay) 5

    EDIT 3.0:

    Zeroed audio sample data:

    AVFrame* pSilentData;
    pSilentData = av_frame_alloc();
    memset(&pSilentData->data[0], 0, iDataSize);

    pkt.data = (uint8_t*) pSilentData;
    pkt.size = iDataSize;

    av_freep(&pSilentData->data[0]);
    av_frame_free(&pSilentData);

    Is this okay? But after writing this into the file container, there is a dot-dot noise while playing the media. What’s the problem?

    EDIT 4.0:

    Well, for µ-law audio the zero value is represented as 0xff. So -

    memset(&pSilentData->data[0], 0xff, iDataSize);

    solves my problem.

  • ffmpeg ffserver - create a mosaic from two 720p webcam feeds

    28 July 2015, by der_felix

    For a project I would like to take the video feeds (NO audio) of two Logitech C920 webcams, put them side by side, and stream them.
    The C920 can compress the video feed with H.264 itself (if enabled) and delivers 1080p at up to 30 fps.
    The stream is then loaded in an Android app by an ffmpeg library and rendered to the screen.

    What I already know:
    I know that I can take multiple streams or input files and create a mosaic stream via the filter_complex module.
    HTTP and H.264 seem to be good for streaming, but other configurations are also welcome if they are faster/better.

    The question:
    How can I start the cameras with v4l2, set the camera resolution and the camera-internal encoding, and use these streams to create the mosaic?
    The mosaic should be unscaled (= 2560x720 px).

    Also, I very often get exit code 256 but couldn’t find out what it means.

    The system: a laptop with USB 3, Ubuntu 15.04 x64, ffmpeg 2.7.1 and ffserver 2.5.7.

    Thanks for your help.

    ffserver config:

    HTTPPort 8080                
    HTTPBindAddress 0.0.0.0      
    MaxHTTPConnections 2000  
    MaxClients 1000        
    MaxBandwidth 50000
    CustomLog -      
    #NoDaemon      

    <Feed feed1.ffm>

    File /tmp/feed1.ffm
    Launch ffmpeg -f v4l2 - input_format h264 -i /dev/video0 -i /dev/video1 -size 1280x720 -r 30 -filter_complex "nullsrc=size=2560x720 [base]; [0:v] setpts=PTS-STARTPTS [left]; [1:v] setpts=PTS-STARTPTS [right]; [base][left] overlay=shortest=1 [tmp1]; [tmp1][right] overlay=shortest=1:x=1280"  -c:v libx264 -f mpegts

    </Feed>

    <stream>

    Feed feed1.ffm
    Format mpegts      
    VideoBitRate 1024  
    #VideoBufferSize 1024
    VideoFrameRate 30      
    #VideoSize hd720      
    VideoSize 2560x720
    #VideoIntraOnly        
    #VideoGopSize 12      
    VideoCodec libx264      
    NoAudio            
    VideoQMin 3        
    VideoQMax 31
    NoDefaults

    </stream>

    <stream>        
      Format status
      #Only allow local people to get the status
      ACL allow localhost
      ACL allow 192.168.0.0 192.168.255.255
    </stream>

    Output:

    ubuntu@ubuntu:~$ ffserver
    ffserver version 2.5.7-0ubuntu0.15.04.1 Copyright (c) 2000-2015 the FFmpeg developers
     built with gcc 4.9.2 (Ubuntu 4.9.2-10ubuntu13)
     configuration: --prefix=/usr --extra-version=0ubuntu0.15.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --shlibdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --enable-shared --disable-stripping --enable-avresample --enable-avisynth --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libshine --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libwavpack --enable-libwebp --enable-libxvid --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzvbi --enable-libzmq --enable-frei0r --enable-libvpx --enable-libx264 --enable-libsoxr --enable-gnutls --enable-openal --enable-libopencv --enable-librtmp --enable-libx265
     libavutil      54. 15.100 / 54. 15.100
     libavcodec     56. 13.100 / 56. 13.100
     libavformat    56. 15.102 / 56. 15.102
     libavdevice    56.  3.100 / 56.  3.100
     libavfilter     5.  2.103 /  5.  2.103
     libavresample   2.  1.  0 /  2.  1.  0
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  3.100 / 53.  3.100
    Tue Jul 28 10:13:44 2015 FFserver started.
    Tue Jul 28 10:13:44 2015 Launch command line: ffmpeg -f v4l2 - input_format h264 -i /dev/video0 -i /dev/video1 -size 1280x720 -r 30 -filter_complex nullsrc=size=2560x720 [base]; [0:v] setpts=PTS-STARTPTS [left]; [1:v] setpts=PTS-STARTPTS [right]; [base][left] overlay=shortest=1 [tmp1]; [tmp1][right] overlay=shortest=1:x=1280 -c:v libx264 -f mpegts http://127.0.0.1:8080/feed1.ffm
    feed1.ffm: Pid 17388 exited with status 256 after 0 seconds

    Hey guys!

    Here is our plan B for the mosaic stream.

    Alternative config:

    HTTPPort 8080        
    HTTPBindAddress 0.0.0.0  
    MaxHTTPConnections 2000  
    MaxClients 1000        
    MaxBandwidth 50000      
    CustomLog -        

    <Feed feedlinks.ffm>
    File /tmp/feedlinks.ffm
    Launch ffmpeg -f v4l2 -input_format h264 -vcodec h264 -i /dev/video0 -video_size 1280x720 -r 30
    </Feed>

    <Feed feedrechts.ffm>
    File /tmp/feedrechts.ffm
    Launch ffmpeg -f v4l2 -input_format h264 -vcodec h264 -i /dev/video1 -video_size 1280x720 -r 30
    </Feed>

    <Stream testlinks.mpg>
    Feed feedlinks.ffm
    Format mpegts
    VideoBitRate 512
    VideoFrameRate 30
    VideoSize hd720
    VideoCodec libx264
    NoAudio
    VideoQMin 3
    VideoQMax 31
    </Stream>

    <Stream testrechts.mpg>
    Feed feedrechts.ffm
    Format mpegts
    VideoBitRate 512
    VideoFrameRate 30
    VideoSize hd720
    VideoCodec libx264
    NoAudio
    VideoQMin 3
    VideoQMax 31
    </Stream>

    <Feed feedmosaic.ffm>
    File /tmp/feedmosaic.ffm
    Launch ffmpeg -i http://localhost:8080/testlinks.mpg -i http://localhost:8080/testrechts.mpg -filter_complex "nullsrc=size=2560x720 [base]; [0:v] setpts=PTS-STARTPTS [left]; [1:v] setpts=PTS-STARTPTS [right]; [base][left] overlay=shortest=1 [tmp1]; [tmp1][right] overlay=shortest=1:x=1280" -c:v libx264 -preset ultrafast -f mpegts
    </Feed>

    <stream>
    Feed feedmosaic.ffm
    Format mpegts         # Format of the stream
    VideoFrameRate 30      # Number of frames per second
    VideoSize 2560x720
    VideoCodec libx264      # Choose your codecs.
    NoAudio            # Suppress audio
    VideoQMin 3         # Videoquality ranges from 1 - 31 (worst to best)
    VideoQMax 31
    NoDefaults
    </stream>

    <stream>           # Server status URL
      Format status
      # Only allow local people to get the status
      ACL allow localhost
      ACL allow 192.168.0.0 192.168.255.255
      ACL allow 192.168.178.0 192.168.255.255
    </stream>

    And this is the new output:

    ubuntu@ubuntu:~$ ffserver
    ffserver version 2.5.7-0ubuntu0.15.04.1 Copyright (c) 2000-2015 the FFmpeg developers
     built with gcc 4.9.2 (Ubuntu 4.9.2-10ubuntu13)
     configuration: --prefix=/usr --extra-version=0ubuntu0.15.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --shlibdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --enable-shared --disable-stripping --enable-avresample --enable-avisynth --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libshine --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libwavpack --enable-libwebp --enable-libxvid --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzvbi --enable-libzmq --enable-frei0r --enable-libvpx --enable-libx264 --enable-libsoxr --enable-gnutls --enable-openal --enable-libopencv --enable-librtmp --enable-libx265
     libavutil      54. 15.100 / 54. 15.100
     libavcodec     56. 13.100 / 56. 13.100
     libavformat    56. 15.102 / 56. 15.102
     libavdevice    56.  3.100 / 56.  3.100
     libavfilter     5.  2.103 /  5.  2.103
     libavresample   2.  1.  0 /  2.  1.  0
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  3.100 / 53.  3.100
    /etc/ffserver.conf:44: Setting default value for video bit rate tolerance = 128000. Use NoDefaults to disable it.
    /etc/ffserver.conf:44: Setting default value for video rate control equation = tex^qComp. Use NoDefaults to disable it.
    /etc/ffserver.conf:44: Setting default value for video max rate = 1024000. Use NoDefaults to disable it.
    /etc/ffserver.conf:44: Setting default value for video buffer size = 1024000. Use NoDefaults to disable it.
    /etc/ffserver.conf:61: Setting default value for video bit rate tolerance = 128000. Use NoDefaults to disable it.
    /etc/ffserver.conf:61: Setting default value for video rate control equation = tex^qComp. Use NoDefaults to disable it.
    /etc/ffserver.conf:61: Setting default value for video max rate = 1024000. Use NoDefaults to disable it.
    /etc/ffserver.conf:61: Setting default value for video buffer size = 1024000. Use NoDefaults to disable it.
    Tue Jul 28 11:13:01 2015 Codec bitrates do not match for stream 0
    Tue Jul 28 11:13:01 2015 FFserver started.
    Tue Jul 28 11:13:01 2015 Launch command line: ffmpeg -f v4l2 -input_format h264 -vcodec h264 -i /dev/video0 -video_size 1280x720 -r 30 http://127.0.0.1:8080/feedlinks.ffm
    Tue Jul 28 11:13:01 2015 Launch command line: ffmpeg -f v4l2 -input_format h264 -vcodec h264 -i /dev/video1 -video_size 1280x720 -r 30 http://127.0.0.1:8080/feedrechts.ffm
    Tue Jul 28 11:13:01 2015 Launch command line: ffmpeg -i http://localhost:8080/testlinks.mpg -i http://localhost:8080/testrechts.mpg -filter_complex nullsrc=size=2560x720 [base]; [0:v] setpts=PTS-STARTPTS [left]; [1:v] setpts=PTS-STARTPTS [right]; [base][left] overlay=shortest=1 [tmp1]; [tmp1][right] overlay=shortest=1:x=1280 -c:v libx264 -preset ultrafast -f mpegts http://127.0.0.1:8080/feedmosaic.ffm
    Tue Jul 28 11:13:02 2015 127.0.0.1 - - [GET] "/feedlinks.ffm HTTP/1.1" 200 4175
    Tue Jul 28 11:13:02 2015 127.0.0.1 - - [GET] "/feedrechts.ffm HTTP/1.1" 200 4175
    Tue Jul 28 11:13:18 2015 127.0.0.1 - - [POST] "/feedmosaic.ffm HTTP/1.1" 200 4096
    Tue Jul 28 11:13:18 2015 127.0.0.1 - - [GET] "/testlinks.mpg HTTP/1.1" 200 2130291
    Tue Jul 28 11:13:18 2015 127.0.0.1 - - [GET] "/testrechts.mpg HTTP/1.1" 200 1244999
    feedmosaic.ffm: Pid 18775 exited with status 256 after 17 seconds

    Thanks for your help!