
Other articles (82)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • User profiles

    12 April 2011

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in to the site.
    The user can edit their profile from their author page; a link in the navigation, "Modifier votre profil" ("Edit your profile"), is (...)

On other sites (7228)

  • How to stream h.264 video with mp3 audio using libavcodec?

    18 September 2012, by dasg

    I read h.264 frames from a webcam and capture audio from a microphone. I need to stream live video to ffserver. For debugging, I read the video back from ffserver using ffmpeg with the following command:

    ffmpeg -i http://127.0.0.1:12345/robot.avi -vcodec copy -acodec copy out.avi

    The video in the output file is slightly sped up. If I add an audio stream, it is sped up several times over. Sometimes there is no audio in the output file at all.

    Here is my code for encoding audio:

    #include "v_audio_encoder.h"

    extern "C" {
    #include <libavcodec/avcodec.h>
    }
    #include <cassert>
    #include <cstring>   // for strcpy / memcpy

    struct VAudioEncoder::Private
    {
       AVCodec *m_codec;
       AVCodecContext *m_context;

       std::vector<uint8_t> m_outBuffer;   // encoded output (element type assumed: uint8_t)
    };

    VAudioEncoder::VAudioEncoder( int sampleRate, int bitRate )
    {
       d = new Private( );
       d->m_codec = avcodec_find_encoder( CODEC_ID_MP3 );
       assert( d->m_codec );
       d->m_context = avcodec_alloc_context3( d->m_codec );

       // put sample parameters
       d->m_context->channels = 2;
       d->m_context->bit_rate = bitRate;
       d->m_context->sample_rate = sampleRate;
       d->m_context->sample_fmt = AV_SAMPLE_FMT_S16;
       strcpy( d->m_context->codec_name, "libmp3lame" );

       // open it
       int res = avcodec_open2( d->m_context, d->m_codec, 0 );
       assert( res >= 0 );

       d->m_outBuffer.resize( d->m_context->frame_size );
    }

    VAudioEncoder::~VAudioEncoder( )
    {
       avcodec_close( d->m_context );
       av_free( d->m_context );
       delete d;
    }

    void VAudioEncoder::encode( const std::vector<uint8_t>& samples, std::vector<uint8_t>& outbuf )
    {
       assert( (int)samples.size( ) == d->m_context->frame_size );

       int outSize = avcodec_encode_audio( d->m_context, d->m_outBuffer.data( ),
                                           d->m_outBuffer.size( ), reinterpret_cast<const short*>( samples.data( ) ) );
       if( outSize ) {
           outbuf.resize( outSize );
           memcpy( outbuf.data( ), d->m_outBuffer.data( ), outSize );
       }
       else
           outbuf.clear( );
    }

    int VAudioEncoder::getFrameSize( ) const
    {
       return d->m_context->frame_size;
    }

    Here is my code for streaming video:

    #include "v_out_video_stream.h"

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavutil/opt.h>
    #include <libavutil/avstring.h>
    #include <libavformat/avio.h>
    }

    #include <stdexcept>
    #include <cassert>
    #include <cstring>   // for strcpy

    struct VStatticRegistrar
    {
       VStatticRegistrar( )
       {
           av_register_all( );
           avformat_network_init( );
       }
    };

    VStatticRegistrar __registrar;

    struct VOutVideoStream::Private
    {
       AVFormatContext * m_context;
       int m_videoStreamIndex;
       int m_audioStreamIndex;

       int m_videoBitrate;
       int m_width;
       int m_height;
       int m_fps;
       int m_bitrate;

       bool m_waitKeyFrame;
    };

    VOutVideoStream::VOutVideoStream( int width, int height, int fps, int bitrate )
    {
       d = new Private( );
       d->m_width = width;
       d->m_height = height;
       d->m_fps = fps;
       d->m_context = 0;
       d->m_videoStreamIndex = -1;
       d->m_audioStreamIndex = -1;
       d->m_bitrate = bitrate;
       d->m_waitKeyFrame = true;
    }

    bool VOutVideoStream::connectToServer( const std::string& uri )
    {
       assert( ! d->m_context );

       // initalize the AV context
       d->m_context = avformat_alloc_context();
       if( !d->m_context )
           return false;
       // get the output format
       d->m_context->oformat = av_guess_format( "ffm", NULL, NULL );
       if( ! d->m_context->oformat )
           return false;

       strcpy( d->m_context->filename, uri.c_str( ) );

       // add an H.264 stream
       AVStream *stream = avformat_new_stream( d->m_context, NULL );
       if ( ! stream )
           return false;
       // initalize codec
       AVCodecContext* codec = stream->codec;
       if( d->m_context->oformat->flags & AVFMT_GLOBALHEADER )
           codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
       codec->codec_id = CODEC_ID_H264;
       codec->codec_type = AVMEDIA_TYPE_VIDEO;
       strcpy( codec->codec_name, "libx264" );
    //    codec->codec_tag = ( unsigned('4') << 24 ) + (unsigned('6') << 16 ) + ( unsigned('2') << 8 ) + 'H';
       codec->width = d->m_width;
       codec->height = d->m_height;
       codec->time_base.den = d->m_fps;
       codec->time_base.num = 1;
       codec->bit_rate = d->m_bitrate;
       d->m_videoStreamIndex = stream->index;

       // add an MP3 stream
       stream = avformat_new_stream( d->m_context, NULL );
       if ( ! stream )
           return false;
       // initalize codec
       codec = stream->codec;
       if( d->m_context->oformat->flags & AVFMT_GLOBALHEADER )
           codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
       codec->codec_id = CODEC_ID_MP3;
       codec->codec_type = AVMEDIA_TYPE_AUDIO;
       strcpy( codec->codec_name, "libmp3lame" );
       codec->sample_fmt = AV_SAMPLE_FMT_S16;
       codec->channels = 2;
       codec->bit_rate = 64000;
       codec->sample_rate = 44100;
       d->m_audioStreamIndex = stream->index;

       // try to open the stream
       if( avio_open( &d->m_context->pb, d->m_context->filename, AVIO_FLAG_WRITE ) < 0 )
            return false;

       // write the header
       return avformat_write_header( d->m_context, NULL ) == 0;
    }

    void VOutVideoStream::disconnect( )
    {
       assert( d->m_context );

       avio_close( d->m_context->pb );
       avformat_free_context( d->m_context );
       d->m_context = 0;
    }

    VOutVideoStream::~VOutVideoStream( )
    {
       if( d->m_context )
           disconnect( );
       delete d;
    }

    int VOutVideoStream::getVopType( const std::vector<uint8_t>& image )
    {
       if( image.size( ) < 6 )
           return -1;
       unsigned char *b = (unsigned char*)image.data( );

       // Verify NAL marker
       if( b[ 0 ] || b[ 1 ] || 0x01 != b[ 2 ] ) {
           ++b;
           if ( b[ 0 ] || b[ 1 ] || 0x01 != b[ 2 ] )
               return -1;
       }

       b += 3;

       // Verify VOP id
       if( 0xb6 == *b ) {
           ++b;
           return ( *b & 0xc0 ) >> 6;
       }

       switch( *b ) {
       case 0x65: return 0;
       case 0x61: return 1;
       case 0x01: return 2;
       }

       return -1;
    }

    bool VOutVideoStream::sendVideoFrame( std::vector<uint8_t>& image )
    {
       // Init packet
       AVPacket pkt;
       av_init_packet( &pkt );
       pkt.flags |= ( 0 >= getVopType( image ) ) ? AV_PKT_FLAG_KEY : 0;

       // Wait for key frame
       if ( d->m_waitKeyFrame ) {
           if( pkt.flags & AV_PKT_FLAG_KEY )
               d->m_waitKeyFrame = false;
           else
               return true;
       }

       pkt.stream_index = d->m_videoStreamIndex;
       pkt.data = image.data( );
       pkt.size = image.size( );
       pkt.pts = pkt.dts = AV_NOPTS_VALUE;

       return av_write_frame( d->m_context, &pkt ) >= 0;
    }

    bool VOutVideoStream::sendAudioFrame( std::vector<uint8_t>& audio )
    {
       // Init packet
       AVPacket pkt;
       av_init_packet( &pkt );
       pkt.stream_index = d->m_audioStreamIndex;
       pkt.data = audio.data( );
       pkt.size = audio.size( );
       pkt.pts = pkt.dts = AV_NOPTS_VALUE;

       return av_write_frame( d->m_context, &pkt ) >= 0;
    }

    Here is how I use it:

    BOOST_AUTO_TEST_CASE(testSendingVideo)
    {
       const int framesToGrab = 90000;

       VOutVideoStream stream( VIDEO_WIDTH, VIDEO_HEIGHT, FPS, VIDEO_BITRATE );
       if( stream.connectToServer( URI ) ) {
           VAudioEncoder audioEncoder( AUDIO_SAMPLE_RATE, AUDIO_BIT_RATE );
           VAudioCapture microphone( MICROPHONE_NAME, AUDIO_SAMPLE_RATE, audioEncoder.getFrameSize( ) );

           VLogitecCamera camera( VIDEO_WIDTH, VIDEO_HEIGHT );
           BOOST_REQUIRE( camera.open( CAMERA_PORT ) );
           BOOST_REQUIRE( camera.startCapturing( ) );

           std::vector<uint8_t> image, encodedAudio;   // element types assumed: uint8_t
           std::vector<uint8_t> voice;
           boost::system_time startTime;
           int delta;
           for( int i = 0; i < framesToGrab; ++i ) {
               startTime = boost::posix_time::microsec_clock::universal_time( );

               BOOST_REQUIRE( camera.read( image ) );
               BOOST_REQUIRE( microphone.read( voice ) );
               audioEncoder.encode( voice, encodedAudio );

               BOOST_REQUIRE( stream.sendVideoFrame( image ) );
               BOOST_REQUIRE( stream.sendAudioFrame( encodedAudio ) );

               delta = ( boost::posix_time::microsec_clock::universal_time( ) - startTime ).total_milliseconds( );
               if( delta < 1000 / FPS )
                   boost::thread::sleep( startTime + boost::posix_time::milliseconds( 1000 / FPS - delta ) );
           }

           BOOST_REQUIRE( camera.stopCapturing( ) );
           BOOST_REQUIRE( camera.close( ) );
       }
       else
           std::cout << "failed to connect to server" << std::endl;
    }

    I think my problem is in PTS and DTS. Can anyone help me?
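
    For reference, here is a minimal sketch (not part of the original post) of one common way to fill in the timestamps with this era of the FFmpeg API: rescale a running frame counter from the capture rate into the stream's time base. The helper name, the frameIndex counter and the no-B-frames assumption (dts == pts) are mine; treat it as an illustration rather than a confirmed fix for the problem above.

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>
    }

    // Hypothetical helper: stamp a packet with pts/dts derived from a frame
    // counter, assuming a constant capture rate and no B-frames.
    static void stampPacket( AVPacket &pkt, const AVStream *stream, int64_t frameIndex, int fps )
    {
       AVRational srcTimeBase = { 1, fps };   // timestamps counted in whole frames
       pkt.pts = av_rescale_q( frameIndex, srcTimeBase, stream->time_base );
       pkt.dts = pkt.pts;
       pkt.duration = av_rescale_q( 1, srcTimeBase, stream->time_base );
    }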

  • Method For Crawling Google

    28 May 2011, by Multimedia Mike — Big Data

    I wanted to crawl Google in order to harvest a large corpus of certain types of data as yielded by a certain search term (we’ll call it “term” for this exercise). Google doesn’t appear to offer any API to automatically harvest their search results (why would they?). So I sat down and thought about how to do it. This is the solution I came up with.



    FAQ
    Q: Is this legal / ethical / compliant with Google’s terms of service?
    A: Does it look like I care? Moving right along…

    Manual Crawling Process
    For this exercise, I essentially automated the task that would be performed by a human. It goes something like this:

    1. Search for “term”
    2. On the first page of results, download each of the 10 results returned
    3. Click on the next page of results
    4. Go to step 2, until Google doesn’t return any more pages of search results

    Google returns up to 1000 results for a given search term. Fetching them 10 at a time is less than efficient. Fortunately, the search URL can easily be tweaked to return up to 100 results per page.
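
    As a small illustration, building such a URL is just string formatting; the exact query parameters ("num", "start") are as they behaved at the time and should be treated as an assumption:

    #include <sstream>
    #include <string>

    // Hypothetical sketch: build a search URL asking for 100 results per page,
    // starting at a given result index; "term" is assumed to be URL-encoded already.
    std::string buildSearchUrl( const std::string &term, int startIndex )
    {
       std::ostringstream url;
       url << "http://www.google.com/search?q=" << term
           << "&num=100&start=" << startIndex;
       return url.str( );
    }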

    Expanding Reach
    Problem: 1000 results for the “term” search isn’t that many. I need a way to expand the search. I’m not aiming for relevancy; I’m just searching for random examples of some data that occurs around the internet.

    My solution for this is to refine the search using the “site” wildcard. For example, you can ask Google to search for “term” at all Canadian domains using “site:.ca”. So, the manual process now involves harvesting up to 1000 results for every single internet top level domain (TLD). But many TLDs can be more granular than that. For example, there are 50 sub-domains under .us, one for each state (e.g., .ca.us, .ny.us). Those all need to be searched independently. Same for all the sub-domains under TLDs which don’t allow domains under the main TLD, such as .uk (search under .co.uk, .ac.uk, etc.).

    Another extension is to combine “term” searches with other terms that are likely to have a rich correlation with “term”. For example, if “term” is relevant to various scientific fields, search for “term” in conjunction with various scientific disciplines.

    Algorithmically
    My solution is to create an SQLite database that contains a table of search seeds. Each seed is essentially a “site:” string combined with a starting index.

    Each TLD and sub-TLD is inserted as a searchseed record with a starting index of 0.
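
    A minimal sketch of what that table could look like, using the sqlite3 C API; the table and column names here are my guesses, not the author's actual schema:

    #include <sqlite3.h>
    #include <cassert>

    // Hypothetical schema: one row per "site:" seed plus the index of the next
    // result page to request; "crawled" marks seeds that have been processed.
    void createSearchSeedTable( sqlite3 *db )
    {
       const char *ddl =
           "CREATE TABLE IF NOT EXISTS searchseed ("
           "  seed       TEXT NOT NULL,"                // e.g. "site:.ca"
           "  startindex INTEGER NOT NULL DEFAULT 0,"
           "  crawled    INTEGER NOT NULL DEFAULT 0 )";
       int rc = sqlite3_exec( db, ddl, 0, 0, 0 );
       assert( rc == SQLITE_OK );
    }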

    A script performs the following crawling algorithm (a rough sketch in code follows the list):

    • Fetch the next record from the searchseed table which has not been crawled
    • Fetch search result page from Google
    • Scrape URLs from page and insert each into URL table
    • Mark the searchseed record as having been crawled
    • If the results page indicates there are more results for this search, insert a new searchseed for the same seed but with a starting index 100 higher
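
    Sketched in C++ against the hypothetical schema above; fetchResultPage, pageHasMoreResults, scrapeUrlsIntoUrlTable and buildSearchUrl stand in for the HTTP and scraping steps and are assumptions, not the author's code:

    #include <sqlite3.h>
    #include <string>

    // Placeholders for the HTTP fetch and HTML scraping steps (assumed, declared only).
    std::string fetchResultPage( const std::string &url );
    bool pageHasMoreResults( const std::string &page );
    void scrapeUrlsIntoUrlTable( sqlite3 *db, const std::string &page );
    std::string buildSearchUrl( const std::string &term, int startIndex );

    // Hypothetical outline of one pass of the crawling loop described above.
    void crawlNextSeed( sqlite3 *db )
    {
       // Fetch the next searchseed record that has not been crawled.
       sqlite3_stmt *stmt = 0;
       sqlite3_prepare_v2( db,
           "SELECT rowid, seed, startindex FROM searchseed WHERE crawled = 0 LIMIT 1",
           -1, &stmt, 0 );
       if( sqlite3_step( stmt ) != SQLITE_ROW ) {
           sqlite3_finalize( stmt );
           return;                                   // nothing left to crawl
       }
       sqlite3_int64 rowid = sqlite3_column_int64( stmt, 0 );
       std::string seed    = reinterpret_cast<const char*>( sqlite3_column_text( stmt, 1 ) );
       int startIndex      = sqlite3_column_int( stmt, 2 );
       sqlite3_finalize( stmt );

       // Fetch the search result page and scrape its URLs into the URL table.
       std::string page = fetchResultPage( buildSearchUrl( "term " + seed, startIndex ) );
       scrapeUrlsIntoUrlTable( db, page );

       // Mark the searchseed record as having been crawled.
       std::string update = "UPDATE searchseed SET crawled = 1 WHERE rowid = "
                            + std::to_string( rowid );
       sqlite3_exec( db, update.c_str( ), 0, 0, 0 );

       // If there are more results, queue the same seed 100 results further along.
       if( pageHasMoreResults( page ) ) {
           std::string insert = "INSERT INTO searchseed (seed, startindex) VALUES ('"
                                + seed + "', " + std::to_string( startIndex + 100 ) + ")";
           sqlite3_exec( db, insert.c_str( ), 0, 0, 0 );
       }
    }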

    Digging Into Sites
    Sometimes, Google notes that certain sites are particularly rich sources of “term” and offers to let you search that site for “term”. This basically links to another search for “term site:somesite”. That site gets its own search seed and the program might harvest up to 1000 URLs from that site alone.

    Harvesting the Data
    Armed with a database of URLs, employ the following algorithm:

    • Fetch a random URL from the database which has yet to be downloaded
    • Try to download it
    • For goodness sake, have a mechanism in place to detect whether the download process has stalled and automatically kill it after a certain period of time
    • Store the data and update the database, noting where the information was stored and that it is already downloaded

    This step is easy to parallelize by simply executing multiple copies of the script. It is useful to update the URL table to indicate that one process is already trying to download a URL so multiple processes don’t duplicate work.
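
    One way to make that "already being downloaded" marker race-free with SQLite, again as a sketch against a hypothetical url table with a status column:

    #include <sqlite3.h>
    #include <string>

    // Hypothetical sketch: atomically claim one not-yet-downloaded URL so that
    // concurrent copies of the script do not duplicate work. Assumes a table
    // url(url TEXT, status TEXT) where status starts out as 'new'.
    bool claimRandomUrl( sqlite3 *db, std::string &outUrl )
    {
       sqlite3_exec( db, "BEGIN IMMEDIATE", 0, 0, 0 );   // take the write lock up front

       sqlite3_stmt *stmt = 0;
       sqlite3_prepare_v2( db,
           "SELECT rowid, url FROM url WHERE status = 'new' ORDER BY RANDOM() LIMIT 1",
           -1, &stmt, 0 );
       bool found = ( sqlite3_step( stmt ) == SQLITE_ROW );
       sqlite3_int64 rowid = found ? sqlite3_column_int64( stmt, 0 ) : 0;
       if( found )
           outUrl = reinterpret_cast<const char*>( sqlite3_column_text( stmt, 1 ) );
       sqlite3_finalize( stmt );

       if( found ) {
           std::string update = "UPDATE url SET status = 'downloading' WHERE rowid = "
                                + std::to_string( rowid );
           sqlite3_exec( db, update.c_str( ), 0, 0, 0 );
       }
       sqlite3_exec( db, "COMMIT", 0, 0, 0 );
       return found;
    }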

    Acting Human
    A few factors here:

    • Google allegedly doesn’t like automated programs crawling its search results. Thus, at the very least, don’t let your script advertise itself as an automated program. At a basic level, this means forging the User-Agent: HTTP header. By default, Python’s urllib2 identifies itself as Python’s urllib rather than as a browser. Change this to a well-known browser string (see the sketch after this list).
    • Be patient; don’t fire off these search requests as quickly as possible. My crawling algorithm inserts a random delay of a few seconds in between each request. This can still yield hundreds of useful URLs per minute.
    • On harvesting the data: Even though you can parallelize this and download data as quickly as your connection can handle, it’s a good idea to randomize the URLs. If you hypothetically had 4 download processes running at once and they got to a point in the URL table which had many URLs from a single site, the server might be configured to reject too many simultaneous requests from a single client.
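
    The same User-Agent and pacing ideas sketched in C++ with libcurl rather than the post's Python/urllib2; the browser string and the delay range are arbitrary examples:

    #include <curl/curl.h>
    #include <chrono>
    #include <cstdlib>
    #include <string>
    #include <thread>

    // Hypothetical sketch: fetch one page while presenting a browser-like
    // User-Agent, then pause a random few seconds before the next request.
    bool politeFetch( const std::string &url )
    {
       CURL *curl = curl_easy_init( );
       if( !curl )
           return false;

       curl_easy_setopt( curl, CURLOPT_URL, url.c_str( ) );
       // Forge the User-Agent header instead of advertising an automated client.
       curl_easy_setopt( curl, CURLOPT_USERAGENT,
                         "Mozilla/5.0 (Windows NT 6.1; rv:12.0) Gecko/20100101 Firefox/12.0" );
       CURLcode res = curl_easy_perform( curl );   // response body goes to stdout by default
       curl_easy_cleanup( curl );

       // Be patient: random delay of a few seconds between requests.
       std::this_thread::sleep_for( std::chrono::seconds( 2 + std::rand( ) % 4 ) );
       return res == CURLE_OK;
    }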

    Conclusion
    Anyway, that’s just the way I would (and did) do it. What did I do with all the data? That’s a subject for a different post.

    Adorable spider drawing from here.

  • Revision 19897: In a HEAD request, return the real header. To do that, we have no choice but to ...

    11 September 2012, by cedric -

    We therefore trade the computation speed of HEAD requests for their accuracy. Given that HEAD generally accounts for less than 1% of requests (more like 2 in 1,000 on a sample of production sites), the loss in server performance is negligible (all the more so since a (...)