
Other articles (54)

  • Adding notes and captions to images

    7 February 2011, by

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable release of MediaSPIP.
    Its official release date is June 21, 2013, and it is announced here.
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Organizing by category

    17 May 2013, by

    In MédiaSPIP, a section has two names: category and section (rubrique).
    The various documents stored in MédiaSPIP can be filed under different categories. You can create a category by clicking "publish a category" in the publish menu at the top right (after logging in). A category can itself be filed under another category, so you can build a tree of categories.
    The next time a document is published, the newly created category will be offered (...)

On other sites (6426)

  • Trying to sync audio/visual using FFMpeg and openAL

    22 August 2013, by user1379811

    Hi, I have been studying the dranger FFmpeg tutorial, which explains how to sync audio and video once you have frames displaying and audio playing, which is where I'm at.

    Unfortunately, the tutorial is out of date (Stephen Dranger explained that to me himself), and it also uses SDL, which I'm not using - this is for a BlackBerry 10 application.

    I just cannot make the video frames display at the correct speed (they play far too fast), and I have been trying for over a week now - seriously!

    I have three threads running - one reads from the stream into the audio and video queues, and the other two handle audio and video respectively.

    If somebody could explain what's happening after scanning my relevant code, you would be a lifesaver.

    The delay (what I pass to usleep(testDelay)) seems to keep going up (incrementing), which doesn't seem right to me.

    count = 1;
       MyApp* inst = worker->app;//(VideoUploadFacebook*)arg;
       qDebug() << "\n start loadstream";
       w = new QWaitCondition();
       w2 = new QWaitCondition();
       context = avformat_alloc_context();
       inst->threadStarted = true;
       cout << "start of decoding thread";
       cout.flush();


       av_register_all();
       avcodec_register_all();
       avformat_network_init();
       av_log_set_callback(&log_callback);
       AVInputFormat   *pFormat;
       //const char      device[]     = "/dev/video0";
       const char      formatName[] = "mp4";
       cout << "2start of decoding thread";
       cout.flush();



       if (!(pFormat = av_find_input_format(formatName))) {
           printf("can't find input format %s\n", formatName);
           //return void*;
       }
       //open rtsp
       if(avformat_open_input(&context, inst->capturedUrl.data(), pFormat,NULL) != 0){
           // return ;
           cout << "error opening of decoding thread: " << inst->capturedUrl.data();
           cout.flush();
       }

       cout << "3start of decoding thread";
       cout.flush();
       // av_dump_format(context, 0, inst->capturedUrl.data(), 0);
       /*   if(avformat_find_stream_info(context,NULL) < 0){
           return EXIT_FAILURE;
       }
        */
       //search video stream
       for(int i = 0; i < context->nb_streams; i++){
           if(context->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
               inst->video_stream_index = i;
       }
       cout << "3z start of decoding thread";
       cout.flush();
       AVFormatContext* oc = avformat_alloc_context();
       av_read_play(context);//play RTSP
       AVDictionary *optionsDict = NULL;
       ccontext = context->streams[inst->video_stream_index]->codec;

       inst->audioc = context->streams[1]->codec;

       cout << "4start of decoding thread";
       cout.flush();
       codec = avcodec_find_decoder(ccontext->codec_id);
       ccontext->pix_fmt = PIX_FMT_YUV420P;

       AVCodec* audio_codec = avcodec_find_decoder(inst->audioc->codec_id);
       inst->packet = new AVPacket();
       if (!audio_codec) {
           cout << "audio codec not found\n"; //fflush( stdout );
           exit(1);
       }

       if (avcodec_open2(inst->audioc, audio_codec, NULL) < 0) {
           cout << "could not open codec\n"; //fflush( stdout );
           exit(1);
       }

       if (avcodec_open2(ccontext, codec, &optionsDict) < 0) exit(1);

       cout << "5start of decoding thread";
       cout.flush();
       inst->pic = avcodec_alloc_frame();

       av_init_packet(inst->packet);

       while(av_read_frame(context,inst->packet) >= 0 && &inst->keepGoing)
       {

           if(inst->packet->stream_index == 0){//packet is video

               int check = 0;



               // av_init_packet(inst->packet);
               int result = avcodec_decode_video2(ccontext, inst->pic, &check, inst->packet);

               if(check)
                   break;
           }
       }



       inst->originalVideoWidth = inst->pic->width;
       inst->originalVideoHeight = inst->pic->height;
       float aspect = (float)inst->originalVideoHeight / (float)inst->originalVideoWidth;
       inst->newVideoWidth = inst->originalVideoWidth;
       int newHeight = (int)(inst->newVideoWidth * aspect);
       inst->newVideoHeight = newHeight;//(int)inst->originalVideoHeight / inst->originalVideoWidth * inst->newVideoWidth;// = new height
       int size = avpicture_get_size(PIX_FMT_YUV420P, inst->originalVideoWidth, inst->originalVideoHeight);
       uint8_t* picture_buf = (uint8_t*)(av_malloc(size));
       avpicture_fill((AVPicture *) inst->pic, picture_buf, PIX_FMT_YUV420P, inst->originalVideoWidth, inst->originalVideoHeight);

       picrgb = avcodec_alloc_frame();
       int size2 = avpicture_get_size(PIX_FMT_YUV420P, inst->newVideoWidth, inst->newVideoHeight);
       uint8_t* picture_buf2 = (uint8_t*)(av_malloc(size2));
       avpicture_fill((AVPicture *) picrgb, picture_buf2, PIX_FMT_YUV420P, inst->newVideoWidth, inst->newVideoHeight);



       if(ccontext->pix_fmt != PIX_FMT_YUV420P)
       {
           std::cout << "fmt != 420!!!: " << ccontext->pix_fmt << std::endl;//
           // return (EXIT_SUCCESS);//-1;

       }


       if (inst->createForeignWindow(inst->myForeignWindow->windowGroup(),
               "HelloForeignWindowAppIDqq", 0,
               0, inst->newVideoWidth,
               inst->newVideoHeight)) {

       } else {
           qDebug() << "The ForeginWindow was not properly initialized";
       }




       inst->keepGoing = true;

       inst->img_convert_ctx = sws_getContext(inst->originalVideoWidth, inst->originalVideoHeight, PIX_FMT_YUV420P, inst->newVideoWidth, inst->newVideoHeight,
               PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL);

       is = (VideoState*)av_mallocz(sizeof(VideoState));
       if (!is)
           return NULL;

       is->audioStream = 1;
       is->audio_st = context->streams[1];
       is->audio_buf_size = 0;
       is->audio_buf_index = 0;
       is->videoStream = 0;
       is->video_st = context->streams[0];

       is->frame_timer = (double)av_gettime() / 1000000.0;
       is->frame_last_delay = 40e-3;

       is->av_sync_type = DEFAULT_AV_SYNC_TYPE;
       //av_strlcpy(is->filename, filename, sizeof(is->filename));
       is->iformat = pFormat;
       is->ytop    = 0;
       is->xleft   = 0;

       /* start video display */
       is->pictq_mutex = new QMutex();
       is->pictq_cond  = new QWaitCondition();

       is->subpq_mutex = new QMutex();
       is->subpq_cond  = new QWaitCondition();

       is->video_current_pts_time = av_gettime();


       packet_queue_init(&audioq);

       packet_queue_init(&videoq);
       is->audioq = audioq;
       is->videoq = videoq;
       AVPacket* packet2  = new AVPacket();

       ccontext->get_buffer = our_get_buffer;
       ccontext->release_buffer = our_release_buffer;


       av_init_packet(packet2);
       while(inst->keepGoing)
       {


           if(av_read_frame(context,packet2) < 0 && keepGoing)
           {
               printf("bufferframe Could not read a frame from stream.\n");
               fflush( stdout );


           }else {



               if(packet2->stream_index == 0) {
                   packet_queue_put(&videoq, packet2);
               } else if(packet2->stream_index == 1) {
                   packet_queue_put(&audioq, packet2);
               } else {
                   av_free_packet(packet2);
               }


               if(!videoThreadStarted)
               {
                   videoThreadStarted = true;
                   QThread* thread = new QThread;
                   videoThread = new VideoStreamWorker(this);

                   // Give QThread ownership of Worker Object
                   videoThread->moveToThread(thread);
                   connect(videoThread, SIGNAL(error(QString)), this, SLOT(errorHandler(QString)));
                   QObject::connect(videoThread, SIGNAL(refreshNeeded()), this, SLOT(refreshNeededSlot()));
                   connect(thread, SIGNAL(started()), videoThread, SLOT(doWork()));
                   connect(videoThread, SIGNAL(finished()), thread, SLOT(quit()));
                   connect(videoThread, SIGNAL(finished()), videoThread, SLOT(deleteLater()));
                   connect(thread, SIGNAL(finished()), thread, SLOT(deleteLater()));

                   thread->start();
               }

               if(!audioThreadStarted)
               {
                   audioThreadStarted = true;
                   QThread* thread = new QThread;
                   AudioStreamWorker* videoThread = new AudioStreamWorker(this);

                   // Give QThread ownership of Worker Object
                   videoThread->moveToThread(thread);

                   // Connect videoThread error signal to this errorHandler SLOT.
                   connect(videoThread, SIGNAL(error(QString)), this, SLOT(errorHandler(QString)));

                   // Connects the thread’s started() signal to the process() slot in the videoThread, causing it to start.
                   connect(thread, SIGNAL(started()), videoThread, SLOT(doWork()));
                   connect(videoThread, SIGNAL(finished()), thread, SLOT(quit()));
                   connect(videoThread, SIGNAL(finished()), videoThread, SLOT(deleteLater()));

                   // Make sure the thread object is deleted after execution has finished.
                   connect(thread, SIGNAL(finished()), thread, SLOT(deleteLater()));

                   thread->start();
               }

           }

       } //finished main loop

       int MyApp::video_thread() {
       //VideoState *is = (VideoState *)arg;
       AVPacket pkt1, *packet = &pkt1;
       int len1, frameFinished;

       double pts;
       pic = avcodec_alloc_frame();

       for(;;) {
           if(packet_queue_get(&videoq, packet, 1) < 0) {
               // means we quit getting packets
               break;
           }

           pts = 0;

           global_video_pkt_pts2 = packet->pts;
           // Decode video frame
           len1 =  avcodec_decode_video2(ccontext, pic, &frameFinished, packet);
           if(packet->dts == AV_NOPTS_VALUE
                   && pic->opaque && *(uint64_t*)pic->opaque != AV_NOPTS_VALUE) {
               pts = *(uint64_t *)pic->opaque;
           } else if(packet->dts != AV_NOPTS_VALUE) {
               pts = packet->dts;
           } else {
               pts = 0;
           }
           pts *= av_q2d(is->video_st->time_base);
           // Did we get a video frame?

                   if(frameFinished) {
                       pts = synchronize_video(is, pic, pts);
                       actualPts = pts;
                       refreshSlot();
                   }
                   av_free_packet(packet);
       }
       av_free(pic);
       return 0;
    }


    int MyApp::audio_thread() {
       //VideoState *is = (VideoState *)arg;
       AVPacket pkt1, *packet = &pkt1;
       int len1, frameFinished;
       ALuint source;
       ALenum format = 0;
       //   ALuint frequency;
       ALenum alError;
       ALint val2;
       ALuint buffers[NUM_BUFFERS];
       int dataSize;


       ALCcontext *aContext;
       ALCdevice *device;
       if (!alutInit(NULL, NULL)) {
           // printf(stderr, "init alut error\n");
       }
       device = alcOpenDevice(NULL);
       if (device == NULL) {
           // printf(stderr, "device error\n");
       }

       //Create a context
       aContext = alcCreateContext(device, NULL);
       alcMakeContextCurrent(aContext);
       if(!(aContext)) {
           printf("Could not create the OpenAL context!\n");
           return 0;
       }

       alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);









       //ALenum alError;
       if(alGetError() != AL_NO_ERROR) {
           cout << "could not create buffers";
           cout.flush();
           fflush( stdout );
           return 0;
       }
       alGenBuffers(NUM_BUFFERS, buffers);
       alGenSources(1, &source);
       if(alGetError() != AL_NO_ERROR) {
           cout << "after Could not create buffers or the source.\n";
           cout.flush(  );
           return 0;
       }

       int i;
       int indexOfPacket;
       double pts;
       //double pts;
       int n;


       for(i = 0; i < NUM_BUFFERS; i++)
       {
           if(packet_queue_get(&audioq, packet, 1) < 0) {
               // means we quit getting packets
               break;
           }
           cout << "streamindex=audio \n";
           cout.flush(  );
           //printf("before decode  audio\n");
           //fflush( stdout );
           // AVPacket *packet = new AVPacket();//malloc(sizeof(AVPacket*));
           AVFrame *decodedFrame = NULL;
           int gotFrame = 0;
           // AVFrame* decodedFrame;

           if(!decodedFrame) {
               if(!(decodedFrame = avcodec_alloc_frame())) {
                   cout << "Run out of memory, stop the streaming...\n";
                   fflush( stdout );
                   cout.flush();


                   return -2;
               }
           } else {
               avcodec_get_frame_defaults(decodedFrame);
           }

           int  len = avcodec_decode_audio4(audioc, decodedFrame, &gotFrame, packet);
           if(len < 0) {
               cout << "Error while decoding.\n";
               cout.flush(  );

               return -3;
           }
           if(len < 0) {
               /* if error, skip frame */
               is->audio_pkt_size = 0;
               //break;
           }
           is->audio_pkt_data += len;
           is->audio_pkt_size -= len;

           pts = is->audio_clock;
           // *pts_ptr = pts;
           n = 2 * is->audio_st->codec->channels;
           is->audio_clock += (double)packet->size/
                   (double)(n * is->audio_st->codec->sample_rate);
           if(gotFrame) {
               cout << "got audio frame.\n";
               cout.flush(  );
               // We have a buffer ready, send it
               dataSize = av_samples_get_buffer_size(NULL, audioc->channels,
                       decodedFrame->nb_samples, audioc->sample_fmt, 1);

               if(!format) {
                   if(audioc->sample_fmt == AV_SAMPLE_FMT_U8 ||
                           audioc->sample_fmt == AV_SAMPLE_FMT_U8P) {
                       if(audioc->channels == 1) {
                           format = AL_FORMAT_MONO8;
                       } else if(audioc->channels == 2) {
                           format = AL_FORMAT_STEREO8;
                       }
                   } else if(audioc->sample_fmt == AV_SAMPLE_FMT_S16 ||
                           audioc->sample_fmt == AV_SAMPLE_FMT_S16P) {
                       if(audioc->channels == 1) {
                           format = AL_FORMAT_MONO16;
                       } else if(audioc->channels == 2) {
                           format = AL_FORMAT_STEREO16;
                       }
                   }

                   if(!format) {
                       cout << "OpenAL can't open this format of sound.\n";
                       cout.flush(  );

                       return -4;
                   }
               }
               printf("albufferdata audio b4.\n");
               fflush( stdout );
               alBufferData(buffers[i], format, *decodedFrame->data, dataSize, decodedFrame->sample_rate);
               cout << "after albufferdata all buffers \n";
               cout.flush(  );
               av_free_packet(packet);
               //=av_free(packet);
               av_free(decodedFrame);

               if((alError = alGetError()) != AL_NO_ERROR) {
                   printf("Error while buffering.\n");

                   printAlError(alError);
                   return -6;
               }
           }
       }


       cout << "before quoe buffers \n";
       cout.flush();
       alSourceQueueBuffers(source, NUM_BUFFERS, buffers);
       cout << "before play.\n";
       cout.flush();
       alSourcePlay(source);
       cout << "after play.\n";
       cout.flush();
       if((alError = alGetError()) != AL_NO_ERROR) {
           cout << "error strating stream.\n";
           cout.flush();
           printAlError(alError);
           return 0;
       }


       // AVPacket *pkt = &is->audio_pkt;

       while(keepGoing)
       {
           while(packet_queue_get(&audioq, packet, 1)  >= 0) {
               // means we quit getting packets

               do {
                   alGetSourcei(source, AL_BUFFERS_PROCESSED, &val2);
                   usleep(SLEEP_BUFFERING);
               } while(val2 <= 0);
               if(alGetError() != AL_NO_ERROR)
               {
                   fprintf(stderr, "Error gettingsource :(\n");
                   return 1;
               }

               while(val2--)
               {



                   ALuint buffer;
                   alSourceUnqueueBuffers(source, 1, &buffer);
                   if(alGetError() != AL_NO_ERROR)
                   {
                       fprintf(stderr, "Error unqueue buffers :(\n");
                       //  return 1;
                   }
                   AVFrame *decodedFrame = NULL;
                   int gotFrame = 0;
                   // AVFrame* decodedFrame;

                   if(!decodedFrame) {
                       if(!(decodedFrame = avcodec_alloc_frame())) {
                           cout << "Run out of memory, stop the streaming...\n";
                           //fflush( stdout );
                           cout.flush();


                           return -2;
                       }
                   } else {
                       avcodec_get_frame_defaults(decodedFrame);
                   }

                   int  len = avcodec_decode_audio4(audioc, decodedFrame, &gotFrame, packet);
                   if(len < 0) {
                       cout << "Error while decoding.\n";
                       cout.flush(  );
                       is->audio_pkt_size = 0;
                       return -3;
                   }

                   is->audio_pkt_data += len;
                   is->audio_pkt_size -= len;
                   if(packet->size <= 0) {
                       /* No data yet, get more frames */
                       //continue;
                   }


                   if(gotFrame) {
                       pts = is->audio_clock;
                       len = synchronize_audio(is, (int16_t *)is->audio_buf,
                               packet->size, pts);
                       is->audio_buf_size = packet->size;
                       pts = is->audio_clock;
                       // *pts_ptr = pts;
                       n = 2 * is->audio_st->codec->channels;
                       is->audio_clock += (double)packet->size /
                               (double)(n * is->audio_st->codec->sample_rate);
                       if(packet->pts != AV_NOPTS_VALUE) {
                           is->audio_clock = av_q2d(is->audio_st->time_base)*packet->pts;
                       }
                       len = av_samples_get_buffer_size(NULL, audioc->channels,
                               decodedFrame->nb_samples, audioc->sample_fmt, 1);
                       alBufferData(buffer, format, *decodedFrame->data, len, decodedFrame->sample_rate);
                       if(alGetError() != AL_NO_ERROR)
                       {
                           fprintf(stderr, "Error buffering :(\n");
                           return 1;
                       }
                       alSourceQueueBuffers(source, 1, &buffer);
                       if(alGetError() != AL_NO_ERROR)
                       {
                           fprintf(stderr, "Error queueing buffers :(\n");
                           return 1;
                       }
                   }





               }

               alGetSourcei(source, AL_SOURCE_STATE, &val2);
               if(val2 != AL_PLAYING)
                   alSourcePlay(source);

           }


           //pic = avcodec_alloc_frame();
       }
       qDebug() << "end audiothread";
       return 1;
    }

    void MyApp::refreshSlot()
    {


       if(true)
       {

           printf("got frame %d, %d\n", pic->width, ccontext->width);
           fflush( stdout );

           sws_scale(img_convert_ctx, (const uint8_t **)pic->data, pic->linesize,
                   0, originalVideoHeight, &picrgb->data[0], &picrgb->linesize[0]);

           printf("rescaled frame %d, %d\n", newVideoWidth, newVideoHeight);
           fflush( stdout );
           //av_free_packet(packet);
           //av_init_packet(packet);

           qDebug() << "waking audio as video finished";
           ////mutex.unlock();
           //mutex2.lock();
           doingVideoFrame = false;
           //doingAudioFrame = false;
           ////mutex2.unlock();


           //mutex2.unlock();
           //w2->wakeAll();
           //w->wakeAll();
           qDebug() << "now woke audio";

           //pic = picrgb;
           uint8_t *srcy = picrgb->data[0];
           uint8_t *srcu = picrgb->data[1];
           uint8_t *srcv = picrgb->data[2];
           printf("got src yuv frame %d\n", &srcy);
           fflush( stdout );
           unsigned char *ptr = NULL;
           screen_get_buffer_property_pv(mScreenPixelBuffer, SCREEN_PROPERTY_POINTER, (void**) &ptr);
           unsigned char *y = ptr;
           unsigned char *u = y + (newVideoHeight * mStride) ;
           unsigned char *v = u + (newVideoHeight * mStride) / 4;
           int i = 0;
           printf("got buffer  picrgbwidth= %d \n", newVideoWidth);
           fflush( stdout );
           for ( i = 0; i < newVideoHeight; i++)
           {
               int doff = i * mStride;
               int soff = i * picrgb->linesize[0];
               memcpy(&y[doff], &srcy[soff], newVideoWidth);
           }

           for ( i = 0; i < newVideoHeight / 2; i++)
           {
               int doff = i * mStride / 2;
               int soff = i * picrgb->linesize[1];
               memcpy(&u[doff], &srcu[soff], newVideoWidth / 2);
           }

           for ( i = 0; i < newVideoHeight / 2; i++)
           {
               int doff = i * mStride / 2;
               int soff = i * picrgb->linesize[2];
               memcpy(&v[doff], &srcv[soff], newVideoWidth / 2);
           }
           printf("before posttoscreen \n");
           fflush( stdout );

           video_refresh_timer();
           qDebug() << "end refreshslot";

       }
       else
       {

       }





    }

    void  MyApp::refreshNeededSlot2()
       {
           printf("blitting to buffer");
           fflush(stdout);

           screen_buffer_t screen_buffer;
           screen_get_window_property_pv(mScreenWindow, SCREEN_PROPERTY_RENDER_BUFFERS, (void**) &screen_buffer);
           int attribs[] = { SCREEN_BLIT_SOURCE_WIDTH, newVideoWidth, SCREEN_BLIT_SOURCE_HEIGHT, newVideoHeight, SCREEN_BLIT_END };
           int res2 = screen_blit(mScreenCtx, screen_buffer, mScreenPixelBuffer, attribs);
           printf("dirty rectangles");
           fflush(stdout);
           int dirty_rects[] = { 0, 0, newVideoWidth, newVideoHeight };
           screen_post_window(mScreenWindow, screen_buffer, 1, dirty_rects, 0);
           printf("done screneposdtwindow");
           fflush(stdout);

       }

    void MyApp::video_refresh_timer() {
       testDelay = 0;
       //  VideoState *is = ( VideoState* )userdata;
       VideoPicture *vp;
       //double pts = 0    ;
       double actual_delay, delay, sync_threshold, ref_clock, diff;

       if(is->video_st) {
           if(false)////is->pictq_size == 0)
           {
               testDelay = 1;
               schedule_refresh(is, 1);
           } else {
               // vp = &is->pictq[is->pictq_rindex];

               delay = actualPts - is->frame_last_pts; /* the pts from last time */
               if(delay <= 0 || delay >= 1.0) {
                   /* if incorrect delay, use previous one */
                   delay = is->frame_last_delay;
               }
               /* save for next time */
               is->frame_last_delay = delay;
               is->frame_last_pts = actualPts;

               is->video_current_pts = actualPts;
               is->video_current_pts_time = av_gettime();
               /* update delay to sync to audio */
               ref_clock = get_audio_clock(is);
               diff = actualPts - ref_clock;

               /* Skip or repeat the frame. Take delay into account
        FFPlay still doesn't "know if this is the best guess." */
               sync_threshold = (delay > AV_SYNC_THRESHOLD) ? delay : AV_SYNC_THRESHOLD;
               if(fabs(diff) < AV_NOSYNC_THRESHOLD) {
                   if(diff <= -sync_threshold) {
                       delay = 0;
                   } else if(diff >= sync_threshold) {
                       delay = 2 * delay;
                   }
               }
               is->frame_timer += delay;
               /* computer the REAL delay */
               actual_delay = is->frame_timer - (av_gettime() / 1000000.0);
               if(actual_delay < 0.010) {
                   /* Really it should skip the picture instead */
                   actual_delay = 0.010;
               }
               testDelay = (int)(actual_delay * 1000 + 0.5);
               schedule_refresh(is, (int)(actual_delay * 1000 + 0.5));
               /* show the picture! */
               //video_display(is);


               // SDL_CondSignal(is->pictq_cond);
               // SDL_UnlockMutex(is->pictq_mutex);
           }
       } else {
           testDelay = 100;
           schedule_refresh(is, 100);

       }
    }

    void MyApp::schedule_refresh(VideoState *is, int delay) {
       qDebug() << "start schedule refresh timer" << delay;
       typeOfEvent = FF_REFRESH_EVENT2;
       w->wakeAll();
       //  SDL_AddTimer(delay,


    }

     I am currently waiting on data in a loop in the following way:

    QMutex mutex;
       mutex.lock();
       while(keepGoing)
       {



           qDebug() << "MAINTHREAD" << testDelay;


           w->wait(&mutex);
           mutex.unlock();
           qDebug() << "MAINTHREAD past wait";

           if(!keepGoing)
           {
               break;
           }
           if(testDelay > 0 && typeOfEvent == FF_REFRESH_EVENT2)
           {
               usleep(testDelay);
               refreshNeededSlot2();
           }
           else   if(testDelay > 0 && typeOfEvent == FF_QUIT_EVENT2)
           {
               keepGoing = false;
               exit(0);
               break;
               // usleep(testDelay);
               // refreshNeededSlot2();
           }
           qDebug() << "MAINTHREADend";
           mutex.lock();

       }
       mutex.unlock();

     Please let me know if I need to provide any more relevant code. I'm sorry my code is untidy - I'm still learning C++ and, as previously mentioned, have been modifying this code for over a week now.

     I've just added a sample of the output I see from the print-outs I do to the console - I can't get my head around it (it's almost too complicated for my level of expertise), but when you see the frames playing and hear the audio, it's very difficult to give up, especially when it took me a couple of weeks to get to this stage.

     Please give me a hand if you spot the problem.

    MAINTHREAD past wait
    pts after syncvideo= 1073394046
    got frame 640, 640
    start video_refresh_timer
    actualpts = 1.66833
    frame lastpts = 1.63497
    start schedule refresh timer need to delay for 123

    pts after syncvideo= 1073429033
    got frame 640, 640
    MAINTHREAD loop delay before refresh = 123
    start video_refresh_timer
    actualpts = 1.7017
    frame lastpts = 1.66833
    start schedule refresh timer need to delay for 115

    MAINTHREAD past wait
    pts after syncvideo= 1073464021
    got frame 640, 640
    start video_refresh_timer
    actualpts = 1.73507
    frame lastpts = 1.7017
    start schedule refresh timer need to delay for 140

    MAINTHREAD loop delay before refresh = 140
    pts after syncvideo= 1073499008
    got frame 640, 640
    start video_refresh_timer
    actualpts = 1.76843
    frame lastpts = 1.73507
    start schedule refresh timer need to delay for 163

    MAINTHREAD past wait
    pts after syncvideo= 1073533996
    got frame 640, 640
    start video_refresh_timer
    actualpts = 1.8018
    frame lastpts = 1.76843
    start schedule refresh timer need to delay for 188

    MAINTHREAD loop delay before refresh = 188
    pts after syncvideo= 1073568983
    got frame 640, 640
    start video_refresh_timer
    actualpts = 1.83517
    frame lastpts = 1.8018
    start schedule refresh timer need to delay for 246

    MAINTHREAD past wait
    pts after syncvideo= 1073603971
    got frame 640, 640
    start video_refresh_timer
    actualpts = 1.86853
    frame lastpts = 1.83517
    start schedule refresh timer need to delay for 299

    MAINTHREAD loop delay before refresh = 299
    pts after syncvideo= 1073638958
    got frame 640, 640
    start video_refresh_timer
    actualpts = 1.9019
    frame lastpts = 1.86853
    start schedule refresh timer need to delay for 358

    MAINTHREAD past wait
    pts after syncvideo= 1073673946
    got frame 640, 640
    start video_refresh_timer
    actualpts = 1.93527
    frame lastpts = 1.9019
    start schedule refresh timer need to delay for 416

    MAINTHREAD loop delay before refresh = 416
    pts after syncvideo= 1073708933
    got frame 640, 640
    start video_refresh_timer
    actualpts = 1.96863
    frame lastpts = 1.93527
    start schedule refresh timer need to delay for 474

    MAINTHREAD past wait
    pts after syncvideo= 1073742872
    got frame 640, 640
    MAINTHREAD loop delay before refresh = 474
    start video_refresh_timer
    actualpts = 2.002
    frame lastpts = 1.96863
    start schedule refresh timer need to delay for 518

    MAINTHREAD past wait
    pts after syncvideo= 1073760366
    got frame 640, 640
    start video_refresh_timer
    actualpts = 2.03537
    frame lastpts = 2.002
    start schedule refresh timer need to delay for 575

  • Alias Artifacts

26 April 2013, by Multimedia Mike — General

    Throughout my own life, I have often observed that my own sense of nostalgia has a window that stretches about 10-15 years past from the current moment. Earlier this year, I discovered the show “Alias” and watched through the entire series thanks to Amazon Prime Instant Video (to be fair, I sort of skimmed the fifth and final season which I found to be horribly dull, or maybe franchise fatigue had set in). The show originally aired from 2001-2006 so I found that it fit well within the aforementioned nostalgia window.


    Alias (TV Series) logo

But what was it, exactly, about the show that triggered nostalgia? The computers, of course! The show revolved around spies and espionage, and cutting-edge technology necessarily played a role. The production designer for the series must have decided that Unix/Linux == awesome hacking, and so many screenshots featured Linux.

Since this is still nominally a multimedia blog, I’ll start off the screenshot recon with an old multimedia player. Here is a vintage Mac OS desktop running an ancient web browser (probably Netscape) that’s playing a full-window video (probably QuickTime embedded directly into the browser).


    Old Mac OS with old browser


    Let’s jump right into the Linux side of things. This screenshot makes me particularly sentimental since this is exactly what a stock Linux/KDE desktop looked like circa 2001-2003 and is more or less what I would have worked with on my home computer at the time :


    Alias: Linux/KDE desktop


Studying that screenshot, we see that the user logs in as root, even to the desktop environment. Poor security practice; I would expect better from a bunch of spooks.

    Echelon
    Look at the terminal output in the above screenshot– it’s building a program named Echelon, an omniscient spy tool inspired by a real-world surveillance network of the same name. In the show, Echelon is used to supply plot-convenient intelligence. At one point, some antagonists get their hands on the Echelon source code and seek to compile it. When they do, they will have access to the vast surveillance network. If you know anything about how computers work, don’t think about that too hard.

    Anyway, it’s interesting to note that Echelon is a properly autotool’d program– when the bad guys finally got Echelon, installation was just a ‘make install’ command away. The compilation was very user-friendly, though, as it would pop up a nice dialog box showing build progress :


    Alias: Compiling Echelon


    Examining the build lines in both that screenshot and the following lines, we can see that Echelon cares about files such as common/db_err.c and bt_curadj.c :


    Alias: Echelon used Berkeley DB


A little googling reveals that these files both belong to the Berkeley DB library. That works; I can imagine a program like this leveraging various database packages.

    Computer Languages
The Echelon source code stuff comes from episode 2.11: “A Higher Echelon”. While one faction had gotten a hold of the actual Echelon source code, a rival faction had abducted the show’s resident uber-nerd and, learning that they hadn’t actually received the Echelon code, forced him to re-write Echelon from scratch. Which he then proceeds to do…


    Alias: Rewriting Echelon


The code he’s examining there appears to be C code that has something to do with joystick programming (JS_X_0, JS_Y_1, etc.). An eagle-eyed IMDb user contributed the trivia that he is looking at the file /usr/include/linux/joystick.h.

Getting back to the plot, how could the bad guys possibly expect him to re-write a hugely complex piece of software from scratch? You might think this is the height of absurdity for a computer-oriented story. You’ll be pleased to know that the writers agreed with that assessment since, when the program was actually executed, it claimed to be Echelon but then broke into a game of Pong (or some similarly simple game). Suddenly, it makes perfect sense why the guy was looking at the joystick header file.

    This is the first bit of computer-oriented fun that I captured when I was watching the series :


    Alias: Java on the mainframe


    This printout purports to be a “mainframe log summary”. After some plot-advancing text about a security issue, it proceeds to dump out some Java source code.

    SSH
Secure Shell (SSH) frequently showed up. Here’s a screenshot in which a verbose ‘ssh -v’ connection has just been closed, while a telnet command has apparently just been launched (evidenced by “Escape character is ‘^]’.”):


    Alias: SSH/telnet


This is followed by some good old Hollywood Hacking in which a free-form database command is entered through any available command line interface:


    Alias: Intuitive command line interface


I don’t remember the episode details, but I’m pretty sure the output made perfect sense to the character typing the command. Here’s another screenshot where the SSH client pops up an extra-large GUI dialog element to notify the user that it’s currently negotiating with the host:


    Alias: SSH negotiation dialog


    Now that I look at that screenshot a little more closely, it appears to be a Win95/98 program. I wonder if there was an SSH client that actually popped up that gaudy dialog.

There’s a lot of gibberish in this screenshot and I wish I had written down some details about what it represented according to the episode’s plot:


    Alias: Public key


    It almost sounds like they were trying to break into a network computer. Analyzing MD5 structure… public key synthesized. To me, the funniest feature is the 7-digit public key. I’m a bit rusty on the math of the RSA cryptosystem, but intuitively, it seems that the public and private keys need to be of roughly equal lengths. I.e., the private key in this scenario would also be 7 digits long.

    Gadgets
Various devices and gadgets were seen at various junctures in the show. Here’s a tablet computer from back when tablet computers seemed like fantastical (albeit stylus-requiring) devices– the Fujitsu Stylistic 2300:


    Alias: Fujitsu Stylistic 2300 tablet


    Here’s a videophone from an episode that aired in 2005. The specific model is the Packet8 DV326 (MSRP of US$500). As you can see from the screenshot, it can do 384 kbps both down and up.


    Alias: Packet8 DV326


    I really regret not writing down the episode details surrounding this gadget. I just know that it was critical that the good guys get it and keep from falling into the hands of the bad guys.


    Alias: Gadget using Samsung and Lexar chips


    As you can see, the (presumably) deadly device contains a Samsung chip and a Lexar chip. I have to wonder what device the production crew salvaged this from (probably just an old cell phone).

    Other Programs

The GIMP photo editor makes an appearance while scrubbing security camera footage, and serves as the magical Enhance Button (at least they slung around the term “gamma”):


    Alias: GIMP editor


I have no idea what MacOS-based audio editing program this is. Any ideas?


    Alias: Apple MacOS-based audio editor


    FTP shows up in episode 2.12, “The Getaway”. It’s described as a “secure channel” for communication, which is quite humorous to anyone versed in internet technology.


    Alias: FTP secure channel


  • Neutral net or neutered

4 June 2013, by Mans — Law and liberty

In recent weeks, a number of high-profile events, in the UK and elsewhere, have been quickly seized upon to promote a variety of schemes for monitoring or filtering Internet access. These proposals, despite their good intentions of protecting children or fighting terrorism, pose a serious threat to fundamental liberties. Although at a glance the ideas may seem like a reasonable price to pay for the prevention of some truly hideous crimes, there is more than first meets the eye. Internet regulation in any form whatsoever is the thin end of a wedge at whose other end we find severely restricted freedom of expression of the kind usually associated with oppressive dictatorships. Where the Internet was once a novelty, it now forms an integrated part of modern society; regulating the Internet means regulating our lives.

    Terrorism

    Following the brutal murder of British soldier Lee Rigby in Woolwich, attempts were made in the UK to revive the controversial Communications Data Bill, also dubbed the snooper’s charter. The bill would give police and security services unfettered access to details (excluding content) of all digital communication in the UK without needing so much as a warrant.

    The powers afforded by the snooper’s charter would, the argument goes, enable police to prevent crimes such as the one witnessed in Woolwich. True or not, the proposal would, if implemented, also bring about infrastructure for snooping on anyone at any time for any purpose. Once available, the temptation may become strong to extend, little by little, the legal use of these abilities to cover ever more everyday activities, all in the name of crime prevention, of course.

    In the emotional aftermath of a gruesome act, anything with the promise of preventing it happening again may seem like a good idea. At times like these it is important, more than ever, to remain rational and carefully consider all the potential consequences of legislation, not only the intended ones.

    Hate speech

    Hand in hand with terrorism goes hate speech, preachings designed to inspire violence against people of some singled-out nation, race, or other group. Naturally, hate speech is often to be found on the Internet, where it can reach large audiences while the author remains relatively protected. Naturally, we would prefer for it not to exist.

    To fulfil the utopian desire of a clean Internet, some advocate mandatory filtering by Internet service providers and search engines to remove this unwanted content. Exactly how such censoring might be implemented is however rarely dwelt upon, much less the consequences inadvertent blocking of innocent material might have.

    Pornography

    Another common target of calls for filtering is pornography. While few object to the blocking of child pornography, at least in principle, the debate runs hotter when it comes to the legal variety. Pornography, it is claimed, promotes violence towards women and is immoral or generally offensive. As such it ought to be blocked in the name of the greater good.

The conviction last week of paedophile Mark Bridger for the abduction and murder of five-year-old April Jones renewed the debate about filtering of pornography in the UK; his laptop was found to contain child pornography. John Carr of the UK government’s Council on Child Internet Safety went so far as suggesting a default blocking of all pornography, access being granted to an Internet user only once he or she had registered with some unspecified entity. Registering people wishing only to access perfectly legal material is not something we do in a democracy.

    The reality is that Google and other major search engines already remove illegal images from search results and report them to the appropriate authorities. In the UK, the Internet Watch Foundation, a non-government organisation, maintains a blacklist of what it deems ‘potentially criminal’ content, and many Internet service providers block access based on this list.

While well-intentioned, the IWF and its blacklist should raise some concerns. Firstly, a vigilante organisation operating in secret and with no government oversight acting as the nation’s morality police has serious implications for freedom of speech. Secondly, the blocks imposed are sometimes more far-reaching than intended. In one incident, an attempt to block the cover image of the Scorpions album Virgin Killer hosted by Wikipedia (in itself a dubious decision) rendered the entire related article inaccessible and interfered with editing.

    Net neutrality

    Content filtering, or more precisely the lack thereof, is central to the concept of net neutrality. Usually discussed in the context of Internet service providers, this is the principle that the user should have equal, unfiltered access to all content. As a consequence, ISPs should not be held responsible for the content they deliver. Compare this to how the postal system works.

    The current debate shows that the principle of net neutrality is important not only at the ISP level, but should also include providers of essential services on the Internet. This means search engines should not be responsible for or be required to filter results, email hosts should not be required to scan users’ messages, and so on. No mandatory censoring can be effective without infringing the essential liberties of freedom of speech and press.

    Social networks operate in a less well-defined space. They are clearly not part of the essential Internet infrastructure, and they require that users sign up and agree to their terms and conditions. Because of this, they can include restrictions that would be unacceptable for the Internet as a whole. At the same time, social networks are growing in importance as means of communication between people, and as such they have a moral obligation to act fairly and apply their rules in a transparent manner.

Facebook was recently under fire, accused of not taking sufficient measures to curb ‘hate speech,’ particularly against women. Eventually they pledged to review their policies and methods, and reducing the proliferation of such content will surely make the web a better place. Nevertheless, one must ask how Facebook (or another social network) might react to similar pressure from, say, a religious group demanding removal of ‘blasphemous’ content. What about demands from a foreign government? Only yesterday, the Turkish prime minister Erdogan branded Twitter ‘a plague’ in a TV interview.

    Rather than impose upon Internet companies the burden of law enforcement, we should provide them the latitude to set their own policies as well as the legal confidence to stand firm in the face of unreasonable demands. The usual market forces will promote those acting responsibly.

    Further reading