Media (91)

Other articles (41)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Possibility of farm deployment

    12 April 2011

    MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
    This makes it possible, for example: to share the set-up costs between several projects or individuals; to deploy a multitude of unique sites quickly; and to avoid having to dump every creation into a digital catch-all, as is the case with the large general-public platforms scattered across the (...)
    A generic sketch of this shared-core idea follows this list.

  • Adding user-specific information and other changes to author-related behaviour

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also allows certain user-related behaviours to be modified (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins Champs Extras 2 and Interface pour champs extras.
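
    To illustrate the "one core, many sites" idea from the farm-deployment article above, here is a generic, hypothetical sketch (not MediaSPIP's actual mechanism; every name in it is invented): a single shared code base picks a per-site data directory from the requested host name.

    <?php
    // Hypothetical front controller: one shared "core", one data directory per site.
    // SITES_ROOT, core.php and the per-site layout are illustrative only.
    define('SITES_ROOT', __DIR__ . '/sites');

    $host    = preg_replace('/[^a-z0-9.\-]/i', '', $_SERVER['HTTP_HOST'] ?? 'default');
    $siteDir = SITES_ROOT . '/' . $host;

    if (!is_dir($siteDir)) {
        http_response_code(404);
        exit("Unknown site: $host");
    }

    // Each site keeps its own configuration, uploads and cache, but shares the code.
    define('SITE_CONFIG',  $siteDir . '/config.php');
    define('SITE_UPLOADS', $siteDir . '/uploads');
    define('SITE_CACHE',   $siteDir . '/cache');

    require SITE_CONFIG;           // per-site settings (database credentials, options, ...)
    require __DIR__ . '/core.php'; // the shared application core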

On other sites (5724)

  • How to fix av_interleaved_write_frame() broken pipe error in php

    31 March, by Adekunle Adeyeye

    I have an issue using ffmpeg to stream audio and pipe it to Google Cloud Speech-to-Text in PHP.

    It returns the output below.
    I have tried delaying parts of the script, but that did not solve it.
    I have also checked similar questions; however, they are mostly in Python and none of their solutions actually work here.

      built with gcc 8 (GCC)
  cpudetect
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Input #0, mp3, from 'https://npr-ice.streamguys1.com/live.mp3':
  Metadata:
    icy-br          : 96
    icy-description : NPR Program Stream
    icy-genre       : News and Talk
    icy-name        : NPR Program Stream
    icy-pub         : 0
    StreamTitle     :
  Duration: N/A, start: 0.000000, bitrate: 96 kb/s
    Stream #0:0: Audio: mp3, 32000 Hz, stereo, fltp, 96 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (mp3 (mp3float) -> pcm_s16le (native))
Press [q] to stop, [?] for help
Output #0, s16le, to 'pipe:':
  Metadata:
    icy-br          : 96
    icy-description : NPR Program Stream
    icy-genre       : News and Talk
    icy-name        : NPR Program Stream
    icy-pub         : 0
    StreamTitle     :
    encoder         : Lavf58.29.100
    Stream #0:0: Audio: pcm_s16le, 16000 Hz, mono, s16, 256 kb/s
    Metadata:
      encoder         : Lavc58.54.100 pcm_s16le
**av_interleaved_write_frame(): Broken pipe** 256.0kbits/s speed=1.02x
**Error writing trailer of pipe:: Broken pipe**
size=      54kB time=00:00:01.76 bitrate= 250.8kbits/s speed=0.465x
video:0kB audio:55kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Conversion failed!

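    A broken pipe from av_interleaved_write_frame() when writing to pipe: means that whatever was reading ffmpeg's stdout stopped reading or exited. As a sanity check, a minimal standalone reader such as the sketch below (same command, nothing else) should run without that error if the command itself is fine; if it does, the pipe is being closed on the PHP side of the full script rather than by ffmpeg.

    <?php
    // Standalone sanity check: keep draining ffmpeg's stdout so the pipe is never
    // closed on the reading side.
    $url = "https://npr-ice.streamguys1.com/live.mp3";
    $cmd = "ffmpeg -re -i " . escapeshellarg($url)
         . " -acodec pcm_s16le -ac 1 -ar 16000 -f s16le -";

    $fp = popen($cmd, "r");
    if ($fp === false) {
        die("Failed to start ffmpeg\n");
    }

    $total = 0;
    while (!feof($fp) && $total < 1000000) {   // stop after ~1 MB of raw PCM
        $chunk = fread($fp, 4096);             // keep reading so ffmpeg can keep writing
        if ($chunk === false) {
            break;
        }
        $total += strlen($chunk);
    }
    pclose($fp);
    echo "Read $total bytes of raw PCM\n";
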
    This is my PHP code:

    require_once 'vendor/autoload.php';

    // Class imports (assumption: the Google Cloud Speech-to-Text V2 PHP library;
    // in newer releases the client may instead live at Google\Cloud\Speech\V2\Client\SpeechClient)
    use Google\Cloud\Speech\V2\SpeechClient;
    use Google\Cloud\Speech\V2\RecognitionConfig;
    use Google\Cloud\Speech\V2\AutoDetectDecodingConfig;
    use Google\Cloud\Speech\V2\StreamingRecognitionConfig;
    use Google\Cloud\Speech\V2\StreamingRecognizeRequest;

    $projectId = "xxx-45512";
    putenv('GOOGLE_APPLICATION_CREDENTIALS=' . __DIR__ . '/xxx-45512-be3eb805f1d7.json');
    
    // Database connection
    $pdo = new PDO('mysql:host=localhost;dbname=', '', '');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    
    $url = "https://npr-ice.streamguys1.com/live.mp3";
    
    $ffmpegCmd = "ffmpeg -re -i $url -acodec pcm_s16le -ac 1 -ar 16000 -f s16le -";
    
    $fp = popen($ffmpegCmd, "r");
    if (!$fp) {
        die("Failed to open FFmpeg stream.");
    }
    sleep(5);

    try {
        $client = new SpeechClient(['transport' => 'grpc', 'credentials' => json_decode(file_get_contents(getenv('GOOGLE_APPLICATION_CREDENTIALS')), true)]);
    } catch (Exception $e) {
        echo 'Error: ' . $e->getMessage(); 
        exit;
    }
    
    $recognitionConfig = new RecognitionConfig([
        'auto_decoding_config' => new AutoDetectDecodingConfig(),
        'language_codes' => ['en-US'],
        'model' => 'long',
    ]);
    
    $streamingConfig = new StreamingRecognitionConfig([
        'config' => $recognitionConfig,
    ]);
    
    $configRequest = new StreamingRecognizeRequest([
        'recognizer' => "projects/$projectId/locations/global/recognizers/_",
        'streaming_config' => $streamingConfig,
    ]);
    
    
    function streamAudio($fp)
    {
        while (!feof($fp)) {
            yield fread($fp, 4096);
        }
    }
    
    $responses = $client->streamingRecognize([
    'requests' => (function () use ($configRequest, $fp) {
            yield $configRequest; // Send initial config
            foreach (streamAudio($fp) as $audioChunk) {
                yield new StreamingRecognizeRequest(['audio' => $audioChunk]);
            }
        })()]
    );
    
    // $responses = $speechClient->streamingRecognize();
    // $responses->writeAll([$request,]);
    
    foreach ($responses as $response) {
        foreach ($response->getResults() as $result) {
            $transcript = $result->getAlternatives()[0]->getTranscript();
            // echo "Transcript: $transcript\n";
    
            // Insert into the database
            $stmt = $pdo->prepare("INSERT INTO transcriptions (transcript) VALUES (:transcript)");
            $stmt->execute(['transcript' => $transcript]);
        }
    }
    
    
    pclose($fp);
    $client->close();

    I'm not sure what the issue is at this time.

    UPDATE

    I've done some more debugging; I have gotten the error to clear, and the stream actually starts.
    However, I expect the audio to be transcribed and my database to be updated; instead, I get this error when I close the stream:

    error after closing stream

    This is my updated code:

    $handle = popen($ffmpegCommand, "r");

    try {
        $client = new SpeechClient(['transport' => 'grpc', 'credentials' => json_decode(file_get_contents(getenv('GOOGLE_APPLICATION_CREDENTIALS')), true)]);
    } catch (Exception $e) {
        echo 'Error: ' . $e->getMessage(); 
        exit;
    }
    
    try {
    $recognitionConfig = (new RecognitionConfig())
        ->setAutoDecodingConfig(new AutoDetectDecodingConfig())
        ->setLanguageCodes(['en-US', 'en-GB'])
        ->setModel('long');
    } catch (Exception $e) {
        echo 'Error: ' . $e->getMessage(); 
        exit;
    }
    
    try {
        $streamConfig = (new StreamingRecognitionConfig())
        ->setConfig($recognitionConfig);
    } catch (Exception $e) {
        echo 'Error: ' . $e->getMessage();
        exit;
    }
    try {
        $configRequest = (new StreamingRecognizeRequest())
        ->setRecognizer("projects/$projectId/locations/global/recognizers/_")
        ->setStreamingConfig($streamConfig);
    } catch (Exception $e) {
        echo 'Error: ' . $e->getMessage(); 
        exit;
    }
    
    $stream = $client->streamingRecognize();
    $stream->write($configRequest);
    
    mysqli_query($conn, "INSERT INTO transcriptions (transcript) VALUES ('bef')");
    
    while (!feof($handle)) {
        $chunk = fread($handle, 25600);
        // printf('chunk: ' . $chunk);
        if ($chunk !== false) {
            try {
                $request = (new StreamingRecognizeRequest())
                        ->setAudio($chunk);
                    $stream->write($request);
            } catch (Exception $e) {
                printf('Errorc: ' . $e->getMessage());
            }
        }
    }
    
    
    $insr = json_encode($stream);
    mysqli_query($conn, "INSERT INTO transcriptions (transcript) VALUES ('$insr')");
    
    foreach ($stream->read() as $response) {
        mysqli_query($conn, "INSERT INTO transcriptions (transcript) VALUES ('loop1')");
        foreach ($response->getResults() as $result) {
            mysqli_query($conn, "INSERT INTO transcriptions (transcript) VALUES ('loop2')");
            foreach ($result->getAlternatives() as $alternative) {
                $trans = $alternative->getTranscript();
                mysqli_query($conn, "INSERT INTO transcriptions (transcript) VALUES ('$trans')");
            }
        }
    }
    
    pclose($handle);
    $stream->close();
    $client->close();
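
    For comparison, the usual bidirectional-streaming lifecycle is: send the configuration request first, send the audio requests, signal the end of writes, then read the responses. The sketch below reuses $client, $configRequest and $handle from the snippet above and assumes the object returned by streamingRecognize() is the gax BidiStream helper with write(), closeWrite() and read(); it is an illustration of the lifecycle, not a verified fix.

    // Sketch: stream a bounded amount of audio, close the write side, then
    // drain the recognition results.
    $stream = $client->streamingRecognize();
    $stream->write($configRequest);                  // the config request goes first

    $sent  = 0;
    $limit = 30 * 16000 * 2;                         // ~30 s of 16 kHz, 16-bit mono PCM
    while (!feof($handle) && $sent < $limit) {
        $chunk = fread($handle, 25600);
        if ($chunk === false || $chunk === '') {
            continue;
        }
        $stream->write((new StreamingRecognizeRequest())->setAudio($chunk));
        $sent += strlen($chunk);
    }

    $stream->closeWrite();                           // no more audio will be sent

    while (($response = $stream->read()) !== null) { // drain the responses
        foreach ($response->getResults() as $result) {
            foreach ($result->getAlternatives() as $alternative) {
                echo $alternative->getTranscript(), "\n";
            }
        }
    }

    pclose($handle);
    $client->close();

    The audio loop is capped on purpose: with a never-ending live stream, the code would otherwise never reach closeWrite() and therefore never read any results.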

  • Streaming client over TCP and RTSP through Wi-Fi or LAN in Android

    6 January 2015, by Gowtham

    I am struggling to develop a streaming client for DVR cameras. I got it working with VLC Media Player over the RTSP protocol (using standard Wi-Fi models such as Netgear), but the same code does not work with other Wi-Fi modems. I am now working with the FFmpeg framework to implement the streaming client in Android through the JNI API, and I have no clear idea of how to implement the JNI side.

    The network camera works with the IP Cam Viewer app.

    Code below:

    /*****************************************************/
    /* functional call */
    /*****************************************************/

    jboolean Java_FFmpeg_allocateBuffer( JNIEnv* env, jobject thiz )
    {

       // Allocate an AVFrame structure
       pFrameRGB=avcodec_alloc_frame();
       if(pFrameRGB==NULL)
           return 0;
    sprintf(debugMsg, "%d %d", screenWidth, screenHeight);
    INFO(debugMsg);
       // Determine required buffer size and allocate buffer
       numBytes=avpicture_get_size(dstFmt, screenWidth, screenHeight);
    /*
       numBytes=avpicture_get_size(dstFmt, pCodecCtx->width,
                     pCodecCtx->height);
    */
       buffer=(uint8_t *)av_malloc(numBytes * sizeof(uint8_t));

       // Assign appropriate parts of buffer to image planes in pFrameRGB
       // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
       // of AVPicture
       avpicture_fill((AVPicture *)pFrameRGB, buffer, dstFmt, screenWidth, screenHeight);

       return 1;
    }


    /* for each decoded frame */
    jbyteArray Java_FFmpeg_getNextDecodedFrame( JNIEnv* env, jobject thiz )
    {


    av_free_packet(&packet);

    while(av_read_frame(pFormatCtx, &packet)>=0) {

       if(packet.stream_index==videoStream) {

           avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

           if(frameFinished) {    

           img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt, screenWidth, screenHeight, dstFmt, SWS_BICUBIC, NULL, NULL, NULL);

    /*
    img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height, dstFmt, SWS_BICUBIC, NULL, NULL, NULL);
    */

           sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize,
        0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);

    ++frameCount;

           /* uint8_t == unsigned 8 bits == jboolean */
           jbyteArray nativePixels = (*env)->NewByteArray(env, numBytes);
           (*env)->SetByteArrayRegion(env, nativePixels, 0, numBytes, buffer);
           return nativePixels;
           }

       }

       av_free_packet(&packet);
    }

    return NULL;
    }

    /*****************************************************/
    /* / functional call */
    /*****************************************************/


    jstring
    Java_FFmpeg_play( JNIEnv* env, jobject thiz, jstring jfilePath )
    {
       INFO("--- Play");
    char* filePath = (char *)(*env)->GetStringUTFChars(env, jfilePath, NULL);
    RE(filePath);

    /*****************************************************/

     AVFormatContext *pFormatCtx;
     int             i, videoStream;
     AVCodecContext  *pCodecCtx;
     AVCodec         *pCodec;
     AVFrame         *pFrame;
     AVPacket        packet;
     int             frameFinished;
     float           aspect_ratio;
     struct SwsContext *img_convert_ctx;

    INFO(filePath);

    /* FFmpeg */

     av_register_all();

     if(av_open_input_file(&pFormatCtx, filePath, NULL, 0, NULL)!=0)
       RE("failed av_open_input_file ");

     if(av_find_stream_info(pFormatCtx)<0)
           RE("failed av_find_stream_info");

     videoStream=-1;
     for(i=0; i < pFormatCtx->nb_streams; i++)
       if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
         videoStream=i;
         break;
       }
     if(videoStream==-1)
           RE("failed videostream == -1");

     pCodecCtx=pFormatCtx->streams[videoStream]->codec;

     pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
     if(pCodec==NULL) {
       RE("Unsupported codec!");
     }

     if(avcodec_open(pCodecCtx, pCodec)<0)
       RE("failed codec_open");

     pFrame=avcodec_alloc_frame();

    /* /FFmpeg */

    INFO("codec name:");
    INFO(pCodec->name);
    INFO("Getting into stream decode:");

    /* video stream */

     i=0;
     while(av_read_frame(pFormatCtx, &packet)>=0) {

       if(packet.stream_index==videoStream) {
         avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
         if(frameFinished) {
    ++i;
    INFO("frame finished");

       AVPicture pict;
    /*
       img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
    pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
    PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);

       sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize,
    0, pCodecCtx->height, pict.data, pict.linesize);
    */
         }
       }
       av_free_packet(&packet);
     }

    /* /video stream */

     av_free(pFrame);

     avcodec_close(pCodecCtx);

     av_close_input_file(pFormatCtx);

     RE("end of main");
    }

    I can't get the frames from the network camera.

    Could someone give me some ideas for implementing a live-streaming client for DVR cameras in Android?

  • Revision 1fc70e47cd: Adds code for corner detection and ransac

    7 February 2015, by Deb Mukherjee

    Changed Paths:
     Add /vp9/encoder/vp9_corner_detect.c
     Add /vp9/encoder/vp9_corner_detect.h
     Add /vp9/encoder/vp9_corner_match.c
     Add /vp9/encoder/vp9_corner_match.h
     Add /vp9/encoder/vp9_global_motion.c
     Add /vp9/encoder/vp9_global_motion.h
     Add /vp9/encoder/vp9_ransac.c
     Add /vp9/encoder/vp9_ransac.h
     Modify /vp9/vp9cx.mk

    Adds code for corner detection and ransac

    This code is to start experiments with global motion models.

    The corner detection can be either fast_9 or Harris.
    Corner matching is currently based on normalized correlation.
    Three flavors of ransac are used to estimate either a
    homography (8-param), or an affine model (6-param) or a
    rotation-zoom only affine model (4-param).

    The highest level API for the library is in vp9_global_motion.h,
    where there are two functions - one for computing a single model
    and another for computing multiple models up to a maximum number
    provided or until a desired inlier probability is achieved.

    Change-Id: I3f9788ec2dc0635cbc65f5c66c6ea8853cfcf2dd
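
    For reference, the parameter counts mentioned in the commit message correspond to the standard planar motion models, and the stopping rule relates to the usual RANSAC trial-count bound (textbook definitions, not taken from the patch itself):

    % Homography, 8 parameters h_1 ... h_8:
    x' = \frac{h_1 x + h_2 y + h_3}{h_7 x + h_8 y + 1}, \qquad
    y' = \frac{h_4 x + h_5 y + h_6}{h_7 x + h_8 y + 1}

    % Affine, 6 parameters a_1 ... a_6:
    x' = a_1 x + a_2 y + a_3, \qquad y' = a_4 x + a_5 y + a_6

    % Rotation-zoom, 4 parameters (scale s, angle \theta, translation t_x, t_y):
    x' = s\cos\theta\, x - s\sin\theta\, y + t_x, \qquad
    y' = s\sin\theta\, x + s\cos\theta\, y + t_y

    % Standard RANSAC trial count for confidence p, inlier ratio w, sample size s:
    N = \frac{\log(1 - p)}{\log(1 - w^{s})}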