
Media (1)

Keyword: - Tags - / belgique

Other articles (55)

  • MediaSPIP Core: Configuration

    9 November 2010, by

    By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin to work): a page for the general configuration of the skeleton; a page for the configuration of the site's home page; a page for the configuration of the sectors.
    It also provides an additional page, which only appears when certain plugins are enabled, for controlling their display and their specific features (...)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, as announced here.
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    As with the previous version, all of the software dependencies have to be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)

  • Making files available

    14 April 2011, by

    By default, when it is first set up, MediaSPIP does not let visitors download files, whether the originals or the result of their transformation or encoding; it only lets them view them.
    However, it is possible and easy to give visitors access to these documents, in several different forms.
    All of this is handled in the skeleton configuration page. You need to go to the channel's administration area and choose, in the navigation, (...)

On other sites (7122)

  • Writing Live-Multimedia-Application using OpenGL & Co. saving output to disc [closed]

    21 January 2013, by user1997286

    I want to write an application that does the following things:

    • get commands via ArtNET (DMX over Ethernet, a control protocol) for each object (called a Layer)
    • each Layer can be one of the following: live camera stream, movie, image
    • each Layer can be translated, rotated or stretched
    • on each Layer I can set filters (like a kaleidoscope effect, blur, colour correction, etc.)
    • the resulting video stream lives in 3D space
    • I want to display each part of the image on one projector (up to 3 in total) using a TripleHead2GO (the 3 projectors each display a different region of my DVI output). Each projector image should have its own soft-edge and keystone parameters.
    • the resulting image will also be shown on a preview screen with an information overlay.

    I think all of that should be possible with OpenGL and OpenAL (for the movie audio).

    I think I'll use C++, OpenGL for graphics, OpenAL for audio, ffmpeg for video conversion if needed, and Ubuntu/Debian as the OS.
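
    A minimal readback sketch of how an OpenGL frame could be handed over to ffmpeg for encoding (assumptions: a current GL context and an encoder/muxer set up elsewhere; read_back_frame is a made-up helper name):

        // Read the rendered frame back from the GPU so it can later be converted
        // (e.g. with sws_scale) and encoded by libavcodec.
        #include <GL/gl.h>
        #include <cstdint>
        #include <vector>

        std::vector<uint8_t> read_back_frame(int width, int height)
        {
            std::vector<uint8_t> rgba(static_cast<size_t>(width) * height * 4);
            glPixelStorei(GL_PACK_ALIGNMENT, 1);   // tightly packed rows
            glReadBuffer(GL_BACK);                 // grab the back buffer before swapping
            glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
            // OpenGL returns rows bottom-up; flip vertically (or use a negative
            // stride in sws_scale) before handing the buffer to the encoder.
            return rgba;
        }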

    The software will be used to do multimedia shows at concerts, including cameras & co.

    All of that should happen live (on a Full HD output), with an i7 3770, a GTX 670 and 16 GB of RAM, for at least 8 layers (4 live images at once plus some overlays like the actor's name and some logos).

    But now comes the question.

    Is it also possible to do the following with that setup:

    • write the output image, with all the 3D transformations, to a movie file (to master a DVD later), with audio
    • mix audio from different inputs & files (ambience mics, the signal from the sound mixer, playbacks from my own application) into more than one mix (e.g. one mix for the recording, one mix for live)
    • stream that output, complete or in parts (e.g. the left part of the image), over the network (for example, Projector 1 is near the server, so I connect it using DVI; Projectors 2+3 are connected to a computer that receives the streams for those two projectors, with soft edge on each stream; and Screen 4 is outside the concert hall and shows the complete live stream)
    • What GUI framework should I use for that?
    • Is it perhaps even performant enough to use Java for that?
    • Is it possible to use that mechanism for rendering only (e.g. I have stored the cut points on disc and saved every single camera stream, so that I can fix some errors later or cut out some parts)?
  • Revision 29159: Try another method so that multilingualism works at the ...

    13 June 2009, by marcimat@… — Log

    Try another method so that multilingualism works when the mutualisation (site farm) is created. It seems more portable to me.

  • Sending raw h264 video and aac audio frames to an RTMP server using ffmpeg

    15 August 2016, by codeimpaler

    I am receiving raw H.264 and AAC audio frames from an event-driven source. I am trying to send these frames to an RTMP server.
    I started from the ffmpeg example muxing.c, which successfully sends a generated stream to the RTMP server, figuring I just need to replace its frame data with my own. I found some suggestions online and have tried How to pack raw h264 stream to flv container and send over rtmp using ffmpeg (not command)
    and
    How to publish selfmade stream with ffmpeg and c++ to rtmp server?
    and a few other suggestions, but none have worked for me.
    I have tried to directly memcpy my byte buffer, but my code keeps failing
    at ret = avcodec_encode_video2(c, &pkt, frame, &got_packet).
    Specifically, I get an invalid access error.
    For a little more context: any time I receive a frame (which is event-driven), void RTMPWriter::WriteVideoFrame(...) is called. Assume the constructor has already been called before the first frame is received.
    I am not that familiar with ffmpeg and there could be several things wrong with the code. Any input will be really appreciated.
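
    One point worth sketching here (not from the original post): since the incoming video is already encoded H.264, each frame does not have to go through avcodec_encode_video2() again; an already-encoded access unit can be wrapped in an AVPacket and handed straight to the muxer (for FLV output the video stream's extradata generally also needs to carry the SPS/PPS before avformat_write_header()). The helper below is a hypothetical sketch, not part of the posted code:

       // Hypothetical sketch: write an already-encoded H.264 access unit directly,
       // bypassing avcodec_encode_video2(). Assumes oc and the video AVStream were
       // set up roughly as in the post.
       extern "C" {
       #include <libavformat/avformat.h>
       }

       int write_encoded_video(AVFormatContext *oc, AVStream *st,
                               uint8_t *data, int size,
                               int64_t pts, bool isKeyFrame)
       {
           AVPacket pkt;
           av_init_packet(&pkt);            // the packet borrows the caller's buffer
           pkt.data = data;                 // encoded H.264 (AVCC layout for FLV)
           pkt.size = size;
           pkt.pts = pkt.dts = pts;         // expressed in st->time_base units
           pkt.stream_index = st->index;
           if (isKeyFrame)
               pkt.flags |= AV_PKT_FLAG_KEY;
           return av_interleaved_write_frame(oc, &pkt);
       }

    The code from the post: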

       #define STREAM_FRAME_RATE 25 /* 25 images/s */
       #define STREAM_PIX_FMT    AV_PIX_FMT_YUV420P /* default pix_fmt */
       #define SCALE_FLAGS SWS_BICUBIC
       RTMPWriter::RTMPWriter()
         : seenKeyFrame(false),
           video_st({ 0 }),
           audio_st({ 0 }),
           have_video(0),
           have_audio(0)
       {

           const char *filename;
           AVCodec *audio_codec = NULL, *video_codec = NULL;
           int ret;

           int encode_video = 0, encode_audio = 0;
           AVDictionary *opt = NULL;
           int i;

           /* Initialize libavcodec, and register all codecs and formats. */
           av_register_all();

           avformat_network_init();

          /* C++/CX (WinRT): read the RTMP URL from the app's local settings and
           * convert it to a narrow string for ffmpeg. */
          String^ StreamURL = "StreamURL";
          String^ out_uri = safe_cast<String^>(ApplicationData::Current->LocalSettings->Values->Lookup(StreamURL));
          std::wstring out_uriW(out_uri->Begin());
          std::string out_uriA(out_uriW.begin(), out_uriW.end());
          filename = out_uriA.c_str();

          /* allocate the output media context */
          avformat_alloc_output_context2(&oc, NULL, "flv", filename);
          if (!oc)
          {
              OutputDebugString(L"Could not deduce output format from file extension: using MPEG.\n");
              avformat_alloc_output_context2(&oc, NULL, "mpeg", filename);
          }
          if (!oc)
          {
              OutputDebugString(L"Could not allocate the output context.\n");
          }


          fmt = oc->oformat;

          /* Add the audio and video streams using the default format codecs
          * and initialize the codecs. */
          if (fmt->video_codec != AV_CODEC_ID_NONE) {
              add_stream(&video_st, oc, &video_codec, fmt->video_codec);
              have_video = 1;
              encode_video = 1;
          }
          if (fmt->audio_codec != AV_CODEC_ID_NONE) {
              add_stream(&audio_st, oc, &audio_codec, fmt->audio_codec);
              have_audio = 1;
              encode_audio = 1;
          }

          /* Now that all the parameters are set, we can open the audio and
           * video codecs and allocate the necessary encode buffers. */
          if (have_video)
          {
              open_video(oc, video_codec, &video_st, opt);
          }

          if (have_audio)
          {
              open_audio(oc, audio_codec, &audio_st, opt);
          }

          av_dump_format(oc, 0, filename, 1);

          /* open the output file, if needed */
          if (!(fmt->flags & AVFMT_NOFILE))
          {
              ret = avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
              if (ret < 0)
              {
                  OutputDebugString(L"Could not open ");
                  OutputDebugString(out_uri->Data());
              }
          }

          /* Write the stream header, if any. */
          ret = avformat_write_header(oc, &opt);
          if (ret < 0)
          {
              OutputDebugString(L"Error occurred when writing stream header \n");
          }

       }

       void RTMPWriter::WriteVideoFrame(
           boolean isKeyFrame,
           boolean hasDiscontinuity,
           UINT64 frameId,
           UINT32 videoBufferLength,
           BYTE *videoBytes)
       {

           int ret;
           AVCodecContext *c;
           AVFrame* frame;
           int got_packet = 0;
           AVPacket pkt = { 0 };

           c = video_st.enc;

           frame = get_video_frame(videoBufferLength, videoBytes);

           /* encode the image */
           ret = avcodec_encode_video2(c, &pkt, frame, &got_packet);
           if (ret < 0) {
                OutputDebugString(L"Error encoding video frame: \n");
           }

           if (got_packet)
           {
               ret = write_frame(oc, &c->time_base, video_st.st, &pkt);
           }
           else {
               ret = 0;
           }

           if (ret < 0) {
                OutputDebugString(L"Error while writing video frame: %s\n");
           }
       }

       AVFrame * RTMPWriter::get_video_frame(
          UINT32 videoBufferLength,
          BYTE *videoBytes)
       {
           AVCodecContext *c = video_st.enc;

           if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
               /* as we only generate a YUV420P picture, we must convert it
               * to the codec pixel format if needed */
               if (!video_st.sws_ctx) {
                   video_st.sws_ctx = sws_getContext(c->width, c->height,
                       AV_PIX_FMT_YUV420P,
                       c->width, c->height,
                       c->pix_fmt,
                       SCALE_FLAGS, NULL, NULL, NULL);
                   if (!video_st.sws_ctx) {
                       fprintf(stderr,
                           "Could not initialize the conversion context\n");
                           exit(1);
                   }
               }
               fill_yuv_image(video_st.tmp_frame, video_st.next_pts, c->width, c->height, videoBufferLength, videoBytes);
               sws_scale(video_st.sws_ctx,
               (const uint8_t * const *)video_st.tmp_frame->data, video_st.tmp_frame->linesize,
               0, c->height, video_st.frame->data, video_st.frame->linesize);
           }
           else {
               fill_yuv_image(video_st.frame, video_st.next_pts, c->width, c->height, videoBufferLength, videoBytes);
           }

           video_st.frame->pts = video_st.next_pts++;

           return video_st.frame;
       }

       /* Fill the frame with the incoming video bytes. */
       void  RTMPWriter::fill_yuv_image(
            AVFrame *pict,
            int frame_index,
            int width,
            int height,
            UINT32 videoBufferLength,
            BYTE *videoBytes)
       {
           int ret;
           //int x, y, i;

           /* when we pass a frame to the encoder, it may keep a reference to it
           * internally;
           * make sure we do not overwrite it here
           */
           ret = av_frame_make_writable(pict);
           if (ret < 0)
           {
                OutputDebugString(L"Unable to make picture writable");
           }

           /* NOTE: pict->data is an array of per-plane pointers (data[0]=Y, data[1]=Cb,
            * data[2]=Cr), so this memcpy overwrites the pointers themselves rather than
            * the pixel planes; this is a likely source of the invalid access described
            * above. Each plane should be copied separately, honouring pict->linesize[]. */
           memcpy(pict->data, videoBytes, videoBufferLength);

           //i = frame_index;

           ///* Y */
           //for (y = 0; y < height; y++)
           //  for (x = 0; x < width; x++)
           //      pict->data[0][y * pict->linesize[0] + x] = x + y + i * 3;

           ///* Cb and Cr */
           //for (y = 0; y < height / 2; y++) {
           //  for (x = 0; x < width / 2; x++) {
           //      pict->data[1][y * pict->linesize[1] + x] = 128 + y + i * 2;
           //      pict->data[2][y * pict->linesize[2] + x] = 64 + x + i * 5;
           //  }
           //}
       }

       void RTMPWriter::WriteAudioFrame()
       {

       }

       /* Add an output stream. */
       void  RTMPWriter::add_stream(
           OutputStream *ost,
           AVFormatContext *oc,
           AVCodec **codec,
           enum AVCodecID codec_id)
      {
       AVCodecContext *c;
       int i;

       /* find the encoder */
       *codec = avcodec_find_encoder(codec_id);
       if (!(*codec)) {
           OutputDebugString(L"Could not find encoder for '%s'\n");
           //avcodec_get_name(codec_id));
           exit(1);
       }

       ost->st = avformat_new_stream(oc, NULL);
       if (!ost->st) {
           OutputDebugString(L"Could not allocate stream\n");
           exit(1);
       }
       ost->st->id = oc->nb_streams - 1;
       c = avcodec_alloc_context3(*codec);
       if (!c) {
           OutputDebugString(L"Could not alloc an encoding context\n");
           exit(1);
       }
       ost->enc = c;

       switch ((*codec)->type) {
       case AVMEDIA_TYPE_AUDIO:
           c->sample_fmt = (*codec)->sample_fmts ?
               (*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
           c->bit_rate = 64000;
           c->sample_rate = 44100;
           if ((*codec)->supported_samplerates) {
               c->sample_rate = (*codec)->supported_samplerates[0];
               for (i = 0; (*codec)->supported_samplerates[i]; i++) {
                   if ((*codec)->supported_samplerates[i] == 44100)
                       c->sample_rate = 44100;
               }
           }
           c->channel_layout = AV_CH_LAYOUT_STEREO;
           c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
           if ((*codec)->channel_layouts) {
               c->channel_layout = (*codec)->channel_layouts[0];
               for (i = 0; (*codec)->channel_layouts[i]; i++) {
                   if ((*codec)->channel_layouts[i] == AV_CH_LAYOUT_STEREO)
                       c->channel_layout = AV_CH_LAYOUT_STEREO;
               }
           }
           c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
           ost->st->time_base = /*(AVRational)*/{ 1, c->sample_rate };
           break;

       case AVMEDIA_TYPE_VIDEO:
           c->codec_id = codec_id;

           c->bit_rate = 400000;
           /* Resolution must be a multiple of two. */
           c->width = 352;
           c->height = 288;
           /* timebase: This is the fundamental unit of time (in seconds) in terms
           * of which frame timestamps are represented. For fixed-fps content,
           * timebase should be 1/framerate and timestamp increments should be
           * identical to 1. */
           ost->st->time_base = /*(AVRational)*/{ 1, STREAM_FRAME_RATE };
           c->time_base = ost->st->time_base;

           c->gop_size = 12; /* emit one intra frame every twelve frames at most */
           c->pix_fmt = STREAM_PIX_FMT;
           if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
               /* just for testing, we also add B-frames */
               c->max_b_frames = 2;
           }
           if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
               /* Needed to avoid using macroblocks in which some coeffs overflow.
                * This does not happen with normal video, it just happens here as
                * the motion of the chroma plane does not match the luma plane. */
               c->mb_decision = 2;
           }
           break;

       default:
           break;
       }

       /* Some formats want stream headers to be separate. */
       if (oc->oformat->flags & AVFMT_GLOBALHEADER)
           c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }

    AVFrame * RTMPWriter::alloc_audio_frame(
       enum AVSampleFormat sample_fmt,
       uint64_t channel_layout,
       int sample_rate, int nb_samples)
    {
       AVFrame *frame = av_frame_alloc();
       int ret;

       if (!frame) {
           OutputDebugString(L"Error allocating an audio frame\n");
           exit(1);
       }

       frame->format = sample_fmt;
       frame->channel_layout = channel_layout;
       frame->sample_rate = sample_rate;
       frame->nb_samples = nb_samples;

       if (nb_samples) {
           ret = av_frame_get_buffer(frame, 0);
           if (ret < 0) {
               OutputDebugString(L"Error allocating an audio buffer\n");
               exit(1);
           }
       }

           return frame;
       }




    void  RTMPWriter::open_audio(
       AVFormatContext *oc,
       AVCodec *codec,
       OutputStream *ost,
       AVDictionary *opt_arg)
    {
       AVCodecContext *c;
       int nb_samples;
       int ret;
       AVDictionary *opt = NULL;

       c = ost->enc;

       /* open it */
       av_dict_copy(&opt, opt_arg, 0);
       ret = avcodec_open2(c, codec, &opt);
       av_dict_free(&opt);
       if (ret < 0) {
           OutputDebugString(L"Could not open audio codec: %s\n");// , av_err2str(ret));
           exit(1);
       }

       /* init signal generator */
       ost->t = 0;
       ost->tincr = 2 * M_PI * 110.0 / c->sample_rate;
       /* increment frequency by 110 Hz per second */
       ost->tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;

       if (c->codec->capabilities & AV_CODEC_CAP_VARIABLE_FRAME_SIZE)
           nb_samples = 10000;
       else
           nb_samples = c->frame_size;

       ost->frame = alloc_audio_frame(c->sample_fmt, c->channel_layout,
           c->sample_rate, nb_samples);
       ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, c->channel_layout,
           c->sample_rate, nb_samples);

       /* copy the stream parameters to the muxer */
       ret = avcodec_parameters_from_context(ost->st->codecpar, c);
       if (ret < 0) {
           OutputDebugString(L"Could not copy the stream parameters\n");
           exit(1);
       }

       /* create resampler context */
       ost->swr_ctx = swr_alloc();
       if (!ost->swr_ctx) {
           OutputDebugString(L"Could not allocate resampler context\n");
           exit(1);
       }

       /* set options */
       av_opt_set_int(ost->swr_ctx, "in_channel_count", c->channels, 0);
       av_opt_set_int(ost->swr_ctx, "in_sample_rate", c->sample_rate, 0);
       av_opt_set_sample_fmt(ost->swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_S16, 0);
       av_opt_set_int(ost->swr_ctx, "out_channel_count", c->channels, 0);
       av_opt_set_int(ost->swr_ctx, "out_sample_rate", c->sample_rate, 0);
       av_opt_set_sample_fmt(ost->swr_ctx, "out_sample_fmt", c->sample_fmt, 0);

       /* initialize the resampling context */
       if ((ret = swr_init(ost->swr_ctx)) < 0) {
           OutputDebugString(L"Failed to initialize the resampling context\n");
           exit(1);
       }
    }

    int RTMPWriter::write_frame(
       AVFormatContext *fmt_ctx,
       const AVRational *time_base,
       AVStream *st,
       AVPacket *pkt)
    {
       /* rescale output packet timestamp values from codec to stream timebase */
       av_packet_rescale_ts(pkt, *time_base, st->time_base);
       pkt->stream_index = st->index;

       /* Write the compressed frame to the media file. */
       //log_packet(fmt_ctx, pkt);
       OutputDebugString(L"Actually sending video frame: %s\n");
       return av_interleaved_write_frame(fmt_ctx, pkt);
    }


    AVFrame  *RTMPWriter::alloc_picture(
       enum AVPixelFormat pix_fmt,
       int width,
       int height)
    {
       AVFrame *picture;
       int ret;

       picture = av_frame_alloc();
       if (!picture)
           return NULL;

       picture->format = pix_fmt;
       picture->width = width;
       picture->height = height;

       /* allocate the buffers for the frame data */
       ret = av_frame_get_buffer(picture, 32);
       if (ret < 0) {
           fprintf(stderr, "Could not allocate frame data.\n");
           exit(1);
       }

       return picture;
    }

    void RTMPWriter::open_video(
       AVFormatContext *oc,
       AVCodec *codec,
       OutputStream *ost,
       AVDictionary *opt_arg)
    {
       int ret;
       AVCodecContext *c = ost->enc;
       AVDictionary *opt = NULL;

       av_dict_copy(&opt, opt_arg, 0);

       /* open the codec */
       ret = avcodec_open2(c, codec, &opt);
       av_dict_free(&opt);
       if (ret < 0) {
           OutputDebugString(L"Could not open video codec: %s\n");// , av_err2str(ret));
           exit(1);
       }

       /* allocate and init a re-usable frame */
       ost->frame = alloc_picture(c->pix_fmt, c->width, c->height);
       if (!ost->frame) {
           OutputDebugString(L"Could not allocate video frame\n");
           exit(1);
       }

       /* If the output format is not YUV420P, then a temporary YUV420P
       * picture is needed too. It is then converted to the required
       * output format. */
       ost->tmp_frame = NULL;
       if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
           ost->tmp_frame = alloc_picture(AV_PIX_FMT_YUV420P, c->width, c->height);
           if (!ost->tmp_frame) {
               OutputDebugString(L"Could not allocate temporary picture\n");
               exit(1);
           }
       }

       /* copy the stream parameters to the muxer */
       ret = avcodec_parameters_from_context(ost->st->codecpar, c);
       if (ret < 0) {
           OutputDebugString(L"Could not copy the stream parameters\n");
           exit(1);
       }
    }

    void RTMPWriter::close_stream(AVFormatContext *oc, OutputStream *ost)
    {
       avcodec_free_context(&ost->enc);
       av_frame_free(&ost->frame);
       av_frame_free(&ost->tmp_frame);
       sws_freeContext(ost->sws_ctx);
       swr_free(&ost->swr_ctx);
    }

    RTMPWriter::~RTMPWriter()
    {
       av_write_trailer(oc);
       /* Close each codec. */
       if (have_video)
           close_stream(oc, &video_st);
       if (have_audio)
           close_stream(oc, &audio_st);

       if (!(fmt->flags & AVFMT_NOFILE))
           /* Close the output file. */
           avio_closep(&oc->pb);

       /* free the stream */
       avformat_free_context(oc);
    }