Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (19)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps that led to the problem; and a link to the site/page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

On other sites (4236)

  • Rotation on Video frame image makes video quality low and becomes green ffmpeg opencv

    21 November 2013, by bindal

    I am working on an application in which I have to record video on touch, including pausing the recording, so I am using FFmpegFrameRecorder for that.

    When I record video with the rear camera, onPreviewFrame gives me the yuvIplImage in portrait mode, which is correct. But when I record with the front camera in portrait mode, onPreviewFrame gives me the image upside down. As a result, half of my video shows in the correct portrait orientation and the other half shows upside down, so I apply a rotation to yuvIplImage when recording from the front camera.

    Here is my onPreviewFrame method:

    @Override
               public void onPreviewFrame(byte[] data, Camera camera) {

                   long frameTimeStamp = 0L;
                   if (mAudioTimestamp == 0L && firstTime > 0L)
                       frameTimeStamp = 1000L * (System.currentTimeMillis() - firstTime);
                   else if (mLastAudioTimestamp == mAudioTimestamp)
                       frameTimeStamp = mAudioTimestamp + frameTime;
                   else {
                       long l2 = (System.nanoTime() - mAudioTimeRecorded) / 1000L;
                       frameTimeStamp = l2 + mAudioTimestamp;
                       mLastAudioTimestamp = mAudioTimestamp;
                   }
                   synchronized (mVideoRecordLock) {
                       if (recording && rec && lastSavedframe != null
                               && lastSavedframe.getFrameBytesData() != null
                               && yuvIplImage != null) {
                           mVideoTimestamp += frameTime;
                           if (lastSavedframe.getTimeStamp() > mVideoTimestamp)
                               mVideoTimestamp = lastSavedframe.getTimeStamp();
                           try {
                               yuvIplImage.getByteBuffer().put(
                                       lastSavedframe.getFrameBytesData());
                               videoRecorder.setTimestamp(lastSavedframe
                                       .getTimeStamp());

                               // if (defaultCameraId == 1) {
                               // CvSize size = new CvSize(yuvIplImage.height(),
                               // yuvIplImage.width());
                               // IplImage yuvIplImage2 = opencv_core.cvCreateImage(
                               // size, yuvIplImage.depth(),
                               // yuvIplImage.nChannels());
                               //
                               // videoRecorder.record(yuvIplImage2);
                               // } else {

                               // }



                               if (defaultCameraId == 1) {
                                   yuvIplImage = rotate(yuvIplImage, 270);

                                   videoRecorder.record(yuvIplImage);
                               }else
                               {
                                   videoRecorder.record(yuvIplImage);
                               }
                               // else

                               // opencv_core.cvTranspose(yuvIplImage, yuvIplImage);
                           } catch (com.googlecode.javacv.FrameRecorder.Exception e) {
                               e.printStackTrace();
                           }
                       }
                       lastSavedframe = new SavedFrames(data, frameTimeStamp);
                   }
               }
           }

    Here is the rotation function:

    public static IplImage rotate(IplImage image, double angle) {
       IplImage copy = opencv_core.cvCloneImage(image);

       IplImage rotatedImage = opencv_core.cvCreateImage(
               opencv_core.cvGetSize(copy), copy.depth(), copy.nChannels());
       CvMat mapMatrix = opencv_core.cvCreateMat(2, 3, opencv_core.CV_32FC1);

       // Define Mid Point
       CvPoint2D32f centerPoint = new CvPoint2D32f();
       centerPoint.x(copy.width() / 2);
       centerPoint.y(copy.height() / 2);

       // Get Rotational Matrix
       opencv_imgproc.cv2DRotationMatrix(centerPoint, angle, 1.0, mapMatrix);
       // opencv_core.cvReleaseImage(copy);

       // Rotate the Image
       opencv_imgproc.cvWarpAffine(copy, rotatedImage, mapMatrix,
               opencv_imgproc.CV_INTER_CUBIC
                       + opencv_imgproc.CV_WARP_FILL_OUTLIERS,
               opencv_core.cvScalarAll(170));
       opencv_core.cvReleaseImage(copy);
       opencv_core.cvReleaseMat(mapMatrix);
       return rotatedImage;
    }

    But in the final output, half of the video is green.
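
    For what it's worth, if the buffer really is planar YUV 4:2:0, a rotation has to be applied to each plane separately; rotating the whole packed buffer as one flat grey image moves chroma bytes out of place, which commonly shows up as a green tint. Below is a purely illustrative C++/OpenCV sketch of a per-plane 90-degree rotation (it assumes a tightly packed I420 layout with even dimensions; Android preview data is normally NV21 with interleaved chroma, so this is not a drop-in replacement for the javacv code above).

    // Purely illustrative sketch, not the javacv code above: rotate a
    // tightly packed I420 (planar YUV 4:2:0) buffer by rotating each
    // plane on its own and writing it back at the matching offset.
    #include <opencv2/core.hpp>
    #include <cstdint>

    static void rotate_i420_90cw(const uint8_t* src, uint8_t* dst,
                                 int width, int height)
    {
        const int ySize = width * height;             // full-resolution luma
        const int cSize = (width / 2) * (height / 2); // quarter-resolution chroma

        cv::Mat ySrc(height, width, CV_8UC1, const_cast<uint8_t*>(src));
        cv::Mat uSrc(height / 2, width / 2, CV_8UC1, const_cast<uint8_t*>(src) + ySize);
        cv::Mat vSrc(height / 2, width / 2, CV_8UC1, const_cast<uint8_t*>(src) + ySize + cSize);

        // After a 90-degree rotation each plane is height x width.
        cv::Mat yDst(width, height, CV_8UC1, dst);
        cv::Mat uDst(width / 2, height / 2, CV_8UC1, dst + ySize);
        cv::Mat vDst(width / 2, height / 2, CV_8UC1, dst + ySize + cSize);

        cv::rotate(ySrc, yDst, cv::ROTATE_90_CLOCKWISE);
        cv::rotate(uSrc, uDst, cv::ROTATE_90_CLOCKWISE);
        cv::rotate(vSrc, vDst, cv::ROTATE_90_CLOCKWISE);
    }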

    Thanks in advance

  • libav + AV_PIX_FMT_YUV420P + nvjpeg gives green images

    18 May 2023, by george_d

    I need to grab frames from a remote source and save them as JPEGs, and I want to utilize the GPU for that purpose.

    


    To achieve that, I made my own grabber based on this libav example, which decodes frames using hardware. After that I pass them to nvjpeg.

    


    I also set the software frame to planar YUV 4:2:0 format (and not NV12, which is not planar), since that is mandatory for nvjpeg's nvjpegEncodeYUV function.
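
    For reference, a minimal sketch of the plane sizes and offsets in a tightly packed planar YUV 4:2:0 buffer with even dimensions (the layout av_image_copy_to_buffer produces for AV_PIX_FMT_YUV420P with align = 1); these are the three planes that the channel pointers and pitches handed to the encoder ultimately have to describe:

// Sketch only: sizes and offsets of a tightly packed planar YUV 4:2:0
// buffer with even width and height.
#include <cstddef>

struct Yuv420Layout {
    size_t y_size, u_size, v_size;  // bytes per plane
    size_t y_off, u_off, v_off;     // offsets inside one contiguous buffer
    size_t total;                   // total buffer size, width * height * 3 / 2
};

static Yuv420Layout yuv420_layout(size_t width, size_t height)
{
    Yuv420Layout l{};
    l.y_size = width * height;              // full-resolution luma
    l.u_size = (width / 2) * (height / 2);  // chroma is subsampled 2x2
    l.v_size = l.u_size;
    l.y_off = 0;
    l.u_off = l.y_size;
    l.v_off = l.y_size + l.u_size;
    l.total = l.y_size + l.u_size + l.v_size;
    return l;
}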

    


    But when I pass the frame to nvjpegEncodeYUV(), the resulting image comes out green (example).

    


    So, here is the libav code:

    


    static void
encode_to_jpeg(const uint8_t* raw_data,
               const int size,
               const int width,
               const int height,
               char* output_filename)
{
  JpegCoder jpegCoder = JpegCoder();
  JpegCoderImage* jpegImage =
    new JpegCoderImage(width, height, 3, JPEGCODER_CSS_420);
  jpegImage->fill(raw_data);
  JpegCoderBytes* dataContainer = jpegCoder.encode(jpegImage, 70);

  maybe_create_dir_for_output_filename(output_filename);
  write_bin_data_to_file(
    output_filename, (char*)dataContainer->data, dataContainer->size);

  delete dataContainer;
  delete jpegImage;
}

static int
process_packet(AVCodecContext* avctx,
               AVPacket* packet)
{
  AVFrame* frame = NULL;
  AVFrame* sw_frame = NULL;
  AVFrame* tmp_frame = NULL;
  uint8_t* buffer = NULL;
  char* frame_filename = NULL;
  int size;
  int ret = 0;

  ret = avcodec_send_packet(avctx, packet);
  if (ret < 0) {
    av_log(NULL, AV_LOG_ERROR, "Error during decoding\n");
    return ret;
  }

  while (true) {
    if (!(frame = av_frame_alloc()) || !(sw_frame = av_frame_alloc())) {
      av_log(NULL, AV_LOG_ERROR, "Can not alloc frame\n");
      ret = AVERROR(ENOMEM);
      goto fail;
    }

    sw_frame->format = AV_PIX_FMT_YUV420P; // here I force the frames to be in YUV 4:2:0 planar format

    ret = avcodec_receive_frame(avctx, frame);
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
      av_frame_free(&frame);
      av_frame_free(&sw_frame);
      av_freep(&buffer);
      return 0;
    } else if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Error while decoding\n");
      goto fail;
    }

    if (frame->format == hw_pix_fmt) {
      // pass the data from GPU to CPU
      if ((ret = av_hwframe_transfer_data(sw_frame, frame, 0)) < 0) {
        av_log(
          NULL, AV_LOG_ERROR, "Error transferring the data to system memory\n");
        goto fail;
      }
      tmp_frame = sw_frame;
    } else {
      tmp_frame = frame;
    }

    size = av_image_get_buffer_size(
      (AVPixelFormat)tmp_frame->format, tmp_frame->width, tmp_frame->height, 1);

    buffer = (uint8_t*)av_malloc(size);
    if (!buffer) {
      av_log(NULL, AV_LOG_ERROR, "Can not alloc buffer\n");
      ret = AVERROR(ENOMEM);
      goto fail;
    }
    ret = av_image_copy_to_buffer(buffer,
                                  size,
                                  (const uint8_t* const*)tmp_frame->data,
                                  (const int*)tmp_frame->linesize,
                                  (AVPixelFormat)tmp_frame->format,
                                  tmp_frame->width,
                                  tmp_frame->height,
                                  1);
    if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Can not copy image to buffer\n");
      goto fail;
    }
    frame_filename = get_frame_filename((uintmax_t)avctx->frame_number);
    encode_to_jpeg(
      buffer, size, tmp_frame->width, tmp_frame->height, frame_filename);
    free(frame_filename);

  fail:
    // av_frame_free(&filtered_frame);
    av_frame_free(&frame);
    av_frame_free(&sw_frame);
    av_freep(&buffer);
    if (ret < 0) {
      return ret;
    }
  }
}



    


    Basically, I adapted the nvjpeg part from the nvjpeg Python wrapper, so here I will post only the parts that differ.

    


    #define JPEGCODER_GLOBAL_CONTEXT                                               \
  ((NvJpegGlobalContext*)(JpegCoder::_global_context))
#define ChromaSubsampling_Covert_JpegCoderToNvJpeg(subsampling)                \
  ((nvjpegChromaSubsampling_t)(subsampling))
#define ChromaSubsampling_Covert_NvJpegToJpegCoder(subsampling)                \
  ((JpegCoderChromaSubsampling)(subsampling))

size_t
getBufferSize(size_t width, size_t height)
{
  return (size_t)(width * height);
}

JpegCoderImage::JpegCoderImage(size_t width,
                               size_t height,
                               short nChannel,
                               JpegCoderChromaSubsampling subsampling)
{
  unsigned char* pBuffer = nullptr;
  cudaError_t eCopy =
    cudaMalloc((void**)&pBuffer, width * height * NVJPEG_MAX_COMPONENT);
  if (cudaSuccess != eCopy) {
    throw JpegCoderError(eCopy, cudaGetErrorString(eCopy));
  }

  this->height = height;
  this->width = width;
  this->nChannel = nChannel;
  this->subsampling = subsampling;

  nvjpegImage_t* img = (nvjpegImage_t*)malloc(sizeof(nvjpegImage_t));
  // More verbose, but readable
  img->channel[0] = pBuffer;
  img->channel[1] = pBuffer + (width * height);
  img->channel[2] = pBuffer + (width * height) + ((width / 2) * height);
  img->channel[3] = NULL;

  img->pitch[0] = (unsigned int)width;
  img->pitch[1] = (unsigned int)width / 2;
  img->pitch[2] = (unsigned int)width / 2;
  img->pitch[3] = 0;

  this->img = img;
}

void
JpegCoderImage::fill(const unsigned char* data)
{
  cudaError_t eCopy = cudaMemcpy(((nvjpegImage_t*)(this->img))->channel[0],
                                 data,
                                 getBufferSize(width, height),
                                 cudaMemcpyHostToDevice);
  if (cudaSuccess != eCopy) {
    throw JpegCoderError(eCopy, cudaGetErrorString(eCopy));
  }
  this->subsampling = JPEGCODER_CSS_420;
}


JpegCoderBytes*
JpegCoder::encode(JpegCoderImage* img, int quality)
{
  nvjpegHandle_t nv_handle = JPEGCODER_GLOBAL_CONTEXT->nv_handle;
  nvjpegEncoderState_t nv_enc_state = JPEGCODER_GLOBAL_CONTEXT->nv_enc_state;
  nvjpegEncoderParams_t nv_enc_params;

  nvjpegEncoderParamsCreate(nv_handle, &nv_enc_params, NULL);

  nvjpegEncoderParamsSetQuality(nv_enc_params, quality, NULL);
  nvjpegEncoderParamsSetOptimizedHuffman(nv_enc_params, 1, NULL);
  nvjpegEncoderParamsSetSamplingFactors(
    nv_enc_params,
    ChromaSubsampling_Covert_JpegCoderToNvJpeg(img->subsampling),
    NULL);
  int nReturnCode = nvjpegEncodeYUV(nv_handle,
                                    nv_enc_state,
                                    nv_enc_params,
                                    (nvjpegImage_t*)(img->img),
                                    ChromaSubsampling_Covert_JpegCoderToNvJpeg(img->subsampling),
                                    (int)img->width,
                                    (int)img->height,
                                    NULL);
  if (NVJPEG_STATUS_SUCCESS != nReturnCode) {
    throw JpegCoderError(nReturnCode, "NvJpeg Encoder Error");
  }

  size_t length;
  nvjpegEncodeRetrieveBitstream(nv_handle, nv_enc_state, NULL, &length, NULL);

  JpegCoderBytes* jpegData = new JpegCoderBytes(length);
  nvjpegEncodeRetrieveBitstream(
    nv_handle, nv_enc_state, jpegData->data, &(jpegData->size), NULL);

  nvjpegEncoderParamsDestroy(nv_enc_params);
  return jpegData;
}


    


    I tried removing the implicit pixel format conversion and implementing the NV12 to YUV420P conversion myself, but it gave the same result.
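
    Roughly, that repack looks like the sketch below (illustrative only, assuming even dimensions and tightly packed planes; it is not the exact code I used):

// Illustrative sketch of an NV12 -> YUV420P repack on the CPU.
// NV12 keeps luma as one full-size plane and chroma as a single
// interleaved U,V plane at quarter resolution; YUV420P wants the
// chroma split into two separate planes.
#include <cstdint>
#include <cstring>

static void nv12_to_yuv420p(const uint8_t* y_src, const uint8_t* uv_src,
                            uint8_t* y_dst, uint8_t* u_dst, uint8_t* v_dst,
                            int width, int height)
{
    // Luma plane is identical in both formats.
    std::memcpy(y_dst, y_src, (size_t)width * height);

    // De-interleave the chroma samples.
    const int chroma_pixels = (width / 2) * (height / 2);
    for (int i = 0; i < chroma_pixels; ++i) {
        u_dst[i] = uv_src[2 * i];
        v_dst[i] = uv_src[2 * i + 1];
    }
}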

    


    I also tried using AV_PIX_FMT_BGR24 and nvJpegEncode, which did not work either; the pictures came out completely messed up.

    


    The only thing that worked for me before was using swscale + AV_PIX_FMT_BGR24 + nvjpegEncodeImage, but swscale adds a large CPU overhead, which is not something I want.
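
    For comparison, that swscale path boils down to something like the sketch below (names illustrative, error handling trimmed): convert the downloaded frame to packed BGR24 on the CPU, then pass the interleaved buffer to nvjpegEncodeImage.

// Illustrative sketch of the CPU-side swscale conversion to BGR24.
extern "C" {
#include <libswscale/swscale.h>
#include <libavutil/frame.h>
}

static int frame_to_bgr24(const AVFrame* src, uint8_t* bgr, int bgr_stride)
{
    SwsContext* sws = sws_getContext(src->width, src->height,
                                     (AVPixelFormat)src->format,
                                     src->width, src->height,
                                     AV_PIX_FMT_BGR24,
                                     SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws)
        return -1;

    uint8_t* dst_data[4] = { bgr, NULL, NULL, NULL };
    int dst_linesize[4] = { bgr_stride, 0, 0, 0 };
    int ret = sws_scale(sws, src->data, src->linesize, 0, src->height,
                        dst_data, dst_linesize);
    sws_freeContext(sws);
    return ret;  // number of output rows, or negative on error
}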

    


    How can I make this work properly?

    


  • Discord.py repository example: bot can join voice channel, shows voice activity (green ring), but no sound is produced?

    19 June 2023, by Jared Robertson

    I copied an example from the discord.py repository to test out a bot with music capabilities. Though the code enables YouTube playback, I am only concerned with local audio playback. See the code here: https://github.com/Rapptz/discord.py/blob/v2.3.0/examples/basic_voice.py. I tested this code as is, with only the Discord token portion updated (from constants.py). See the code below:

    


    # This example requires the 'message_content' privileged intent to function.

#my constants.py with API keys, tokens, etc. stored
import constants

import asyncio

import discord
import youtube_dl

from discord.ext import commands

# Suppress noise about console usage from errors
youtube_dl.utils.bug_reports_message = lambda: ''


ytdl_format_options = {
    'format': 'bestaudio/best',
    'outtmpl': '%(extractor)s-%(id)s-%(title)s.%(ext)s',
    'restrictfilenames': True,
    'noplaylist': True,
    'nocheckcertificate': True,
    'ignoreerrors': False,
    'logtostderr': False,
    'quiet': True,
    'no_warnings': True,
    'default_search': 'auto',
    'source_address': '0.0.0.0',  # bind to ipv4 since ipv6 addresses cause issues sometimes
}

ffmpeg_options = {
    'options': '-vn',
}

ytdl = youtube_dl.YoutubeDL(ytdl_format_options)


class YTDLSource(discord.PCMVolumeTransformer):
    def __init__(self, source, *, data, volume=0.5):
        super().__init__(source, volume)

        self.data = data

        self.title = data.get('title')
        self.url = data.get('url')

    @classmethod
    async def from_url(cls, url, *, loop=None, stream=False):
        loop = loop or asyncio.get_event_loop()
        data = await loop.run_in_executor(None, lambda: ytdl.extract_info(url, download=not stream))

        if 'entries' in data:
            # take first item from a playlist
            data = data['entries'][0]

        filename = data['url'] if stream else ytdl.prepare_filename(data)
        return cls(discord.FFmpegPCMAudio(filename, **ffmpeg_options), data=data)


class Music(commands.Cog):
    def __init__(self, bot):
        self.bot = bot

    @commands.command()
    async def join(self, ctx, *, channel: discord.VoiceChannel):
        """Joins a voice channel"""

        if ctx.voice_client is not None:
            return await ctx.voice_client.move_to(channel)

        await channel.connect()

    @commands.command()
    async def play(self, ctx, *, query):
        """Plays a file from the local filesystem"""

        source = discord.PCMVolumeTransformer(discord.FFmpegPCMAudio(query))
        ctx.voice_client.play(source, after=lambda e: print(f'Player error: {e}') if e else None)

        await ctx.send(f'Now playing: {query}')

    @commands.command()
    async def yt(self, ctx, *, url):
        """Plays from a url (almost anything youtube_dl supports)"""

        async with ctx.typing():
            player = await YTDLSource.from_url(url, loop=self.bot.loop)
            ctx.voice_client.play(player, after=lambda e: print(f'Player error: {e}') if e else None)

        await ctx.send(f'Now playing: {player.title}')

    @commands.command()
    async def stream(self, ctx, *, url):
        """Streams from a url (same as yt, but doesn't predownload)"""

        async with ctx.typing():
            player = await YTDLSource.from_url(url, loop=self.bot.loop, stream=True)
            ctx.voice_client.play(player, after=lambda e: print(f'Player error: {e}') if e else None)

        await ctx.send(f'Now playing: {player.title}')

    @commands.command()
    async def volume(self, ctx, volume: int):
        """Changes the player's volume"""

        if ctx.voice_client is None:
            return await ctx.send("Not connected to a voice channel.")

        ctx.voice_client.source.volume = volume / 100
        await ctx.send(f"Changed volume to {volume}%")

    @commands.command()
    async def stop(self, ctx):
        """Stops and disconnects the bot from voice"""

        await ctx.voice_client.disconnect()

    @play.before_invoke
    @yt.before_invoke
    @stream.before_invoke
    async def ensure_voice(self, ctx):
        if ctx.voice_client is None:
            if ctx.author.voice:
                await ctx.author.voice.channel.connect()
            else:
                await ctx.send("You are not connected to a voice channel.")
                raise commands.CommandError("Author not connected to a voice channel.")
        elif ctx.voice_client.is_playing():
            ctx.voice_client.stop()


intents = discord.Intents.default()
intents.message_content = True

bot = commands.Bot(
    command_prefix=commands.when_mentioned_or("!"),
    description='Relatively simple music bot example',
    intents=intents,
)


@bot.event
async def on_ready():
    print(f'Logged in as {bot.user} (ID: {bot.user.id})')
    print('------')


async def main():
    async with bot:
        await bot.add_cog(Music(bot))
        #referenced my discord token in constants.py
        await bot.start(constants.discord_token)


asyncio.run(main())


    


    This code should cause the Discord bot to join the voice channel and play a local audio file when the message "!play query" is entered into the Discord text channel, with query in my case being "test.mp3". When "!play test.mp3" is entered, the bot does join the voice channel and appears to generate voice activity; however, no sound is heard. No errors are thrown in the output. The bot simply continues on silently.

    


    Here's what I've checked and tried:

    1. I am in the Discord voice channel. The code will alert that the user is not in a voice channel if they attempt to summon the bot without being in one.

    2. FFmpeg is installed and added to PATH. I even stored a copy of the .exe in the project folder.

    3. I've tried specifying the full path of both FFmpeg (adding the "executable=" arg to the play function) and the audio file (stored at C:/test.mp3 and a copy in the project folder).

    4. All libraries are up to date.

    5. Reviewed the discord.py docs (https://discordpy.readthedocs.io/en/stable/api.html#voice-related) and played around with opuslib (the docs say it is not necessary on Windows, which I'm on) and FFmpegOpusAudio, but the results were the same.

    6. Referenced numerous StackOverflow threads, including every one suggested when I typed the title of this post. Tried each suggestion individually and in various combinations where possible. See a few below:
    Discord.py music_cog, bot joins channel but plays no sound
    Discord.py Music Bot doesn't play music
    Discord.py Bot Not Playing music

    7. Double checked that my sound is on and the volume turned up. My sound is working; I can hear the Discord notification when the bot joins the voice channel.

    8. The issue is the same if I try to play a YouTube file with the !yt command. The bot joins the channel fine but no sound is produced.
    


    That's everything I can think of at the moment. There seem to be many posts regarding this exact topic and variations of it. I've been unable to find a clear and consistent answer, nor one that works for me. I am willing to try anything at this point, as it is obviously possible, but for whatever reason success eludes me. Thank you in advance for any assistance you are willing to offer.