
Other articles (25)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site and around MediaSPIP in general, aims to avoid reference to Web 2.0 and the companies that profit from media-sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Adding notes and captions to images

    7 February 2011

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, editing and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

On other sites (6190)

  • FFmpeg : Pure white image becomes gray after overlaying

    17 December 2019, by hanswim

    I am using FFmpeg to overlay a sticker image onto another image. The problem is that the pure white sticker image becomes gray after overlaying. Here is my command:

    ffmpeg -i input.jpg -i white.png -filter_complex "[1]scale=162:63[v1];[0][v1]overlay=125:337[v2]" -map "[v2]" -map 0:a? output.jpg

    The input.jpg and the white.png are both pure white images. I created the pure white image by taking a screenshot of an empty website, and I confirmed it is pure white with a color meter (RGB = 255, 255, 255).

    After running the command, the output image :

    [ffmpeg output image]

    I have searched Google and tried many things like -color_range 2, format=rgb24 and -pix_fmt rgb24, but none of them works for me. (Maybe I haven't used them in the right way.)

    Can anyone help me? Thanks a lot!

  • Is there a pure Golang implementation of AAC-to-Opus transcoding? [closed]

    15 February 2023, by fanzhang

    Please note: pure Golang implementation, no FFmpeg wrapper / cgo.

    Is there a pure Golang implementation of AAC-to-Opus transcoding?

    


    I've written a streaming-to-WebRTC gateway app that converts AV streams from streaming devices to WebRTC via pion. Now there's a tricky problem: the audio encoding provided by these media devices is usually AAC, which WebRTC does not support. I can't find a library that implements AAC -> Opus (or PCM -> Opus) in pure Go, only libraries based on cgo (like this one). The cgo-based libraries have some limitations, e.g. they can't be self-contained. So, is there a pure Golang implementation of AAC-to-Opus transcoding?

    


    The code snippet below is my current implementation, using Glimesh's fdk-aac and hraban's opus.

    


    ...

    // Audio-related configuration
    // https://github.com/Glimesh/go-fdkaac
    aacDecoder := fdkaac.NewAacDecoder()
    defer func() {
        _ = aacDecoder.Close()
    }()
    aacDecoderInitDone := false

    var opusEncoder *hrabanopus.Encoder
    minAudioSampleRate := 16000
    var opusAudioBuffer []byte
    opusBlockSize := 960
    opusBufferSize := 1000
    opusFramesSize := 120

...

            // Initialize the AAC decoder with the AAC metadata
            // https://github.com/winlinvip/go-fdkaac
            if tag.AACPacketType == flvio.AAC_SEQHDR {
                if !aacDecoderInitDone {
                    if err := aacDecoder.InitRaw(tagData); err != nil {
                        return errors.Wrapf(err, "failed to initialize the AAC decoder from the %s.%s tag (audio metadata %s)", flowControlGroup, streamKey, hex.EncodeToString(tagData))
                    }
                    aacDecoderInitDone = true

                    logrus.Infof("initialized the AAC decoder from the %s.%s tag (audio metadata %s): %p", flowControlGroup, streamKey, hex.EncodeToString(tagData), aacDecoder)
                }
            } else {
                tagDataString := hex.EncodeToString(tagData)
                logrus.Tracef("using the initialized AAC decoder %p to decode audio data from %s.%s: %s", aacDecoder, flowControlGroup, streamKey, tagDataString)

                // Decode AAC to PCM
                decodeResult, err := aacDecoder.Decode(tagData)
                if err != nil {
                    return errors.Wrapf(err, "failed to decode PCM data from the %s.%s tag", flowControlGroup, streamKey)
                }

                rate := aacDecoder.SampleRate()
                channels := aacDecoder.NumChannels()

                if rate < minAudioSampleRate {
                    logrus.Tracef("PCM data decoded from the %s.%s tag has sample rate %d, below the required minimum (%d); skipping Opus encoding", flowControlGroup, streamKey, rate, minAudioSampleRate)
                } else {
                    if opusEncoder == nil {
                        oEncoder, err := hrabanopus.NewEncoder(rate, channels, hrabanopus.AppAudio)
                        if err != nil {
                            return err
                        }
                        opusEncoder = oEncoder
                    }

                    // https://github.com/Glimesh/waveguide/blob/a7e7745be31d0a112aa6adb6437df03960c4a5c5/internal/inputs/rtmp/rtmp.go#L289
                    // https://github.com/xiangxud/rtmp-to-webrtc/blob/07d7da9197cedc3756a1c87389806c3670b9c909/rtmp.go#L168
                    for opusAudioBuffer = append(opusAudioBuffer, decodeResult...); len(opusAudioBuffer) >= opusBlockSize*4; opusAudioBuffer = opusAudioBuffer[opusBlockSize*4:] {
                        pcm16 := make([]int16, opusBlockSize*2)
                        pcm16len := len(pcm16)
                        for i := 0; i < pcm16len; i++ {
                            pcm16[i] = int16(binary.LittleEndian.Uint16(opusAudioBuffer[i*2:]))
                        }
                        opusData := make([]byte, opusBufferSize)
                        n, err := opusEncoder.Encode(pcm16, opusData)
                        if err != nil {
                            return err
                        }
                        opusOutput := opusData[:n]

                        // https://datatracker.ietf.org/doc/html/rfc6716#section-2.1.4
                        // Opus can encode frames of 2.5, 5, 10, 20, 40, or 60 ms.  It can also
                        // combine multiple frames into packets of up to 120 ms.  For real-time
                        // applications, sending fewer packets per second reduces the bitrate,
                        // since it reduces the overhead from IP, UDP, and RTP headers.
                        // However, it increases latency and sensitivity to packet losses, as
                        // losing one packet constitutes a loss of a bigger chunk of audio.
                        // Increasing the frame duration also slightly improves coding
                        // efficiency, but the gain becomes small for frame sizes above 20 ms.
                        // For this reason, 20 ms frames are a good choice for most
                        // applications.
                        sampleDuration := time.Duration(opusFramesSize) * time.Millisecond
                        sample := media.Sample{
                            Data:     opusOutput,
                            Duration: sampleDuration,
                        }
                        if err := audioTrack.WriteSample(sample); err != nil {
                            return err
                        }
                    }
                }
            }

...


    


    Also, is there a pure-Go FFmpeg alternative? Not wrappers.

    


  • How to create a V210 encoder in pure C/C++ code [closed]

    18 May 2024, by mans

    I am trying to implement V210 video encoding by writing the encoder myself in C/C++, without using any library.

    


    To achieve this, I am trying to see how I can create the frame data and then use FFmpeg to put the frames in a video container using this command:

    


    ffmpeg -s 1280x720 -f v210 -i frames.bin -c:v copy sample_video.mkv


    


    To create frames.bin, I am creating a list of blocks based on the specifications that I found above.

    


    The code that I am using to pack YUV into a block is as follows:

    


#include <cstdint>

#pragma pack (push,1)
class V210Block
{
public:
    void Init(uint16_t y[], uint16_t u[], uint16_t v[])
    {
        Init(y[0], u[0], v[0],
            y[1], u[1], v[1],
            y[2], u[2], v[2],
            y[3], u[3], v[3],
            y[4], u[4], v[4],
            y[5], u[5], v[5]
            );
    }

    void Init(uint16_t y0, uint16_t u0, uint16_t v0,
        uint16_t y1, uint16_t u1, uint16_t v1,
        uint16_t y2, uint16_t u2, uint16_t v2,
        uint16_t y3, uint16_t u3, uint16_t v3,
        uint16_t y4, uint16_t u4, uint16_t v4,
        uint16_t y5, uint16_t u5, uint16_t v5
    )
    {
        PackValuesToBlockbe(u0, y0, v0, 0);
        PackValuesToBlockbe(y1, u2, y2, 1);
        PackValuesToBlockbe(v2, y3, u4, 2);
        PackValuesToBlockbe(y4, v4, y5, 3);
    }
private:
    uint32_t block[4] = { 0 };
    inline void PackValuesToBlockle(uint16_t value1, uint16_t value2, uint16_t value3, int blockNo)
    {
        // Little-endian packing (least significant bits to lower memory addresses)
        block[blockNo] = (value1 << 20) | ((value2 & 0x3FF) << 10) | (value3 & 0x3FF);
    }
    inline void PackValuesToBlockbe(uint16_t value1, uint16_t value2, uint16_t value3, int blockNo)
    {
        // big-endian packing (most significant bits to lower memory addresses)
        block[blockNo] = (value3 << 20) | ((value2 & 0x3FF) << 10) | (value1 & 0x3FF);
    }

};
#pragma pack (pop)


    


    I tested both versions of PackValuesToBlock (big-endian and little-endian) and got the same result.

    


    In my test, I set Y = 128 and U and V to zero for all pixels in a frame, and for all frames in the video. When I play the video, I can see that all pixels have the values R=0, G=173, B=0.

    


    Why is this happening?

    


    Is there a way I can extract a file similar to frames.bin from a video that is already encoded in this format, so I can check the binary data and find what is wrong with my encoding?

    


    Is there any sample C code that tries to encode one image, or a series of images, into this format?

    


    How can I do this using OpenCV, so that I can compare the generated bin file with mine and find the problem in my code?