Other articles (49)

  • Monitoring MediaSPIP farms (and SPIP ones while we're at it)

    31 May 2013

    When you manage several (or even several dozen) MediaSPIP instances on the same installation, it can be very handy to see certain information at a glance.
    This article documents the Munin monitoring scripts developed with the help of Infini.
    These scripts are installed automatically by the automatic installation script whenever a Munin installation is detected.
    Description of the scripts
    Three Munin scripts have been developed:
    1. mediaspip_medias
    A script for (...)

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed to manage sites that publish documents of all types.
    It creates "media" items, namely: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a given "media" article;

  • Farm notifications

    1 December 2010

    To manage the farm properly, several things must be notified during specific actions, both to the user and to all of the farm's administrators.
    Status-change notifications
    When an instance's status changes, all of the farm's administrators must be notified of the change, as well as the instance's administrator user.
    When a channel is requested
    Transition to the "publie" status
    Transition to (...)

On other sites (4111)

  • Injecting 360 Metadata into an .mp4 file

    17 December 2019, by TrueCP5

    I'm working on a library that injects metadata into a .mp4 file to allow the video to be displayed correctly as a 360 video. The input file is a standard .mp4 file in the equirectangular format. I know what metadata needs to be injected; I just do not know how to inject it.

    I spent some time looking around for libraries that can do this, but could only find ones for extracting metadata, not injecting/embedding/writing it. The alternative I found was to use Spatial Media as a command line application to inject the metadata more easily. The problem is I know zero Python whatsoever, so I'm leaning towards a library/NuGet package/ffmpeg script.

    Does a good NuGet package/library exist that can do this, or should I go for the alternative option?

    Edit 1

    I tried just pasting the metadata into the correct place in the file, in case it might work, but it didn't.

    Edit 2

    This is the metadata injected by Google's Spatial Media tool, which is what I am trying to achieve:

    <?xml version="1.0"?><rdf:SphericalVideo
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:GSpherical="http://ns.google.com/videos/1.0/spherical/"><GSpherical:Spherical>true</GSpherical:Spherical><GSpherical:Stitched>true</GSpherical:Stitched><GSpherical:StitchingSoftware>Spherical Metadata Tool</GSpherical:StitchingSoftware><GSpherical:ProjectionType>equirectangular</GSpherical:ProjectionType></rdf:SphericalVideo>

    Edit 3

    I've also tried to do it with ffmpeg, like so: ffmpeg -i input.mp4 -movflags use_metadata_tags -metadata Spherical=true -metadata Stitched=true -metadata ProjectionType=equirectangular -metadata StitchingSoftware=StreetviewJourney -codec copy output.mp4

    I think the issue with the ffmpeg method is that the result does not contain the rdf:SphericalVideo wrapper, which is what allows the spherical video tags to be used.

    Edit 4

    When I extract the metadata using ffmpeg, the spherical tag shows up in the logs but not in the output ffmetadata file. This was the command I used: ffmpeg -i injected.mp4 -map_metadata -1 -f ffmetadata data.txt

    This is the output of the log:

    fps, 60 tbr, 15360 tbn, 120 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Side data:
         spherical: equirectangular (0.000000/0.000000/0.000000)

    Edit 5

    I also tried to get the metadata using this command: ffprobe -v error -select_streams v:0 -show_streams -of default=noprint_wrappers=1 injected.mp4

    These were the logs it outputted:

    TAG:handler_name=VideoHandler
    side_data_type=Spherical Mapping
    projection=equirectangular
    yaw=0
    pitch=0
    roll=0

    I then tried this command, but it didn't work: ffmpeg -i chapmanspeak.mp4 -movflags use_metadata_tags -metadata side_metadata_type="Spherical Mapping" -metadata projection=equirectangular -metadata yaw=0 -metadata pitch=0 -metadata roll=0 -codec copy output.mp4

    Edit 6

    I tried @VC.One's method, but I must be doing something wrong because the output file is unplayable. Here is my code:

           public static void Metadata(string inputFile, string outputFile)
           {
               byte[] metadata = HexStringToByteArray("3C 3F 78 6D 6C 20 76 65 72 73 69 6F 6E 3D 22 31 2E 30 22 3F 3E 3C 72 64 66 3A 53 70 68 65 72 69 63 61 6C 56 69 64 65 6F 0A 78 6D 6C 6E 73 3A 72 64 66 3D 22 68 74 74 70 3A 2F 2F 77 77 77 2E 77 33 2E 6F 72 67 2F 31 39 39 39 2F 30 32 2F 32 32 2D 72 64 66 2D 73 79 6E 74 61 78 2D 6E 73 23 22 0A 78 6D 6C 6E 73 3A 47 53 70 68 65 72 69 63 61 6C 3D 22 68 74 74 70 3A 2F 2F 6E 73 2E 67 6F 6F 67 6C 65 2E 63 6F 6D 2F 76 69 64 65 6F 73 2F 31 2E 30 2F 73 70 68 65 72 69 63 61 6C 2F 22 3E 3C 47 53 70 68 65 72 69 63 61 6C 3A 53 70 68 65 72 69 63 61 6C 3E 74 72 75 65 3C 2F 47 53 70 68 65 72 69 63 61 6C 3A 53 70 68 65 72 69 63 61 6C 3E 3C 47 53 70 68 65 72 69 63 61 6C 3A 53 74 69 74 63 68 65 64 3E 74 72 75 65 3C 2F 47 53 70 68 65 72 69 63 61 6C 3A 53 74 69 74 63 68 65 64 3E 3C 47 53 70 68 65 72 69 63 61 6C 3A 53 74 69 74 63 68 69 6E 67 53 6F 66 74 77 61 72 65 3E 53 70 68 65 72 69 63 61 6C 20 4D 65 74 61 64 61 74 61 20 54 6F 6F 6C 3C 2F 47 53 70 68 65 72 69 63 61 6C 3A 53 74 69 74 63 68 69 6E 67 53 6F 66 74 77 61 72 65 3E 3C 47 53 70 68 65 72 69 63 61 6C 3A 50 72 6F 6A 65 63 74 69 6F 6E 54 79 70 65 3E 65 71 75 69 72 65 63 74 61 6E 67 75 6C 61 72 3C 2F 47 53 70 68 65 72 69 63 61 6C 3A 50 72 6F 6A 65 63 74 69 6F 6E 54 79 70 65 3E 3C 2F 72 64 66 3A 53 70 68 65 72 69 63 61 6C 56 69 64 65 6F 3E");
               byte[] stco = HexStringToByteArray("73 74 63 6F");
               byte[] moov = HexStringToByteArray("6D 6F 6F 76");
               byte[] trak = HexStringToByteArray("74 72 61 6B");

               byte[] file = File.ReadAllBytes(inputFile);

               //find trak
               int trakPosition = 0;
               for (int a = 0; a < file.Length - trak.Length; a++)
               {
                   for (int b = 0; b < trak.Length; b++)
                   {
                       if (file[a + b] != trak[b])
                           break;
                       if (b == trak.Length - 1)
                           trakPosition = a;
                   }
               }
               if (trakPosition == 0)
                   throw new FileLoadException();

               //add metadata
               int trakLength = BitConverter.ToInt32(new ArraySegment<byte>(file, trakPosition - 4, 4).Reverse().ToArray(), 0);
               var fileList = file.ToList();
               fileList.InsertRange(trakPosition - 4 + trakLength, metadata);
               file = fileList.ToArray();

               ////change length - tried this as well
               //byte[] trakBytes = BitConverter.GetBytes(trakLength + metadata.Length).Reverse().ToArray();
               //for (int i = 0; i < 4; i++)
               //    file[trakPosition - 4 + i] = trakBytes[i];

               //find moov
               int moovPosition = 0;
                for (int a = 0; a < file.Length - moov.Length; a++)
                {
                    for (int b = 0; b < moov.Length; b++)
                   {
                       if (file[a + b] != moov[b])
                           break;
                       if (b == moov.Length - 1)
                           moovPosition = a;
                   }
               }
               if (moovPosition == 0)
                   throw new FileLoadException();

               //change length
               int moovLength = BitConverter.ToInt32(new ArraySegment<byte>(file, moovPosition - 4, 4).Reverse().ToArray(), 0);
               byte[] moovBytes = BitConverter.GetBytes(moovLength + metadata.Length).Reverse().ToArray();
                for (int i = 0; i < 4; i++)
                   file[moovPosition - 4 + i] = moovBytes[i];

               //find stco
               int stcoPosition = 0;
                for (int a = 0; a < file.Length - stco.Length; a++)
                {
                    for (int b = 0; b < stco.Length; b++)
                   {
                       if (file[a + b] != stco[b])
                           break;
                       if (b == stco.Length - 1)
                           stcoPosition = a;
                   }
               }
               if (stcoPosition == 0)
                   throw new FileLoadException();

               //modify entries
               int stcoEntries = BitConverter.ToInt32(new ArraySegment<byte>(file, stcoPosition + 8, 4).Reverse().ToArray(), 0);
                for (int a = stcoPosition + 12; a < stcoPosition + 12 + stcoEntries * 4; a += 4)
               {
                   int entryLength = BitConverter.ToInt32(new ArraySegment<byte>(file, a, 4).Reverse().ToArray(), 0);
                   byte[] newEntry = BitConverter.GetBytes(entryLength + metadata.Length).Reverse().ToArray();
                    for (int b = 0; b < 4; b++)
                       file[a + b] = newEntry[b];
               }

               File.WriteAllBytes(outputFile, file);
           }

           private static byte[] HexStringToByteArray(string hex)
           {
               hex = hex.Replace(" ", "");
               return Enumerable.Range(0, hex.Length)
                                .Where(x => x % 2 == 0)
                                .Select(x => Convert.ToByte(hex.Substring(x, 2), 16))
                                .ToArray();
           }

    The bytes are reversed because .mp4 box fields are big-endian while BitConverter is little-endian on my machine. I tried to also update the length of trak, but that didn't work either.
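
    An editor's note, not from the original question: Google's spatial-media documentation describes the V1 spherical metadata as living in a uuid box appended inside the video trak, not as bare XML, so an insert like the one above also needs a box header, and every enclosing box size (trak and moov) must grow by the full box size. A minimal sketch of building that header, assuming the V1 metadata UUID ffcc8263-f855-4a93-8814-587a02521fdd; MakeSphericalUuidBox is an illustrative helper, not a known library call:

       #include <cstdint>
       #include <string>
       #include <vector>

       // Sketch only: wrap the spherical XML in a 'uuid' box before inserting
       // it into trak. Layout: 4-byte big-endian size, fourcc "uuid",
       // 16-byte UUID, then the XML payload.
       std::vector<uint8_t> MakeSphericalUuidBox(const std::string &xml)
       {
           static const uint8_t kUuid[16] = {
               0xff, 0xcc, 0x82, 0x63, 0xf8, 0x55, 0x4a, 0x93,
               0x88, 0x14, 0x58, 0x7a, 0x02, 0x52, 0x1f, 0xdd };
           const uint32_t size = 4 + 4 + 16 + (uint32_t)xml.size();

           std::vector<uint8_t> box;
           box.reserve(size);
           box.push_back(size >> 24); box.push_back(size >> 16);  // big-endian size
           box.push_back(size >> 8);  box.push_back(size);
           box.insert(box.end(), {'u', 'u', 'i', 'd'});           // box type
           box.insert(box.end(), kUuid, kUuid + 16);              // V1 metadata UUID
           box.insert(box.end(), xml.begin(), xml.end());         // XML payload
           return box;
       }

    With the header in place, the commented-out trak length fix above is also needed (both trak and moov sizes grow by box.size()), and the stco chunk offsets only shift when the mdat box comes after moov in the file.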

  • send h264 video to nginx-rtmp server using ffmpeg API

    11 December 2019, by Glen

    I have C++ code that grabs frames from a GigE camera and writes them out to a file. I’m using the libx264 codec and ffmpeg version 4.0.

    Writing to the file works fine, however I would also like to send the video to nginx configured with the nginx-rtmp plug-in to make the video available live via HLS.

    I can use the ffmpeg command line program to stream one of my previously captured files to my nginx server and rebroadcast as HLS, however if I try to stream from my C++ code the nginx server closes the connection after one or two frames are sent.

    To test further, I used the ffmpeg command line program to receive an rtmp stream and write it out to a file. I am able to send video to ffmpeg from my C++ program with rtmp, however every frame generates a warning like this:

    [avi @ 0x1b6b6f0] Non-monotonous DTS in output stream 0:0; previous: 1771, current: 53; changing to 1772. This may result in incorrect timestamps in the output file.
    [avi @ 0x1b6b6f0] Non-monotonous DTS in output stream 0:0; previous: 1772, current: 53; changing to 1773. This may result in incorrect timestamps in the output file.
    [avi @ 0x1b6b6f0] Non-monotonous DTS in output stream 0:0; previous: 1773, current: 53; changing to 1774. This may result in incorrect timestamps in the output file.
    [avi @ 0x1b6b6f0] Non-monotonous DTS in output stream 0:0; previous: 1774, current: 53; changing to 1775. This may result in incorrect timestamps in the output file.
    [avi @ 0x1b6b6f0] Non-monotonous DTS in output stream 0:0; previous: 1775, current: 53; changing to 1776. This may result in incorrect timestamps in the output file.
    [avi @ 0x1b6b6f0] Non-monotonous DTS in output stream 0:0; previous: 1776, current: 53; changing to 1777. This may result in incorrect timestamps in the output file.
    [avi @ 0x1b6b6f0] Non-monotonous DTS in output stream 0:0; previous: 1777, current: 53; changing to 1778. This may result in incorrect timestamps in the output file.
    [avi @ 0x1b6b6f0] Non-monotonous DTS in output stream 0:0; previous: 1778, current: 53; changing to 1779. This may result in incorrect timestamps in the output file.
    [avi @ 0x1b6b6f0] Non-monotonous DTS in output stream 0:0; previous: 1779, current: 53; changing to 1780. This may result in incorrect timestamps in the output file.

    I printed PTS and DTS for my packet before writing it, and the numbers were monotonic (for example, for this last frame the pts and dts printed from my code were 1780, not the 'current: 53' that ffmpeg reports).

    Also, unless I tell ffmpeg what the output framerate should be, I end up with a file that plays at 2x speed.

    After ffmpeg receives the rtmp stream and writes it to the file, I am then able to successfully send that file to my nginx server using ffmpeg.

    Here is some relevant code:

    //configuring the codec context
    // make sure that config.codec is something we support
    // for now we are only supporting LIBX264
    if (config.codec() != codecs::LIBX264) {
       throw std::invalid_argument("currently only libx264 codec is supported");
    }

    // lookup specified codec
    ffcodec_ = avcodec_find_encoder_by_name(config.codec().c_str());
    if (!ffcodec_) {
       throw std::invalid_argument("unable to get codec " + config.codec());
    }

    // unique_ptr to manage the codec_context
    codec_context_ = av_pointer::codec_context(avcodec_alloc_context3(ffcodec_));

    if (!codec_context_) {
       throw std::runtime_error("unable to initialize AVCodecContext");
    }

    // setup codec_context_
    codec_context_->width = frame_width;
    codec_context_->height = frame_height;
    codec_context_->time_base = (AVRational){1, config.target_fps()};
    codec_context_->framerate = (AVRational){config.target_fps(), 1};
    codec_context_->global_quality = 0;
    codec_context_->compression_level = 0;
    codec_context_->bits_per_raw_sample = 8;
    codec_context_->gop_size = 1;
    codec_context_->max_b_frames = 1;
    codec_context_->pix_fmt = AV_PIX_FMT_YUV420P;

    // x264 only settings
    if (config.codec() == codecs::LIBX264) {
       av_opt_set(codec_context_->priv_data, "preset", config.compression_target().c_str(), 0);
       av_opt_set(codec_context_->priv_data, "crf", std::to_string(config.crf()).c_str(), 0);
    }

    // Open up the codec
     if (avcodec_open2(codec_context_.get(), ffcodec_, NULL) < 0) {
       throw std::runtime_error("unable to open ffmpeg codec");
    }

    // setup the output format context and stream for RTMP
    AVFormatContext *tmp_f_context;
     avformat_alloc_output_context2(&tmp_f_context, NULL, "flv", uri.c_str());
    rtmp_format_context_ = av_pointer::format_context(tmp_f_context);
    rtmp_stream_ = avformat_new_stream(rtmp_format_context_.get(), ffcodec_);
    avcodec_parameters_from_context(rtmp_stream_->codecpar, codec_context_.get());
    rtmp_stream_->time_base = codec_context_->time_base;
    rtmp_stream_->r_frame_rate = codec_context_->framerate;

    /* open the output file */
     if (!(rtmp_format_context_->flags & AVFMT_NOFILE)) {
        int r = avio_open(&rtmp_format_context_->pb, uri.c_str(), AVIO_FLAG_WRITE);
        if (r < 0) {
           throw std::runtime_error("unable to open " + uri + " : " + av_err2str(r));
       }
    }

     if (avformat_write_header(rtmp_format_context_.get(), NULL) < 0) {
       throw std::runtime_error("unable to write header");
    }

    av_dump_format(rtmp_format_context_.get(), 0,uri.c_str() , 1);

    At this point av_dump_format produces this output:

    Output #0, flv, to 'rtmp://[MY URI]':
     Metadata:
       encoder         : Lavf58.12.100
       Stream #0:0, 0, 1/1000: Video: h264 (libx264), 1 reference frame ([7][0][0][0] / 0x0007), yuv420p, 800x800 (0x0), 0/1, q=-1--1, 30 tbr, 1k tbn

    Encoding and writing the frame:

    // send the frame to the encoder, filtering first if necessary
    void VideoWriter::Encode(AVFrame *frame)
    {
       int rval;
       if (!apply_filter_) {
           //send frame to encoder
           rval = avcodec_send_frame(codec_context_.get(), frame);
            if (rval < 0) {
               throw std::runtime_error("error sending frame for encoding");
           }
       } else {
           // push frame to filter
           // REMOVED, currently testing without filtering
       }

       // get packets from encoder
       while (rval >= 0) {
           // create smart pointer to allocated packet
           av_pointer::packet pkt(av_packet_alloc());
           if (!pkt) {
               throw std::runtime_error("unable to allocate packet");
           }

           rval = avcodec_receive_packet(codec_context_.get(), pkt.get());
           if (rval == AVERROR(EAGAIN) || rval == AVERROR_EOF) {
               return;
            } else if (rval < 0) {
               throw std::runtime_error("error during encoding");
           }

           // if I print pkt->pts and pkt->dts here, I see sequential numbers

           // write packet
           rval = av_interleaved_write_frame(rtmp_format_context_.get(), pkt.get());
            if (rval < 0) {
                std::cerr << av_err2str(rval) << std::endl;
           }
       }
    }
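
    An editor's aside, not part of the original post: nothing in Encode() converts packet timestamps from codec_context_'s time base to rtmp_stream_->time_base, which avformat_write_header() typically rewrites to 1/1000 for flv (the "1k tbn" in the dump above), and pkt->stream_index is never set. A minimal sketch of the usual pattern, placed just before av_interleaved_write_frame():

        // Sketch: rescale pts/dts/duration from the encoder time base to the
        // muxer's (possibly rewritten) stream time base, and tag the packet
        // with its stream index. Skipping this is a classic source of
        // "Non-monotonous DTS" warnings and wrong playback speed.
        av_packet_rescale_ts(pkt.get(), codec_context_->time_base,
                             rtmp_stream_->time_base);
        pkt->stream_index = rtmp_stream_->index;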

    Since I am able to send video from a previously recorded file to nginx with the ffmpeg command line program, I believe the problem is in my code and not my nginx configuration.

    EDIT: I think it may have to do with SPS/PPS, as I see a bunch of these error messages in the nginx log before it closes the stream:

    2019/12/11 11:11:31 [error] 10180#0: *4 hls: failed to read 5 byte(s), client: XXX, server: 0.0.0.0:1935
    2019/12/11 11:11:31 [error] 10180#0: *4 hls: error appenging SPS/PPS NALs, client: XXX, server: 0.0.0.0:1935
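
    Another editor's aside: those errors are consistent with missing global headers. flv carries SPS/PPS as out-of-band extradata (the muxer advertises this with AVFMT_GLOBALHEADER), but the setup code above never asks the encoder for global headers, so codecpar ends up with no extradata for nginx to append. A sketch of the usual fix, which has to run before avcodec_open2() so that avcodec_parameters_from_context() can copy the generated extradata:

        // Sketch: request out-of-band SPS/PPS from libx264 when the output
        // format wants global headers (flv does). Must be set before
        // avcodec_open2(), which means the output format has to be known
        // (or assumed) at codec-setup time.
        if (rtmp_format_context_->oformat->flags & AVFMT_GLOBALHEADER)
            codec_context_->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;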

    As I mentioned, this code works fine if I set it up to write to an avi file rather than stream to rtmp, and I can stream to ffmpeg listening for rtmp (albeit with lots of DTS warnings), but if I try to send to nginx, it closes the connection almost immediately. My first thought was that there was something wrong with the frame timestamps, but when I print pts and dts prior to writing the packet to the stream they look okay to me.

    My end goal is to capture video to a file and also be able to turn on the rtmp stream on demand, but for now I'm just trying to get the rtmp stream working continuously (without writing to a file).

    Thanks for any insights.

  • Send AVPacket over Network

    2 December 2019, by Yondonator

    I'm generating AVPackets with an ffmpeg encoder, and now I want to send them over UDP to another computer and display them there.
    The problem is I don't know how to convert the packet to bytes and back. I tried this to copy the packet:

    AVPacket newPacket = avcodec.av_packet_alloc();


    ByteBuffer byteBuffer = packet.buf().buffer().asByteBuffer();
    int bufferSize = byteBuffer.capacity();
    byte bytes[] = new byte[bufferSize];
    byteBuffer.get(bytes);
    AVBufferRef newBufferRef = avutil.av_buffer_alloc(bufferSize);
    newBufferRef.data(new BytePointer(bytes));
    newPacket.buf(newBufferRef);


    ByteBuffer dataBuffer = packet.data().asByteBuffer();
    int dataSize = dataBuffer.capacity();
    byte dataBytes[] = new byte[dataSize];
    dataBuffer.get(dataBytes);
    BytePointer dataPointer = new BytePointer(dataBytes);
    newPacket.data(dataPointer);


    newPacket.dts(packet.dts());
    newPacket.duration(packet.duration());
    newPacket.flags(packet.flags());
    newPacket.pos(packet.pos());
    newPacket.pts(packet.pts());
    newPacket.side_data_elems(0);
    newPacket.size(packet.size());
    newPacket.stream_index(packet.stream_index());


    videoPlayer.sendPacket(newPacket);

    This gives me this error:

    [h264 @ 0000018951be8440] Invalid NAL unit size (3290676 > 77).
    [h264 @ 0000018951be8440] Error splitting the input into NAL units.
    [h264 @ 0000018951bf6480] Invalid NAL unit size (15305314 > 163).
    [h264 @ 0000018951bf6480] Error splitting the input into NAL units.

    The problem is newPacket.data(). When I set it directly with newPacket.data(packet.data()) it works. Also, packet.data().asByteBuffer().capacity() returns 1 and packet.data().capacity() returns 0, so the copy above never sees the real payload; JavaCPP does not know the native buffer's length, which is why the PacketIO class below sets it explicitly with packet.data().capacity(packet.size()) before calling asByteBuffer().

    This is my method that creates the decoder:

    private void startUnsafe() throws Exception
       {
           int result;

           convertContext = null;
           codec = null;
           codecContext = null;
           AVFrame = null;
           RGBAVFrame = null;
           frame = new Frame();

           codec = avcodec_find_decoder(codecID);
           if(codec == null)
           {
               throw new Exception("Unable to find decoder");
           }

           codecContext = avcodec_alloc_context3(codec);
           if(codecContext == null)
           {
               releaseUnsafe();
               throw new Exception("Unable to alloc codec context!");
           }

           AVCodecParameters para = avcodec_parameters_alloc();
           para.bit_rate(streamBitrate);
           para.width(streamWidth);
           para.height(streamHeight);
           para.codec_id(codecID);
           para.codec_type(AVMEDIA_TYPE_VIDEO);
           try
           {
               byte extradataByte[] = Files.readAllBytes(new File("extradata.byte").toPath());
               para.extradata(new BytePointer(extradataByte));
               para.extradata_size(extradataByte.length);
           }
           catch (IOException e1)
           {
               e1.printStackTrace();
               throw new Exception("extradata file not available");
           }

           result = avcodec_parameters_to_context(codecContext, para);
           if(result < 0)
           {
               throw new Exception("Unable to copy parameters to context! [" + result + "]");
           }

           codecContext.thread_count(0);

           result = avcodec_open2(codecContext, codec, new AVDictionary());
           if(result &lt; 0)
           {
               releaseUnsafe();
               throw new Exception("Unable to open codec context![" + result + "]");
           }

           AVFrame = av_frame_alloc();
           if(AVFrame == null)
           {
               releaseUnsafe();
               throw new Exception("Unable to alloc AVFrame!");
           }

           RGBAVFrame = av_frame_alloc();
           if(RGBAVFrame == null)
           {
               releaseUnsafe();
               throw new Exception("Unable to alloc AVFrame!");
           }
           initRGBAVFrame();

           TimerTask task = new TimerTask() {

               @Override
               public void run()
               {
                   timerTask();
               }
           };
           timer = new Timer();
           timer.scheduleAtFixedRate(task, 0, (long) (1000/streamFramerateDouble));

           window.setVisible(true);
       }

    The file extradata.byte has some bytes that I got from another video, because without them it doesn't work either.
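
    An editor's aside, not from the original post: SPS/PPS extradata are tied to the encoder's exact settings, so bytes saved from a different video will generally not decode this stream. In the underlying C API, the encoder's own extradata can be captured once and shipped to the receiver before any packets; a minimal sketch, where enc_ctx, send_blob(), blob and blob_size are hypothetical stand-ins and para mirrors the AVCodecParameters set up in startUnsafe():

        // Sender sketch: make the encoder emit SPS/PPS into extradata,
        // then ship that blob to the receiver once, ahead of the packets.
        enc_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;   // before avcodec_open2()
        avcodec_open2(enc_ctx, codec, NULL);
        send_blob(enc_ctx->extradata, enc_ctx->extradata_size);  // hypothetical

        // Receiver sketch: hand the received blob to the codec parameters.
        // ffmpeg requires av_malloc'd, padded extradata buffers.
        para->extradata = (uint8_t *)av_mallocz(blob_size + AV_INPUT_BUFFER_PADDING_SIZE);
        memcpy(para->extradata, blob, blob_size);
        para->extradata_size = blob_size;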

    EDIT:

    package org.stratostream.streaming;

    import java.nio.ByteBuffer;


    import org.bytedeco.javacpp.BytePointer;
    import org.bytedeco.javacpp.Pointer;
    import org.bytedeco.javacpp.avcodec;
    import org.bytedeco.javacpp.avutil;
    import org.bytedeco.javacpp.avcodec.AVPacket;
    import org.bytedeco.javacpp.avcodec.AVPacketSideData;


    public class PacketIO {


       public static final int SIDE_DATA_FIELD = 0;
       public static final int SIDE_ELEMENTS_FIELD = 4;
       public static final int SIDE_TYPE_FIELD = 8;
       public static final int DTS_FIELD = 12;
       public static final int PTS_FIELD = 20;
       public static final int FLAGS_FIELD = 28;
       public static final int DATA_OFFSET = 32;

       public static byte[] toByte(AVPacket packet) throws Exception
       {
           int dataSize = packet.size();
           ByteBuffer dataBuffer = packet.data().capacity(dataSize).asByteBuffer();
           byte dataBytes[] = new byte[dataSize];
           dataBuffer.get(dataBytes);

           AVPacketSideData sideData = packet.side_data();
           int sideSize = sideData.size();
           ByteBuffer sideBuffer = sideData.data().capacity(sideSize).asByteBuffer();
           byte sideBytes[] = new byte[sideSize];
           sideBuffer.get(sideBytes);

           int sideOffset = DATA_OFFSET + dataSize;
           int resultSize = sideOffset + sideSize;
           byte resultBytes[] = new byte[resultSize];
           System.arraycopy(dataBytes, 0, resultBytes, DATA_OFFSET, dataSize);
           System.arraycopy(sideBytes, 0, resultBytes, sideOffset, sideSize);
           resultBytes[SIDE_DATA_FIELD] = (byte) (sideOffset >>> 24);
           resultBytes[SIDE_DATA_FIELD+1] = (byte) (sideOffset >>> 16);
           resultBytes[SIDE_DATA_FIELD+2] = (byte) (sideOffset >>> 8);
           resultBytes[SIDE_DATA_FIELD+3] = (byte) (sideOffset >>> 0);

           int sideType = sideData.type();
           intToByte(resultBytes, SIDE_TYPE_FIELD, sideType);

           int sideElements = packet.side_data_elems();
           intToByte(resultBytes, SIDE_ELEMENTS_FIELD, sideElements);

           long dts = packet.dts();
           longToByte(resultBytes, DTS_FIELD, dts);

           long pts = packet.pts();
           longToByte(resultBytes, PTS_FIELD, pts);

           int flags = packet.flags();
           intToByte(resultBytes, FLAGS_FIELD, flags);

           return resultBytes;
       }

       public static AVPacket toPacket(byte bytes[]) throws Exception
       {
           AVPacket packet = avcodec.av_packet_alloc();

           int sideOffset = byteToInt(bytes, SIDE_DATA_FIELD);
           int sideElements = byteToInt(bytes, SIDE_ELEMENTS_FIELD);
           int sideType = byteToInt(bytes, SIDE_TYPE_FIELD);
           int dataSize = sideOffset - DATA_OFFSET;
           int sideSize = bytes.length - sideOffset;

           long pts = byteToLong(bytes, PTS_FIELD);
           long dts = byteToLong(bytes, DTS_FIELD);
           int flags = byteToInt(bytes, FLAGS_FIELD);

           packet.pts(pts);
           packet.dts(dts);
           packet.flags(flags);


           Pointer newDataPointer =  avutil.av_malloc(bytes.length);
           BytePointer dataPointer = new BytePointer(newDataPointer);
           byte dataBytes[] = new byte[dataSize];
           System.arraycopy(bytes, DATA_OFFSET, dataBytes, 0, dataSize);
           dataPointer.put(dataBytes);
           packet.data(dataPointer);
           packet.size(dataSize);

           Pointer newSidePointer = avutil.av_malloc(sideSize);
           BytePointer sidePointer = new BytePointer(newSidePointer);
           byte sideBytes[] = new byte[sideSize];
           System.arraycopy(bytes, sideOffset, sideBytes, 0, sideSize);
           sidePointer.put(sideBytes);
           AVPacketSideData sideData = new AVPacketSideData();
           sideData.data(sidePointer);
           sideData.type(sideType);
           sideData.size(sideSize);
           //packet.side_data(sideData);
           //packet.side_data_elems(sideElements);

           return packet;
       }

       private static void intToByte(byte[] bytes, int offset, int value)
       {
           bytes[offset] = (byte) (value >>> 24);
           bytes[offset+1] = (byte) (value >>> 16);
           bytes[offset+2] = (byte) (value >>> 8);
           bytes[offset+3] = (byte) (value >>> 0);
       }

       private static void longToByte(byte[] bytes, int offset, long value)
       {
           bytes[offset] = (byte) (value >>> 56);
           bytes[offset+1] = (byte) (value >>> 48);
           bytes[offset+2] = (byte) (value >>> 40);
           bytes[offset+3] = (byte) (value >>> 32);
           bytes[offset+4] = (byte) (value >>> 24);
           bytes[offset+5] = (byte) (value >>> 16);
           bytes[offset+6] = (byte) (value >>> 8);
           bytes[offset+7] = (byte) (value >>> 0);
       }

       private static int byteToInt(byte[] bytes, int offset)
       {
           return (bytes[offset]   << 24) & 0xff000000
                | (bytes[offset+1] << 16) & 0x00ff0000
                | (bytes[offset+2] <<  8) & 0x0000ff00
                | (bytes[offset+3]      ) & 0x000000ff;
       }

       private static long byteToLong(byte[] bytes, int offset)
       {
           // Accumulate on a long: shifting the int-promoted bytes by 32 or
           // more bits would silently drop the high half of the value.
           long value = 0;
           for (int i = 0; i < 8; i++)
               value = (value << 8) | (bytes[offset + i] & 0xffL);
           return value;
       }

    }

    Now I have this class, which works fine within the same program, but when I send the bytes over the network I get bad output and these errors are printed to the console:

    [h264 @ 00000242442acc40] Missing reference picture, default is 72646
    [h264 @ 000002424089de00] Missing reference picture, default is 72646
    [h264 @ 000002424089e3c0] mmco: unref short failure
    [h264 @ 000002424081a580] reference picture missing during reorder
    [h264 @ 000002424081a580] Missing reference picture, default is 72652
    [h264 @ 000002424082c400] mmco: unref short failure
    [h264 @ 000002424082c400] co located POCs unavailable
    [h264 @ 000002424082c9c0] co located POCs unavailable
    [h264 @ 00000242442acc40] co located POCs unavailable
    [h264 @ 000002424089de00] mmco: unref short failure

    I think it's because I don't set the side-data field, but when I try to set it the encoder crashes on the second packet.
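
    Two editor's notes on rebuilding packets the way toPacket() does, neither from the original post. First, ffmpeg decoders expect packet payloads padded with AV_INPUT_BUFFER_PADDING_SIZE zeroed bytes, and side data should be attached through the packet API rather than a hand-filled AVPacketSideData struct, otherwise the decoder and av_packet_free() see a buffer they do not own. A C-API sketch, with recv_buf and the parsed sizes/offsets/timestamps standing in for the values decoded from the received bytes:

        // Sketch: rebuild a received packet with ffmpeg's own allocators.
        AVPacket *pkt = av_packet_alloc();
        av_new_packet(pkt, data_size);   // allocates payload + zeroed
                                         // AV_INPUT_BUFFER_PADDING_SIZE bytes
        memcpy(pkt->data, recv_buf + DATA_OFFSET, data_size);
        pkt->pts = pts; pkt->dts = dts; pkt->flags = flags;

        // Attach side data through the API so ownership is correct.
        uint8_t *side = av_packet_new_side_data(pkt,
                (enum AVPacketSideDataType)side_type, side_size);
        memcpy(side, recv_buf + side_offset, side_size);

    Second, plain UDP drops and reorders datagrams, and a lost packet takes its reference frames with it, which is exactly what the "Missing reference picture" and "mmco: unref short failure" messages describe; some sequencing/retransmission layer (or RTP) is likely needed regardless of the serialization format.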

    The output looks like this:
    [screenshot: Decoder Output]