
Other articles (23)

  • Retrieving information from the master site when installing an instance

    26 November 2010

    Purpose
    On the main site, a mutualisation instance is defined by several things: its data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the mutualisation instance;
    It can therefore be quite useful to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.

  • MediaSPIP Player: potential problems

    22 February 2011

    The player does not work on Internet Explorer
    On Internet Explorer (8 and 7 at least), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, it may come from the configuration of Apache's mod_deflate module.
    If the configuration of this Apache module contains a line similar to the following, try removing it or commenting it out to see whether the player then works correctly: (...)

On other sites (5290)

  • Official Piwik Training in Berlin – 2014, June 6th

    6 May 2014, by Piwik Core Team (Community)

    This event will focus on providing training to users of the Piwik analytics platform. The training will provide attendees with the necessary skills and knowledge that they will need to be able to take their website to the next level with Piwik.

    Language: English

    Register for Piwik Training now.

    Location: The 25hours Hotel Bikini Berlin is as diverse as the big city around it and as wild as a jungle. The hotel showcases cosmopolitan Berlin from its location in the listed Bikini-Haus building between the Tiergarten park and Breitscheidplatz, home of the Kaiser Wilhelm Memorial Church.

    Piwik Training Location - Berlin 25hours Hotel Bikini Berlin

    Why do you need training?

    If you have just started using Piwik and are finding it a bit overwhelming, this training event will benefit you immensely. You will learn all the skills you need to move forward with Piwik.

    If you have been using Piwik for a while and already have some experience with it, you will learn how to advance your skills and extend your knowledge of the platform.

    Advanced users will be able to gain more knowledge about the complex features and functions that Piwik incorporates, allowing you to customise different areas of the platform and learn about advanced topics.

    How can you benefit from this training event?

    By understanding how Piwik works and how to use and operate Piwik more effectively, you will be able to make sound changes to your website that will allow you to achieve your business goals.

    Everyone, from ecommerce businesses to government organisations, can benefit from this training event and learn the essential skills and gain the relevant knowledge to meet their goals and requirements.

    Some of the skills that you will learn during the training include:

    • How to install and get started with the Piwik platform
    • How Piwik will add value to your website
    • How to analyse and make sense of the data and information that you collect
    • How to create custom segments that will allow you to report on certain data and information
    • Advanced exercises – Piwik settings, tweaking and basic diagnostics

    What equipment do I need in order to participate in the event?

    You will need a computer that can connect to a Wifi network.

    Are the tickets transferable?

    Yes, the tickets are transferable.

    What is the refund policy on the tickets?

    You are entitled to a refund up to 1 week before the commencement of the training.

    Training details


    Contact us: contact@piwik.pro

    Registrations

    Register for Piwik Training now!

  • Android Camera Video frames decoding coming out distorted with horizontal lines

    13 November 2018, by Iain Stanford

    I've been porting the following Android test example over to run in a simple Xamarin Android project.

    https://bigflake.com/mediacodec/ExtractMpegFramesTest_egl14.java.txt

    I’m running a video captured by the camera (on the same device) through this pipeline but the PNGs I’m getting out the other end are distorted, I assume due to the minefield of Android Camera color spaces.

    Here are the images I’m getting running a Camera Video through the pipeline...

    https://imgur.com/a/nrOVBPk

    It's hard to tell, but it 'kinda' looks like a single line of the actual image stretched across. But I honestly wouldn't want to bank on that being the issue, as it could be a red herring.

    However, when I run a ’normal’ video that I grabbed online through the same pipeline, it works completely fine.

    I used the first video found on here (the lego one) http://techslides.com/sample-webm-ogg-and-mp4-video-files-for-html5

    And I get frames like this...

    https://imgur.com/a/yV2vMMd

    Checking out some of the ffmpeg probe data, both this video and my camera video have the same pixel format (pix_fmt=yuv420p), but there are differences in color_range.

    The video that works has,

    color_range=tv
    color_space=bt709
    color_transfer=bt709
    color_primaries=bt709

    And the camera video just has...

    color_range=unknown
    color_space=unknown
    color_transfer=unknown
    color_primaries=unknown
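    (For reference, these fields can be dumped with ffprobe's "-show_entries stream=pix_fmt,color_range,color_space,color_transfer,color_primaries -of json" options. A small sketch of pulling them out of that JSON; the sample below is illustrative, not the actual output of either video:)

```python
import json

# Illustrative ffprobe JSON; real output would come from e.g.:
#   ffprobe -v error -select_streams v:0 \
#     -show_entries stream=pix_fmt,color_range,color_space,color_transfer,color_primaries \
#     -of json input.mp4
SAMPLE = '''
{ "streams": [ { "pix_fmt": "yuv420p", "color_range": "tv", "color_space": "bt709" } ] }
'''

def color_fields(ffprobe_json):
    """Extract the color metadata fields, defaulting to "unknown" when absent."""
    stream = json.loads(ffprobe_json)["streams"][0]
    keys = ("pix_fmt", "color_range", "color_space", "color_transfer", "color_primaries")
    return {k: stream.get(k, "unknown") for k in keys}
```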

    The media format of the camera video appears to be semi-planar YUV; at least, the codec output gets updated to that. I get an OutputBuffersChanged message which sets the output format of the MediaCodec to the following,

    {
       mime=video/raw,
       crop-top=0,
       crop-right=639,
       slice-height=480,
       color-format=21,
       height=480,
       width=640,
       what=1869968451,
       crop-bottom=479,
       crop-left=0,
       stride=640
    }
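    For what it's worth, color-format=21 is COLOR_FormatYUV420SemiPlanar, an NV12-style layout: a full-resolution Y plane followed by one interleaved UV plane at half resolution. A rough sketch of unpacking such a frame in Python/NumPy, assuming BT.601 limited ("tv") range, which is only a guess given that the camera video reports color_range=unknown (the dump above has stride equal to width, so no per-row repacking is done here):

```python
import numpy as np

def nv12_to_rgb(frame, width, height):
    """Convert one COLOR_FormatYUV420SemiPlanar (NV12-style) frame to RGB.

    Assumes BT.601 limited ("tv") range coefficients, which may not match
    the camera's actual (unknown) color range.
    """
    y = frame[: width * height].reshape(height, width).astype(np.float32)
    uv = frame[width * height:].reshape(height // 2, width // 2, 2).astype(np.float32)
    # Upsample chroma to full resolution (nearest neighbour).
    u = uv[:, :, 0].repeat(2, axis=0).repeat(2, axis=1)
    v = uv[:, :, 1].repeat(2, axis=0).repeat(2, axis=1)

    y = (y - 16.0) * (255.0 / 219.0)   # expand limited-range luma
    u -= 128.0
    v -= 128.0
    r = y + 1.596 * v
    g = y - 0.391 * u - 0.813 * v
    b = y + 2.018 * u
    return np.clip(np.dstack([r, g, b]), 0, 255).astype(np.uint8)
```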

    I can also point the codec output to a TextureView instead of an OpenGL surface, and just grab the Bitmap that way (obviously slower), and those frames look fine. So maybe it's the OpenGL display of the raw codec output? Does Android's TextureView do its own decoding?

    Note - The reason I'm looking into all this is that I need to run some form of image processing on a raw camera feed at as close to 30fps as possible. Obviously, this is not possible on some devices, but recording a video at 30fps and then processing it after the fact is a possible workaround I'm investigating. I'd rather process the image in OpenGL for the improved speed than take each frame as a Bitmap from the TextureView output.

    In researching this I've seen someone else with pretty much the same issue here: How to properly save frames from mp4 as png files using ExtractMpegFrames.java? He didn't seem to have much luck finding out what might be going wrong, though.

    EDIT - ffprobe outputs for both videos...

    Video that works - https://justpaste.it/484ec .
    Video that fails - https://justpaste.it/55in0 .

  • Reading FFmpeg bytes from named pipes, extracted NAL units are bad/corrupted

    12 April 2023, by Mr Squidr

    I'm trying to read an .mp4 file with ffmpeg and read the bytes from the named pipe, which I then want to package into an RTP stream and send over WebRTC.

    


    What I learned is that H264 video consists of many NAL units. So what I do in my code is read bytes from the named pipe and try to extract NAL units. The problem is that the bytes I get seem to make no real sense, as NAL unit starts are sometimes only a few bytes apart.

    


    I tested on multiple different mp4 files and on multiple h264 files; all have the same issue. NAL unit starts are found, but they aren't separated properly, or what I'm reading aren't NAL units at all. For example, the NAL unit start offsets from reading a sample .h264 file were: 4, 32, 41, 717. This doesn't make a lot of sense if these are NAL units; some are very close together and some far apart. I'm lost as to what I'm doing wrong.
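    (A note on those offsets: an Annex B stream normally begins with SPS and PPS NAL units, which are only a handful of bytes each, so small gaps like 4, 32, 41 are not necessarily wrong by themselves. Here is a self-contained scanner, my own illustration rather than the question's code, that returns the positions of the start codes themselves, run on a synthetic buffer:)

```python
def find_nal_unit_offsets(data):
    """Return offsets of Annex B start codes (00 00 01 or 00 00 00 01).

    Offsets point at the first byte of the start code itself; the NAL
    header byte follows the start code.
    """
    offsets = []
    i = 0
    while i < len(data) - 3:
        if data[i] == 0 and data[i + 1] == 0:
            if data[i + 2] == 1:
                offsets.append(i)
                i += 3
                continue
            if data[i + 2] == 0 and i + 3 < len(data) and data[i + 3] == 1:
                offsets.append(i)
                i += 4
                continue
        i += 1
    return offsets

# A synthetic stream: 4-byte start code + SPS-like unit, then a
# 3-byte start code + PPS-like unit (payload bytes are made up).
stream = b"\x00\x00\x00\x01\x67\x42\x00\x1f" + b"\x00\x00\x01\x68\xce\x06\xe2"
print(find_nal_unit_offsets(stream))  # [0, 8]
```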

    


    The issue might also be in the ffmpeg command itself. I do think I need the "-c:v libx264 -bsf:v h264_mp4toannexb" arguments for the output to be in the correct format, but I'm not certain.
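    (One observation, not a verified fix: -f image2pipe is ffmpeg's image-sequence muxer; for a raw Annex B elementary stream the h264 muxer is the usual choice, and on Windows a named pipe is addressed as \\.\pipe\name, so a bare ffmpeg_rec_stream output path would write an ordinary file instead. A sketch of the corresponding argument list, written in Python for brevity; the Arguments string in the C# below would mirror it:)

```python
import subprocess

# Hypothetical rework of the question's command line.
args = [
    "ffmpeg",
    "-y", "-re",
    "-i", "input.mp4",
    "-an",                           # drop audio, as in the question
    "-c:v", "libx264",               # re-encode to H.264 (raw output is already
                                     # Annex B, so the bitstream filter below
                                     # may be redundant when re-encoding)
    "-bsf:v", "h264_mp4toannexb",
    "-f", "h264",                    # raw H.264 muxer instead of image2pipe
    r"\\.\pipe\ffmpeg_rec_stream",   # Windows named-pipe path (assumed)
]
# proc = subprocess.Popen(args, stdout=subprocess.PIPE)  # run where ffmpeg is installed
```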

    


    I did try sending NAL units that seemed OK over WebRTC, but nothing was displayed on the receiving end (probably because of how H264 needs previous frames, I'm not sure).

    


    I've been struggling with this issue for the past few days, and no matter what I tried, the NAL units were never as they should be.

    


    Code to start the ffmpeg process from C#:

    


    var proc = new Process()
    {
        StartInfo =
        {
            FileName = FFMPEG_LIB_PATH,
            Arguments = "-y -re -i input.mp4 -an -c:v libx264 -bsf:v h264_mp4toannexb -f image2pipe ffmpeg_rec_stream",
            UseShellExecute = false,
            CreateNoWindow = true,
            RedirectStandardInput = false,
            RedirectStandardOutput = true,
        }
    };


    


    Code to connect to the named pipe:

    


    var mOutputPipe = new NamedPipeServerStream($"ffmpeg_rec_stream", PipeDirection.InOut, 1, PipeTransmissionMode.Byte, PipeOptions.Asynchronous, 102400, 102400);
mOutputPipe.BeginWaitForConnection(OnOutputPipeConnected, null);


    


    Code for OnOutputPipeConnected:

    


    private void OnOutputPipeConnected(IAsyncResult ar)
        {
            try
            {
                mOutputPipe.EndWaitForConnection(ar);
                var buffer = new byte[65536];
                while (true)
                {
                    int bytesRead = mOutputPipe.Read(buffer, 0, buffer.Length);
                    if (bytesRead == 0)
                    {
                        break;
                    }

                    var nalUnitStarts = FindAllNalUnitIndexes(buffer, bytesRead);
                    for (int i = 0; i < nalUnitStarts.Count - 1; i++)
                    {
                        int nalStartIndex = nalUnitStarts[i];
                        int nalEndIndex = nalUnitStarts[i + 1] - 1;
                        int nalLength = nalEndIndex - nalStartIndex + 1;
                        byte[] nalUnit = new byte[nalLength];
                        Buffer.BlockCopy(buffer, nalStartIndex, nalUnit, 0, nalLength);

                        // send nalUnit over to webrtc client
                        var rtpPacket = new RTPPacket(nalUnit);
                        RecordingSession?.RTCPeer.SendRtpRaw(SDPMediaTypesEnum.video, rtpPacket.Payload, rtpPacket.Header.Timestamp, rtpPacket.Header.MarkerBit, 100);
                    }
                }
            }
            catch (Exception e)
            {
                
            }
        }


    


    Code for finding NAL units:

    


    private static List<int> FindAllNalUnitIndexes(byte[] buffer, int length)
    {
        var indexes = new List<int>();
        int i = 0;

        while (i < length - 4)
        {
            int nalStart = FindNextNalUnit(buffer, i, length);
            if (nalStart == -1)
            {
                break;
            }
            else
            {
                indexes.Add(nalStart);
                i = nalStart + 1;
            }
        }

        return indexes;
    }

    private static int FindNextNalUnit(byte[] buffer, int startIndex, int length)
    {
        for (int i = startIndex; i < length - 4; i++)
        {
            if (buffer[i] == 0 && buffer[i + 1] == 0 && (buffer[i + 2] == 1 || (buffer[i + 2] == 0 && buffer[i + 3] == 1)))
            {
                return i + (buffer[i + 2] == 1 ? 3 : 4);
            }
        }
        return -1;
    }
