
Media (91)

Other articles (62)

  • Retrieving information from the master site when installing an instance

    26 November 2010, by

    Purpose
    On the main site, a shared (mutualised) instance is defined by several things: its data in the spip_mutus table; its logo; its main author (id_admin in the spip_mutus table, corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the mutualised instance.
    It can therefore make perfect sense to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)

  • Configurable image and logo sizes

    9 February 2011, by

    In many places on the site, logos and images are resized to fit the slots defined by the themes. Since these sizes can vary from one theme to another, they can all be defined directly in the theme, sparing the user from having to reconfigure them manually after changing the site's appearance.
    These image sizes are also available in MediaSPIP Core's specific configuration. The maximum size of the site logo in pixels, allowing (...)

  • Organizing by category

    17 May 2013, by

    In MediaSPIP, a section goes by two names: category and rubrique (section).
    The various documents stored in MediaSPIP can be filed under different categories. A category can be created by clicking on "publish a category" in the publish menu at the top right (after logging in). A category can itself be filed under another category, so a whole tree of categories can be built.
    The next time a document is published, the newly created category will be offered (...)

On other sites (3452)

  • Revision 7b756183aa: Fix to source scaling for dynamic_resize. The fast scaling for 1 pass mode was

    14 July 2015, by Marco

    Changed Paths:
     Modify /vp9/encoder/vp9_encoder.c



    Fix to source scaling for dynamic_resize.

    The fast scaling for 1 pass mode was being used only on the
    first frame after a resize event (because resize_scale_num/den
    is set to 1 and only changed for the first frame following the resize event).
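
    A rough sketch of the behaviour being described (hypothetical names only, not the actual vp9_encoder.c code): the ratio is reset to 1:1 every frame and only changed on the frame right after a resize, so the fast 1-pass scaling path was taken on that single frame and never again.

    /* Illustrative sketch only: hypothetical names, not the real libvpx code. */
    struct ResizeState {
      int resize_scale_num;    /* reset to 1 every frame ...               */
      int resize_scale_den;    /* ... so the ratio defaults to 1:1         */
      int frame_after_resize;  /* nonzero only on the frame after a resize */
    };

    static void scale_source(struct ResizeState *rs, int one_pass_mode) {
      rs->resize_scale_num = 1;
      rs->resize_scale_den = 1;
      if (rs->frame_after_resize) {
        rs->resize_scale_num = 3;  /* e.g. scale the source down to 3/4 */
        rs->resize_scale_den = 4;
      }
      if (one_pass_mode && rs->resize_scale_num != rs->resize_scale_den) {
        /* fast scaling path: before the fix, reached only on the single
           frame immediately following the resize event */
      } else {
        /* regular (slower) scaling path used on every other frame */
      }
    }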

    Change-Id: I723b63e21823eb858f25f5662d2bbe4f1842e61f

  • Updating SDL yuv Texture

    15 June 2015, by madprogrammer2015

    I am receiving an H.264 video stream and successfully decoding it with FFmpeg. It displays the first frame of data, but after that the screen never updates; it just appears to become a static image. I am using the YUV pixel format, and I am receiving the stream in that format as well. I am also using SDL_UpdateYUVTexture().

    Here is my code:

    int main()
    {
       WORD wVersionRequested;
       WSADATA wsaData;
       int wsaerr;

       if (SDL_Init(SDL_INIT_EVERYTHING)) {
           fprintf(stderr, "Could not initialize SDL - %s\n", SDL_GetError());
           exit(1);
       }

       // Using MAKEWORD macro, Winsock version request 2.2
       wVersionRequested = MAKEWORD(2, 2);

       wsaerr = WSAStartup(wVersionRequested, &wsaData);

       if (wsaerr != 0)
       {
           /* Tell the user that we could not find a usable */
           /* WinSock DLL.*/
           printf("The Winsock dll not found!\n");
           return 0;
       }
       else
       {
           printf("The Winsock dll found!\n");
           printf("The status: %s.\n", wsaData.szSystemStatus);
       }

       /* Confirm that the WinSock DLL supports 2.2.*/
       /* Note that if the DLL supports versions greater    */
       /* than 2.2 in addition to 2.2, it will still return */
       /* 2.2 in wVersion since that is the version we      */
       /* requested.                                        */
       if (LOBYTE(wsaData.wVersion) != 2 || HIBYTE(wsaData.wVersion) != 2)
       {
           /* Tell the user that we could not find a usable */
           /* WinSock DLL.*/
           printf("The dll do not support the Winsock version %u.%u!\n", LOBYTE(wsaData.wVersion), HIBYTE(wsaData.wVersion));
           WSACleanup();
           return 0;
       }
       else
       {
           printf("The dll supports the Winsock version %u.%u!\n", LOBYTE(wsaData.wVersion), HIBYTE(wsaData.wVersion));
           printf("The highest version this dll can support: %u.%u\n", LOBYTE(wsaData.wHighVersion), HIBYTE(wsaData.wHighVersion));
       }

       ULONG              localif;
       /*INT Ret;
       HANDLE ThreadHandle;
       DWORD ThreadId;
       WSAEVENT AcceptEvent;
       char               buf[1024];
       int                buflen = 1024, rc, err;*/

       SOCKET s;
       SOCKET ns;
       SOCKADDR_IN        multi, safrom;
       int fromlen;

       int totalSize = 0;

       AVCodec *codec;
       AVCodecContext *codecContext;
       int frame;
       int got_picture;
       AVFrame *picture;
       AVPacket packet;
       SwsContext* convertContext;
       uint16_t i = 1;
       //std::queue<madproto> queue;
       //std::list<madproto> list;
       AVCodecParserContext *parser;
       std::vector<uint8_t> buffer;  // accumulates incoming NAL bytes until a frame boundary
       //moodycamel::ConcurrentQueue<madproto> protoQueue;

       SDL_Window *window;
       SDL_Renderer *renderer;
       SDL_Texture *bmp;
       SDL_Rect rect;

       file.open("log.txt");

       s = socket(AF_INET, SOCK_STREAM, IPPROTO_RM);

       multi.sin_family = AF_INET;
       multi.sin_port = htons(5150);
       multi.sin_addr.s_addr = inet_addr("234.5.6.7");
       int bindResult = bind(s, (PSOCKADDR)&multi, sizeof(multi));

       if (bindResult < 0)
       {
           std::cout << "bindResult: " << WSAGetLastError() << std::endl;
       }

       listen(s, 10);

       //if ((AcceptEvent = WSACreateEvent()) == WSA_INVALID_EVENT)
       //{
       //  printf("WSACreateEvent() failed with error %d\n", WSAGetLastError());
       //  return 1;
       //}
       //else
       //  printf("WSACreateEvent() is OK!\n");

       //// Create a worker thread to service completed I/O requests
       //if ((ThreadHandle = CreateThread(NULL, 0, WorkerThread, (LPVOID)AcceptEvent, 0, &ThreadId)) == NULL)
       //{
       //  printf("CreateThread() failed with error %d\n", GetLastError());
       //  return 1;
       //}
       //else
       //  printf("CreateThread() should be fine!\n");

       localif = inet_addr("192.168.1.2");
       setsockopt(s, IPPROTO_RM, RM_ADD_RECEIVE_IF, (char *)&localif, sizeof(localif));

       fromlen = sizeof(safrom);
       ns = accept(s, (SOCKADDR *)&safrom, &fromlen);

       closesocket(s);  // Don't need to listen anymore

       std::string received;

       av_register_all();

       int horizontal = 0;
       int vertical = 0;

       GetDesktopResolution(horizontal, vertical);

       codec = avcodec_find_decoder(CODEC_ID_H264);
       if (!codec) {
           std::cout &lt;&lt; "codec not found" &lt;&lt; std::endl;
           std::cin.get();
       }

       codecContext = avcodec_alloc_context3(codec);

       /*if (codec->capabilities & CODEC_CAP_TRUNCATED)
           codecContext->flags |= CODEC_FLAG_TRUNCATED;*/

       //codecContext->flags |= CODEC_FLAG_LOW_DELAY;
       codecContext->flags2 |= CODEC_FLAG2_CHUNKS;

       codecContext->width = horizontal;
       codecContext->height = vertical;
       codecContext->codec_id = CODEC_ID_H264;
       codecContext->codec_type = AVMEDIA_TYPE_VIDEO;
       codecContext->pix_fmt = PIX_FMT_YUV420P;
       codecContext->thread_type = 0;

       if (avcodec_open2(codecContext, codec, NULL) < 0) {
           std::cout << "could not open codec" << std::endl;
           std::cin.get();
       }

       convertContext = sws_getContext(
           codecContext->width,
           codecContext->height,
           PIX_FMT_RGB32,
           codecContext->width,
           codecContext->height,
           PIX_FMT_YUV420P,
           SWS_BICUBIC,
           NULL,
           NULL,
           NULL
           );

       parser = av_parser_init(CODEC_ID_H264);

       picture = av_frame_alloc();

       if (ns == INVALID_SOCKET)
       {
           std::cout &lt;&lt; "accept didn't work!" &lt;&lt; std::endl;
           std::cin.get();
       }

       /*if (WSASetEvent(AcceptEvent) == FALSE)
       {
           printf("WSASetEvent() failed with error %d\n", WSAGetLastError());
           return 1;
       }
       else
           printf("WSASetEvent() should be working!\n");*/

       window = SDL_CreateWindow("YUV", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, codecContext->width, codecContext->height, SDL_WINDOW_SHOWN);
       renderer = SDL_CreateRenderer(window, -1, 0);
       bmp = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_IYUV, SDL_TEXTUREACCESS_STREAMING, codecContext->width, codecContext->height);

       //receive = SDL_CreateThread(receiveThread, "ReceiveThread", (void *)NULL);
       bool quit = false;

       rect.x = 0;
       rect.y = 0;
       rect.w = codecContext->width;
       rect.h = codecContext->height;

       while (!quit)
       {
           while (true)
           {
               MadProto proto;
               int result = recvfrom(ns, (char *)&proto, sizeof(MadProto), 0, (struct sockaddr *)&multi, &fromlen);

               if (result < 0)
               {
                   std::cout << "receive failed! error: " << WSAGetLastError() << std::endl;
                   break;
               }
               else
               {
                   std::cout &lt;&lt; "receive successful, received " &lt;&lt; result &lt;&lt; " bytes" &lt;&lt; std::endl;

                   if (ntohs(proto.frame_end) == 1)
                   {
                       uint8_t *outbuffer = NULL;
                       int outBufSize = 0;
                       int rc = av_parser_parse2(parser, codecContext, &outbuffer, &outBufSize, buffer.data(), buffer.size(), 0, 0, 0);

                       if (outBufSize <= 0)
                       {
                           std::cout << "parsing failed!" << std::endl;
                           std::cout << "outBufSize: " << outBufSize << std::endl;
                           break;
                       }

                       if (rc)
                       {
                           std::cout &lt;&lt; "rc: " &lt;&lt; rc &lt;&lt; std::endl;
                           std::cout &lt;&lt; "parsing successful!" &lt;&lt; std::endl;
                           //std::cin.get();

                           av_init_packet(&packet);
                           packet.size = outBufSize;
                           packet.data = outbuffer;

                           frame = avcodec_decode_video2(codecContext, picture, &got_picture, &packet);

                           if (frame < 0)
                           {
                               std::cout << "decoding was unsuccessful!" << std::endl;
                               break;
                           }

                           if (got_picture)
                           {
                               std::cout &lt;&lt; "decoding was successful!" &lt;&lt; std::endl;
                               std::cout &lt;&lt; "decoded length was: " &lt;&lt; frame &lt;&lt; std::endl;
                               buffer.empty();

                               //std::cin.get();
                               int code = SDL_UpdateYUVTexture(bmp, NULL, picture->data[0], picture->linesize[0],
                                   picture->data[1], picture->linesize[1],
                                   picture->data[2], picture->linesize[2]);

                               if (code < 0)
                               {
                                   std::cout << "unable to update texture " << SDL_GetError() << std::endl;
                                   std::cin.get();
                               }

                               code = SDL_RenderClear(renderer);

                               if (code < 0)
                               {
                                   std::cout << "renderer clear failed " << SDL_GetError() << std::endl;
                                   std::cin.get();
                               }

                               code = SDL_RenderCopy(renderer, bmp, NULL, &rect);

                               if (code < 0)
                               {
                                   std::cout << "renderer copy failed " << SDL_GetError() << std::endl;
                                   std::cin.get();
                               }

                               SDL_RenderPresent(renderer);

                               SDL_Delay(40);
                           }

                           av_free_packet(&packet);
                       }
                   }
                   else
                   {
                       std::copy(proto.payload, proto.payload + ntohs(proto.nal_length), std::back_inserter(buffer));
                       std::cout &lt;&lt; "frame is continuing!" &lt;&lt; std::endl;

                       //queue.push(proto);
                       //list.push_front(proto);
                   }
               }
           }

       SDL_WaitEvent(&event);
           switch (event.type)
           {
           case SDL_QUIT:
               quit = true;
               break;
           }
       }

       std::cout &lt;&lt; "closing everything!" &lt;&lt; std::endl;
       av_frame_free(&amp;picture);
       closesocket(ns);
       fclose(f);
       std::cin.get();
       return 0;
    }
  • Managing Music Playback Channels

    30 June 2013, by Multimedia Mike — General

    My Game Music Appreciation site allows users to interact with old video game music by toggling various channels, as long as the underlying synthesizer engine supports it.


    5 NES voices

    Users often find their way to the Nintendo DS section pretty quickly. This is when they notice an obnoxious quirk with the channel toggling feature: specifically, one channel doesn’t seem to map to a particular instrument or track.

    When it comes to computer music playback methodologies, I have long observed that there are 2 general strategies: fixed channel and dynamic channel allocation.

    Fixed Channel Approach
    One of my primary sources of computer-based entertainment used to be watching music. Sure, I listened to it as well. But for things like Amiga MOD files and related tracker formats, there was a rich ecosystem of fun music playback programs that visualized the music. There exist music visualization modes in various music players these days (such as iTunes and Windows Media Player), but those largely just show you a single waveform. These files were real-time syntheses based on multiple audio channels and usually showed some form of analysis for each channel. My personal favorite was Cubic Player:


    Open Cubic Player -- oscilloscopes

    Most of these players supported the concept of masking individual channels. In doing so, the user could isolate, study, and enjoy different components of the song. For many 4-channel Amiga MOD files, I observed that the common arrangement was to use the 4 channels for beat (percussion track), bass line, chords, and melody. Thus, it was easy to just listen to, e.g., the bass line in isolation.

    MODs and similar formats specified precisely which digital audio sample to play at what time and on which specific audio channel. To view the internals of one of these formats, one gets the impression that they contain an extremely computer-centric view of music.
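
    As a rough illustration of that computer-centric view (hypothetical names, not an actual MOD parser), a pattern cell boils down to "play this sample at this pitch, on whichever fixed channel this column represents":

    // Illustrative sketch only: hypothetical names, not a real MOD parser.
    #include <array>
    #include <cstdint>

    struct PatternCell {
      uint8_t  sample;  // digital audio sample number to trigger (0 = none)
      uint16_t period;  // pitch, stored as an Amiga-style period value
      uint8_t  effect;  // optional per-cell effect command
    };

    // One row of a classic 4-channel MOD pattern: the channel a note plays
    // on is simply the column it is written in, i.e. fixed channel assignment.
    using PatternRow = std::array<PatternCell, 4>;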

    Dynamic Channel Allocation Algorithm
    MODs et al. enjoyed a lot of popularity, but the standard for computer music is MIDI. While MOD and friends took a computer-centric view of music, MIDI takes, well, a music-centric view of music.

    There are MIDI visualization programs as well. The one that came with my Gravis Ultrasound was called PLAYMIDI.EXE. It looked like this…


    Gravis Ultrasound PLAYMIDI.EXE application

    … and it confused me. There are 16 distinct channels being visualized but some channels are shown playing multiple notes. When I dug into the technical details, I learned that MIDI just specifies what notes need to be played, at what times and frequencies and using which instrument samples, and it was the MIDI playback program’s job to make it happen.

    Thus, if a MIDI file specifies that track 1 should play a C major chord consisting of notes C, E, and G, it would transmit events “key-on C; delta time 0; key-on E; delta time 0; key-on G; delta time …; [other commands]”. If the playback program has access to multiple channels (say, up to 32, in the case of the GUS), the intuitive approach would be to maintain a pool of all available channels. Then, when it’s time to process the “key-on C” event, fetch the first available channel from the pool, mark it as in-use, play C on the channel, and return that channel to the pool when either the sample runs its course or the corresponding “key-off C” event is encountered in the MIDI command stream.
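
    A minimal sketch of that pool-based allocation (illustrative names only, not any particular player's code, and ignoring the case where the same note sounds on several tracks at once):

    // Dynamic channel allocation for MIDI key-on/key-off events.
    // Hypothetical names; not taken from any real MIDI player.
    #include <cstdint>
    #include <map>
    #include <vector>

    class ChannelPool {
    public:
      explicit ChannelPool(int count) {
        for (int ch = 0; ch < count; ++ch) free_.push_back(ch);
      }

      // "key-on": grab the first available hardware channel and remember
      // which note it is playing; returns -1 if every channel is busy.
      int NoteOn(uint8_t note) {
        if (free_.empty()) return -1;
        int ch = free_.back();
        free_.pop_back();
        in_use_[note] = ch;
        return ch;  // caller starts the instrument sample on this channel
      }

      // "key-off" (or the sample running its course): return the channel
      // that was playing this note to the pool.
      void NoteOff(uint8_t note) {
        auto it = in_use_.find(note);
        if (it == in_use_.end()) return;
        free_.push_back(it->second);
        in_use_.erase(it);
      }

    private:
      std::vector<int> free_;          // channels currently available
      std::map<uint8_t, int> in_use_;  // note -> channel playing it
    };

    For the GUS example above, the player would construct something like ChannelPool pool(32); and call NoteOn/NoteOff as it walks the MIDI event stream.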

    About That Game Music
    Circling back around to my game music website, numerous supported systems use the fixed channel approach for playback while others use the dynamic channel allocation approach, including every Nintendo DS game I have analyzed so far.

    Which approach is better? As in many technical matters, there are trade-offs either way. The fixed channel approach is often necessary because, on many older audio synthesis systems, different channels had very specific purposes. The 8-bit NES had 5 channels: 2 square wave generators (used musically for melody/treble), 1 triangle wave generator (usually used for the bass line), a noise generator (subverted for all manner of percussive sounds), and a limited digital channel (sometimes assigned richer percussive sounds). Dynamic channel allocation wouldn’t work here.
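
    In code, the fixed channel approach amounts to a hard-wired mapping from channel number to musical role, which is also what makes per-channel toggling so predictable on these systems (an illustrative sketch with hypothetical names, not my site's actual code):

    // The five fixed NES (2A03) voices and a simple per-channel mute mask
    // of the kind a channel-toggling UI would flip. Illustrative only.
    enum NesChannel {
      kPulse1   = 0,  // square wave: melody / treble
      kPulse2   = 1,  // square wave: harmony
      kTriangle = 2,  // triangle wave: usually the bass line
      kNoise    = 3,  // noise generator: percussion
      kDpcm     = 4   // limited digital channel: richer percussive samples
    };

    struct ChannelMask {
      unsigned bits = 0x1F;  // all five channels audible by default

      void Toggle(NesChannel ch)        { bits ^= (1u << ch); }
      bool Audible(NesChannel ch) const { return (bits & (1u << ch)) != 0; }
    };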

    But the dynamic approach works great on hardware with 16 digital channels available like, for example, the Nintendo DS. Digital channels are very general-purpose. What about the SNES, with its 8 digital channels? Either approach could work. In practice, most games used a fixed channel approach: games might use 4-6 channels for music while reserving the remainder for various in-game sound effects. Some notable exceptions to this pattern were David Wise’s compositions for Rare’s SNES games (think Battletoads and the various Donkey Kong Country titles). These clearly use some dynamic channel approach, since masking all but one channel will give you a variety of instrument sounds.

    Epilogue
    There! That took a long time to explain but I find it fascinating for some reason. I need to distill it down to far fewer words because I want to make it a FAQ on my website for “Why can’t I isolate specific tracks for Nintendo DS games?”

    Actually, perhaps I should remove the ability to toggle Nintendo DS channels in the first place. Here’s a funny tale of needless work: I found the Vio2sf engine for synthesizing Nintendo DS music and incorporated it into the program. It didn’t support toggling of individual channels, so I figured out a way to add that feature to the engine. And then I noticed that most Nintendo DS games render that feature moot. After I released the webapp, I learned that I was out of date on the Vio2sf engine. The final insult was that the latest version already supports channel toggling. So I did the work for nothing. But then again, since I want to remove that feature from the UI anyway, doubly so.