
Media (91)

Other articles (23)

  • Requesting the creation of a channel

    12 March 2010, by

    Depending on how the platform is configured, the user may have two different methods available for requesting the creation of a channel: the first at the moment of registration, the second after registration, by filling in a request form.
    Both methods ask for the same information and work in much the same way: the prospective user must fill in a series of form fields that first of all give the administrators information about (...)

  • Managing the farm

    2 March 2010, by

    The farm as a whole is managed by "super admins".
    Certain settings can be adjusted to meet the needs of the different channels.
    To begin with, it uses the "Gestion de mutualisation" plugin

  • MediaSPIP Core: Configuration

    9 November 2010, by

    By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin to work): a page specific to the general configuration of the template; a page specific to the configuration of the site's home page; a page specific to the configuration of the sectors.
    It also provides an additional page, which only appears when certain plugins are enabled, for controlling their specific display options and features (...)

On other sites (6461)

  • FFMPEG with SSM is not using the specified network interface

23 January 2014, by caspar

I'm trying to use ffmpeg/ffprobe to join an SSM stream on a server with multiple network interfaces. I'm using the following command to initialize the input:

    c:\ffmpeg\bin>ffprobe "udp://232.2.4.206:24206?localaddr=10.15.248.217&sources=10.15.248.210,10.15.248.211,10.15.248.212,10.15.248.213&connect=1&fifo_size=1000000" -v 9 -loglevel 99

The routing on the server can't be changed, though I can confirm that other applications running on the same server are able to join and receive the multicast signal. The issue is, I believe, with the localaddr parameter, which seems to be ignored. Using Wireshark I can see that the wrong interface is being used (i.e. not the 10.15.248.217 interface).

The output of the above command is:

    ffprobe version N-60087-g94a5241 Copyright (c) 2007-2014 the FFmpeg developers
    built on Jan 21 2014 22:06:13 with gcc 4.8.2 (GCC)
    configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
    libavutil      52. 63.100 / 52. 63.100
    libavcodec     55. 48.102 / 55. 48.102
    libavformat    55. 25.101 / 55. 25.101
    libavdevice    55.  5.102 / 55.  5.102
    libavfilter     4.  1.100 /  4.  1.100
    libswscale      2.  5.101 /  2.  5.101
    libswresample   0. 17.104 /  0. 17.104
    libpostproc    52.  3.100 / 52.  3.100
    [udp @ 000000000272dcc0] end receive buffer size reported is 65536

Anyone have experience with this use case? Perhaps this is a bug that needs raising?

EDIT: I found that if I remove the &sources= parameter, localaddr is used and the request goes through the correct interface; however, since I need to join an SSM stream, this still blocks me.
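
    For reference, the interface can be pinned at the socket level with IP_ADD_SOURCE_MEMBERSHIP, which is what I would expect localaddr and sources to translate to inside ffmpeg's UDP protocol handler. Below is a minimal POSIX-style sketch, illustrative only: it reuses the addresses from my command, joins just one of the four sources (a real client would repeat the join once per source address), and on Windows the same option comes from ws2ipdef.h instead.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);

        // Bind to the multicast port to receive the stream's datagrams.
        sockaddr_in local{};
        local.sin_family = AF_INET;
        local.sin_port = htons(24206);
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        if (bind(sock, reinterpret_cast<sockaddr *>(&local), sizeof(local)) < 0) {
            perror("bind");
            return 1;
        }

        // Source-specific join: group + source + an explicit local interface.
        // The imr_interface field is the part that localaddr should control.
        ip_mreq_source mreq{};
        inet_pton(AF_INET, "232.2.4.206", &mreq.imr_multiaddr);
        inet_pton(AF_INET, "10.15.248.210", &mreq.imr_sourceaddr);
        inet_pton(AF_INET, "10.15.248.217", &mreq.imr_interface);
        if (setsockopt(sock, IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP,
                       &mreq, sizeof(mreq)) < 0) {
            perror("IP_ADD_SOURCE_MEMBERSHIP");
            return 1;
        }

        char buf[2048];
        ssize_t n = recv(sock, buf, sizeof(buf), 0);
        printf("received %zd bytes\n", n);
        close(sock);
        return 0;
    }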

  • ffmpeg/h265/opencv/c++ A method to resize frame after decoding on client side

18 January 2018, by 8793

I’ve just joined a project to build a real-time video streaming application using ffmpeg/opencv/c++ over a UDP socket. On the server side, we want to transmit 640x480 video to the client; to reduce the amount of data sent through the network, I resize the video to 320x240 and send each frame at that size. On the client side, after receiving a frame, we upscale it back to 640x480. We use H265 for encoding/decoding.

As I am just a beginner in video encoding, I would like to understand how to down-sample and up-sample frames on the server and client side in a way that fits in with the video encoder/decoder.

A simple idea that came to mind: after decoding (AVFrame -> Mat), I would upsample the frame and then display it.

I am not sure whether my idea is right or wrong. I would appreciate advice from anyone who has experience in this area. Thank you very much!

    static void updateFrameCallback(AVFrame *avframe, void *userdata) {
        VideoStreamUDPClient *streamer = static_cast<VideoStreamUDPClient *>(userdata);
        TinyClient *client = static_cast<TinyClient *>(streamer->userdata);

        // Update the shared frame under the lock so decode and display don't race.
        pthread_mutex_lock(&client->mtx_updateFrame);
        if (streamer->irect.width == client->frameSize.width
                && streamer->irect.height == client->frameSize.height) {
            // The decoded picture covers the whole frame.
            cvtAVFrameYUV4202Frame(&avframe, client->frame);
            printf("TinyClient: Received Full Frame\n");
        } else {
            // The decoded picture is only a sub-rectangle; copy it into place.
            Mat block;
            cvtAVFrameYUV4202Frame(&avframe, block);
            block.copyTo(client->frame(streamer->irect));
        }

        // How to resize the frame before displaying it???

        imshow("Frame", client->frame);
        waitKey(1);
        pthread_mutex_unlock(&client->mtx_updateFrame);
    }
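
    To make the idea concrete, this is roughly the upscaling step I have in mind, done with OpenCV right after decoding. It is only a sketch: cv::resize and the 640x480 target come straight from the setup described above, displayUpscaled is a hypothetical helper, and INTER_CUBIC is just one reasonable choice of interpolation when enlarging.

    #include <opencv2/imgproc.hpp>
    #include <opencv2/highgui.hpp>

    // Upscale a decoded 320x240 frame back to 640x480 before display.
    void displayUpscaled(const cv::Mat &decoded) {
        cv::Mat display;
        // INTER_CUBIC usually looks better than INTER_LINEAR when enlarging,
        // at some extra CPU cost.
        cv::resize(decoded, display, cv::Size(640, 480), 0, 0, cv::INTER_CUBIC);
        cv::imshow("Frame", display);
        cv::waitKey(1);
    }
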
  • Make video frames from a livestream identifiable across multiple clients

23 September 2016, by mschwaig

    I need to distribute a video stream from a live source to several clients with the additional requirement that each frame is identifiable across all clients.

    I have already done research into the topic, and I have arrived at a possible solution that I can share. My solution seems suboptimal and this is my first experience of working with video streams, so I want to see if somebody knows a better way.

    The reason why I need to be able to identify specific frames within the video stream is that the streaming clients need to be able to talk about the time differences between events each of them identifies in their video stream.

    A little clarifying example

I want to enable the following interaction:

    • Two client applications Dewey and Stevie connect to the streaming server
    • Dewey displays the stream and Stevie saves it to disk
    • Dewey identifies a specific video frame that is of interest to Stevie, so he wants to tell Stevie about it
    • Dewey extracts some identifying information from the video frame and sends it to Stevie
    • Stevie uses the identifying information to extract the same frame from the copy of the livestream he is currently saving

    Dewey cannot send the frame to Stevie directly, because Malcolm and Reese also want to tell him about specific video frames and Stevie is interested in the time difference between their findings.

    Suggested solution

The solution that I found was using ffserver to broadcast an RTP stream and using the timestamps from the RTCP packets to identify frames. These timestamps are normally used to synchronize audio and video, not to provide a shared timeline across several clients, which is why I am skeptical that this is the best way to solve my problem.

It also seems beneficial to have frame numbers, i.e. an increasing counter of frames, instead of arbitrary timestamps that increase by a possibly varying offset: for my application I also have to reference neighboring frames, and it seems easier to compute time differences from frame numbers than the other way around.
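
    To illustrate, this is roughly how I would derive a shared frame index from the RTP timestamps. It is a sketch under two assumptions: a 90 kHz RTP clock (the standard clock for video payloads) and a constant frame rate; firstTimestamp would have to come from a reference point all clients agree on, such as the first RTCP sender report they observe.

    #include <cstdint>

    // Illustrative values: 90 kHz RTP clock, 25 fps stream.
    constexpr uint32_t kRtpClockHz = 90000;
    constexpr uint32_t kFps = 25;
    constexpr uint32_t kTicksPerFrame = kRtpClockHz / kFps; // 3600 ticks per frame

    // Map an RTP timestamp to a frame index shared by all clients.
    uint32_t frameIndex(uint32_t rtpTimestamp, uint32_t firstTimestamp) {
        // Unsigned subtraction also handles timestamp wrap-around at 2^32.
        return (rtpTimestamp - firstTimestamp) / kTicksPerFrame;
    }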