Media (91)

Other articles (43)

  • Installation in farm mode

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP-type sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge, since the usual SPIP private area is no longer used there.
    To begin with, you must have installed the same files as the installation (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
    Once it has been activated, a preconfiguration is automatically put in place by MediaSPIP init so that the new feature is immediately operational. It is therefore not necessary to go through a configuration step for this.

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

On other sites (6786)

  • YUV Overlay in SlimDX / DirectX?

    7 June 2012, by Guillaume

    I decode a video file using ffmpeg, and once it is decoded I get a YUV image.
    How can I display this YUV image as an overlay on a surface (or texture?) using SlimDX / DirectX?

    Thanks.
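
    One common approach, before DirectX is involved at all, is to convert the decoded YUV planes to RGB on the CPU and upload the result as an ordinary texture (doing the conversion in a pixel shader on the GPU is the faster alternative). The following is only a sketch of that CPU-side conversion for a planar YUV 4:2:0 frame using the standard BT.601 coefficients; it assumes NumPy and is not taken from the question:

    import numpy as np

    def yuv420_to_rgb(y, u, v):
        """Convert planar YUV 4:2:0 (BT.601) to an (h, w, 3) uint8 RGB array.

        y is an (h, w) uint8 plane; u and v are (h/2, w/2) uint8 planes.
        The returned array can be uploaded to a texture as-is.
        """
        h, w = y.shape
        # Upsample the chroma planes to full resolution (nearest neighbour)
        # and centre them around zero.
        u = u.repeat(2, axis=0).repeat(2, axis=1)[:h, :w].astype(np.float32) - 128.0
        v = v.repeat(2, axis=0).repeat(2, axis=1)[:h, :w].astype(np.float32) - 128.0
        y = y.astype(np.float32)

        r = y + 1.402 * v
        g = y - 0.344136 * u - 0.714136 * v
        b = y + 1.772 * u

        rgb = np.stack([r, g, b], axis=-1)
        return np.clip(rgb, 0, 255).astype(np.uint8)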

  • Live video stream on server (PC) from images sent by robot through UDP

    3 February 2018, by Richard Knop

    Hmm, I found this, which seems promising:

    http://sourceforge.net/projects/mjpg-streamer/


    Ok. I will try to explain what I am trying to do clearly and in much detail.

    I have a small humanoid robot with a camera and a wifi stick (this is the robot). The robot's wifi stick has an average transfer rate of 1769 KB/s. The robot has a 500 MHz CPU and 256 MB of RAM, so it is not enough for any serious computation (moreover, there are already a couple of modules running on the robot for motion, vision, sonar, speech, etc.).

    I have a PC from which I control the robot. I am trying to have the robot walk around the room while I watch, on the PC, a live video stream of what the robot sees.

    What I already have working: the robot walks as I want it to and takes images with the camera. The images are sent over UDP to the PC, where I receive them (I have verified this by saving the incoming images to disk).

    The camera returns images that are 640 x 480 px in the YUV422 colorspace. I am sending the images with lossy compression (JPEG) because I am trying to get the best possible FPS on the PC. I do the JPEG compression on the robot with the PIL library.

    My questions:

    1. Could somebody please give me some ideas about how to convert the incoming JPEG images to a live video stream? I understand that I will need some video encoder for that. Which video encoder do you recommend? FFMPEG or something else? I am very new to video streaming, so I want to know what is best for this task. I'd prefer to use Python to write this, so I would prefer a video encoder or library that has a Python API. But I guess if the library has a good command-line API it doesn't have to be in Python. (A sketch of one possible approach follows this list.)

    2. What is the best FPS I could get out of this, given the 1769 KB/s average wifi transfer rate and the dimensions of the images? Should I use a different compression than JPEG?

    3. I will be happy to see any code examples. Links to articles explaining how to do this would be fine, too.
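
    One way to answer the first question, assuming the ffmpeg binary is available on the PC, is to pipe the incoming JPEG frames into ffmpeg's stdin via the image2pipe demuxer and let it encode and publish a live stream. This is only a sketch of that idea, not code from the question; the frame rate, encoder settings, and output URL are placeholders to adapt:

    import subprocess

    # Launch ffmpeg once; it reads JPEG frames from stdin (image2pipe) and
    # encodes them into a low-latency H.264 stream.  The udp:// target is a
    # placeholder -- replace it with whatever you actually stream to.
    ffmpeg = subprocess.Popen(
        ["ffmpeg",
         "-f", "image2pipe",          # input is a pipe of images
         "-framerate", "15",          # nominal input frame rate
         "-c:v", "mjpeg",             # each input image is a JPEG
         "-i", "-",                   # read the images from stdin
         "-c:v", "libx264", "-preset", "ultrafast", "-tune", "zerolatency",
         "-f", "mpegts", "udp://127.0.0.1:1234"],
        stdin=subprocess.PIPE)

    def push_frame(jpeg_bytes):
        # Each UDP datagram already holds one complete JPEG image, so it can
        # be written to ffmpeg's stdin unchanged.
        ffmpeg.stdin.write(jpeg_bytes)

    As a rough answer to the second question: 640 x 480 JPEG frames typically come out at around 30-60 KB at moderate quality, so a 1769 KB/s link would, in theory, carry on the order of 30-60 frames per second; the wifi link is therefore unlikely to be the first bottleneck at modest quality settings.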

    Some code samples. Here is how I am sending the JPEG images from the robot to the PC (shortened, simplified snippet). This runs on the robot:

    # lots of code here

    from socket import socket, AF_INET, SOCK_DGRAM
    import StringIO
    from PIL import Image

    UDPSock = socket(AF_INET, SOCK_DGRAM)

    while 1:
        # Grab the current camera frame (camProxy, nameId and addr come
        # from the elided setup code above).
        image = camProxy.getImageLocal(nameId)
        size = (image[0], image[1])   # (width, height)
        data = image[6]               # raw pixel data
        im = Image.fromstring("YCbCr", size, data)

        # Compress the frame to JPEG in memory and send it as one datagram.
        s = StringIO.StringIO()
        im.save(s, "JPEG")
        UDPSock.sendto(s.getvalue(), addr)

        camProxy.releaseImage(nameId)

    UDPSock.close()

    # lots of code here

    Here is how I am receiving the images on the PC. This runs on the PC:

    # lots of code here

    from socket import socket, AF_INET, SOCK_DGRAM

    UDPSock = socket(AF_INET, SOCK_DGRAM)
    UDPSock.bind(addr)

    while 1:
        # buf is the maximum datagram size, defined in the elided code above.
        data, addr = UDPSock.recvfrom(buf)
        # here I need to create a stream from the data
        # which contains JPEG image

    UDPSock.close()

    # lots of code here
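
    As a hint for the missing part above: each datagram is a complete JPEG file, so the bytes can either be wrapped in an in-memory file object and decoded with PIL, or forwarded unchanged to an encoder such as the ffmpeg process sketched earlier. A minimal, assumed completion of the receive loop (the helper name is illustrative, not from the question) could look like this:

    import StringIO
    from PIL import Image

    def handle_datagram(data, ffmpeg=None):
        """Handle one UDP datagram containing a complete JPEG frame."""
        # Wrap the raw bytes in a file-like object and decode the JPEG.
        frame = Image.open(StringIO.StringIO(data))

        # Alternatively, forward the raw JPEG bytes to an already-running
        # ffmpeg process (see the earlier sketch) without decoding them.
        if ffmpeg is not None:
            ffmpeg.stdin.write(data)

        return frame
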
  • Error message in transloadit /audio/merge robot

    20 October 2023, by Gorgsenegger

    I've been using the /audio/merge robot for almost a year now and didn't have any problems (except initially setting up my template). Since yesterday I keep getting the error message

        Invalid audio stream. Exactly one MP3 audio stream is required.

    when trying to merge two mp3 files. The relevant part of my template code is:

        ...
        "merge": {
          "use": {
            "steps": [
              {
                "name": "concatenated",
                "as": "audio"
              },
              {
                "name": "normalized",
                "as": "audio"
              }
            ],
            "bundle_steps": true
          },
          "robot": "/audio/merge",
          "duration": "longest",
          "ffmpeg_stack": "v6.0.0",
          "sign_urls_for": 120
        },
        ...

    The only recent change was updating the ffmpeg_stack version from v4.3.1 to the recommended version v6.0.0.