
Other articles (55)

  • Update from version 0.1 to 0.2

    24 June 2013

    Explanation of the notable changes made when moving from MediaSPIP version 0.1 to version 0.3. What's new?
    Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php is no longer installed, as it is no longer maintained as of (...)

  • Customise by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Write a news item

    21 June 2013

    Present the changes to your MediaSPIP, or news about your projects on your MediaSPIP, using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the news creation form.
    News creation form: for a document of the news type, the fields offered by default are: publication date (customise the publication date) (...)

On other sites (7684)

  • ffmpeg multiple pipes with different filters and outputs

    3 January 2024, by diegoddox

    I'm trying to pipe multiple outputs to S3, but so far it only works with a single pipe; as soon as I add more, I get a "Could not write header (incorrect codec parameters ?): Broken pipe" error.

    


    aws s3 cp s3://bucket-name/original.mp4 - | \
  ffmpeg -f mp4 -i pipe:0 \
  -vf "scale=1280x720:flags=lanczos" -c:a aac -b:a 96k -movflags frag_keyframe+empty_moov -f mp4 pipe:1 | aws s3 cp - s3://bucket-name/720.mp4 --region my-region \
  -vf "scale=854x480:flags=lanczos" -c:a aac -b:a 96k -movflags frag_keyframe+empty_moov -f mp4 pipe:2 | aws s3 cp - s3://bucket-name/480.mp4 --region my-region \
  -vf "scale=640x360:flags=lanczos" -c:a aac -b:a 96k -movflags frag_keyframe+empty_moov -f mp4 pipe:3 | aws s3 cp - s3://bucket-name/360.mp4 --region my-region \
  -ss 00:00:00 -t 3 -vf "fps=10,scale=720:-1" -movflags frag_keyframe+empty_moov -f gif pipe:4 | aws s3 cp - s3://bucket-name/thumbnail.gif --region my-region


    


    Thanks in advance
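    For context, one pattern that is sometimes used to get several outputs out of a single ffmpeg run is to give each output its own file descriptor and attach a separate upload to each descriptor with bash process substitution: only one output can go to stdout (the shell pipe), and as written above everything after the first "| aws s3 cp ..." is parsed by the shell as arguments to aws rather than to ffmpeg. The sketch below is a hedged illustration of that pattern, not taken from the post; the bucket name, region and descriptor numbers are placeholders.

    # bash sketch: one ffmpeg run, three outputs, one S3 upload per file descriptor
    aws s3 cp s3://bucket-name/original.mp4 - | \
    ffmpeg -f mp4 -i pipe:0 \
      -vf "scale=1280:720:flags=lanczos" -c:a aac -b:a 96k \
          -movflags frag_keyframe+empty_moov -f mp4 pipe:3 \
      -vf "scale=854:480:flags=lanczos" -c:a aac -b:a 96k \
          -movflags frag_keyframe+empty_moov -f mp4 pipe:4 \
      -ss 00:00:00 -t 3 -vf "fps=10,scale=720:-1" -f gif pipe:5 \
      3> >(aws s3 cp - s3://bucket-name/720.mp4 --region my-region) \
      4> >(aws s3 cp - s3://bucket-name/480.mp4 --region my-region) \
      5> >(aws s3 cp - s3://bucket-name/thumbnail.gif --region my-region)

    Each -vf/-c:a/-f group applies only to the output that follows it, and the frag_keyframe+empty_moov flags keep the MP4 outputs writable to non-seekable pipes.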

    


  • Raspberry Pi Camera Module - Stream to LAN

    20 August 2015, by user3096434

    I have a little problem with the setup of my RasPi camera infrastructure. Basically, I have an RPi 2 which shall act as a motionEye server from now on, and two Pi B+ units with camera modules.

    Previously, when I had only one camera in my network, I used the following command to stream the output from the RPi B+ camera module to YouTube in full HD. So far, this command works flawlessly:

    raspivid -n -vf -hf -t 0 -w 1920 -h 1080 -fps 30 -b 3750000 -g 50 -o - | ffmpeg -ar 8000 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -f h264 -i - -vcodec copy -acodec aac -ab 64k -g 50 -strict experimental -f flv $RTMP_URL/$STREAM_KEY

    Now I have a second RPi with a camera module and figured it might be time for a change towards motionEye, as I can then view both/all cameras in my network within the same software. I have motionEye installed on my RPi 2 and the software is running correctly.

    I have a little problem when it comes to accessing the data stream from the RPi B+ camera on my local network.

    Basically, I cannot figure out how to change the ffmpeg portion of the above-mentioned command so that it streams the data to localhost (or to the RPi 2 IP where motionEye runs; which one should I use?) instead of YouTube or any other video host.

    I wonder if changing the following part is the right approach:

    Instead of using variables to define the YouTube URL and stream key

    -f flv $RTMP_URL/$STREAM_KEY

    And change this to

    -f flv 10.1.1.11:8080
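    As a point of reference, a bare 10.1.1.11:8080 is not a network destination the flv muxer understands; that muxer is normally paired with an rtmp:// URL and an RTMP server to publish to. A hedged sketch of one alternative, not from the post: keep the raspivid pipeline but stream-copy the H.264 to the motionEye host as MPEG-TS over UDP (10.1.1.11 is the example address above; port 8554 is an arbitrary choice), then point whatever consumes the stream at that URL.

    # Hedged sketch: send the camera's H.264 to the LAN as MPEG-TS over UDP,
    # still without re-encoding the video.
    raspivid -n -vf -hf -t 0 -w 1920 -h 1080 -fps 30 -b 3750000 -g 50 -o - | \
      ffmpeg -f h264 -i - -vcodec copy -f mpegts udp://10.1.1.11:8554

    On the receiving machine, something like "ffplay udp://0.0.0.0:8554" can be used to check that the stream arrives; whether motionEye can consume it directly depends on which protocols its network-camera support accepts.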

    Will I then be able to add this RPi B+ video stream to my RPi 2 motionEye server by using motionEye's 'add network camera' function?

    From my understanding, I should be able to enter the following details into the motionEye 'add network camera' wizard:

    Camera type: network camera
    RTSP-URL: 10.1.1.11:8080
    User: Pi
    Pass: [my pwd]
    Camera: [my ffmpeg stream shall show here]

    Thanks in advance!

    Uhm, and then... how do I forward the video stream from a given camera connected to motionEye? For example, from motionEye to YouTube (or similar), without re-encoding the stream?

    The command shown above streams directly to YouTube, but I want the video streamed to the local network/motionEye server instead, so that from there I can decide which camera's stream, and when, I want to send to YouTube.

    How would an RPi professional realise this?

    The command above, explained: it takes full-HD video at 30 fps from the Pi camera module and hardware-encodes it on the GPU at 3.75 Mbit/s. Then I stream-copy the video (no re-encoding) and add some audio, so that the stream complies with YouTube's rules (yes, no live stream without audio). The audio is taken from the virtual SB16 /dev/zero at a low sampling rate, then encoded to 64k AAC and sent to YouTube. Works fine xD.

    It's just that when I have three or more of these RPi cams, the YouTube stream approach isn't feasible anymore, as my DSL upstream is limited (10 Mbit/s). Thus I need a motionEye server and some magic, so I can watch e.g. all three cameras' video streams, and the motionEye server can then select and stream-copy the video from whichever Pi cam I choose and send it to YouTube, as the original command did.

    Any help, tips, links to similar projects highly appreciated.

    Again, many thanks in advance, and even more thanks just cause you read until here.

    —mx

  • ffmpeg: get aspect ratio of input video

    1 January 2013, by user732274

    I wrote a shell script which accepts any video file (any dimensions, any rotation, etc.) and uses ffmpeg to pad and resize that video to 720x576, in order to make it suitable for a video DVD without requiring the user to calculate anything. Now: I know I can access the size of the input video using the "iw" and "ih" keywords (in the -vf syntax), but I don't know which keyword gives access to its aspect ratio (of course I don't mean reading it from the ffmpeg output; I need to access it within the command line).
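    Two possibilities, offered as a hedged sketch rather than a definitive answer: the scale and pad filter expressions expose the constants a (input width/height ratio), dar (display aspect ratio) and sar (sample aspect ratio) alongside iw and ih, and ffprobe can print the stream's display_aspect_ratio for use in the shell script itself. The file names below are placeholders, and the DVD-specific encoding options are omitted.

    # 1) Use the aspect-ratio constants directly inside -vf
    #    ("a" = iw/ih, "dar" = display aspect ratio, "sar" = sample aspect ratio):
    ffmpeg -i input.mp4 \
      -vf "scale='if(gt(dar,720/576),720,-2)':'if(gt(dar,720/576),-2,576)',pad=720:576:(ow-iw)/2:(oh-ih)/2" \
      output.mp4

    # 2) Read the ratio from the shell with ffprobe and do the arithmetic in the script:
    ffprobe -v error -select_streams v:0 \
      -show_entries stream=display_aspect_ratio \
      -of default=noprint_wrappers=1:nokey=1 input.mp4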