
Other articles (37)

  • (De)Activating features (plugins)

    18 February 2011, by

    To manage the addition and removal of extra features (plugins), MediaSPIP uses SVP as of version 0.2.
    SVP makes it easy to activate plugins from the MediaSPIP configuration area.
    To reach it, simply go to the configuration area and open the "Gestion des plugins" page.
    By default, MediaSPIP ships with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work perfectly with each (...)

  • Activating visitor registration

    12 April 2011, by

    It is also possible to activate visitor registration, which lets anyone open an account on the channel in question, for instance for open projects.
    To do so, simply go to the site's configuration area and choose the "Gestion des utilisateurs" sub-menu. The first form shown corresponds to this feature.
    By default, MediaSPIP created during its initialization a menu entry in the top menu of the page leading (...)

  • Accepted formats

    28 January 2010, by

    The following commands give information about the formats and codecs handled by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use:
    h264 : H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
    m4v : raw MPEG-4 video format
    flv : Flash Video (FLV) / Sorenson Spark / Sorenson H.263
    Theora
    wmv :
    To begin with, we (...)
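
    As a quick illustration of the two commands above, a minimal sketch in Python (ffmpeg is assumed to be on the PATH; the ffmpeg_lists helper is hypothetical, not part of MediaSPIP):

    import subprocess

    def ffmpeg_lists(name, kind="-codecs"):
        # Ask the local ffmpeg for its codec or format table ("-codecs" or "-formats")
        # and report whether `name` appears as a word in the output.
        out = subprocess.run(["ffmpeg", "-hide_banner", kind],
                             capture_output=True, text=True).stdout
        return any(name in line.split() for line in out.splitlines())

    print(ffmpeg_lists("h264"))             # is the h264 codec available?
    print(ffmpeg_lists("flv", "-formats"))  # is the flv container available?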

On other sites (5336)

  • How to receive udp stream with OpenCV?

    17 February 2021, by Legion

    I need to receive a stream from my Jetson Nano in my OpenCV program on my PC (Windows 10).

    OK, I stream the camera from my device (Jetson Nano) using:

    cv::VideoWriter gst_udpsink("appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw, format=BGRx ! nvvidconv ! nvv4l2h264enc insert-vui=1 ! video/x-h264, stream-format=byte-stream ! h264parse ! rtph264pay pt=96 config-interval=1 ! udpsink host=224.1.1.1 port=5000 auto-multicast=true", cv::CAP_GSTREAMER, 0, fps, cv::Size (width, height));

    I installed OpenCV with GStreamer (following that) and tried this command:

    c:\gstreamer\1.0\msvc_x86_64\bin\gst-launch-1.0.exe  udpsrc uri=udp://224.1.1.1:5000 auto-multicast=true ! application/x-rtp, media=video, encoding-name=H264 ! rtpjitterbuffer latency=300 ! rtph264depay ! decodebin ! d3dvideosink

    It works, but unfortunately, no matter what latency I set, I still get quite a big lag. When I try to use OpenCV:

    cv::VideoCapture cap("udpsrc uri=udp://224.1.1.1:5000 auto-multicast=true ! application/x-rtp, media=video, encoding-name=H264 ! rtpjitterbuffer latency=300 ! rtph264depay ! decodebin ! videoconvert ! video/x-raw, format=BGR ! appsink", cv::CAP_GSTREAMER);

    I get:

    [ WARN:0] global F:\Code\opencv_4.5.1\opencv-4.5.1\modules\videoio\src\cap_gstreamer.cpp (734) cv::GStreamerCapture::open OpenCV | GStreamer warning: Error opening bin: no element "udpsrc"
    [ WARN:0] global F:\Code\opencv_4.5.1\opencv-4.5.1\modules\videoio\src\cap_gstreamer.cpp (501) cv::GStreamerCapture::isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created

    And .isOpened() gives me false.

    I don't know why. Did I install something wrong?

    I added everything to my PATH as instructed.

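    Since the warnings come from cap_gstreamer.cpp, this OpenCV build does link some GStreamer, but the udpsrc element (shipped in gst-plugins-good) is not visible to the process, so it is worth confirming which GStreamer OpenCV was built against and whether its plugins can be found. A minimal check, assuming Python bindings from the same OpenCV build are at hand (the same text is returned by cv::getBuildInformation() in C++):

    import cv2

    # Print the GStreamer line of OpenCV's build configuration;
    # "GStreamer: YES (1.x)" means the backend was compiled in.
    for line in cv2.getBuildInformation().splitlines():
        if "GStreamer" in line:
            print(line.strip())
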
    I also tried to use FFmpeg:

    setenv("OPENCV_FFMPEG_CAPTURE_OPTIONS", "protocol_whitelist;file,rtp,udp", 1);
    cap = cv::VideoCapture("test.sdp", cv::CAP_FFMPEG);

    I get:

    [rtp @ 0000014dc1f83bc0] Protocol 'rtp' not on whitelist 'file,crypto,data'!

    I have no setenv(), so I tried this and it seems that's the problem. Any idea?

    The shell equivalent:

    ffplay myFile.sdp -protocol_whitelist file,udp,rtp -fflags nobuffer

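    As for the missing setenv(): the same whitelist can be supplied through the process environment before the first capture is opened. A minimal sketch from Python, assuming the FFmpeg backend (in C++ on Windows, _putenv_s would play the same role):

    import os

    # OpenCV's FFmpeg backend reads this variable when a capture is opened.
    os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "protocol_whitelist;file,rtp,udp"

    import cv2

    cap = cv2.VideoCapture("test.sdp", cv2.CAP_FFMPEG)
    print(cap.isOpened())
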
    It works (with delay, but successfully).

    I'm willing to change anything to make it work! If it's possible with FFmpeg/GStreamer/vlclib, I can change the Jetson side as well. Thanks for any help!

  • Merge image, audio, video with no audio, video with audio, with ffmpeg

    17 February 2021, by Basj

    Similarly to Merge videos and images using ffmpeg (which is not a duplicate, for the reasons explained below), I'd like to merge multiple inputs which can be either:

      • image only,
      • audio only,
      • video with audio,
      • video without audio

    into one output video, with stereo audio.

    Note: if multiple audio tracks play at the same time, they should be mixed; likewise for video, the images from multiple sources should overlap.

    I tried this (comments added here):

    ffmpeg 
  -i tmp/%04d.png       # [0]
  -f lavfi -t 0.1 -i anullsrc   # [1], if needed for inputs without sound?
  -i a.mp3              # [2], we keep 1 sec. from it; should start at 0'05" in output video
  -i b.mp3              # [3], we keep 2 sec. from it; should start at 0'06" in output video
  -i with_sound.mp4     # [4], we keep 3 sec. from it; should start at 0'07" in output video
  -i without_sound.mp4  # [5], we keep 4 sec. from it; should start at 0'08" in output video
  -filter_complex 
    [2]atrim=start=0:duration=1.0,asetpts=PTS-STARTPTS[s2];[s2]adelay=5000|5000[t2];
    [3]atrim=start=0:duration=2.0,asetpts=PTS-STARTPTS[s3];[s3]adelay=6000|6000[t3];
    [4]atrim=start=0:duration=3.0,asetpts=PTS-STARTPTS[s4];[s4]adelay=7000|7000[t4];
    [5]atrim=start=0:duration=4.0,asetpts=PTS-STARTPTS[s5];[s5]adelay=8000|8000[t5];
    [0][1][t2][t3][t4][t5]concat=n=6:a=1:v=1:unsafe=1[outv][outa]
  -map [outv] -map [outa] out.mp4

    I tried various values (concat=n=5, n=6, etc.) and added unsafe=1, but I always get similar errors:

    [Parsed_adelay_2 @ 00000000006e8140] Media type mismatch between the 'Parsed_adelay_2' filter output pad 0 (audio) and the 'Parsed_concat_6' filter input pad 2 (video)
    [AVFilterGraph @ 00000000006923c0] Cannot create the link adelay:0 -> concat:2

    Or, for the times I got it nearly working, the videos were appended one after another instead of being merged/mixed.

    Also, I'm looking for a syntax that works even if I don't know in advance whether the input videos have audio (I'm writing a script, so the presence of audio channels isn't known beforehand).

    TL;DR:

    Question: how to mix/merge multiple inputs (image, audio, video with or without sound) with ffmpeg, each with a precise starting timestamp, into a single video output?

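    A hedged sketch of one possible direction (file names, durations and offsets are illustrative assumptions, not the poster's exact inputs, and the base video is assumed to carry an audio track): concat with a=1:v=1 expects every segment to bring one video pad and one audio pad and then plays the segments one after another, which is consistent with both the "Media type mismatch" error and the end-to-end behaviour. For overlapping in time, overlay for video plus adelay and amix for audio seems closer to the goal, driven here from a script as in the question:

    import subprocess

    cmd = [
        "ffmpeg",
        "-i", "base.mp4",        # [0] base video with audio, assumed long enough
        "-i", "a.mp3",           # [1] audio, to start at 0'05"
        "-i", "with_sound.mp4",  # [2] video with audio, to start at 0'07"
        "-filter_complex",
        # delay each extra audio stream to its start time, then mix all of them
        "[1:a]atrim=duration=1,asetpts=PTS-STARTPTS,adelay=5000|5000[a1];"
        "[2:a]atrim=duration=3,asetpts=PTS-STARTPTS,adelay=7000|7000[a2];"
        "[0:a][a1][a2]amix=inputs=3[outa];"
        # shift the second video by 7 s and overlay it on the base video
        "[2:v]setpts=PTS-STARTPTS+7/TB[v2];"
        "[0:v][v2]overlay=eof_action=pass[outv]",
        "-map", "[outv]", "-map", "[outa]",
        "-y", "out.mp4",
    ]
    subprocess.run(cmd, check=True)

    When it is not known in advance whether an input carries audio, one would typically probe it first (for instance with ffprobe) and only then build the filter graph.
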
  • How to use audio frames after decoding an mp3 file using pyav, ffmpeg, python

    2 January 2021, by Long Tran Dai

    I am using Python with pyav and ffmpeg to decode an mp3 in memory. I know there are other ways to do it, like piping an ffmpeg command, but I would like to explore the pyav and ffmpeg APIs. So I have the following code. It works, but the sound is very noisy, although audible:

    import numpy as np
    import av       # to convert mp3 to wav using ffmpeg
    import pyaudio  # to play music

    mp3_path = 'D:/MyProg/python/SauTimThiepHong.mp3'

    def decodeStream(mp3_path):
        # Run NOT OK
        container = av.open(mp3_path)
        stream = next(s for s in container.streams if s.type == 'audio')
        frame_count = 0
        data = bytearray()
        for packet in container.demux(stream):
            # type(packet): <class 'av.packet.Packet'>
            # We need to skip the "flushing" packets that `demux` generates.
            if packet.dts is None:
                continue
            for frame in packet.decode():
                # type(frame): <class 'av.audio.frame.AudioFrame'>
                # frame.samples = 1152: number of audio samples (per channel)
                # each frame has size = 1152 (samples) * 2 (channels) * 4 (bytes/sample) = 9216 bytes
                # 11021 frames in total
                #arr = frame.to_ndarray()  # arr.nbytes = 9216
                channels = frame.to_ndarray().astype("float16")
                #for plane in frame.planes:
                #    # type(frame.planes[0]): <class 'av.audio.plane.AudioPlane'>
                #    # a plane holds 4 bytes per sample (np.single), but the audio has only 2 bytes
                #    channels.append(np.frombuffer(plane, dtype=np.single).astype("float16"))
                if not frame.is_corrupt:
                    #data.extend(np.frombuffer(frame.planes[0], dtype=np.single).astype("float16"))  # 1 channel: noisy
                    frame_count += 1
                    #print('>>>> %04d' % frame_count, frame)
                    #if frame_count == 5000: break
                    # mix channels:
                    for i in range(frame.samples):
                        for ch in channels:  # dec_ctx->channels
                            data.extend(ch[i])  # noisy
                            #fwrite(frame->data[ch] + data_size*i, 1, data_size, outfile)
        return bytes(data)

    I used piped ffmpeg to get decoded data to compare against, and found that the two outputs differ:

    def RunFFMPEG(mp3_path, target_fs="44100"):
        # Run OK
        import subprocess
        # init command
        ffmpeg_command = ["ffmpeg", "-i", mp3_path,
                          "-ab", "128k", "-acodec", "pcm_s16le", "-ac", "0", "-ar", target_fs, "-map",
                          "0:a", "-map_metadata", "-1", "-sn", "-vn", "-y",
                          "-f", "wav", "pipe:1"]
        # execute ffmpeg command
        pipe = subprocess.run(ffmpeg_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=10**8)
        # debug
        #print(pipe.stdout, pipe.stderr)
        # read signal as numpy array and assign sampling rate
        #audio_np = np.frombuffer(buffer=pipe.stdout, dtype=np.uint16, offset=44)
        #audio_np = np.frombuffer(buffer=pipe.stdout, dtype=np.uint16)
        #sig, fs = audio_np, target_fs
        #return audio_np
        return pipe.stdout[78:]

    Then I use pyaudio to play the data and find it very noisy:

    p = pyaudio.PyAudio()
    streamOut = p.open(format=pyaudio.paInt16, channels=2, rate=44100, output=True)
    #streamOut = p.open(format=pyaudio.paInt16, channels=1, rate=44100, output=True)

    mydata = decodeStream(mp3_path)
    print("bytes of mydata = ", len(mydata))

    ffMpegdata = RunFFMPEG(mp3_path)
    print("bytes of ffMpegdata = ", len(ffMpegdata))

    minlen = min(len(mydata), len(ffMpegdata))
    print("mydata == ffMpegdata", mydata[:minlen] == ffMpegdata[:minlen])

    #bytes of mydata =  50784768
    #bytes of ffMpegdata =  50784768
    #mydata == ffMpegdata False

    streamOut.write(mydata)
    streamOut.write(ffMpegdata)
    streamOut.stop_stream()
    streamOut.close()
    p.terminate()

    Please help me understand the decoded frames of the pyav API (after for frame in packet.decode():). Should they be processed further, or do I have an error somewhere?

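    A hedged guess at a direction (an assumption, not a verified fix): MP3 usually decodes to planar float samples ("fltp"), while the paInt16 stream above expects interleaved signed 16-bit ones, so writing float16 bytes to it would indeed sound like noise. PyAV's AudioResampler can perform that conversion; a minimal sketch (depending on the PyAV version, resample() returns either a single frame or a list of frames):

    import av

    def decode_to_s16(mp3_path):
        container = av.open(mp3_path)
        # convert every decoded frame to interleaved signed 16-bit stereo at 44.1 kHz
        resampler = av.AudioResampler(format="s16", layout="stereo", rate=44100)
        data = bytearray()
        for frame in container.decode(audio=0):
            out = resampler.resample(frame)
            for f in out if isinstance(out, list) else [out]:
                if f is not None:
                    data.extend(f.to_ndarray().tobytes())  # ready for a paInt16 stream
        return bytes(data)
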
    This has been driving me crazy for 3 days; I could not figure out where to go from here.

    Thank you very much.