Other articles (68)

  • General document management

    13 May 2011, by

    MédiaSPIP never modifies the original document that is uploaded.
    For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while keeping the original downloadable in case it cannot be read in a web browser; and extracting the original document's metadata to describe the file textually.
    The tables below explain what MédiaSPIP can do (...)

  • Installation in farm mode

    4 February 2011, by

    Farm mode lets you host several MediaSPIP-type sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which requires no real specific knowledge since SPIP's usual private area is no longer used.
    First, you must have installed the same files as the installation (...)

  • Common problems

    10 March 2010, by

    PHP with safe_mode enabled
    One of the main sources of problems stems from the PHP configuration, in particular having safe_mode enabled.
    The solution would be either to disable safe_mode or to place the script in a directory accessible to Apache for the site.

On other sites (5316)

  • Use FFMpeg libraries to read an audio file while it is being generated

    12 May 2019, by EyAl

    I have an audio engine that generates AAC files. I want to mux this audio with some video. I'm using the FFmpeg libraries to do just that - meaning, after the audio file is ready, I read it and mux it.
    Now, for performance reasons, I don't want to wait until the audio engine finishes generating the audio; I want the muxer to start reading the audio while it is still being generated.
    Can I achieve that using the FFmpeg libraries?
    Which approach should I take?
    I couldn't find any examples doing that.

  • ffmpeg: I want to combine three commands into one command

    12 January 2019, by Abdulwahed AbuAbed

    Hello, I'm new here. I want to merge several commands into one: add a subtitle file, an image (watermark), and write a word in the top right of the video,
    without affecting video quality and size, and if possible make it as fast as a single command.

    ffmpeg -i input.mkv -vf subtitles=subtitle.srt -sn out.mkv  
    ffmpeg -i input.mkv -vf "movie=image.png [watermark]; [in][watermark] overlay=10:10 [out]" out.mkv  
    ffmpeg -i input.mkv -vf "drawtext=text='Hallo':x=10:y=H-th-10:fontfile=/path/to/font.ttf:fontsize=12:fontcolor=white:shadowcolor=black:shadowx=5:shadowy=5" out.mkv

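All three filters operate on the video stream, so one way to do this is a single pass that chains them in one filter graph, using the same file names as the commands above (the labels [sub], [ovr], [v] are arbitrary). A sketch, not a definitive answer:

```shell
# Chain subtitles -> overlay -> drawtext in one filter graph, so the
# video is decoded and re-encoded only once instead of three times.
ffmpeg -i input.mkv -i image.png -filter_complex \
  "[0:v]subtitles=subtitle.srt[sub]; \
   [sub][1:v]overlay=10:10[ovr]; \
   [ovr]drawtext=text='Hallo':x=10:y=H-th-10:fontfile=/path/to/font.ttf:fontsize=12:fontcolor=white:shadowcolor=black:shadowx=5:shadowy=5[v]" \
  -map "[v]" -map 0:a? -c:a copy out.mkv
```

Besides being faster, encoding once avoids the generational quality loss of three separate encodes.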
  • RTP and H.264 (Packetization Mode 1)... Decoding RAW Data... Help understanding the audio and STAP-A packets

    12 February 2014, by Lane

    I am attempting to re-create a video from a Wireshark capture. I have researched extensively and the following links provided me with the most useful information...

    How to convert H.264 UDP packets to playable media stream or file (defragmentation) (and the 2 sub-links)
    H.264 over RTP - Identify SPS and PPS Frames

    ...I understand from these links and RFC (RTP Payload Format for H.264 Video) that...

    • The Wireshark capture shows a client communicating with a server via RTSP/RTP by making the following calls... OPTIONS, DESCRIBE, SETUP, SETUP, then PLAY (both audio and video tracks exist)

    • The RTSP response from PLAY (that contains the Sequence and Picture Parameter Sets) contains the following (some lines excluded)...

    Media Description, name and address (m) : audio 0 RTP/AVP 0
    Media Attribute (a) : rtpmap:0 PCMU/8000/1
    Media Attribute (a) : control:trackID=1
    Media Attribute (a) : x-bufferdelay:0

    Media Description, name and address (m) : video 0 RTP/AVP 98
    Media Attribute (a) : rtpmap:98 H264/90000
    Media Attribute (a) : control:trackID=2
    Media Attribute (a) : fmtp:98 packetization-mode=1 ;profile-level-id=4D0028 ;sprop-parameter-sets=J00AKI2NYCgC3YC1AQEBQAAA+kAAOpg6GAC3IAAzgC7y40MAFuQABnAF3lwWNF3A,KO48gA==

    Media Description, name and address (m) : metadata 0 RTP/AVP 100
    Media Attribute (a) : rtpmap:100 IQ-METADATA/90000
    Media Attribute (a) : control:trackID=3

    ...the packetization-mode=1 means that only NAL Units, STAP-A and FU-A are accepted
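As a sketch of what a STAP-A payload looks like per RFC 6184: after the one-byte STAP-A NAL header, each aggregated NAL unit is prefixed by a 16-bit big-endian size. The bytes in the example below are made-up toy data, not taken from the capture:

```python
import struct

def parse_stap_a(payload: bytes):
    """Split a STAP-A payload (RFC 6184) into its aggregated NAL units.

    payload[0] is the STAP-A NAL header (type 24); what follows is a
    sequence of 16-bit big-endian NALU sizes, each followed by that
    many bytes of NAL unit data.
    """
    assert payload[0] & 0x1F == 24, "not a STAP-A packet"
    nalus, offset = [], 1
    while offset + 2 <= len(payload):
        (size,) = struct.unpack_from(">H", payload, offset)
        offset += 2
        nalus.append(payload[offset:offset + size])
        offset += size
    return nalus

# Toy STAP-A carrying a 2-byte unit and a 1-byte unit (indicator 0x78,
# matching the first packet type seen in the capture):
stap = bytes([0x78, 0x00, 0x02, 0x67, 0x42, 0x00, 0x01, 0x68])
print([n.hex() for n in parse_stap_a(stap)])  # ['6742', '68']
```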

    • The streaming RTP packets (video only, DynamicRTP-Type-98) arrive in the following order...

    1x
    [RTP Header]
    0x78 0x00 (Type is 24, meaning STAP-A)
    [Remaining Payload]

     36x
    [RTP Header]
    0x7c (Type is 28, meaning FU-A) then either 0x85 (first) 0x05 (middle) or 0x45 (last)
    [Remaining Payload]

    1x
    [RTP Header]
    0x18 0x00 (Type is 24, meaning STAP-A)
    [Remaining Payload]

    8x
    [RTP Header]
    0x5c (Type is 28, meaning FU-A) then either 0x81 (first) 0x01 (middle) or 0x41 (last)
    [Remaining Payload]

    ...the cycle then repeats... typically there are 29 0x18/0x5c RTP packets for each 0x78/0x7c packet
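Those FU-A byte patterns can be decoded mechanically: the original NAL header is rebuilt from the F/NRI bits of the FU indicator plus the type bits of the FU header, which is why the 0x7c/0x85 fragments reassemble into an IDR slice (header 0x65) and the 0x5c/0x81 fragments into a non-IDR slice (0x41). A minimal sketch, with toy data after the two header bytes:

```python
def reassemble_fu_a(fragments):
    """Rebuild one NAL unit from FU-A fragment payloads (RFC 6184).

    Each fragment starts with the FU indicator (F/NRI bits + type 28)
    and the FU header (S/E bits + original NAL type). The original
    one-byte NAL header is the F/NRI bits from the indicator combined
    with the type from the FU header; data follows from byte 2 onward.
    """
    indicator, fu_header = fragments[0][0], fragments[0][1]
    assert indicator & 0x1F == 28, "not an FU-A packet"
    assert fu_header & 0x80, "first fragment lacks the start (S) bit"
    nal_header = (indicator & 0xE0) | (fu_header & 0x1F)
    nal = bytes([nal_header])
    for frag in fragments:
        nal += frag[2:]  # strip FU indicator + FU header
    return nal

# The capture's 0x7c/0x85..0x05..0x45 sequence yields an IDR slice:
first = bytes([0x7C, 0x85, 0xAA])
mid = bytes([0x7C, 0x05, 0xBB])
last = bytes([0x7C, 0x45, 0xCC])
print(reassemble_fu_a([first, mid, last]).hex())  # 65aabbcc
```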

    • Approximately every 100 packets there is an audio RTP packet; all have their Marker bit set to true and their sequence numbers ascend as expected. Sometimes there is a single audio RTP packet and sometimes there are three. A sample one:

    RTP 1042 PT=ITU-T G.711 PCMU, SSRC=0x238E1F29, Seq=31957, Time=1025208762, Mark

    ...also, the type of each audio RTP packet is different (as far as first bytes go... I see 0x4e, 0x55, 0xc5, 0xc1, 0xbc, 0x3c, 0x4d, 0x5f, 0xcc, 0xce, 0xdc, 0x3e, 0xbf, 0x43, 0xc9, and more)

    • From what I gather... to re-create the video, I first need to create a file of the format

    0x000001 [SPS Payload]
    0x000001 [PPS Payload]
    0x000001 [Complete H.264 Frame (NAL byte, followed by all fragmented RTP payloads without their first 2 bytes)]
    0x000001 [Next Frame]
    Etc...
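As a sanity check on that layout, the SPS and PPS can be pulled straight out of the sprop-parameter-sets value from the SDP above (two base64 strings separated by a comma) and written with Annex B start codes; the low 5 bits of each unit's first byte give the NAL type (7 = SPS, 8 = PPS):

```python
import base64

# sprop-parameter-sets value from the SDP in the capture: "SPS,PPS"
sprop = ("J00AKI2NYCgC3YC1AQEBQAAA+kAAOpg6GAC3IAAzgC7y40MAFuQABnAF3lwWNF3A"
         ",KO48gA==")
sps_b64, pps_b64 = sprop.split(",")
sps = base64.b64decode(sps_b64)
pps = base64.b64decode(pps_b64)

# NAL type = low 5 bits of the first byte: 7 = SPS, 8 = PPS
print(sps[0] & 0x1F, pps[0] & 0x1F)  # 7 8

# An Annex B file starts every NAL unit with the 0x000001 start code;
# each reassembled frame NAL would be appended the same way.
START_CODE = b"\x00\x00\x01"
with open("stream.h264", "wb") as f:
    f.write(START_CODE + sps)
    f.write(START_CODE + pps)
```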

    I have made some progress: I can run "ffmpeg -i file" without it reporting a bad input format or failing to find a codec, but it currently complains about something related to MP3. My questions are as follows...

    1. Should I use the SPS and PPS payloads returned by the response to the DESCRIBE RTSP call, or the data sent in the first STAP-A RTP packets (0x78 and 0x18)?

    2. How does the file format change to incorporate the audio track?

    3. Why are the audio track payload headers all over the place, and how can I make sense of / utilize them?

    4. Is my understanding of anything incorrect?

    Any help is GREATLY appreciated, thanks!