Advanced search

Media (91)

Other articles (26)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in OGG (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed to retrieve the data needed for search engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
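    The exact commands MediaSPIP runs are not shown in this excerpt, but a conversion of this kind can be sketched with ffmpeg from Python; the output names and codec choices below are illustrative assumptions, not MediaSPIP's actual configuration:

        import subprocess

        def encode_web_formats(source):
            # Produce the HTML5 (OGV, WebM) and Flash (MP4) variants described above.
            jobs = {
                "out.ogv":  ["-c:v", "libtheora", "-c:a", "libvorbis"],
                "out.webm": ["-c:v", "libvpx",    "-c:a", "libvorbis"],
                "out.mp4":  ["-c:v", "libx264",   "-c:a", "aac"],
            }
            for out_name, codec_args in jobs.items():
                # Assumes ffmpeg is on PATH and built with these encoders.
                subprocess.run(["ffmpeg", "-y", "-i", source, *codec_args, out_name],
                               check=True)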

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all types.
    It creates "media" items, meaning that a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether it is audio, video, image or text; only a single document can be linked to a given "media" article;

  • Automatic backup of SPIP channels

    1 April 2010, by

    When running an open hosting platform, it is important for hosts to have reasonably regular backups available to guard against any potential problem.
    This task relies on two SPIP plugins: Saveauto, which makes a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site's important data (the documents, the elements (...)
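    For readers curious what those two steps amount to, here is a minimal Python sketch of the same idea (a MySQL dump plus a zip of the document tree); the paths, database name, and credentials are hypothetical, and the actual work is done by the Saveauto and mes_fichiers_2 plugins, not by this script:

        import subprocess
        import zipfile
        from pathlib import Path

        SITE_ROOT = Path("/var/www/spip")       # hypothetical SPIP installation root
        BACKUP_DIR = Path("/var/backups/spip")  # hypothetical backup destination

        def dump_database(db_name, user):
            # A regular dump of the database, usable in phpMyAdmin (what Saveauto provides).
            dump_path = BACKUP_DIR / (db_name + ".sql")
            with open(dump_path, "w") as out:
                subprocess.run(["mysqldump", "-u", user, db_name], stdout=out, check=True)
            return dump_path

        def archive_documents():
            # A zip archive of the site's important data (what mes_fichiers_2 provides);
            # uploaded documents conventionally live under IMG/ in a SPIP site.
            zip_path = BACKUP_DIR / "site-files.zip"
            with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
                for f in (SITE_ROOT / "IMG").rglob("*"):
                    if f.is_file():
                        zf.write(f, f.relative_to(SITE_ROOT))
            return zip_path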

On other sites (5125)

  • ffmpeg - AVPixelFormat: what does "samples" mean in the documentation?

    17 June 2017, by dafnahaktana

    I did not understand the documentation, for example:

    AV_PIX_FMT_YUV422P planar YUV 4:2:2, 16bpp, (1 Cr & Cb sample per 2x1 Y samples)

    I didn't understand the meaning of "(1 Cr & Cb sample per 2x1 Y samples)",
    and what does 4:2:2 mean? I could not find an explanation anywhere.

    Also, I see there are types like AV_PIX_FMT_YUV420P that have 12bpp. Does that mean that each pixel is represented by 12 bits? If so, how is it represented? Should I allocate two bytes for each pixel and ignore the last 4 bits, or should I allocate something like ceil((#pixels)*1.5) bytes?
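    In this notation, 4:2:2 means the Cb and Cr (chroma) planes are subsampled by 2 horizontally, so each 2x1 block of Y samples shares one Cr and one Cb sample; 4:2:0 subsamples chroma by 2 in both directions. The "bpp" figures are averages over the whole frame, not a per-pixel storage unit, which is how AV_PIX_FMT_YUV420P comes to 12 bits per pixel. A minimal Python sketch of the buffer arithmetic, assuming even frame dimensions:

        # AV_PIX_FMT_YUV420P: one full-resolution Y plane, plus Cb and Cr planes
        # each subsampled by 2 horizontally and by 2 vertically.
        def yuv420p_buffer_size(width, height):
            y_size = width * height                      # 8 bits per luma sample
            chroma_size = (width // 2) * (height // 2)   # per chroma plane
            return y_size + 2 * chroma_size              # == width * height * 3 // 2

        # 1920x1080 needs 3,110,400 bytes: 12 bits per pixel on average, hence "12bpp".
        assert yuv420p_buffer_size(1920, 1080) == 1920 * 1080 * 3 // 2

    So the asker's ceil((#pixels)*1.5) guess is the right allocation for 4:2:0 planar data; in FFmpeg itself, av_image_get_buffer_size() performs this calculation (plus any row alignment).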

  • Real-time recording in multiple encodings in Python

    3 January 2015, by erip

    I am working on a project that requires audio recording, and I am using Python. I would like to use a start button to begin recording and a stop button to stop it. Additionally, I'd like to support multiple filetypes like .wav, .mp3, and .aiff.

    As far as I know, there doesn't exist something that can do this "in one go"; instead, I could record something as a .wav and then post-process the file to encode it in the other formats using ffmpeg or a similar library.

    This presents a small problem because my project is web-based, so I would have to save the .wav and then save the .*.

    Does anyone know of another solution to my problem?

    Thanks, erip
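    The record-then-post-process route the question describes can be sketched as follows; this assumes the third-party sounddevice and soundfile packages for capture, an ffmpeg binary on PATH built with an MP3 encoder, and a fixed recording duration standing in for the start/stop buttons (all assumptions of this sketch, not choices of the original poster):

        import subprocess
        import sounddevice as sd
        import soundfile as sf

        SAMPLE_RATE = 44100

        def record_wav(path, seconds):
            # Capture mono audio from the default input device and save it as .wav.
            frames = sd.rec(int(seconds * SAMPLE_RATE), samplerate=SAMPLE_RATE, channels=1)
            sd.wait()  # block until the recording finishes
            sf.write(path, frames, SAMPLE_RATE)

        def transcode(wav_path):
            # Post-process the .wav into the other requested formats with ffmpeg.
            for ext in (".mp3", ".aiff"):
                out = wav_path.rsplit(".", 1)[0] + ext
                subprocess.run(["ffmpeg", "-y", "-i", wav_path, out], check=True)

        record_wav("take.wav", seconds=5)
        transcode("take.wav")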

  • RTP and H.264 (Packetization Mode 1)... Decoding RAW Data... Help understanding the audio and STAP-A packets

    12 February 2014, by Lane

    I am attempting to re-create a video from a Wireshark capture. I have researched extensively and the following links provided me with the most useful information...

    How to convert H.264 UDP packets to playable media stream or file (defragmentation) (and the 2 sub-links)
    H.264 over RTP - Identify SPS and PPS Frames

    ...I understand from these links and the RFC (RTP Payload Format for H.264 Video) that...

    • The Wireshark capture shows a client communicating with a server via RTSP/RTP by making the following calls... OPTIONS, DESCRIBE, SETUP, SETUP, then PLAY (both audio and video tracks exist)

  • The RTSP response to PLAY (which contains the Sequence and Picture Parameter Sets) includes the following (some lines excluded)...

    Media Description, name and address (m): audio 0 RTP/AVP 0
    Media Attribute (a): rtpmap:0 PCMU/8000/1
    Media Attribute (a): control:trackID=1
    Media Attribute (a): x-bufferdelay:0

    Media Description, name and address (m): video 0 RTP/AVP 98
    Media Attribute (a): rtpmap:98 H264/90000
    Media Attribute (a): control:trackID=2
    Media Attribute (a): fmtp:98 packetization-mode=1;profile-level-id=4D0028;sprop-parameter-sets=J00AKI2NYCgC3YC1AQEBQAAA+kAAOpg6GAC3IAAzgC7y40MAFuQABnAF3lwWNF3A,KO48gA==

    Media Description, name and address (m): metadata 0 RTP/AVP 100
    Media Attribute (a): rtpmap:100 IQ-METADATA/90000
    Media Attribute (a): control:trackID=3

    ...the packetization-mode=1 setting means that only single NAL units, STAP-A and FU-A packetizations are accepted

    • The streaming RTP packets (video only, DynamicRTP-Type-98) arrive in the following order...

    1x
    [RTP Header]
    0x78 0x00 (Type is 24, meaning STAP-A)
    [Remaining Payload]

    36x
    [RTP Header]
    0x7c (Type is 28, meaning FU-A) then either 0x85 (first) 0x05 (middle) or 0x45 (last)
    [Remaining Payload]

    1x
    [RTP Header]
    0x18 0x00 (Type is 24, meaning STAP-A)
    [Remaining Payload]

    8x
    [RTP Header]
    0x5c (Type is 28, meaning FU-A) then either 0x81 (first) 0x01 (middle) or 0x41 (last)
    [Remaining Payload]

    ...the cycle then repeats... typically there are 29 0x18/0x5c RTP packets for each 0x78/0x7c packet (these first bytes are decoded in the sketch after this list)

  • Approximately every 100 packets there is an audio RTP packet; all have their Marker bit set to true, and their sequence numbers ascend as expected. Sometimes there is an individual audio RTP packet and sometimes there are three; see a sample one here...

    RTP 1042 PT=ITU-T G.711 PCMU, SSRC=0x238E1F29, Seq=31957, Time=1025208762, Mark

    ...also, the type of each audio RTP packet is different (as far as first bytes go... I see 0x4e, 0x55, 0xc5, 0xc1, 0xbc, 0x3c, 0x4d, 0x5f, 0xcc, 0xce, 0xdc, 0x3e, 0xbf, 0x43, 0xc9, and more)

    • From what I gather... to re-create the video, I first need to create a file of the format

    0x000001 [SPS Payload]
    0x000001 [PPS Payload]
    0x000001 [Complete H.264 frame (NAL header byte, followed by all fragmented RTP payloads without their first 2 bytes)]
    0x000001 [Next Frame]
    Etc...
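    A minimal Python sketch of that reconstruction, under the assumptions stated in this post (packetization-mode=1; FU-A fragments reassembled minus their first two bytes; SPS and PPS taken from the base64 sprop-parameter-sets in the SDP above); rtp_payloads is a hypothetical list of raw video RTP payloads in sequence order:

        import base64

        # The two base64 fields from sprop-parameter-sets in the SDP above.
        SPS = base64.b64decode("J00AKI2NYCgC3YC1AQEBQAAA+kAAOpg6GAC3IAAzgC7y40MAFuQABnAF3lwWNF3A")
        PPS = base64.b64decode("KO48gA==")

        START_CODE = b"\x00\x00\x01"

        def nal_units(rtp_payloads):
            # Yield complete NAL units from single-NAL, STAP-A (24) and FU-A (28) payloads.
            fragment = b""
            for p in rtp_payloads:
                nal_type = p[0] & 0x1F                # low 5 bits: 0x78/0x18 -> 24, 0x7c/0x5c -> 28
                if nal_type == 24:                    # STAP-A: 16-bit size before each NAL
                    i = 1
                    while i < len(p):
                        size = int.from_bytes(p[i:i + 2], "big")
                        yield p[i + 2:i + 2 + size]
                        i += 2 + size
                elif nal_type == 28:                  # FU-A: FU indicator, then FU header
                    fu_header = p[1]                  # e.g. 0x85 first, 0x05 middle, 0x45 last
                    if fu_header & 0x80:              # S bit: rebuild the original NAL header
                        fragment = bytes([(p[0] & 0xE0) | (fu_header & 0x1F)])
                    fragment += p[2:]                 # drop the 2 FU bytes, keep the payload
                    if fu_header & 0x40:              # E bit: the fragment is complete
                        yield fragment
                        fragment = b""
                else:                                 # a plain single NAL unit
                    yield p

        def write_annex_b(rtp_payloads, path="out.h264"):
            # SPS, PPS, then every NAL unit, each prefixed with 0x000001.
            with open(path, "wb") as f:
                f.write(START_CODE + SPS)
                f.write(START_CODE + PPS)
                for nal in nal_units(rtp_payloads):
                    f.write(START_CODE + nal)

    The raw Annex-B file carries video only; the G.711 audio would have to be muxed alongside it in a container such as MP4 or MKV rather than appended to this stream.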

    I have made some progress: I can run "ffmpeg -i file" without it reporting a bad input format or an inability to find a codec, but currently it complains about something related to MP3. My questions are as follows...

    1. Should I be using the SPS and PPS payload returned by the response to the DESCRIBE RTSP call, or use the data sent in the first STAP-A RTP packets (0x78 and 0x18)?

    2. How does the file format change to incorporate the audio track?

    3. Why are the audio track payload headers all over the place, and how can I make sense of / utilize them?

    4. Is my understanding of anything incorrect?

    Any help is GREATLY appreciated, thanks!