
Media (1)
-
The conservation of net art in the museum. The strategies at work
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (52)
-
General document management
13 May 2011
MediaSPIP never modifies the original document that is put online.
For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while leaving the original downloadable in case it cannot be read in a web browser; and retrieving the original document's metadata to describe the file textually.
The tables below explain what MediaSPIP can do (...)
-
Automatic backup of SPIP channels
1 April 2010
When setting up an open platform, it is important for hosts to have fairly regular backups available in order to cope with any problem that may arise.
This task relies on two SPIP plugins: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which creates a zip archive of the site's important data (the documents, the elements (...)
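As a rough illustration of what those two plugins automate (a periodic database dump plus an archive of the site's files), assuming hypothetical paths, database names and credentials:

# Periodic MySQL dump of the SPIP database (what Saveauto automates);
# the resulting .sql file can be reimported through phpMyAdmin.
mysqldump -u spip_user -p spip_db > /backups/spip_db_$(date +%F).sql

# Zip archive of the site's important data (what mes_fichiers_2 automates);
# in SPIP, IMG/ holds uploaded documents and config/ holds settings.
zip -r /backups/spip_files_$(date +%F).zip /var/www/spip/IMG /var/www/spip/config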
-
Automatic installation script for MediaSPIP
25 April 2011
To work around installation difficulties caused mainly by server-side software dependencies, an "all-in-one" bash installation script was created to make this step easier on a server running a compatible Linux distribution.
To use it you need SSH access to your server and a "root" account, which allows the dependencies to be installed. Contact your hosting provider if you do not have these.
The documentation on how to use the installation script (...)
On other sites (4063)
-
How do I convert that mp4 to the same format as this ts?
5 September 2022, by Orlando Bloom
General
Complete name : D:/aaa.mp4
Format : MPEG-4
Format profile : Base Media
Codec ID : isom (isom/iso2/avc1/mp41)
File size : 2.73 MiB
Duration : 6 s 154 ms
Overall bit rate mode : Variable
Overall bit rate : 3 715 kb/s
Encoded date : UTC 2022-09-04 09:49:04
Tagged date : UTC 2022-09-04 09:49:05

Video
ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : High@L3.1
Format settings : CABAC / 2 Ref Frames
Format settings, CABAC : Yes
Format settings, Reference frames : 2 frames
Codec ID : avc1
Codec ID/Info : Advanced Video Coding
Duration : 6 s 134 ms
Bit rate mode : Variable
Bit rate : 3 578 kb/s
Maximum bit rate : 4 320 kb/s
Width : 856 pixels
Height : 480 pixels
Display aspect ratio : 16:9
Frame rate mode : Constant
Frame rate : 30.000 FPS
Standard : Component
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.290
Stream size : 2.62 MiB (96%)
Encoded date : UTC 2022-09-04 09:49:04
Tagged date : UTC 2022-09-04 09:49:05
Color range : Limited
Color primaries : BT.709
Transfer characteristics : BT.709
Matrix coefficients : BT.709
Codec configuration box : avcC

Audio
ID : 2
Format : AAC LC
Format/Info : Advanced Audio Codec Low Complexity
Codec ID : mp4a-40-2
Duration : 6 s 154 ms
Source duration : 6 s 200 ms
Bit rate mode : Variable
Bit rate : 137 kb/s
Maximum bit rate : 320 kb/s / 320 kb/s
Channel(s) : 2 channels
Channel layout : L R
Sampling rate : 44.1 kHz
Frame rate : 43.066 FPS (1024 SPF)
Compression mode : Lossy
Stream size : 104 KiB (4%)
Source stream size : 104 KiB (4%)
Default : Yes
Alternate group : 1
Encoded date : UTC 2022-09-04 09:49:04
Tagged date : UTC 2022-09-04 09:49:05
mdhd_Duration : 6153





General
ID : 1 (0x1)
Complete name : D:/bbb.ts
Format : MPEG-TS
File size : 3.13 MiB
Duration : 10 s 763 ms
Overall bit rate mode : Variable
Overall bit rate : 2 434 kb/s

Video
ID : 256 (0x100)
Menu ID : 1 (0x1)
Format : AVC
Format/Info : Advanced Video Codec
Format profile : High@L3.2
Format settings : CABAC / 4 Ref Frames
Format settings, CABAC : Yes
Format settings, Reference frames : 4 frames
Codec ID : 27
Duration : 10 s 780 ms
Width : 1 080 pixels
Height : 606 pixels
Display aspect ratio : 16:9
Frame rate mode : Variable
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Writing library : x264 core 142
Encoding settings : cabac=1 / ref=2 / deblock=1:0:0 / analyse=0x3:0x113 / me=hex / subme=4 / psy=1 / psy_rd=1.00:0.00 / mixed_ref=0 / me_range=16 / chroma_me=1 / trellis=1 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=1 / chroma_qp_offset=0 / threads=48 / lookahead_threads=4 / sliced_threads=0 / nr=0 / decimate=1 / interlaced=0 / bluray_compat=0 / constrained_intra=0 / bframes=3 / b_pyramid=2 / b_adapt=1 / b_bias=0 / direct=1 / weightb=1 / open_gop=0 / weightp=1 / keyint=250 / keyint_min=25 / scenecut=40 / intra_refresh=0 / rc_lookahead=20 / rc=crf / mbtree=1 / crf=23.0 / qcomp=0.60 / qpmin=0 / qpmax=69 / qpstep=4 / ip_ratio=1.40 / aq=1:1.00

Audio
ID : 257 (0x101)
Menu ID : 1 (0x1)
Format : AAC LC
Format/Info : Advanced Audio Codec Low Complexity
Format version : Version 4
Muxing mode : ADTS
Codec ID : 15-2
Duration : 10 s 750 ms
Bit rate mode : Variable
Channel(s) : 2 channels
Channel layout : L R
Sampling rate : 44.1 kHz
Frame rate : 43.066 FPS (1024 SPF)
Compression mode : Lossy
Delay relative to video : -23 ms





I tried "ffmpeg -i aaa.mp4 -c:v libx264 -c:a aac bbb.ts"


But when the two ts files are remuxed with "-c copy" into a new mp4, the result cannot be played in the iPhone default player.


I think my use of "ffmpeg -i aaa.mp4 -c:v libx264 -c:a aac bbb.ts" is wrong.
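For reference, a rough sketch of a command whose output would be closer to bbb.ts (High profile, level 3.2, 4 reference frames, CRF 23, 1080x606, AAC in MPEG-TS), with the audio bitrate assumed since it is not in the MediaInfo report, plus a stream-copy remux back to mp4 that converts the ADTS AAC headers:

# Re-encode aaa.mp4 with settings closer to bbb.ts (profile, level, refs, CRF
# and resolution taken from bbb.ts's MediaInfo report; audio bitrate assumed):
ffmpeg -i aaa.mp4 \
  -c:v libx264 -profile:v high -level:v 3.2 -refs 4 -crf 23 \
  -vf scale=1080:606 \
  -c:a aac -b:a 128k -ar 44100 \
  -f mpegts aaa_converted.ts

# Remux a ts back into mp4 with stream copy; the ADTS AAC headers need to be
# converted for the mp4 container (recent ffmpeg versions often insert this
# bitstream filter automatically):
ffmpeg -i aaa_converted.ts -c copy -bsf:a aac_adtstoasc output.mp4

If the goal is to join the two ts files first, the concat protocol can be used as the input (-i "concat:aaa_converted.ts|bbb.ts"), but stream copy will only give a playable result if the video parameters of both files genuinely match.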




-
lavc/ffv1: change FFV1SliceContext.plane into a RefStruct object
11 July 2024, by Anton Khirnov
lavc/ffv1: change FFV1SliceContext.plane into a RefStruct object
Frame threading in the FFV1 decoder works in a very unusual way - the state that needs to be propagated from the previous frame is not decoded pixels(¹), but each slice's entropy coder state after decoding the slice.

For that purpose, the decoder's update_thread_context() callback stores a pointer to the previous frame thread's private data. Then, when decoding each slice, the frame thread uses the standard progress mechanism to wait for the corresponding slice in the previous frame to be completed, then copies the entropy coder state from the previously-stored pointer.

This approach is highly dubious, as update_thread_context() should be the only point where frame-thread contexts come into direct contact. There are no guarantees that the stored pointer will be valid at all, or will contain any particular data after update_thread_context() finishes.

More specifically, this code can break due to the fact that keyframes reset entropy coder state and thus do not need to wait for the previous frame. As an example, consider a decoder process with 2 frame threads - thread 0 with its context 0, and thread 1 with context 1 - decoding a previous frame P, current frame F, followed by a keyframe K. Then consider concurrent execution consistent with the following sequence of events:
* thread 0 starts decoding P
* thread 0 reads P's slice header, then calls ff_thread_finish_setup() allowing the next frame thread to start
* main thread calls update_thread_context() to transfer state from context 0 to context 1; context 1 stores a pointer to context 0's private data
* thread 1 starts decoding F
* thread 1 reads F's slice header, then calls ff_thread_finish_setup() allowing the next frame thread to start decoding
* thread 0 finishes decoding P
* thread 0 starts decoding K; since K is a keyframe, it does not wait for F and reallocates the arrays holding entropy coder state
* thread 0 finishes decoding K
* thread 1 reads entropy coder state from its stored pointer to context 0, however it finds state from K rather than from P

This execution is currently prevented by special-casing FFV1 in the generic frame threading code, however that is supremely ugly. It also involves unnecessary copies of the state arrays, when in fact they can only be used by one thread at a time.

This commit addresses these deficiencies by changing the array of PlaneContext (each of which contains the allocated state arrays) embedded in FFV1SliceContext into a RefStruct object. This object can then be propagated across frame threads in standard manner. Since the code structure guarantees only one thread accesses it at a time, no copies are necessary. It is also re-created for keyframes, solving the above issue cleanly.

Special-casing of FFV1 in the generic frame threading code will be removed in a later commit.

(¹) except in the case of a damaged slice, when previous frame's pixels are used directly
-
Streaming video with FFmpeg through pipe causes partial file offset error and moov atom not found
12 March 2023, by Moe
I'm trying to stream a video from Firebase Cloud Storage through FFmpeg and then to the HTML video player. A very basic example using the range header worked fine and was exactly what I was trying to do, but now that I'm piping the stream from Firebase through FFmpeg and then to the browser, it only works for the first couple of requests (the first 10 seconds); after that I run into these issues:


- Unable to get the actual duration of the video in the browser (it changes constantly, as if the metadata is unknown)
- On the server, it fails to continue streaming the request, with the following:






[NULL @ 000001d8c239e140] Invalid NAL unit size (110356 > 45446).
[NULL @ 000001d8c239e140] missing picture in access unit with size 45450
pipe:0: corrupt input packet in stream 0
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001d8c238c7c0] stream 0, offset 0x103fcf: partial file



and then this error as well:


[mov,mp4,m4a,3gp,3g2,mj2 @ 000001dc9590c7c0] moov atom not found
pipe:0: Invalid data found when processing input



I'm using Node.js and running FFmpeg v5.0.1 in a serverless environment:


 // Trimmed request handler. Assumed to be in scope (not shown in this snippet):
 // `spawn` from 'child_process', `pathToFfmpeg` (e.g. from the 'ffmpeg-static'
 // package), and `bucket`, a firebase-admin Cloud Storage bucket.
 const filePath = `sample.mp4` || req.query.path; // note: the template literal is truthy, so this is always 'sample.mp4'

 // Create a read stream from the video file
 const videoFile = bucket.file(filePath);

 const range = req.headers.range;

 if(!range){
 return res.status(500).json({mes: 'Not Found'});
 }

 // Get the file size for the 'Content-Length' header
 const [metadata] = await videoFile.getMetadata();
 const videoSize = metadata.size;

 const CHUNK_SIZE = 1000000; // 1MB

 const start = Number(range.replace(/\D/g, ""));
 const end = Math.min(start + CHUNK_SIZE, videoSize - 1);
 
 // Create headers
 const contentLength = end - start + 1;
 const headers = {
 "Content-Range": `bytes ${start}-${end}/${videoSize}`,
 "Accept-Ranges": "bytes",
 "Content-Length": contentLength,
 "Content-Type": "video/mp4",
 // 'Transfer-Encoding': 'chunked'
 };
 
 // HTTP Status 206 for Partial Content
 res.writeHead(206, headers);
 
 // create video read stream for this particular chunk
 const inputStream = videoFile.createReadStream({ start, end });

 const ffmpeg = spawn(pathToFfmpeg, [
 // '-re', '-y', 
 '-f', 'mp4', 
 '-i', 'pipe:0', // read input from standard input (pipe)
 '-c:v', 'copy', // copy video codec
 '-c:a', 'copy', // copy audio codec
 '-map_metadata', '0',
 `-movflags`, `frag_keyframe+empty_moov+faststart+default_base_moof`,
 '-f', 'mp4', // output format
 'pipe:1',
 // write output to standard output (pipe)
 ], {
 stdio: ['pipe', 'pipe', 'inherit']
 });

 inputStream.pipe(ffmpeg.stdin);

 ffmpeg.stdout.pipe(res);
 



Note that this version is trimmed; I do have a lot of logging and error-handling code, of course. Basically, what's happening is that the first request works fine, but when the player requests a later part, minute 5 for example, I get the errors mentioned above:


What I have tried:


- At first I tried adjusting the ffmpeg parameters with the following, to try to fix the moov atom error, but it still persisted:

 '-map_metadata', '0',
 '-movflags', 'frag_keyframe+empty_moov+faststart+default_base_moof'



- I have also tried streaming to a file first and then through the pipe, but that gave the same errors.




Finally, after googling for about a day and a half and trying out tens of solutions, nothing worked. Now I'm stuck at this error where I'm not able to process a specific fragment of a video through FFmpeg. Is that even possible, or am I doing something wrong?
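For context on the "moov atom not found" part: the mov demuxer cannot seek in a pipe, and a byte range taken from the middle of an MP4 contains no moov box at all, so the behaviour can usually be reproduced outside Node with something like the following (offsets chosen arbitrarily):

# Feed ffmpeg only a 1 MB slice starting 5 MB into the file; with no moov box
# in the piped data and no way to seek, the demuxer typically reports
# "moov atom not found".
dd if=sample.mp4 bs=1M skip=5 count=1 | ffmpeg -f mp4 -i pipe:0 -f null -

# Piping the file from its beginning only helps if the moov box sits near the
# front (a "faststart" file); otherwise the non-seekable input hits the same error.
cat sample.mp4 | ffmpeg -f mp4 -i pipe:0 -f null -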


Why am I even streaming through ffmpeg?


I am indeed, of course, going to add filters to the video and a dynamic watermark text for every request. That's why I need to run ffmpeg on a stream rather than directly on a file, as the video filters will change on demand according to each user.
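As an illustration of that kind of per-request filtering, a minimal sketch of a dynamic text watermark using ffmpeg's drawtext filter; the text, position and output flags here are placeholders, not the actual implementation:

# Hypothetical per-user watermark; USER_TEXT would be built from the request.
# Depending on the ffmpeg build, a fontfile=/path/to/font.ttf option may be needed.
ffmpeg -i pipe:0 \
  -vf "drawtext=text='USER_TEXT':fontcolor=white@0.7:fontsize=24:x=10:y=10" \
  -c:v libx264 -c:a copy \
  -movflags frag_keyframe+empty_moov -f mp4 pipe:1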