
Other articles (65)
-
Customizing by adding a logo, a banner or a background image
5 September 2013. Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Support for all media types
10 April 2011. Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (OpenOffice, Microsoft Office (spreadsheet, presentation), web (HTML, CSS), LaTeX, Google Earth) (...)
-
MediaSPIP v0.2
21 June 2013. MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources in standalone form.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)
On other sites (3218)
-
HLS video not playing in Angular using Hls.js
5 April 2023, by Jose A. Matarán. I am trying to play an HLS video using Hls.js in an Angular component. Here is the component code:


import { Component, ElementRef, ViewChild, AfterViewInit } from '@angular/core';
import Hls from 'hls.js';

@Component({
  selector: 'app-ver-recurso',
  templateUrl: './ver-recurso.component.html',
  styleUrls: ['./ver-recurso.component.css']
})
export class VerRecursoComponent implements AfterViewInit {
  @ViewChild('videoPlayer') videoPlayer!: ElementRef<HTMLVideoElement>;
  hls!: Hls;

  ngAfterViewInit(): void {
    this.hls = new Hls();

    const video = this.videoPlayer.nativeElement;
    const watermarkText = 'MARCA_DE_AGUA';

    this.hls.on(Hls.Events.MEDIA_ATTACHED, () => {
      this.hls.loadSource(`http://localhost:8080/video/playlist.m3u8?watermarkText=${encodeURIComponent(watermarkText)}`);
    });

    this.hls.attachMedia(video);
  }

  loadVideo() {
    const watermarkText = 'Marca de agua personalizada';
    const video = this.videoPlayer.nativeElement;
    const hlsBaseUrl = 'http://localhost:8080/video';

    if (Hls.isSupported()) {
      this.hls.loadSource(`${hlsBaseUrl}/playlist.m3u8?watermarkText=${encodeURIComponent(watermarkText)}`);
      this.hls.attachMedia(video);
      this.hls.on(Hls.Events.MANIFEST_PARSED, () => {
        video.play();
      });
    } else if (video.canPlayType('application/vnd.apple.mpegurl')) {
      video.src = `${hlsBaseUrl}/playlist.m3u8?watermarkText=${encodeURIComponent(watermarkText)}`;
      video.addEventListener('loadedmetadata', () => {
        video.play();
      });
    }
  }
}


I'm not sure what's going wrong: the requests for the playlist and the segments seem to complete correctly, but the video never plays. Is there anything obvious that I'm missing here?
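
For reference, here is a minimal Hls.js sequence that does reach playback. It is a sketch rather than a fix for this exact setup: it mutes the element to satisfy browser autoplay policies and explicitly calls video.play() once the manifest is parsed, neither of which happens in the ngAfterViewInit above (loadVideo(), which does call play(), is never invoked there).


// A minimal reference sequence (sketch, not the poster's code).
ngAfterViewInit(): void {
  const video = this.videoPlayer.nativeElement;
  const src = 'http://localhost:8080/video/playlist.m3u8';

  if (Hls.isSupported()) {
    this.hls = new Hls();
    this.hls.attachMedia(video);
    // Load the source once the media element is attached...
    this.hls.on(Hls.Events.MEDIA_ATTACHED, () => this.hls.loadSource(src));
    // ...and start playback once the manifest has been parsed.
    this.hls.on(Hls.Events.MANIFEST_PARSED, () => {
      video.muted = true; // autoplay policies usually require a muted element
      video.play().catch(err => console.error('play() rejected:', err));
    });
  }
}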


On the backend side, I have a Spring Boot application that generates and returns the playlist file, as you can see.


import java.nio.file.Files;
import java.nio.file.Paths;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.io.Resource;
import org.springframework.core.io.UrlResource;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class VideoController {
    @Autowired
    private VideoService videoService;

    @GetMapping("/video/{segmentFilename}")
    public ResponseEntity<Resource> getHlsVideoSegment(@PathVariable String segmentFilename, @RequestParam String watermarkText) {
        String inputVideoPath = "/Users/jose/PROYECTOS/VARIOS/oposhield/oposhield-back/repo/202204M-20230111.mp4";
        String hlsOutputPath = "/Users/jose/PROYECTOS/VARIOS/oposhield/oposhield-back/repo/temporal";
        String segmentPath = Paths.get(hlsOutputPath, segmentFilename).toString();

        // Kick off transcoding if the playlist has not been generated yet.
        if (!Files.exists(Paths.get(hlsOutputPath, "playlist.m3u8"))) {
            videoService.generateHlsStream(inputVideoPath, watermarkText, hlsOutputPath);
        }

        // Wait until the requested file is available.
        while (!Files.exists(Paths.get(segmentPath))) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }

        Resource resource;
        try {
            resource = new UrlResource(Paths.get(segmentPath).toUri());
        } catch (Exception e) {
            return ResponseEntity.badRequest().build();
        }

        return ResponseEntity.ok()
                .contentType(MediaType.parseMediaType("application/vnd.apple.mpegurl"))
                .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=\"" + resource.getFilename() + "\"")
                .body(resource);
    }

    @GetMapping("/video/{segmentFilename}.ts")
    public ResponseEntity<Resource> getHlsVideoTsSegment(@PathVariable String segmentFilename) {
        String hlsOutputPath = "/Users/jose/PROYECTOS/VARIOS/oposhield/oposhield-back/repo/temporal";
        String segmentPath = Paths.get(hlsOutputPath, segmentFilename + ".ts").toString();

        // Wait until the required video segment is available.
        while (!Files.exists(Paths.get(segmentPath))) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }

        Resource resource;
        try {
            resource = new UrlResource(Paths.get(segmentPath).toUri());
        } catch (Exception e) {
            return ResponseEntity.badRequest().build();
        }

        return ResponseEntity.ok()
                .contentType(MediaType.parseMediaType("video/mp2t"))
                .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=\"" + resource.getFilename() + "\"")
                .body(resource);
    }
}


package dev.mataran.oposhieldback.service;


import org.springframework.stereotype.Service;

import java.io.BufferedReader;
import java.io.InputStreamReader;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

@Service
public class VideoService {
    private Process process;
    private ExecutorService executorService = Executors.newSingleThreadExecutor();

    public void generateHlsStream(String inputVideoPath, String watermarkText, String outputPath) {
        Runnable task = () -> {
            try {
                // Kill any previous encode before starting a new one.
                if (process != null) {
                    process.destroy();
                }

                String hlsOutputFile = outputPath + "/playlist.m3u8";
                String command = String.format("ffmpeg -i %s -vf drawtext=text='%s':x=10:y=10:fontsize=24:fontcolor=white -codec:v libx264 -crf 21 -preset veryfast -g 50 -sc_threshold 0 -map 0 -flags -global_header -hls_time 4 -hls_list_size 0 -hls_flags delete_segments+append_list -f hls %s", inputVideoPath, watermarkText, hlsOutputFile);
                process = Runtime.getRuntime().exec(command);
                // Drain ffmpeg's stderr so the process does not block on a full pipe.
                BufferedReader reader = new BufferedReader(new InputStreamReader(process.getErrorStream()));
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        };
        executorService.submit(task);
    }
}
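
One detail worth flagging in the service above: Runtime.getRuntime().exec(String) tokenizes the command on whitespace and does not honor quotes, so a watermark containing spaces, such as 'Marca de agua personalizada' from the component, is split across several ffmpeg arguments. A sketch of the same invocation with ProcessBuilder, which takes an explicit argument list (add java.util.List to the imports), would be:


// Sketch: the same ffmpeg invocation as an explicit argument list, so spaces
// in watermarkText cannot break tokenization. Paths and flags are those used above.
List<String> cmd = List.of(
        "ffmpeg", "-i", inputVideoPath,
        "-vf", "drawtext=text='" + watermarkText + "':x=10:y=10:fontsize=24:fontcolor=white",
        "-codec:v", "libx264", "-crf", "21", "-preset", "veryfast",
        "-g", "50", "-sc_threshold", "0", "-map", "0",
        "-flags", "-global_header", "-hls_time", "4", "-hls_list_size", "0",
        "-hls_flags", "delete_segments+append_list", "-f", "hls", hlsOutputFile);
process = new ProcessBuilder(cmd)
        .redirectErrorStream(true) // merge stderr into stdout for a single reader
        .start();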



-
How to pass BGR NumPy arrays directly to FFMPEG with CUDA support
4 July 2022, by Ξένη Γήινος. I am using cv2 to edit images and create a video from the frames with FFMPEG. See this post for more details.

The images are 3D RGB NumPy arrays (the shape is like [h, w, 3]), and they are stored in a Python list.

Yep, I know cv2 has a VideoWriter, and I have used it before, but it is far too inadequate for my needs.

Simply put, it can only use the FFMPEG version that ships with it, which does not support CUDA; it uses up all the CPU time while generating videos without using any GPU time at all; the output is far too big; and I can't pass many FFMPEG parameters to the VideoWriter initialization.

I downloaded precompiled binaries of FFMPEG for Windows with CUDA support here. I am using Windows 10 21H1 x64, and my GPU is an NVIDIA GeForce GTX 1050 Ti.


Anyway, I need to experiment with all the parameters found here and there to find the best compromise between quality and compression, like this:


command = '{} -y -stream_loop {} -framerate {} -hwaccel cuda -hwaccel_output_format cuda -i {}/{}_%d.png -c:v hevc_nvenc -preset 18 -tune 1 -rc vbr -cq {} -multipass 2 -b:v {} -vf scale={}:{} {}'
os.system(command.format(FFMPEG, loops-1, fps, tmp_folder, file_name, quality, bitrate, frame_width, frame_height, outfile))



I need to use exactly the binary I downloaded and specify as many parameters as I can to achieve the optimal result.


Currently I can only save the arrays to disk as images and use those images as FFMPEG's input. That is slow, but I need exactly that binary and all of those parameters.


After hours of Google searching I found ffmpeg-python, which seems perfect for the job. I even found that I can pass the binary path as an argument to its run function, along with this example:

import ffmpeg
import io
import numpy as np


def vidwrite(fn, images, framerate=60, vcodec='libx264'):
    if not isinstance(images, np.ndarray):
        images = np.asarray(images)
    _, height, width, channels = images.shape
    process = (
        ffmpeg
        .input('pipe:', format='rawvideo', pix_fmt='rgb24', s='{}x{}'.format(width, height), r=framerate)
        .output(fn, pix_fmt='yuv420p', vcodec=vcodec, r=framerate)
        .overwrite_output()
        .run_async(pipe_stdin=True, overwrite_output=True, pipe_stderr=True)
    )
    for frame in images:
        try:
            process.stdin.write(
                frame.astype(np.uint8).tobytes()
            )
        except Exception as e:  # should probably be an exception related to process.stdin.write
            for line in io.TextIOWrapper(process.stderr, encoding="utf-8"):  # I didn't know how to get the stderr from the process, but this worked for me
                print(line)  # <-- print all the lines in the process's stderr after it has errored
            process.stdin.close()
            process.wait()
            return  # can't run any more, so end the for loop and the function execution



However, I need to pass all of those parameters, and possibly many more, to the process, and I am not sure where they should go (where should stream_loop go? What about hwaccel, hwaccel_output_format, multipass, ...?).

How do I properly pipeline a bunch of NumPy arrays to an FFMPEG process spawned from a binary that supports CUDA, and pass all sorts of arguments to the initialization of that process?
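
For what it's worth, ffmpeg-python follows a simple mapping: keyword arguments to input() become options placed before -i (which is where stream_loop, hwaccel and hwaccel_output_format would go for a file input), keyword arguments to output() become output options (cq, multipass, b:v and friends), remaining global flags go through global_args(), and run_async(cmd=...) selects the exact binary to spawn. Below is a minimal sketch under those assumptions; the binary path is hypothetical and the hevc_nvenc flag values are illustrative, not tuned.


import numpy as np
import ffmpeg

FFMPEG_BIN = r'C:\ffmpeg-cuda\bin\ffmpeg.exe'  # hypothetical path to the downloaded CUDA build


def vidwrite_nvenc(fn, images, framerate=60):
    images = np.asarray(images)
    _, height, width, _ = images.shape
    process = (
        ffmpeg
        # input() kwargs become options placed before -i; this is where
        # stream_loop, hwaccel and hwaccel_output_format would go for a file
        # input (they are not meaningful for a rawvideo pipe, so they are
        # omitted here). pix_fmt='bgr24' accepts cv2's BGR arrays directly,
        # with no color swap.
        .input('pipe:', format='rawvideo', pix_fmt='bgr24',
               s='{}x{}'.format(width, height), r=framerate)
        # output() kwargs become output options; names containing ':' need
        # the **{...} spelling. Values below are illustrative, not tuned.
        .output(fn, vcodec='hevc_nvenc', pix_fmt='yuv420p',
                preset='p4', rc='vbr', cq=24, multipass=2, **{'b:v': '8M'})
        .global_args('-y')  # remaining global flags go through global_args()
        .run_async(cmd=FFMPEG_BIN, pipe_stdin=True)  # cmd picks the exact binary
    )
    for frame in images:
        process.stdin.write(frame.astype(np.uint8).tobytes())
    process.stdin.close()
    process.wait()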


-
ffmpeg simple re-mux short video: Invalid data found when processing input
19 May 2021, by Nic. I am trying to remux raw AVC files from a surveillance camera (the file header begins with "STL Stream Format v1.0") to get playable MP4 files.


The command I used works for the longer file but not for the short one:


.\ffmpeg.exe -framerate 25 -i video.ssf -c copy video.mp4



Good result for the long video:


Trailing option(s) found in the command: may be ignored.
Input #0, h264, from 'file.ssf':
 Duration: N/A, bitrate: N/A
 Stream #0:0: Video: h264 (Baseline), yuv420p(progressive), 704x576, 25 fps, 25 tbr, 1200k tbn, 50 tbc
Output #0, mp4, to 'file.mp4':
 Metadata:
 encoder : Lavf58.76.100
 Stream #0:0: Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p(progressive), 704x576, q=2-31, 25 fps, 25 tbr, 1200k tbn, 1200k tbc
Stream mapping:
 Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
[mp4 @ 0000020c1a048240] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[NULL @ 0000020c1997f640] sps_id 32 out of range00:00:00.00 bitrate=384000.0kbits/s speed= 0.2x
frame= 4814 fps=0.0 q=-1.0 Lsize= 21829kB time=00:03:12.52 bitrate= 928.9kbits/s speed=2.34e+03x
video:21808kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.098001%



Bad result for the short video:


[AVIOContext @ 000002c6db8e6d80] Statistics: 846679 bytes read, 0 seeks
video.ssf: Invalid data found when processing input



Is there anything I can adjust to fix this?


Thanks!


Nico
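
Since ffmpeg detected the long file as raw H.264 (the log above shows Input #0, h264), one adjustment that may be worth trying on the short file, offered as a guess rather than a confirmed fix, is to force the raw H.264 demuxer and raise the probing limits, in case the proprietary "STL Stream Format" header defeats format detection on the smaller sample:


.\ffmpeg.exe -f h264 -probesize 50M -analyzeduration 100M -framerate 25 -i video.ssf -c copy video.mp4


If that still fails, stripping the proprietary header from the file before remuxing would be the next thing to test.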