
Other articles (37)
-
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or later. If needed, contact your MediaSPIP administrator to find out. -
Encoding and processing into web-friendly formats
13 April 2011. MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for search engine indexing, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...) -
Supporting all media types
13 April 2011. Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (6212)
-
HLS. FFmpeg: error when loading first segment [closed]
30 April 2024, by rus_99_pk. I'm trying to download a streaming video using ffmpeg. There is a file in the *.m3u8 format. But if everything were that simple, I would not have come here.


There are a number of nuances:


- It cannot be downloaded by specifying a link to the file.
- If you download the file and look at its contents, it contains:
#EXT-X-KEY:METHOD=AES-128,URI="[KEY]",IV=[IV]






With the URI, the task is easy to solve; you just specify the value in list.m3u8.


I end up getting:


Error when loading first segment 'https://cdnv-m12.boomstream.com/vod/hash:21596def3216ed982660d609751b8078/id:35105.29443.1039983.85853232.150106.hls/time:0/data:eyJ2ZXJzaW9uIjoiMS4yLjk3IiwidXNlX2RpcmVjdF9saW5rcyI6InllcyIsImlzX2VuY3J5cHQiOiJ5ZXMifQ==/m61/2024/04/27/1Q0idCxb.mp4/media-1.ts'



But with the IV it's more difficult, because the file processing is performed on the server side. Please help.
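
For reference, the #EXT-X-KEY line in a locally edited chunklist.m3u8 would have to look roughly like the sketch below; the key file name and the IV value are made-up placeholders for illustration, not values recovered from this stream:

#EXT-X-KEY:METHOD=AES-128,URI="file:///home/user/Download/key.bin",IV=0x00112233445566778899aabbccddeeff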


I tried looking through the site's JS for similarly named variables and monitoring network traffic using Wireshark, in the hope of catching a response from the server containing the IV.


But it didn't help me.


My download script:


#!/bin/bash
clear

link="/home/user/Download/chunklist.m3u8"
filename="testfile"

ffmpeg \
-headers $'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:124.0) Gecko/20100101 Firefox/124.0\r\nAccept: */*\r\nAccept-Language: ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3\r\nAccept-Encoding: gzip, deflate, br\r\nOrigin: https://example.com\r\nConn>
-protocol_whitelist "file,http,https,tcp,tls,crypto" \
-allowed_extensions ALL \
-f hls \
-i "$link" \
-map p:2 \
-bsf:a aac_adtstoasc -vcodec copy -c copy -crf 50 /tmp/$filename.mp4 -v trace





-
ffmpeg output with label 'v' does not exist in any defined filter graph
15 April 2024, by David Royale. I've been trying to make this Python code work, but it fails at the final rendering step:


import subprocess
import re
import os

def detect_idle_sections(video_path):
    command = ['ffmpeg', '-i', video_path, '-vf', 'select=\'gt(scene,0.1)\'', '-vsync', 'vfr', '-f', 'null', '-']
    output = subprocess.check_output(command, stderr=subprocess.STDOUT).decode('utf-8')

    idle_sections = []
    duration = 0.0
    for line in output.split('\n'):
        match = re.search(r'scene:(\d+)', line)
        if match:
            scene = int(match.group(1))
            if scene == 0:
                duration += 1.0 / 30  # Assuming 30 fps
            else:
                if duration > 0:
                    idle_sections.append((duration, duration - (1.0 / 30)))  # Duration and start time
                duration = 0.0

    return idle_sections

def cut_idle_sections(video_path, idle_sections, output_path, total_duration):
    print("starting to cut things")
    filters = []
    start_time = 0.0
    for duration, _ in idle_sections:
        filters.append(f'[0:v]trim=start={start_time}:end={start_time + duration},setpts=PTS-STARTPTS[v{len(filters)}]')
        start_time += duration

    if start_time < total_duration:
        filters.append(f'[0:v]trim=start={start_time},setpts=PTS-STARTPTS[v{len(filters)}]')

    filter_str = ';'.join(filters)
    print("finished chopping")
    command = ['ffmpeg', '-i', video_path, '-filter_complex', filter_str, '-map', '[v]', output_path]
    subprocess.call(command)

def get_total_duration(video_path):
    print("prior getting time")
    command = ['ffprobe', '-v', 'error', '-show_entries', 'format=duration', '-of', 'default=noprint_wrappers=1:nokey=1', video_path]
    output = subprocess.check_output(command).decode('utf-8').strip()
    print("after getting time")
    return float(output)

input_file = r"C:\Users\D\Videos\2024-04-15 08-42-53.mkv"
output_file = r"C:\Users\D\Videos\output_video.mp4"

# Get full paths
input_path = os.path.abspath(input_file)
output_path = os.path.abspath(output_file)

total_duration = get_total_duration(input_path)
idle_sections = detect_idle_sections(input_path)
cut_idle_sections(input_path, idle_sections, output_path, total_duration)



The error I am getting is:


[out#0/mp4 @ 000001af60f9d3c0] Output with label 'v' does not exist in any defined filter graph, or was already used elsewhere.
Error opening output file C:\Users\D_era\Videos\output_video.mp4.
Error opening output files: Invalid argument



The code is intended to cut "idle" frames where frame A equals frame B. I want to point out that I don't really care about audio, so it's just comparing whether frames "A" through "H" are the same, keeping A, and continuing with the rest of the video.


Putting in some comments to determine which part was successful and which broke, I found that the failing line is this:


command = ['ffmpeg', '-i', video_path, '-filter_complex', filter_str, '-map', '[v]', output_path]



and apparently it is the -map part.
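
For reference, a minimal sketch of one way the '[v]' label could be made to exist (the concat step below is an added assumption, not code from the script above) is to join the numbered trim outputs before mapping:

# Sketch only: join the [v0], [v1], ... trim outputs with the concat filter
# so that a single stream labelled [v] exists for '-map', '[v]'.
n = len(filters)
joined = ''.join(f'[v{i}]' for i in range(n))
filter_str = ';'.join(filters) + f';{joined}concat=n={n}:v=1:a=0[v]'
command = ['ffmpeg', '-i', video_path, '-filter_complex', filter_str, '-map', '[v]', output_path]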


-
WebRTC to RTMP: send video from camera to RTMP link
14 April 2024, by Leo-Mahendra. I can't send the video from WebRTC, which is converted into buffered data every 10 seconds and sent to server.js, where it is received via WebSockets and converted to FLV format using ffmpeg.


To start, I am trying to send it to an RTMP server named Restreamer. Here I tried to convert the buffer data and send it to the RTMP link using ffmpeg commands. Initially I managed to successfully save the file from WebRTC to MP4 format for a duration of 2-3 minutes.


After that I tried to use WebRTC to send video data every 10 seconds, and on the server I tried to send it to RTMP, but I can't. I can see that the connection to the RTMP URL from the server does take place, but I can't see the video. I can see the following logs in the RTMP server:


2024-04-14 12:35:45 ts=2024-04-14T07:05:45Z level=INFO component="RTMP" msg="no streams available" action="INVALID" address=":1935" client="172.17.0.1:37700" path="/3d30c5a9-2059-4843-8957-da963c7bc19b.stream" who="PUBLISH"
2024-04-14 12:35:45 ts=2024-04-14T07:05:45Z level=INFO component="RTMP" msg="no streams available" action="INVALID" address=":1935" client="172.17.0.1:37716" path="/3d30c5a9-2059-4843-8957-da963c7bc19b.stream" who="PUBLISH"
2024-04-14 12:35:45 ts=2024-04-14T07:05:45Z level=INFO component="RTMP" msg="no streams available" action="INVALID" address=":1935" client="172.17.0.1:37728" path="/3d30c5a9-2059-4843-8957-da963c7bc19b.stream" who="PUBLISH" 



My frontend code:


const handleSendVideo = async () => {
    console.log("start");

    if (!ws) {
        console.error('WebSocket connection not established.');
        return;
    }

    try {
        const videoStream = await navigator.mediaDevices.getUserMedia({ video: true });
        const mediaRecorder = new MediaRecorder(videoStream);

        const requiredFrameSize = 460800;
        const frameDuration = 10 * 1000; // 10 seconds in milliseconds

        mediaRecorder.ondataavailable = async (event) => {
            if (ws.readyState !== WebSocket.OPEN) {
                console.error('WebSocket connection is not open.');
                return;
            }

            if (event.data.size > 0) {
                const arrayBuffer = await event.data.arrayBuffer();
                const uint8Array = new Uint8Array(arrayBuffer);

                const width = videoStream.getVideoTracks()[0].getSettings().width;
                const height = videoStream.getVideoTracks()[0].getSettings().height;

                const numFrames = Math.ceil(uint8Array.length / requiredFrameSize);

                for (let i = 0; i < numFrames; i++) {
                    const start = i * requiredFrameSize;
                    const end = Math.min((i + 1) * requiredFrameSize, uint8Array.length);
                    let frameData = uint8Array.subarray(start, end);

                    // Pad or trim the frameData to match the required size
                    if (frameData.length < requiredFrameSize) {
                        // Pad with zeros to reach the required size
                        const paddedData = new Uint8Array(requiredFrameSize);
                        paddedData.set(frameData, 0);
                        frameData = paddedData;
                    } else if (frameData.length > requiredFrameSize) {
                        // Trim to match the required size
                        frameData = frameData.subarray(0, requiredFrameSize);
                    }

                    const dataToSend = {
                        buffer: Array.from(frameData), // Convert Uint8Array to array of numbers
                        width: width,
                        height: height,
                        pixelFormat: 'yuv420p',
                        mode: 'SendRtmp'
                    };

                    console.log("Sending frame:", i);
                    ws.send(JSON.stringify(dataToSend));
                }
            }
        };

        // Start recording and send data every 10 seconds
        mediaRecorder.start(frameDuration);

        console.log("MediaRecorder started.");
    } catch (error) {
        console.error('Error accessing media devices or starting recorder:', error);
    }
};
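
As an aside, a leaner variant of the ondataavailable handler above, sketched under the assumption that the server were changed to accept binary WebSocket messages, would forward each recorded chunk unchanged instead of re-slicing it into fixed-size frames, since MediaRecorder already produces a containerized WebM stream rather than raw yuv420p data:

// Sketch only (assumes the server accepts binary messages):
mediaRecorder.ondataavailable = async (event) => {
    if (ws.readyState === WebSocket.OPEN && event.data.size > 0) {
        ws.send(await event.data.arrayBuffer()); // forward the WebM chunk as-is
    }
};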



And my backend:


wss.on('connection', (ws) => {
    console.log('WebSocket connection established.');

    ws.on('message', async (data) => {
        try {
            const parsedData = JSON.parse(data);

            if (parsedData.mode === 'SendRtmp' && Array.isArray(parsedData.buffer)) {
                const { buffer, pixelFormat, width, height } = parsedData;
                const bufferArray = Buffer.from(buffer);

                await sendRtmpVideo(bufferArray, pixelFormat, width, height);
            } else {
                console.log('Received unknown or invalid mode or buffer data');
            }
        } catch (error) {
            console.error('Error parsing WebSocket message:', error);
        }
    });

    ws.on('close', () => {
        console.log('WebSocket connection closed.');
    });
});

const sendRtmpVideo = async (frameBuffer, pixelFormat, width, height) => {
    console.log("ffmpeg data", frameBuffer);
    try {
        const ratio = `${width}x${height}`;
        const ffmpegCommand = [
            '-re',
            '-f', 'rawvideo',
            '-pix_fmt', pixelFormat,
            '-s', ratio,
            '-i', 'pipe:0',
            '-c:v', 'libx264',
            '-preset', 'fast', // Specify the preset for libx264
            '-b:v', '3000k', // Specify the video bitrate
            '-loglevel', 'debug',
            '-f', 'flv',
            // '-flvflags', 'no_duration_filesize',
            RTMPLINK
        ];

        const ffmpeg = spawn('ffmpeg', ffmpegCommand);

        ffmpeg.on('exit', (code, signal) => {
            if (code === 0) {
                console.log('FFmpeg process exited successfully.');
            } else {
                console.error(`FFmpeg process exited with code ${code} and signal ${signal}`);
            }
        });

        ffmpeg.on('error', (error) => {
            console.error('FFmpeg spawn error:', error);
        });

        ffmpeg.stderr.on('data', (data) => {
            console.error(`FFmpeg stderr: ${data}`);
        });

        ffmpeg.stdin.write(frameBuffer, (err) => {
            if (err) {
                console.error('Error writing to FFmpeg stdin:', err);
            } else {
                console.log('Data written to FFmpeg stdin successfully.');
            }
            ffmpeg.stdin.end(); // Close stdin after writing the buffer
        });
    } catch (error) {
        console.error('Error in sendRtmpVideo:', error);
    }
};
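
A possible direction on the server side, shown only as a rough sketch rather than a verified fix: since MediaRecorder emits containerized WebM chunks rather than raw yuv420p frames, keep one long-lived ffmpeg process per connection and pipe every incoming chunk into its stdin, instead of spawning a new process and closing stdin for each message. This assumes the frontend sends the raw recorder chunks as binary messages (as in the frontend sketch above); wss, spawn and RTMPLINK are the same names used in the code above.

// Sketch only: one ffmpeg process per WebSocket connection, restreaming the
// WebM chunks produced by MediaRecorder as FLV to the RTMP URL.
wss.on('connection', (ws) => {
    const ffmpeg = spawn('ffmpeg', [
        '-i', 'pipe:0',        // ffmpeg probes the WebM header from the pipe
        '-c:v', 'libx264',
        '-preset', 'veryfast',
        '-f', 'flv',
        RTMPLINK
    ]);

    ffmpeg.stderr.on('data', (d) => console.error(`FFmpeg: ${d}`));

    // Keep stdin open for the whole session; each chunk continues the stream.
    ws.on('message', (chunk) => ffmpeg.stdin.write(chunk));
    ws.on('close', () => ffmpeg.stdin.end());
});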