
Media (1)
-
Publishing an image simply
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (111)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
-
Publishing on MédiaSpip
13 June 2013. Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.
On other websites (6489)
-
Don’t use expressions with side effects in macro parameters
28 July 2016, by Martin Storsjö
AV_WB32 can be implemented as a macro that expands its parameters multiple times (in case AV_HAVE_FAST_UNALIGNED isn't set and the compiler doesn't support GCC attributes); make sure not to read from the source multiple times in this case.
Signed-off-by: Martin Storsjö <martin@martin.st>
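The hazard the commit describes is easy to reproduce in miniature. The sketch below is not FFmpeg's actual AV_WB32; it is a hypothetical byte-writing macro in the same spirit, written here as a Rust macro_rules! macro, which pastes an expr argument into the expansion once per use exactly like a C macro parameter does:

```rust
// Hypothetical macro in the spirit of AV_WB32's byte-by-byte fallback.
// Like a C macro parameter, `$x` is pasted into the expansion once per
// use, so it is evaluated four times here.
macro_rules! wb32 {
    ($d:expr, $x:expr) => {{
        $d[0] = (($x >> 24) & 0xFF) as u8;
        $d[1] = (($x >> 16) & 0xFF) as u8;
        $d[2] = (($x >> 8) & 0xFF) as u8;
        $d[3] = ($x & 0xFF) as u8;
    }};
}

fn main() {
    let src: [u32; 4] = [0x11223344, 0x55667788, 0, 0];
    let mut dst = [0u8; 4];
    let mut i = 0;

    // BUG: the "read and advance" side effect runs once per expansion,
    // so each output byte comes from a different source word.
    wb32!(dst, { let v = src[i]; i += 1; v });
    assert_eq!(i, 4); // advanced four times, not once
    assert_ne!(dst, [0x11, 0x22, 0x33, 0x44]);

    // FIX (what the commit does): evaluate the side effect once,
    // then hand the macro a plain value.
    i = 0;
    let v = src[i];
    i += 1;
    wb32!(dst, v);
    assert_eq!(dst, [0x11, 0x22, 0x33, 0x44]);
    assert_eq!(i, 1);
}
```

The fix mirrors the commit: hoist the side-effecting read into a variable before the macro call, so the source is read exactly once no matter how many times the parameter appears in the expansion.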
-
ffmpeg not returning duration, can't play video until complete. Stream images to video via PHP
17 February 2014, by John J
I am really struggling with ffmpeg. I am trying to convert images to video; I have an IP camera which I am recording from. The recordings are MJPEGs, one frame per image.
I am trying to create a script in PHP so I can recreate a video from date to date; this requires inputting the images via image2pipe and then creating the video.
The trouble is, ffmpeg does not return the duration and start stats, so I have no way of working out when the video is done or what percentage is done. The video won't play until it is finished, and that is not a very good user experience.
Any ideas on how I can resolve this? The video format can be anything; I am open to suggestions.
PHP:
//Shell command
exec('cat /image/dir/*.jpg | ffmpeg -y -c:v mjpeg -f image2pipe -r 10 -i - -c:v libx264 -pix_fmt yuv420p -movflags +faststart myvids/vidname.mp4 1>vidname.txt 2>&1');
//This is loaded via JavaScript when the video is loaded (which is failing because the stats are wrong)
$video_play = "<video width=\"320\" height=\"240\" src=\"myvids/vidname.mp4\" type=\"video/mp4\" controls=\"controls\" preload=\"none\"></video>";
JavaScript:
//JavaScript to create the loop until the video is loaded
<script>
$(document).ready(function() {
    var loader = $("#clip_load").percentageLoader();
    $.ajaxSetup({ cache: false }); // Addresses an IE bug: without it, IE only loads the first number and never refreshes
    var interval = setInterval(updateProgress, 1000);
    function updateProgress() {
        $.get("'.base_url().'video/getVideoCompile_Process?l='.$vid_name.'-output.txt&t=per", function(data) {
            if (data >= '100') {
                $("#clip_load").html(''.$video_play.'');
                clearInterval(interval);
            } else {
                loader.setProgress(data);
            }
        });
    }
});
</script>
PHP (the page is called via JavaScript):
//This is the script which returns the current percentage
$logloc = $this->input->get('l');
$content = @file_get_contents($logloc);
if ($content) {
    //get duration of source
    preg_match("/Duration: (.*?), start:/", $content, $matches);
    $rawDuration = $matches[1];
    //rawDuration is in 00:00:00.00 format; this converts it to seconds
    $ar = array_reverse(explode(":", $rawDuration));
    $duration = floatval($ar[0]);
    if (!empty($ar[1])) $duration += intval($ar[1]) * 60;
    if (!empty($ar[2])) $duration += intval($ar[2]) * 60 * 60;
    //get the time in the file that is already encoded
    preg_match_all("/time=(.*?) bitrate/", $content, $matches);
    $rawTime = array_pop($matches);
    //this is needed if there is more than one match
    if (is_array($rawTime)) { $rawTime = array_pop($rawTime); }
    //rawTime is in 00:00:00.00 format; this converts it to seconds
    $ar = array_reverse(explode(":", $rawTime));
    $time = floatval($ar[0]);
    if (!empty($ar[1])) $time += intval($ar[1]) * 60;
    if (!empty($ar[2])) $time += intval($ar[2]) * 60 * 60;
    //calculate the progress
    $progress = round(($time / $duration) * 100);
    if ($this->input->get('t') == 'per') {
        echo $progress;
    } else {
        echo "Duration: " . $duration . "<br />";
        echo "Current Time: " . $time . "<br />";
        echo "Progress: " . $progress . "%";
    }
} else {
    echo "cannot locate";
}

Thanks
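For reference, the arithmetic the PHP above performs (pull "Duration:" and the last "time=" out of the ffmpeg log, convert HH:MM:SS.xx to seconds, and take the ratio) can be sketched compactly. This is a minimal illustration, written in Rust to match the other question on this page; `progress` assumes the log actually contains a Duration header and at least one time= entry:

```rust
/// Convert ffmpeg's HH:MM:SS.xx notation to seconds.
fn hms_to_seconds(raw: &str) -> f64 {
    raw.split(':')
        .fold(0.0, |acc, part| acc * 60.0 + part.parse::<f64>().unwrap_or(0.0))
}

/// Extract "Duration: ..." and the last "time=..." from an ffmpeg log and
/// return the encode progress as a whole percentage.
fn progress(log: &str) -> Option<u32> {
    let dur_raw = log.split("Duration: ").nth(1)?.split(',').next()?;
    // rsplit gives the text after the *last* "time=", like array_pop above.
    let time_raw = log.rsplit("time=").next()?.split(' ').next()?;
    let (dur, time) = (hms_to_seconds(dur_raw), hms_to_seconds(time_raw));
    if dur > 0.0 {
        Some(((time / dur) * 100.0).round() as u32)
    } else {
        None
    }
}

fn main() {
    let log = "Duration: 00:01:40.00, start: 0.000000\n\
               frame= 500 fps=25 time=00:00:50.00 bitrate= 900kbits/s";
    assert_eq!(hms_to_seconds("00:01:40.00"), 100.0);
    assert_eq!(progress(log), Some(50));
}
```

Note that this only works when the input has a known duration; with image2pipe input, as in the question, ffmpeg cannot report one, so progress would have to be derived some other way (for example, from the known number of source images and the frame count in the log).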
-
How do terminal pipes in Python differ from those in Rust?
5 October 2022, by rust_convert
To work on learning Rust (in a Tauri project), I am converting a Python 2 program that uses ffmpeg to create a custom video format from a GUI. The video portion converts successfully, but I am unable to get the audio to work. From the debugging I have done over the past few days, it looks like I am not reading the audio data from the terminal pipe correctly in Rust: what works for reading the video data does not work for the audio. I have tried reading the audio data in as a string and then converting it to bytes, but then the byte array appears empty. I have been researching the piping of data in the Rust and Python documentation and am unsure how the Rust pipe could be empty or incorrect when it works for the video.


From this Python article and this Rust Stack Overflow exchange, it looks like the Python stdout pipe is equivalent to the Rust stdin pipe?


The Python code snippet for video and audio conversion:


output=open(self.outputFile, 'wb')
devnull = open(os.devnull, 'wb')

vidcommand = [ FFMPEG_BIN,
 '-i', self.inputFile,
 '-f', 'image2pipe',
 '-r', '%d' % (self.outputFrameRate),
 '-vf', scaleCommand,
 '-vcodec', 'rawvideo',
 '-pix_fmt', 'bgr565be',
 '-f', 'rawvideo', '-']
 
vidPipe = ''
if os.name == 'nt':
    startupinfo = sp.STARTUPINFO()
    startupinfo.dwFlags |= sp.STARTF_USESHOWWINDOW
    vidPipe = sp.Popen(vidcommand, stdin=sp.PIPE, stdout=sp.PIPE, stderr=devnull,
                       bufsize=self.inputVidFrameBytes*10, startupinfo=startupinfo)
else:
    vidPipe = sp.Popen(vidcommand, stdin=sp.PIPE, stdout=sp.PIPE, stderr=devnull,
                       bufsize=self.inputVidFrameBytes*10)

vidFrame = vidPipe.stdout.read(self.inputVidFrameBytes)

audioCommand = [ FFMPEG_BIN,
 '-i', self.inputFile,
 '-f', 's16le',
 '-acodec', 'pcm_s16le',
 '-ar', '%d' % (self.outputAudioSampleRate),
 '-ac', '1',
 '-']

audioPipe = ''
if (self.audioEnable.get() == 1):
    if os.name == 'nt':
        startupinfo = sp.STARTUPINFO()
        startupinfo.dwFlags |= sp.STARTF_USESHOWWINDOW
        audioPipe = sp.Popen(audioCommand, stdin=sp.PIPE, stdout=sp.PIPE, stderr=devnull,
                             bufsize=self.audioFrameBytes*10, startupinfo=startupinfo)
    else:
        audioPipe = sp.Popen(audioCommand, stdin=sp.PIPE, stdout=sp.PIPE, stderr=devnull,
                             bufsize=self.audioFrameBytes*10)

    audioFrame = audioPipe.stdout.read(self.audioFrameBytes)

currentFrame = 0

while len(vidFrame) == self.inputVidFrameBytes:
    currentFrame += 1
    if (currentFrame % 30 == 0):
        self.progressBarVar.set(100.0 * (currentFrame * 1.0) / self.totalFrames)
    if (self.videoBitDepth.get() == 16):
        output.write(vidFrame)
    else:
        b16VidFrame = bytearray(vidFrame)
        b8VidFrame = []
        for p in range(self.outputVidFrameBytes):
            b8VidFrame.append(((b16VidFrame[(p*2)+0]>>0)&0xE0)|((b16VidFrame[(p*2)+0]<<2)&0x1C)|((b16VidFrame[(p*2)+1]>>3)&0x03))
        output.write(bytearray(b8VidFrame))

    vidFrame = vidPipe.stdout.read(self.inputVidFrameBytes) # Read where vidframe is to match up with audio frame and output?
    if (self.audioEnable.get() == 1):
        if len(audioFrame) == self.audioFrameBytes:
            audioData = bytearray(audioFrame)
            for j in range(int(round(self.audioFrameBytes/2))):
                sample = ((audioData[(j*2)+1]<<8) | audioData[j*2]) + 0x8000
                sample = (sample>>(16-self.outputAudioSampleBitDepth)) & (0x0000FFFF>>(16-self.outputAudioSampleBitDepth))

                audioData[j*2] = sample & 0xFF
                audioData[(j*2)+1] = sample>>8

            output.write(audioData)
            audioFrame = audioPipe.stdout.read(self.audioFrameBytes)
        else:
            emptySamples = []
            for samples in range(int(round(self.audioFrameBytes/2))):
                emptySamples.append(0x00)
                emptySamples.append(0x00)
            output.write(bytearray(emptySamples))

self.progressBarVar.set(100.0)

vidPipe.terminate()
vidPipe.stdout.close()
vidPipe.wait()

if (self.audioEnable.get() == 1):
 audioPipe.terminate()
 audioPipe.stdout.close()
 audioPipe.wait()

output.close()



The Rust snippet that should accomplish the same goals:


let output_file = OpenOptions::new()
 .create(true)
 .truncate(true)
 .write(true)
 .open(&output_path)
 .unwrap();
let mut writer = BufWriter::with_capacity(
 options.video_frame_bytes.max(options.audio_frame_bytes),
 output_file,
);
let ffmpeg_path = sidecar_path("ffmpeg");
#[cfg(debug_assertions)]
let timer = Instant::now();

let mut video_cmd = Command::new(&ffmpeg_path);
#[rustfmt::skip]
video_cmd.args([
 "-i", options.path,
 "-f", "image2pipe",
 "-r", options.frame_rate,
 "-vf", options.scale,
 "-vcodec", "rawvideo",
 "-pix_fmt", "bgr565be",
 "-f", "rawvideo",
 "-",
])
.stdin(Stdio::null())
.stdout(Stdio::piped())
.stderr(Stdio::null());

// windows creation flag CREATE_NO_WINDOW: stops the process from creating a CMD window
// https://docs.microsoft.com/en-us/windows/win32/procthread/process-creation-flags
#[cfg(windows)]
video_cmd.creation_flags(0x08000000);

let mut video_child = video_cmd.spawn().unwrap();
let mut video_stdout = video_child.stdout.take().unwrap();
let mut video_frame = vec![0; options.video_frame_bytes];

let mut audio_cmd = Command::new(&ffmpeg_path);
#[rustfmt::skip]
audio_cmd.args([
 "-i", options.path,
 "-f", "s16le",
 "-acodec", "pcm_s16le",
 "-ar", options.sample_rate,
 "-ac", "1",
 "-",
])
.stdin(Stdio::null())
.stdout(Stdio::piped())
.stderr(Stdio::null());

#[cfg(windows)]
audio_cmd.creation_flags(0x08000000);

let mut audio_child = audio_cmd.spawn().unwrap();
let mut audio_stdout = audio_child.stdout.take().unwrap();
let mut audio_frame = vec![0; options.audio_frame_bytes];

while video_stdout.read_exact(&mut video_frame).is_ok() {
    writer.write_all(&video_frame).unwrap();

    if audio_stdout.read_to_end(&mut audio_frame).is_ok() {
        if audio_frame.len() == options.audio_frame_bytes {
            for i in 0..options.audio_frame_bytes / 2 {
                let temp_sample = ((u32::from(audio_frame[(i * 2) + 1]) << 8)
                    | u32::from(audio_frame[i * 2]))
                    + 0x8000;
                let sample = (temp_sample >> (16 - 10)) & (0x0000FFFF >> (16 - 10));

                audio_frame[i * 2] = (sample & 0xFF) as u8;
                audio_frame[(i * 2) + 1] = (sample >> 8) as u8;
            }
        } else {
            audio_frame.fill(0x00);
        }
    }
    writer.write_all(&audio_frame).unwrap();
}


video_child.wait().unwrap();
audio_child.wait().unwrap();

#[cfg(debug_assertions)]
{
 let elapsed = timer.elapsed();
 dbg!(elapsed);
}

writer.flush().unwrap();



I have looked at the hex data of the files using HxD. Regardless of how I alter the Rust program, I am unable to get data different from what is previewed in the attached image, so the audio pipe is being read incorrectly. I have included a screenshot of the hex data from the working Python program, which converts the video and audio correctly.
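One hedged observation on the code above, a sketch of a likely culprit rather than a confirmed fix: `Read::read_to_end` reads until EOF and *appends* to the vector, so after the first iteration `audio_frame` no longer holds a single frame the way Python's `audioPipe.stdout.read(self.audioFrameBytes)` does. `read_exact`, which the video path already uses, fills a fixed-size buffer exactly once per call:

```rust
use std::io::{Cursor, Read};

/// Read one fixed-size frame, mirroring Python's `pipe.stdout.read(n)`
/// when a full frame is available. A short read near EOF returns false,
/// leaving the caller to emit silence instead.
fn read_frame<R: Read>(src: &mut R, frame: &mut [u8]) -> bool {
    src.read_exact(frame).is_ok()
}

fn main() {
    // Stand-in for the ffmpeg stdout pipe: ten bytes of "audio".
    let mut stream = Cursor::new((0u8..10).collect::<Vec<u8>>());
    let mut frame = [0u8; 4];

    assert!(read_frame(&mut stream, &mut frame)); // bytes 0..4
    assert_eq!(frame, [0, 1, 2, 3]);
    assert!(read_frame(&mut stream, &mut frame)); // bytes 4..8
    assert_eq!(frame, [4, 5, 6, 7]);
    assert!(!read_frame(&mut stream, &mut frame)); // only 2 bytes left
}
```

In the loop above, replacing the `read_to_end` call with a per-iteration `read_exact` on `audio_frame` would keep the buffer exactly one frame long, matching the Python program's framing.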


HxD Python program hex output :




HxD Rust program hex output :