
Media (1)

Tag: publier

Other articles (111)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

On other sites (6489)

  • Don’t use expressions with side effects in macro parameters

    28 July 2016, by Martin Storsjö
    Don’t use expressions with side effects in macro parameters
    

    AV_WB32 can be implemented as a macro that expands its parameters
    multiple times (in case AV_HAVE_FAST_UNALIGNED isn’t set and the
    compiler doesn’t support GCC attributes); make sure not to read
    multiple times from the source in this case.

    Signed-off-by: Martin Storsjö <martin@martin.st>

    • [DBH] libavcodec/dxv.c
    • [DBH] libavformat/xmv.c
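
    As a side note on the hazard this commit describes: the real AV_WB32 is a C macro, but the same pitfall can be sketched in Rust, since a macro_rules! body that expands an argument more than once also re-evaluates any side effect in that argument. The write_twice! macro and the names below are hypothetical, purely for illustration; they are not FFmpeg code.

    // A macro that, like a naive byte-writing fallback, expands its value argument twice.
    macro_rules! write_twice {
        ($buf:expr, $val:expr) => {
            $buf.push((($val >> 8) & 0xFF) as u8); // first expansion of $val
            $buf.push(($val & 0xFF) as u8);        // second expansion of $val
        };
    }

    fn main() {
        let src = [0x12u8, 0x34, 0x56];
        let mut pos = 0;
        let mut out: Vec<u8> = Vec::new();

        // The caller intends to write one 16-bit value read from `src`, but the
        // block passed as $val is evaluated once per expansion, so `pos`
        // advances twice and the two bytes come from two different reads.
        write_twice!(out, {
            let v = u16::from(src[pos]);
            pos += 1;
            v
        });

        assert_eq!(pos, 2); // the side effect ran twice
        println!("out = {:02x?}, pos = {}", out, pos);
    }

    This is the failure mode the commit guards against: when a side-effecting read is passed straight into the macro, the written value is assembled from two different reads. Hoisting the read into a local variable before calling the macro, as the commit title suggests, evaluates the side effect exactly once.
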
  • ffmpeg not returning duration, can't play video until complete. Stream images 2 video via PHP

    17 February 2014, by John J

    I am really struggling with ffmpeg. I am trying to convert images to video; I have an IP camera which I am recording from. The recordings are MJPEGs, one frame per image.

    I am trying to create a script in PHP so I can recreate a video from one date to another; this requires feeding the images in via image2pipe and then creating the video.

    The trouble is, ffmpeg does not return the duration and start stats, so I have no way of working out when the video is done or what percentage is done. The video won't play until it's finished, and it's not a very good user experience.

    Any ideas how I can resolve this? The video format can be anything; I am open to suggestions.

    PHP:

    //Shell command
    exec('cat /image/dir/*.jpg | ffmpeg -y -c:v mjpeg -f image2pipe -r 10 -i - -c:v libx264 -pix_fmt yuv420p -movflags +faststart myvids/vidname.mp4 1>vidname.txt 2>&1');

    //This is loaded via javascript when the video is loaded (which is failing due to the stats being wrong)
    $video_play = "<video width=\"320\" height=\"240\" src=\"myvids/vidname.mp4\" type=\"video/mp4\" controls=\"controls\" preload=\"none\"></video>";

    JavaScript:

    //Javascript to create the loop until the video is loaded
    <script>
        $(document).ready(function() {
            var loader = $("#clip_load").percentageLoader();
            $.ajaxSetup({ cache: false }); // This part addresses an IE bug. Without it, IE will only load the first number and will never refresh
            var interval = setInterval(updateProgress, 1000);
            function updateProgress() {
                $.get("'.base_url().'video/getVideoCompile_Process?l='.$vid_name.'-output.txt&t=per", function(data) {
                    if (data >= \'100\') {
                        $("#clip_load").html(\''.$video_play.'\');
                        clearInterval(interval);
                    } else {
                        loader.setProgress(data);
                    }
                });
            }
        });
    </script>

    PHP (this page is called via JavaScript):

    //This is the script which returns the current percentage
    $logloc = $this->input->get('l');
    $content = @file_get_contents($logloc);

    if($content){
       //get duration of source
       preg_match("/Duration: (.*?), start:/", $content, $matches);

       $rawDuration = $matches[1];

       //rawDuration is in 00:00:00.00 format. This converts it to seconds.
       $ar = array_reverse(explode(":", $rawDuration));
       $duration = floatval($ar[0]);
       if (!empty($ar[1])) $duration += intval($ar[1]) * 60;
       if (!empty($ar[2])) $duration += intval($ar[2]) * 60 * 60;

       //get the time in the file that is already encoded
       preg_match_all("/time=(.*?) bitrate/", $content, $matches);

       $rawTime = array_pop($matches);

       //this is needed if there is more than one match
       if (is_array($rawTime)){$rawTime = array_pop($rawTime);}

       //rawTime is in 00:00:00.00 format. This converts it to seconds.
       $ar = array_reverse(explode(":", $rawTime));
       $time = floatval($ar[0]);
       if (!empty($ar[1])) $time += intval($ar[1]) * 60;
       if (!empty($ar[2])) $time += intval($ar[2]) * 60 * 60;

       //calculate the progress
       $progress = round(($time/$duration) * 100);
       if ($this->input->get('t')=='per'){
           echo $progress;
       }else{
           echo "Duration: " . $duration . "<br />";
           echo "Current Time: " . $time . "<br />";
           echo "Progress: " . $progress . "%";
       }
    }else{ echo "cannot locate";}

    Thanks

  • How do terminal pipes in Python differ from those in Rust?

    5 October 2022, by rust_convert

    To work on learning Rust (in a Tauri project), I am converting a Python 2 program that uses ffmpeg to create a custom video format from a GUI. The video portion converts successfully, but I am unable to get the audio to work. From the debugging I have done over the past few days, it looks like I am not reading the audio data in from the terminal pipe correctly in Rust: what works for reading the video data does not work for the audio. I have tried reading the audio data in as a string and then converting it to bytes, but then the byte array appears empty. I have been researching the piping of data in the Rust documentation and the Python documentation, and I am unsure how the Rust pipe could be empty or incorrect if it works for the video.


    From this Python article and this Rust Stack Overflow exchange, it looks like the Python stdout pipe is equivalent to the Rust stdin pipe?


    The Python code snippet for video and audio conversion:


    output=open(self.outputFile, 'wb')
    devnull = open(os.devnull, 'wb')

    vidcommand = [ FFMPEG_BIN,
                '-i', self.inputFile,
                '-f', 'image2pipe',
                '-r', '%d' % (self.outputFrameRate),
                '-vf', scaleCommand,
                '-vcodec', 'rawvideo',
                '-pix_fmt', 'bgr565be',
                '-f', 'rawvideo', '-']

    vidPipe = '';
    if os.name=='nt' :
        startupinfo = sp.STARTUPINFO()
        startupinfo.dwFlags |= sp.STARTF_USESHOWWINDOW
        vidPipe=sp.Popen(vidcommand, stdin = sp.PIPE, stdout = sp.PIPE, stderr = devnull, bufsize=self.inputVidFrameBytes*10, startupinfo=startupinfo)
    else:
        vidPipe=sp.Popen(vidcommand, stdin = sp.PIPE, stdout = sp.PIPE, stderr = devnull, bufsize=self.inputVidFrameBytes*10)

    vidFrame = vidPipe.stdout.read(self.inputVidFrameBytes)

    audioCommand = [ FFMPEG_BIN,
        '-i', self.inputFile,
        '-f', 's16le',
        '-acodec', 'pcm_s16le',
        '-ar', '%d' % (self.outputAudioSampleRate),
        '-ac', '1',
        '-']

    audioPipe=''
    if (self.audioEnable.get() == 1):
        if os.name=='nt' :
            startupinfo = sp.STARTUPINFO()
            startupinfo.dwFlags |= sp.STARTF_USESHOWWINDOW
            audioPipe = sp.Popen(audioCommand, stdin = sp.PIPE, stdout=sp.PIPE, stderr = devnull, bufsize=self.audioFrameBytes*10, startupinfo=startupinfo)
        else:
            audioPipe = sp.Popen(audioCommand, stdin = sp.PIPE, stdout=sp.PIPE, stderr = devnull, bufsize=self.audioFrameBytes*10)

        audioFrame = audioPipe.stdout.read(self.audioFrameBytes)

    currentFrame=0;

    while len(vidFrame)==self.inputVidFrameBytes:
        currentFrame+=1
        if(currentFrame%30==0):
            self.progressBarVar.set(100.0*(currentFrame*1.0)/self.totalFrames)
        if (self.videoBitDepth.get() == 16):
            output.write(vidFrame)
        else:
            b16VidFrame=bytearray(vidFrame)
            b8VidFrame=[]
            for p in range(self.outputVidFrameBytes):
                b8VidFrame.append(((b16VidFrame[(p*2)+0]>>0)&0xE0)|((b16VidFrame[(p*2)+0]<<2)&0x1C)|((b16VidFrame[(p*2)+1]>>3)&0x03))
            output.write(bytearray(b8VidFrame))

        vidFrame = vidPipe.stdout.read(self.inputVidFrameBytes) # Read where vidframe is to match up with audio frame and output?
        if (self.audioEnable.get() == 1):

            if len(audioFrame)==self.audioFrameBytes:
                audioData=bytearray(audioFrame)

                for j in range(int(round(self.audioFrameBytes/2))):
                    sample = ((audioData[(j*2)+1]<<8) | audioData[j*2]) + 0x8000
                    sample = (sample>>(16-self.outputAudioSampleBitDepth)) & (0x0000FFFF>>(16-self.outputAudioSampleBitDepth))

                    audioData[j*2] = sample & 0xFF
                    audioData[(j*2)+1] = sample>>8

                output.write(audioData)
                audioFrame = audioPipe.stdout.read(self.audioFrameBytes)

            else:
                emptySamples=[]
                for samples in range(int(round(self.audioFrameBytes/2))):
                    emptySamples.append(0x00)
                    emptySamples.append(0x00)
                output.write(bytearray(emptySamples))

    self.progressBarVar.set(100.0)

    vidPipe.terminate()
    vidPipe.stdout.close()
    vidPipe.wait()

    if (self.audioEnable.get() == 1):
        audioPipe.terminate()
        audioPipe.stdout.close()
        audioPipe.wait()

    output.close()


    The Rust snippet that should accomplish the same goals:


    let output_file = OpenOptions::new()
        .create(true)
        .truncate(true)
        .write(true)
        .open(&output_path)
        .unwrap();
    let mut writer = BufWriter::with_capacity(
        options.video_frame_bytes.max(options.audio_frame_bytes),
        output_file,
    );
    let ffmpeg_path = sidecar_path("ffmpeg");
    #[cfg(debug_assertions)]
    let timer = Instant::now();

    let mut video_cmd = Command::new(&ffmpeg_path);
    #[rustfmt::skip]
    video_cmd.args([
        "-i", options.path,
        "-f", "image2pipe",
        "-r", options.frame_rate,
        "-vf", options.scale,
        "-vcodec", "rawvideo",
        "-pix_fmt", "bgr565be",
        "-f", "rawvideo",
        "-",
    ])
    .stdin(Stdio::null())
    .stdout(Stdio::piped())
    .stderr(Stdio::null());

    // windows creation flag CREATE_NO_WINDOW: stops the process from creating a CMD window
    // https://docs.microsoft.com/en-us/windows/win32/procthread/process-creation-flags
    #[cfg(windows)]
    video_cmd.creation_flags(0x08000000);

    let mut video_child = video_cmd.spawn().unwrap();
    let mut video_stdout = video_child.stdout.take().unwrap();
    let mut video_frame = vec![0; options.video_frame_bytes];

    let mut audio_cmd = Command::new(&ffmpeg_path);
    #[rustfmt::skip]
    audio_cmd.args([
        "-i", options.path,
        "-f", "s16le",
        "-acodec", "pcm_s16le",
        "-ar", options.sample_rate,
        "-ac", "1",
        "-",
    ])
    .stdin(Stdio::null())
    .stdout(Stdio::piped())
    .stderr(Stdio::null());

    #[cfg(windows)]
    audio_cmd.creation_flags(0x08000000);

    let mut audio_child = audio_cmd.spawn().unwrap();
    let mut audio_stdout = audio_child.stdout.take().unwrap();
    let mut audio_frame = vec![0; options.audio_frame_bytes];

    while video_stdout.read_exact(&mut video_frame).is_ok() {
        writer.write_all(&video_frame).unwrap();

        if audio_stdout.read_to_end(&mut audio_frame).is_ok() {
            if audio_frame.len() == options.audio_frame_bytes {
                for i in 0..options.audio_frame_bytes / 2 {
                    let temp_sample = ((u32::from(audio_frame[(i * 2) + 1]) << 8)
                        | u32::from(audio_frame[i * 2]))
                        + 0x8000;
                    let sample = (temp_sample >> (16 - 10)) & (0x0000FFFF >> (16 - 10));

                    audio_frame[i * 2] = (sample & 0xFF) as u8;
                    audio_frame[(i * 2) + 1] = (sample >> 8) as u8;
                }
            } else {
                audio_frame.fill(0x00);
            }
        }
        writer.write_all(&audio_frame).unwrap();
    }

    video_child.wait().unwrap();
    audio_child.wait().unwrap();

    #[cfg(debug_assertions)]
    {
        let elapsed = timer.elapsed();
        dbg!(elapsed);
    }

    writer.flush().unwrap();
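
    One plausible reading of the difference, offered as a hedged note rather than a confirmed answer: Read::read_to_end appends every remaining byte to the Vec and only returns once the audio process closes its pipe, whereas the Python code reads exactly self.audioFrameBytes per video frame, so the length check above never matches and only zero-filled, oversized buffers get written. The sketch below assumes the same options fields and loop structure as the snippet above; next_audio_frame is a hypothetical helper, not part of the original program. It mirrors Python's audioPipe.stdout.read(self.audioFrameBytes) by reading one fixed-size frame with read_exact.

    use std::io::{Read, Write};

    // Hypothetical helper: read one fixed-size audio frame per video frame,
    // mirroring Python's audioPipe.stdout.read(self.audioFrameBytes).
    // `bit_depth` stands in for the hard-coded 10 in the loop above.
    fn next_audio_frame<R: Read, W: Write>(
        audio_stdout: &mut R,
        writer: &mut W,
        audio_frame: &mut [u8],
        bit_depth: u32,
    ) -> std::io::Result<()> {
        if audio_stdout.read_exact(audio_frame).is_ok() {
            // Requantize each little-endian 16-bit sample, as both programs do.
            for i in 0..audio_frame.len() / 2 {
                let temp = ((u32::from(audio_frame[i * 2 + 1]) << 8)
                    | u32::from(audio_frame[i * 2]))
                    + 0x8000;
                let sample = (temp >> (16 - bit_depth)) & (0x0000_FFFF >> (16 - bit_depth));
                audio_frame[i * 2] = (sample & 0xFF) as u8;
                audio_frame[i * 2 + 1] = (sample >> 8) as u8;
            }
        } else {
            // Short read or EOF: pad with silence, like the Python else-branch.
            audio_frame.fill(0x00);
        }
        // Write exactly one audio frame after each video frame.
        writer.write_all(audio_frame)
    }

    Called in place of the read_to_end branch inside the while loop, this keeps the buffer length fixed at options.audio_frame_bytes, so the interleaving of video and audio frames stays in step with the Python version.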


    I have looked at the hex data of the files using HxD; regardless of how I alter the Rust program, I am unable to get data different from what is previewed in the attached image, so the audio pipe is being interfaced incorrectly. I have included a screenshot of the hex data from the working Python program, which converts the video and audio correctly.


    HxD Python program hex output: [screenshot]

    HxD Rust program hex output: [screenshot]