
Other articles (85)

  • Organising by category

    17 May 2013, by

    In MédiaSPIP, a section (rubrique) has two names: category and rubrique.
    The various documents stored in MédiaSPIP can be filed under different categories. You can create a category by clicking on "publier une catégorie" in the "publier" menu at the top right (after logging in). A category can itself be filed under another category, which means you can build a tree of categories.
    The next time a document is published, the newly created category will be offered (...)

  • Retrieving information from the master site when installing an instance

    26 November 2010, by

    Purpose
    On the main site, a shared-hosting instance is defined by several things: the data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table, corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the shared instance;
    It can therefore make good sense to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

On other sites (4482)

  • How to correctly calculate which segments are ready to be downloaded using MPEG-DASH

    24 April 2019, by igal k

    What I'm trying to do

    Write a simple MPEG-DASH client using the SegmentTemplate pattern to calculate which segments are ready to be downloaded for a live source.

    A picture taken with Chrome's debugging tools at a moment X, showing an MPD request (8af651fd747.....mpd) and the actual segments fetched in response to that request.

    Given the following MPD

    <MPD availabilityStartTime="2019-04-24T06:43:32Z" maxSegmentDuration="PT4.096S" minBufferTime="PT4.096S" minimumUpdatePeriod="PT15.835S" profiles="urn:mpeg:dash:profile:isoff-live:2011" publishTime="2019-04-24T11:14:01Z" suggestedPresentationDelay="PT11.878S" timeShiftBufferDepth="PT65.536S" type="dynamic" xmlns="urn:mpeg:dash:schema:mpd:2011">
     <Location>https://content-aaps1.uplynk.com/channel/8af651fd7473474f86a05ffb0a1c8972.mpd?rmt=wv&amp;cid=8af651fd7473474f86a05ffb0a1c8972&amp;oid=600e5c27541344a1bf3818617ad712ce&amp;prettydash=1&amp;exp=1556091088&amp;rn=4138683939&amp;tc=1&amp;ct=c&amp;sig=5fb7f0c18f3f1d2ad4fdee53c02c1e1ed904bc5e8474f4ebf886d209ff7f21c9&amp;pbs=05b6594bcf4b4728ac1094976a80194d</Location>
     <Period start="PT2826.240S">
       <AdaptationSet maxFrameRate="30" maxHeight="720" maxWidth="1280" mimeType="video/mp4" segmentAlignment="true" startWithSAP="1">
         <Representation bandwidth="2604473" codecs="avc1.64001e" frameRate="30" height="360" scanType="progressive" width="640">
           <BaseURL>https://x-default-stgec.uplynk.com/aapm/slices/8c1/600e5c27541344a1bf3818617ad712ce/8c1027496a964b049f1bd5895f8f0412/</BaseURL>
           <SegmentTemplate duration="368640" initialization="https://x-default-stgec.uplynk.com/aapm/slices/8c1/600e5c27541344a1bf3818617ad712ce/8c1027496a964b049f1bd5895f8f0412/$RepresentationID$_init.mp4?pbs=05b6594bcf4b4728ac1094976a80194d&amp;_jt=l&amp;chid=8af651fd7473474f86a05ffb0a1c8972" media="$RepresentationID$$Number%08d$.m4f?pbs=05b6594bcf4b4728ac1094976a80194d&amp;_jt=l&amp;chid=8af651fd7473474f86a05ffb0a1c8972" presentationTimeOffset="254361599" startNumber="690" timescale="90000"/>
         </Representation>
       </AdaptationSet>
     </Period>
     <UTCTiming schemeIdUri="urn:mpeg:dash:utc:http-iso:2014" value="https://content-aaps1.uplynk.com/misc/utcservertime"/>
    </MPD>

    I see that the next segment request should be #3955

    What I have tried so far

    period.end = 1556104456;
    period.start = 2826;
    availability_start_time = 1556088212;
    max_segment_duration = 4;
    time_shift_buffer_depth = 65;
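
    For reference, here is a quick sketch of the per-segment duration and period offset implied by the MPD above (plain arithmetic on the SegmentTemplate attributes, nothing more):

       #include <cstdio>

       int main() {
           // Values taken from the SegmentTemplate in the MPD above.
           const double duration  = 368640.0;     // SegmentTemplate@duration
           const double timescale = 90000.0;      // SegmentTemplate@timescale
           const double pto       = 254361599.0;  // SegmentTemplate@presentationTimeOffset

           // Each segment covers duration/timescale seconds of media.
           std::printf("segment duration = %.3f s\n", duration / timescale);    // 4.096 s

           // presentationTimeOffset/timescale lines up with Period@start (PT2826.240S),
           // so $Number$ counting restarts at startNumber (690) at the period boundary.
           std::printf("presentationTimeOffset = %.3f s\n", pto / timescale);   // 2826.240 s
           return 0;
       }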

    So, first of all, I read DASH-IF IOP 4.3 section 4.3.4.2 (page 82) and implemented the following code:

    // k1..k2: range of segment indices available in the period (DASH-IF IOP 4.3, 4.3.4.2)
    int k1 = 1;
    int period_duration = period.end - (period.start + data_.availability_start_time);
    int k2 = ceil((float)period_duration / (float)data_.max_segment_duration);

    // Duration of one segment in seconds (SegmentTemplate@duration / @timescale)
    double duration = ((float)representation.duration / (float)representation.timeScale);

    // Newest segment index that should be available at publish time
    size_t live_edge = std::min(
        (int)floor((float)((data_.publish_time - data_.availability_start_time - period.start) / duration)), k2);

    // Oldest segment index still inside the time-shift buffer
    size_t oldest = std::max(k1, (int)floor((float)((data_.publish_time - data_.availability_start_time - period.start -
                                                      data_.time_shift_buffer_depth) /
                                                     duration)));

    After calculating everything: k1=1, k2=3355, live_edge=3272 and oldest=3256.
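
    Note that these values are period-relative indices, not the numbers that appear in the media URLs. A minimal sketch of the mapping I am assuming (not something the spec excerpt above states explicitly) is:

       // Sketch only: assumes the DASH-IF index k is 1-based within the Period,
       // so the first segment of the Period is addressed as
       // $Number$ = SegmentTemplate@startNumber.
       #include <cstddef>

       std::size_t segment_number_from_index(std::size_t k, std::size_t start_number) {
           return start_number + k - 1;
       }

       // With the values above, k = live_edge = 3272 and startNumber = 690
       // would give $Number$ = 3961.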

    I also tried using FFmpeg's dashdec.c.

    For min_segment:

    if (c->is_live && pls->fragment_duration) {
        num = pls->first_seq_no + (((get_current_time_in_sec() - c->availability_start_time) - c->time_shift_buffer_depth) * pls->fragment_timescale) / pls->fragment_duration;
    }

    For max_segment:

    num = pls->first_seq_no + (((get_current_time_in_sec() - c->availability_start_time)) * pls->fragment_timescale) / pls->fragment_duration;

    After a small modification:

    size_t pmax = (((data_.publish_time - data_.availability_start_time))) / duration;
    size_t pmin = ((data_.publish_time - data_.availability_start_time) - data_.time_shift_buffer_depth) / duration;

    pmin=3946 pmax=3961

    In the FFmpeg example, I had to manually remove the first_seq_no variable, because it looked like I was adding SegmentTemplate@startNumber twice.

    Even after succeeding in this task, how exactly do I build the request list from Segment(NOW) to Segment(LIVE_EDGE)?
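
    For illustration, a rough sketch of what that expansion could look like is below; the helper names are hypothetical, and only the $RepresentationID$ and zero-padded $Number%08d$ substitutions are taken from the MPD above:

       #include <cstddef>
       #include <cstdio>
       #include <string>
       #include <vector>

       // Hypothetical helper: expand the SegmentTemplate@media pattern for one segment.
       // Only the $RepresentationID$ and $Number%08d$ placeholders from the MPD above
       // are handled; everything else is passed through unchanged.
       std::string expand_media_template(const std::string& media_template,
                                         const std::string& representation_id,
                                         std::size_t number) {
           char padded[32];
           std::snprintf(padded, sizeof(padded), "%08zu", number);

           std::string url = media_template;
           auto replace_all = [&url](const std::string& from, const std::string& to) {
               for (std::size_t pos = url.find(from); pos != std::string::npos;
                    pos = url.find(from, pos + to.size())) {
                   url.replace(pos, from.size(), to);
               }
           };
           replace_all("$RepresentationID$", representation_id);
           replace_all("$Number%08d$", padded);
           return url;
       }

       // Build the request list from the current segment number up to the live edge,
       // resolving each expanded path against the Representation's BaseURL.
       std::vector<std::string> build_request_list(const std::string& base_url,
                                                   const std::string& media_template,
                                                   const std::string& representation_id,
                                                   std::size_t now_number,
                                                   std::size_t live_edge_number) {
           std::vector<std::string> urls;
           for (std::size_t n = now_number; n <= live_edge_number; ++n)
               urls.push_back(base_url + expand_media_template(media_template, representation_id, n));
           return urls;
       }

    The range itself (NOW up to LIVE_EDGE) would come from whichever of the index calculations above turns out to be correct.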

  • FFMPEG PHP enter command lines ?

    3 May 2019, by Robert

    You will have to excuse me; I have spent the past two days reading through old FFmpeg posts for an answer, but did little but confuse myself.
    It seems, from what I read, that the FFMPEG-PHP wrappers aren't supported anymore??? And to be honest they don't seem like the proper way of learning how to incorporate it with PHP, as there is a whole lot more help for command-line FFmpeg usage, and the FFMPEG-PHP wrapper usage looks nothing like the command line as far as I can tell.

    So I have two questions on using FFmpeg with PHP. So that we are on the same page, I posted a little info below.

    I downloaded the FFmpeg static build for Windows 64-bit.
    I then ran (in Composer):

         $ composer require php-ffmpeg/php-ffmpeg

    In my vendor folder I have the following path:

       vendor\ffmpeg-20190429-ac551c5-win64-static\bin

    If I open the command prompt in that folder and type ffmpeg, I get:

    ffmpeg version N-93710-gac551c54b1 Copyright (c) 2000-2019 the FFmpeg developers
    built with gcc 8.3.1 (GCC) 20190414
    configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
    libavutil      56. 26.100 / 56. 26.100
    libavcodec     58. 52.100 / 58. 52.100
    libavformat    58. 27.103 / 58. 27.103
    libavdevice    58.  7.100 / 58.  7.100
    libavfilter     7. 50.100 /  7. 50.100
    libswscale      5.  4.100 /  5.  4.100
    libswresample   3.  4.100 /  3.  4.100
    libpostproc    55.  4.100 / 55.  4.100
    Hyper fast Audio and Video encoder
    usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...

    So I'm pretty sure FFmpeg is installed, which was my goal.

    Now, on the PHP side, here is where I am at, and I'm not sure how this works.

    1ST QUESTION. Do I need to save my $_POST files to disk before manipulating them, or can I use $file and $filep as is? I don't really want to store those files, only the output.

    My Slim code:

    $app->post('/telestrator', function (Request $request, Response $response) {

        $response = array();

        if (isset($_POST['ID']) && ($_POST['position']) && $_FILES['video']['error'] === UPLOAD_ERR_OK && $_FILES['image']['error'] === UPLOAD_ERR_OK) {

            $file = $_FILES['video']['tmp_name'];
            $filep = $_FILES['image']['tmp_name'];
            $time = $_POST['position'];
            $position = msToTime($time);
            $filetime = round(microtime(true) * 1000);
            $outputfolder = 'teletemp/';

            $ID = $_POST['ID'];

            $tempvid = $ID . 'tempvid' . 'mp4';
            $finalvid = $ID . $filetime . 'mp4';

            $ffmpegpath = "public/ffmpeg.exe";

            echo "Starting ffmpeg...\n\n";
            echo shell_exec("$ffmpegpath -loop 1 -i $filep -c:v libx264 -t 3 -pix_fmt yuv420p \"$outputfolder.\" $tempvid /");
            echo shell_exec("$ffmpegpath -i $file -t $position -c copy \"$outputfolder\" small-1.mp4 -ss $position -codec copy \"$outputfolder\" small-2.mp4 ");
            echo shell_exec("$ffmpegpath -i small-1.mp4 -i $tempvid.mp4 -i small-2.mp4 \
                -filter_complex \"[0:v:0][1:v:0][2:v:0]concat=n=3:v=1:a=1[outv]\" \
                -map \"[outv]\" $finalvid");
            echo "Done.\n";

            $upload = new videouploads();

            $desc = 'telestrated video for ' . $ID . $filetime;
            $ID = $_POST['ID'];

            if ($upload->saveVideoFile($finalvid, getFileExtension($finalvid), $desc, $ID)) {
                $response['error'] = false;
                $response['message'] = 'File Uploaded Successfullly';
            } else {
                $response['error'] = true;
                $response['message'] = 'Required parameters are not available';
            }
            echo json_encode($response);
        }
    });

    function getFileExtension($file)
    {
        $path_parts = pathinfo($file);
        return $path_parts['extension'];
    }

    function msToTime($duration)
    {
        $seconds = floor($duration / 1000);
        $minutes = floor($seconds / 60);
        $hours = floor($minutes / 60);
        $milliseconds = $duration % 1000;
        $seconds = $seconds % 60;
        $minutes = $minutes % 60;

        $format = '%02u:%02u:%02u.%03u';
        $time = sprintf($format, $hours, $minutes, $seconds, $milliseconds);
        return rtrim($time, '0');
    }

    So it's not running; here is my Postman output. How do I get it to actually run FFmpeg?

    Starting ffmpeg...

    Done.

    Notice: Undefined index: extension in C:\xampp\htdocs\Pathways\public\index.php on line 14741
    {"error":false,"message":"File Uploaded Successfullly"}

  • Why is audio stream extraction slow? (compared to AviSynth)

    6 May 2019, by LLL

    I want to extract the audio stream of an AVI file as a WAV file. It works, but it is really slow (4-5 fps), even though I just want to copy the stream.

    Here is the type of stream I want to extract (ffprobe info):
    Stream #0:1: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s

    Going through AviSynth does it about 100 times faster, but I would prefer a pure FFmpeg solution. Why such a speed difference? It looks like FFmpeg is reading and processing the whole file, whereas AviSynth can just extract the data without reading it.

    Example:
    ffmpeg -i file.avi -vn -ac 2 -c:a copy audio.wav
    or
    ffmpeg -i file.avi -map 0:a -ac 2 -c:a copy audio.wav
    Both work fine but take time.

    Using an AviSynth script as input:
    ffmpeg -i script.avs -map 0:a -ac 2 -c:a copy audio.wav
    with script.avs containing just:
    AviSource("file.avi")
    does the same, but almost instantaneously!

    Any idea why AviSynth is so much faster, and if there is a way to get the same speed in FFmpeg?

    Edit: adding logs.
    Using FFmpeg directly:

    E:\>ffmpeg -i "file.avi" -map 0:a -c:a copy -y -benchmark "output.wav"
    ffmpeg version N-92936-ged3b64402e Copyright (c) 2000-2019 the FFmpeg developers
     built with gcc 8.2.1 (GCC) 20181201
     configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
     libavutil      56. 25.100 / 56. 25.100
     libavcodec     58. 43.100 / 58. 43.100
     libavformat    58. 25.100 / 58. 25.100
     libavdevice    58.  6.101 / 58.  6.101
     libavfilter     7. 47.100 /  7. 47.100
     libswscale      5.  4.100 /  5.  4.100
     libswresample   3.  4.100 /  3.  4.100
     libpostproc    55.  4.100 / 55.  4.100
    [avi @ 0000018d3c38a680] non-interleaved AVI
    Guessed Channel Layout for Input Stream #0.1 : stereo
    Input #0, avi, from 'file.avi':
     Duration: 00:18:37.49, start: 0.000000, bitrate: 534682 kb/s
       Stream #0:0: Video: rawvideo, bgr24, 1280x720, 533183 kb/s, 24.11 fps, 24.11 tbr, 24.10 tbn, 24.10 tbc
       Stream #0:1: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
    Output #0, wav, to 'output.wav':
     Metadata:
       ISFT            : Lavf58.25.100
       Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
    Stream mapping:
     Stream #0:1 -> #0:0 (copy)
    Press [q] to stop, [?] for help
    size=  192445kB time=00:18:37.12 bitrate=1411.2kbits/s speed=4.77x
    video:0kB audio:192445kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000040%
    bench: utime=1.188s stime=50.766s rtime=234.254s
    bench: maxrss=17468kB

    Using AviSynth:

    E:\>ffmpeg -i "soundout.avs" -map 0:a -c:a copy -y -benchmark "output.wav"
    ffmpeg version N-92936-ged3b64402e Copyright (c) 2000-2019 the FFmpeg developers
     built with gcc 8.2.1 (GCC) 20181201
     configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
     libavutil      56. 25.100 / 56. 25.100
     libavcodec     58. 43.100 / 58. 43.100
     libavformat    58. 25.100 / 58. 25.100
     libavdevice    58.  6.101 / 58.  6.101
     libavfilter     7. 47.100 /  7. 47.100
     libswscale      5.  4.100 /  5.  4.100
     libswresample   3.  4.100 /  3.  4.100
     libpostproc    55.  4.100 / 55.  4.100
    Guessed Channel Layout for Input Stream #0.1 : stereo
    Input #0, avisynth, from 'soundout.avs':
     Duration: 00:18:37.49, start: 0.000000, bitrate: N/A
       Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24, 1280x720, 24.11 fps, 24.11 tbr, 24.10 tbn, 24.10 tbc
       Stream #0:1: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
    Output #0, wav, to 'output.wav':
     Metadata:
       ISFT            : Lavf58.25.100
       Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
    Stream mapping:
     Stream #0:1 -> #0:0 (copy)
    Press [q] to stop, [?] for help
    size=  192445kB time=00:18:37.11 bitrate=1411.2kbits/s speed= 155x
    video:0kB audio:192445kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000040%
    bench: utime=0.234s stime=1.047s rtime=7.236s
    bench: maxrss=23792kB