
Media (91)

Other articles (54)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources, in standalone form.
    As with the previous version, all software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, further modifications will also be required (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources, in standalone form.
    For a working installation, all software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, further modifications will also be required (...)

  • Permissions overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

On other sites (5183)

  • How to convert a video and audio file to be smoothly played via the Media Source Extensions API?

    4 October 2018, by Aman

    I have built a web video player using the Media Source Extensions API and have been testing it with local video and audio files on my PC. Everything works perfectly except that the video keeps buffering. I’m playing a 4K 60 fps video that I downloaded from YouTube. My PC does not have a 4K display, but the same video plays smoothly in YouTube and in VLC Media Player, so I’m surprised that my Media Source Extensions player buffers even though the video and audio files are not being retrieved over the network. I assume the video and audio files I’m using are causing the problem, so I will first explain how I created them:

    1. I downloaded the video from https://www.youtube.com/watch?v=KaCQ8SQ6ZHQ&t=3s using the 4K Video Downloader (https://www.4kdownload.com/products/product-videodownloader).

    2. I converted the mkv file (for me, the 4K Video Downloader only offers the 4K 60 fps video in mkv format) to mp4 using ffmpeg: ffmpeg -i test.mkv -codec copy test.mp4.

    3. I scaled test.mp4 from 3840x1632 to my preferred 4K resolution of 3840x2160 using ffmpeg: ffmpeg -i test.mp4 -s 3840x2160 -c:a copy test_changed.mp4. (NOT SO IMPORTANT)

    4. I separated test_changed.mp4 into video.mp4 (the video track) and audio.mp4 (the audio track) using MP4Box: video: MP4Box -single 1 test_changed.mp4 -out video.mp4; audio: MP4Box -single 2 test_changed.mp4 -out audio.mp4.

    5. I split video.mp4 and audio.mp4 into 30 parts each, every part containing 5 seconds, so I end up with (video_1.mp4, audio_1.mp4), (video_2.mp4, audio_2.mp4), ..., (video_30.mp4, audio_30.mp4). I ran ffmpeg once per part, specifying each time range by hand (a loop that automates this is sketched after the list):

      [For Part 1 : ffmpeg -ss 00:00:00 -to 00:00:05 -i video.mp4 video_1.mp4, ffmpeg -ss 00:00:00 -to 00:00:05 -i audio.mp4 audio_1.mp4],

      [For Part 2 : ffmpeg -ss 00:00:05 -to 00:00:10 -i video.mp4 video_2.mp4, ffmpeg -ss 00:00:05 -to 00:00:10 -i audio.mp4 audio_2.mp4],

      [For Part 3 : ffmpeg -ss 00:00:10 -to 00:00:15 -i video.mp4 video_3.mp4, ffmpeg -ss 00:00:10 -to 00:00:15 -i audio.mp4 audio_3.mp4],

      .....

      [For Part 29 : ffmpeg -ss 00:02:20 -to 00:02:25 -i video.mp4 video_29.mp4, ffmpeg -ss 00:02:20 -to 00:02:25 -i audio.mp4 audio_29.mp4],

      [For Part 30 : ffmpeg -ss 00:02:25 -to 00:02:30 -i video.mp4 video_30.mp4, ffmpeg -ss 00:02:25 -to 00:02:30 -i audio.mp4 audio_30.mp4].
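
    For reference, a small bash loop could generate all 30 parts instead of running each command by hand (a sketch; the file names and flag placement mirror the commands above):

      #!/bin/bash
      # Split video.mp4 and audio.mp4 into 30 five-second parts,
      # producing video_1.mp4 ... video_30.mp4 and audio_1.mp4 ... audio_30.mp4.
      for i in $(seq 0 29); do
          start=$(( i * 5 )); end=$(( start + 5 ))
          ss=$(printf '00:%02d:%02d' $(( start / 60 )) $(( start % 60 )))
          to=$(printf '00:%02d:%02d' $(( end / 60 )) $(( end % 60 )))
          ffmpeg -ss "$ss" -to "$to" -i video.mp4 "video_$(( i + 1 )).mp4"
          ffmpeg -ss "$ss" -to "$to" -i audio.mp4 "audio_$(( i + 1 )).mp4"
      done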

    6. I fragmented each of the video and audio parts using MP4Box (as far as I know, only fragmented mp4 files can be played via the Media Source Extensions API); a loop automating this step is sketched after the list:

      [For Part 1 : MP4Box -dash 1000 -rap -frag-rap video_1.mp4, MP4Box -dash 1000 -rap -frag-rap audio_1.mp4],

      [For Part 2 : MP4Box -dash 1000 -rap -frag-rap video_2.mp4, MP4Box -dash 1000 -rap -frag-rap audio_2.mp4],

      [For Part 3 : MP4Box -dash 1000 -rap -frag-rap video_3.mp4, MP4Box -dash 1000 -rap -frag-rap audio_3.mp4],

      .....

      [For Part 29 : MP4Box -dash 1000 -rap -frag-rap video_29.mp4, MP4Box -dash 1000 -rap -frag-rap audio_29.mp4],

      [For Part 30 : MP4Box -dash 1000 -rap -frag-rap video_30.mp4, MP4Box -dash 1000 -rap -frag-rap audio_30.mp4].
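
    Again, a bash loop (a sketch mirroring the commands above) can fragment all 30 parts:

      # Fragment every split part for MSE playback, using the same flags as above.
      for i in $(seq 1 30); do
          MP4Box -dash 1000 -rap -frag-rap "video_${i}.mp4"
          MP4Box -dash 1000 -rap -frag-rap "audio_${i}.mp4"
      done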

    7. For each input I receive a fragmented file with "_dashinit" in its name, e.g. for part 1: video_1_dashinit.mp4 and audio_1_dashinit.mp4. These are the files I’m playing through the Media Source Extensions API.

    So I’m appending these files into my SourceBuffers and playing the video. I have provided a test.zip file here (https://drive.google.com/file/d/1tyPBTxgpS601Xs5VEWznYhWw9PwhMHsB/view?usp=sharing) containing the test sample.

    I’m using this command to run Chrome and test my files: chrome.exe --allow-file-access-from-files

    Anyone can use this test sample and check whether the video buffers for them too. Please comment on anything I’m doing wrong, or help me construct better 5-second video and audio files for MSE playback. Thanks!

  • Can’t find NDK camera and media native API symbols when linking libavdevice.a to libffmpeg.so

    4 September 2018, by jianwen

    I’m using the NDK tools to build an ffmpeg shared library for use in my
    Android RTSP project. All needed components are compiled and linked as
    separate static libs, and at the end these libs are linked into a single
    shared lib. Everything goes well except the last step: when linking
    libavdevice, none of the NDK camera and media symbols can be found.
    Error log:

    libavdevice/android_camera.c:702: error: undefined reference to 'ACameraCaptureSession_stopRepeating'
    libavdevice/android_camera.c:706: error: undefined reference to 'ACameraCaptureSession_close'
    libavdevice/android_camera.c:711: error: undefined reference to 'ACaptureRequest_removeTarget'
    libavdevice/android_camera.c:712: error: undefined reference to 'ACaptureRequest_free'
    libavdevice/android_camera.c:717: error: undefined reference to 'ACameraOutputTarget_free'
    libavdevice/android_camera.c:722: error: undefined reference to 'ACaptureSessionOutputContainer_remove'
    libavdevice/android_camera.c:724: error: undefined reference to 'ACaptureSessionOutput_free'
    libavdevice/android_camera.c:729: error: undefined reference to 'ANativeWindow_release'
    libavdevice/android_camera.c:734: error: undefined reference to 'ACaptureSessionOutputContainer_free'
    libavdevice/android_camera.c:739: error: undefined reference to 'ACameraDevice_close'
    libavdevice/android_camera.c:744: error: undefined reference to 'AImageReader_delete'
    libavdevice/android_camera.c:749: error: undefined reference to 'ACameraMetadata_free'
    libavdevice/android_camera.c:756: error: undefined reference to 'ACameraManager_delete'
    libavdevice/android_camera.c:172: error: undefined reference to 'ACameraDevice_getId'
    libavdevice/android_camera.c:163: error: undefined reference to 'ACameraDevice_getId'
    libavdevice/android_camera.c:392: error: undefined reference to 'AImageReader_acquireLatestImage'
    libavdevice/android_camera.c:483: error: undefined reference to 'AImage_delete'
    libavdevice/android_camera.c:345: error: undefined reference to 'AImage_getPlanePixelStride'
    libavdevice/android_camera.c:346: error: undefined reference to 'AImage_getPlaneData'
    ...

    Here is my build script, which is run on my Windows 7 x86_64 PC.

    #!/bin/bash
    export TMPDIR=D:/other/AndroidDevelopment/ffmpeg-4.0.2/ffmpegtemp
    NDK=D:/software/app/android_sdk/ndk-bundle
    SYSROOT=$NDK/platforms/android-28/arch-x86_64/
    TOOLCHAIN=$NDK/toolchains/x86_64-4.9/prebuilt/windows-x86_64
    CPU=x86_64
    PREFIX=./android/$CPU

    function build_one
    {
       ./configure \
       --prefix=$PREFIX \
       --enable-static \
       --enable-jni \
       --enable-pthreads \
       --enable-mediacodec \
       --disable-asm \
       --disable-shared \
       --disable-doc \
       --disable-ffmpeg \
       --disable-ffplay \
       --disable-ffprobe \
       --disable-doc \
       --disable-symver \
       --cross-prefix=$TOOLCHAIN/bin/x86_64-linux-android- \
       --target-os=android \
       --arch=x86_64 \
       --enable-cross-compile \
       --sysroot=$SYSROOT \
       --extra-cflags=" -isysroot $NDK/sysroot  -I$NDK/sysroot/usr/include/x86_64-linux-android" \
       --extra-ldflags=-pie
    make clean
    make -j4
    make install

    $TOOLCHAIN/bin/x86_64-linux-android-ld \
    -rpath-link=$SYSROOT/usr/lib64 \
    -L$SYSROOT/usr/lib64 \
    -L$PREFIX/lib \
    -soname libffmpeg.so -shared -nostdlib -Bsymbolic --whole-archive --no-undefined -o \
    $PREFIX/libffmpeg.so \
    libavcodec/libavcodec.a \
    libavfilter/libavfilter.a \
    libswresample/libswresample.a \
    libavformat/libavformat.a \
    libavutil/libavutil.a \
    libswscale/libswscale.a \
    libavdevice/libavdevice.a \
    -lc -lm -lz -ldl -llog --dynamic-linker=/system/bin/linker \
    $TOOLCHAIN/lib/gcc/x86_64-linux-android/4.9.x/libgcc.a \
    }
    build_one
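
    For context, a hedged observation on where these symbols live: the ACamera*/ACaptureRequest*/ACaptureSession* symbols are normally provided by the NDK’s libcamera2ndk, the AImage*/AImageReader* symbols by libmediandk, and ANativeWindow_release by libandroid, so the ld invocation above would likely need those platform libraries added to its -l list, along the lines of:

      # Sketch: extend the -l list in the ld command above.
      # libcamera2ndk.so and libmediandk.so should sit in $SYSROOT/usr/lib64
      # for platform levels that ship the camera/media NDK APIs (android-24+).
      -lc -lm -lz -ldl -llog -lcamera2ndk -lmediandk -landroid --dynamic-linker=/system/bin/linker \
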
  • How to seek mp4 aac audio using Media Source Extensions

    29 August 2018, by Chris

    Please can someone offer me a few pointers on seeking within streamed AAC audio in mp4 containers? I’m trying to develop a music download service that sips data via ranged requests rather than simply linking to an mp4 file as an <audio> element’s src (which would instead buffer the whole file as quickly as possible, and so be rather wasteful and expensive).

    So far I’ve managed to append sequential audio range buffers to the SourceBuffer object using partial/ranged requests, attached to my suitably mime-typed MediaSource object. But as soon as I try to seek, the wheels come off and I receive a ’CHUNK_DEMUXER_ERROR_APPEND_FAILED’ error with the specific issue ’stream parsing failed’.

    I’ve prepared my mp4 files by encoding them with ffmpeg (via the fluent-ffmpeg module), rewriting the movie header box (moov) to the start of the file (via the -movflags faststart setting) so that the duration can be parsed. I then fragment the file with mp4fragment (part of the Bento4 tools) using the default settings, and check that the structure of the file matches ISO BMFF, with pairs of movie fragment and data boxes (moof/mdat) describing the audio stream. Given that the source buffer has no problem playing from the beginning, with contiguous subsequent ranges, the format of the mp4 file appears to be acceptable.
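
    For reference, the preparation pipeline described above condenses to two commands (a sketch; input.wav and the 128k bitrate are illustrative assumptions, while the flags are the ones named in the post):

      # Encode to AAC in mp4 with the moov moved to the front, then fragment with Bento4.
      ffmpeg -i input.wav -c:a aac -b:a 128k -movflags faststart audio_faststart.mp4
      mp4fragment audio_faststart.mp4 audio_fragmented.mp4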

    As an aside, I’ve tried fragmenting the file entirely in ffmpeg/fluent-ffmpeg (using the ’-movflags empty_moov+default_base_moof’ options). While this works, it also removes the duration from the moov as you’d expect, so the reported duration grows during playback as more fragments are fetched and appended. If I set the file duration manually, I still cannot seek into unbuffered audio, so I only seem to be making life harder by fragmenting the file solely in ffmpeg.

    So how should I go about seeking within the stream? I gather that seeking effectively ’needle-drops’ at a random point, so the source buffer might struggle to parse the data out of context, but I imagined it would skip to the next available fragment within the range I fetch. (That range is calculated from the seek bar: the percentage of the bar width sets player.currentTime, which I convert from seconds to bytes using the 128 kbps CBR figure, i.e. roughly 16,000 bytes per second, and send as a 206 partial range request.)

    I’ve seen mention of buffer offsets, but I don’t understand how they apply. Most of the dev examples I’ve seen focus on whole files or segmented video rather than on seeking within a single fragmented audio file. Do I need to somehow retain a portion of the data from the moov box when seeking, so that the source buffer can parse it? In the trun box I have a data offset that alternates between 444 and 448 throughout the file, depending on whether the sample count is 86 or 87; I’m not sure why it isn’t consistent.

    Here’s what the moov looks like from my audio file:

    [ftyp] size=8+24
     major_brand = isom
     minor_version = 200
     compatible_brand = isom
     compatible_brand = iso2
     compatible_brand = mp41
     compatible_brand = iso5
    [moov] size=8+620
     [mvhd] size=12+96
       timescale = 1000
       duration = 350047
       duration(ms) = 350047
     [trak] size=8+448
       [tkhd] size=12+80, flags=7
         enabled = 1
         id = 1
         duration = 350047
         width = 0.000000
         height = 0.000000
       [edts] size=8+28
         [elst] size=12+16
           entry count = 1
           entry/segment duration = 350000
           entry/media time = 2048
           entry/media rate = 1
       [mdia] size=8+312
         [mdhd] size=12+20
           timescale = 44100
           duration = 0
           duration(ms) = 0
           language = und
         [hdlr] size=12+41
           handler_type = soun
           handler_name = Bento4 Sound Handler
         [minf] size=8+219
           [smhd] size=12+4
             balance = 0
           [dinf] size=8+28
             [dref] size=12+16
               [url ] size=12+0, flags=1
                 location = [local to file]
           [stbl] size=8+159
             [stsd] size=12+79
               entry-count = 1
               [mp4a] size=8+67
                 data_reference_index = 1
                 channel_count = 2
                 sample_size = 16
                 sample_rate = 44100
                 [esds] size=12+27
                   [ESDescriptor] size=2+25
                     es_id = 0
                     stream_priority = 0
                     [DecoderConfig] size=2+17
                       stream_type = 5
                       object_type = 64
                       up_stream = 0
                       buffer_size = 0
                       max_bitrate = 128006
                       avg_bitrate = 128006
                       DecoderSpecificInfo = 12 10
                     [Descriptor:06] size=2+1
             [stts] size=12+4
               entry_count = 0
             [stsc] size=12+4
               entry_count = 0
             [stsz] size=12+8
               sample_size = 0
               sample_count = 0
             [stco] size=12+4
               entry_count = 0
     [mvex] size=8+48
       [mehd] size=12+4
         duration = 350047
       [trex] size=12+20
         track id = 1
         default sample description index = 1
         default sample duration = 0
         default sample size = 0
         default sample flags = 0

    And here’s a typical fragment:

    [moof] size=8+428
     [mfhd] size=12+4
       sequence number = 1
     [traf] size=8+404
       [tfhd] size=12+8, flags=20008
         track ID = 1
         default sample duration = 1024
       [tfdt] size=12+8, version=1
         base media decode time = 0
       [trun] size=12+352, flags=201
         sample count = 86
         data offset = 444
    [mdat] size=8+32653

    Does that all look good? Any pointers for seeking within such a file would be hugely appreciated. Thanks!
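
    One possible direction (a sketch, assuming MP4Box/GPAC is available; exact output names vary by version): instead of computing byte ranges from the CBR bitrate, let MP4Box cut the audio into an init segment plus fixed-duration media segments, so a seek maps to a segment number; after a seek, re-append the init segment and then the segment covering player.currentTime.

      # Sketch: init segment plus 5-second media segments (seg_1.m4s, seg_2.m4s, ...).
      MP4Box -dash 5000 -rap -segment-name seg_ -out audio.mpd audio.mp4
      # A seek to t seconds then fetches segment floor(t / 5) + 1
      # rather than a byte range derived from the 128 kbps figure.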