
Other articles (74)

  • User profiles

    12 April 2011, by

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialised; it is visible only when the visitor is logged in on the site.
    The user can also edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administrer" section of the site.
    From there, in the navigation menu, you can reach a "Gestion des langues" section where support for new languages can be enabled.
    Each newly added language can still be disabled as long as no object has been created in that language; once one has, the language is greyed out in the configuration and (...)

  • A selection of projects using MediaSPIP

    29 April 2011, by

    The examples cited below are representative of specific uses of MediaSPIP by particular projects.
    Do you think you have built a "remarkable" site with MediaSPIP? Let us know here.
    Ferme MediaSPIP @ Infini
    The Infini association runs reception activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique role in this area (...)

On other sites (7111)

  • How to segment a video and then concatenate back into original one with ffmpeg

    23 December 2016, by steve

    I am researching distributed video transcoding with FFmpeg. I have found a good script at https://github.com/nergdron/dve/blob/master/dve.

    The script mainly uses FFmpeg's segment muxer and concat demuxer. I want to do a simple test first. However, I cannot split the video into segments and then concatenate them back into the original video (with the same codec). I have tried the following commands:

    a. Chunk the video:

    ffmpeg -fflags +genpts -i Test.avi -map 0 -codec copy -f segment -segment_format avi -v error chunk-%03d.seg

    b. Build the chunk list:

    #!/bin/bash -e
    set -e
    echo "ffconcat version 1.0" > concat.txt
    for f in `ls chunk-*.seg | sort`; do
        echo "file $f" >> concat.txt
    done

    c. Concatenate the chunks:

    ffmpeg  -y -v error -i concat.txt -f concat -map 0 -c copy -f avi output.avi

    Then, when I run ffprobe on the resulting file, I get the following message, which says it is a non-interleaved AVI:

       ffprobe version N-82301-g1bbb18f Copyright (c) 2007-2016 the FFmpeg developers
     built with gcc 5.4.0 (GCC)
     configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-libebur128 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
     libavutil      55. 35.100 / 55. 35.100
     libavcodec     57. 66.101 / 57. 66.101
     libavformat    57. 57.100 / 57. 57.100
     libavdevice    57.  2.100 / 57.  2.100
     libavfilter     6. 66.100 /  6. 66.100
     libswscale      4.  3.100 /  4.  3.100
     libswresample   2.  4.100 /  2.  4.100
     libpostproc    54.  2.100 / 54.  2.100
    [avi @ 00000000028e3700] non-interleaved AVI
    Input #0, avi, from 'output.avi':
     Metadata:
       encoder         : Lavf57.57.100
     Duration: 74:43:47.82, start: 0.000000, bitrate: 17 kb/s
       Stream #0:0: Video: mpeg4 (Advanced Simple Profile) (XVID / 0x44495658), yuv420p, 640x368 [SAR 1:1 DAR 40:23], 23.98 fps, 23.98 tbr, 23.98 tbn, 23.98 tbc
       Stream #0:1: Audio: mp3 (U[0][0][0] / 0x0055), 48000 Hz, stereo, s16p, 112 kb/s

    I have tried a few other things without success. Any help would be greatly appreciated. Thanks in advance!
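
    For what it's worth, the concat demuxer normally has to be selected as an input format, i.e. -f concat has to appear before the -i it applies to; otherwise the list file is probed by whatever demuxer happens to match it. A minimal sketch of the re-assembly step under that assumption, reusing the concat.txt built above:

        # select the concat demuxer for the list file, then stream-copy everything into one AVI
        ffmpeg -y -v error -f concat -i concat.txt -map 0 -c copy output.avi

    If the chunk file names ever contain characters the demuxer considers unsafe, adding -safe 0 before -i is usually needed as well.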

  • Live video broadcasting in android using front and back facing cameras [on hold]

    1 December 2016, by Jun Kim

    This is the first time I am posting a question on Stack Overflow.

    I am currently developing an Android application. My goal is to make an app that captures and records video and broadcasts it to other mobile devices as well as to web browsers, like Facebook Live.
    (Nowadays, well-known apps such as YouTube, Facebook, Twitch and Periscope have this feature.)

    Right now, I am researching which technologies and approaches to use to develop this app.

    For the past five weeks I have been researching and reading a lot of blog posts and documentation about FFmpeg, different types of streaming technologies, types of web servers, etc. I have decided to use MPEG-DASH for my app and an nginx server (nginx-rtmp-module).

    While searching further, I got stuck and confused about how I could capture and record video using the internal camera of Android mobile devices.

    I was thinking that I could use MediaRecorder to capture and record (I could hardly find an example that uses MediaRecorder to record video) and then use FFmpeg
    to encode it with particular video and audio codecs into a container format and send it to the nginx server to broadcast to other devices.
    But I am not sure whether this is the right way to do it.

    My question is: I would like to know the whole process (in detail if possible) of how I can record video and broadcast it to other devices.

    The whole process of recording video using the camera inside the user's mobile device and sending the frames to the server to broadcast to other devices is difficult for me to picture. Can anyone elaborate?

    Or do you have any suggestions about how I can reach my goal?
    Any help would be appreciated.
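
    As an illustration of the server-side half of such a pipeline, here is a minimal sketch of pushing an already-recorded file to an nginx-rtmp-module ingest point with FFmpeg; the server URL, stream key and input file name are hypothetical placeholders:

        # read the file at its native frame rate and push it over RTMP to the nginx-rtmp ingest application
        ffmpeg -re -i recorded.mp4 \
            -c:v libx264 -preset veryfast -b:v 1500k \
            -c:a aac -b:a 128k \
            -f flv rtmp://example-server/live/stream-key

    The on-device capture side (MediaRecorder, or the Camera2 API feeding an encoder) is a separate problem; the command above only covers getting an already-encoded stream to the server.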

  • crystalhd: Revert back to letting hardware handle packed b-frames

    16 October 2016, by Philip Langdale
    crystalhd: Revert back to letting hardware handle packed b-frames
    

    I’m not sure why, but the mpeg4_unpack_bframes bsf is not
    interacting well with seeking. Looking at the code, it should be
    ok, with possibly one warning shown, but I see it getting stuck
    for an extended period of time after a seek where a packed frame
    is cached to be shown later.

    So, I gave up on that and went back to making the old hardware
    based path work. Turns out that it wasn’t broken except that some
    samples have a 6 byte drop packet which I wasn’t accounting for.

    Now it works again and seeks are good.

    • [DH] libavcodec/crystalhd.c
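
    For context, the bitstream filter mentioned above is what is normally applied on the command line when remuxing DivX-style packed-B-frame MPEG-4 in AVI; a minimal sketch, with placeholder file names:

        # unpack packed B-frames while stream-copying, so each output packet carries a single frame
        ffmpeg -i packed.avi -c copy -bsf:v mpeg4_unpack_bframes unpacked.avi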