
Other articles (88)

  • Customize by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Organize by category

    17 May 2013, by

    In MédiaSPIP, a rubrique goes by two names: category and rubrique.
    The documents stored in MédiaSPIP can be filed under different categories. You can create a category by clicking "publier une catégorie" (publish a category) in the publish menu at the top right (after logging in). A category can itself be filed under another category, so you can build a tree of categories.
    The next time you publish a document, the newly created category will be offered (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

On other sites (7217)

  • Displaying YUV420 data using Opengles shader is too slow

    28 November 2012, by user1278982

    I have a child thread called A that decodes video using ffmpeg on an iPhone 3GS, and another thread called B that displays the YUV data. In thread B, I use glTexSubImage2D to upload the Y, U and V textures and then convert the YUV data to RGB in a shader, but the frame rate in the decode thread is only 15 fps. Why?

    Update:
    The frame size is 720 * 576.
    I also found something interesting: if I don't start the thread that displays the YUV data, the frame rate calculated in the decode thread is 22 fps, otherwise it is 15 fps. So I think my display method must be inefficient. The code is below.

    I have a callback in the decode thread:

    typedef struct _DVDVideoPicture
    {
      char *plane[4];     // pointers to the Y, U and V planes (plane[3] unused for YUV420)
      int iLineSize[4];   // stride of each plane in bytes
    } DVDVideoPicture;

    // Called on the decode thread for every decoded frame.
    void YUVCallBack(void *pYUVData, void *pContext)
    {
      VideoView *view = (VideoView *)pContext;
      [view.glView copyYUVData:(DVDVideoPicture *)pYUVData];   // hand the planes to the GL view
      [view calculateFrameRate];
    }

    The copyYUVData method extracts the Y, U and V planes separately. The following is the display-thread method.
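
    The display-thread method itself is cut off in this excerpt. Purely as an illustration of the per-plane glTexSubImage2D upload described above, reusing the DVDVideoPicture struct, with hypothetical helper names (uploadPlane, uploadYUV) and three GL_LUMINANCE textures assumed to have been allocated at init time, it could look roughly like this:

    #include <OpenGLES/ES2/gl.h>

    // Assumes yTex was created with glTexImage2D(..., GL_LUMINANCE, width, height, ...)
    // and uTex/vTex at half size, as is usual for YUV420 data.
    static void uploadPlane(GLenum unit, GLuint tex, int w, int h, const char *data)
    {
      glActiveTexture(unit);
      glBindTexture(GL_TEXTURE_2D, tex);
      glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
      // Rows must be tightly packed: if iLineSize[i] != width, the plane has to be
      // repacked first, because OpenGL ES 2.0 has no GL_UNPACK_ROW_LENGTH.
      glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                      GL_LUMINANCE, GL_UNSIGNED_BYTE, data);
    }

    static void uploadYUV(const DVDVideoPicture *pic, GLuint yTex, GLuint uTex,
                          GLuint vTex, int width, int height)
    {
      uploadPlane(GL_TEXTURE0, yTex, width,     height,     pic->plane[0]);
      uploadPlane(GL_TEXTURE1, uTex, width / 2, height / 2, pic->plane[1]);
      uploadPlane(GL_TEXTURE2, vTex, width / 2, height / 2, pic->plane[2]);
    }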

  • ffmpeg concat demuxer dropping audio after first clip

    6 September 2020, by marcman

    I'm trying to concatenate a collection of videos that should all be the same type and format. It seems to work as expected when the original sources are the same type, but when they're different types the demuxer drops the audio. I understand that the demuxer requires all inputs to have the same codecs, but I believe I am already doing that.

    


    This is my workflow (pseudocode, with a Python-like for loop):

    


    for i, video in enumerate(all_videos):
        # Call command for transcoding and filtering
        # I allow this command to be called on mp4, mov, and avi files
        # The point of this filter is:
        # (1) to superimpose a timestamp on the bottom right of the video
        # (2) to scale and pad the videos to a common output resolution (the specific numbers below are just copied from a video I ran, but they are filled in automatically for each given video by the rest of my script)
        # (3) To transcode all videos to the same common format
        ffmpeg \
            -y \
            -loglevel quiet \
            -stats \
            -i video_<i>.{mp4, mov, avi} \
            -vcodec libx264 \
            -acodec aac \
            -vf "scale=607:1080, pad=width=1920:height=1080:x=656:y=0:color=black, drawtext=expansion=strftime: basetime=$(date +%s -d'2020-08-27 16:42:26')000000 : fontcolor=white : text='%^b %d, %Y%n%l\\:%M%p' : fontsize=36 : y=1080-4*lh : x=1263-text_w-2*max_glyph_w" \
            tmpdir/video_<i>.mp4

    # create file_list.txt, e.g.
    #
    # file '/abspath/to/tmpdir/video_1.mp4'
    # file '/abspath/to/tmpdir/video_2.mp4'
    # file '/abspath/to/tmpdir/video_3.mp4'
    # ...

    ffmpeg \
        -f concat \
        -safe 0 \
        -i file_list.txt \
        -c copy \
        all_videos.mp4


    In my test case, my inputs are 3 videos in this order:

    1. a camcorder video output in H.264+AAC in an mp4
    2. an iPhone video in mov format
    3. an iPhone video in mp4 format

    When I review each of the intermediate mp4-transcoded videos in tmpdir, they all play back audio and video just fine and the filtering works as expected. However, when I review the final concatenated output, only the first clip (the camcorder video) has sound. When all the videos are from the camcorder, there is no audio issue: they all have sound.


    When I output ffmpeg warnings and errors, the only thing that shows up is an expected warning about the timestamp:


    [mp4 @ 0x55997ce85300] Non-monotonous DTS in output stream 0:1; previous: 5909504, current: 5430298; changing to 5909505. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x55997ce85300] Non-monotonous DTS in output stream 0:1; previous: 5909505, current: 5431322; changing to 5909506. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x55997ce85300] Non-monotonous DTS in output stream 0:1; previous: 5909506, current: 5432346; changing to 5909507. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x55997ce85300] Non-monotonous DTS in output stream 0:1; previous: 5909507, current: 5433370; changing to 5909508. This may result in incorrect timestamps in the output file.
    ...


    What might I be doing wrong here? I'm testing with both the Ubuntu 20.04 "Videos" application and VLC Player, and both show the same problem. I'd prefer to use the demuxer if possible, since re-encoding during concatenation is quite expensive.


    NOTE: This is a different issue from the one laid out here, in which some of the videos had no audio. In my case, all videos have both video and audio.
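
    The concat demuxer with -c copy expects every clip listed in file_list.txt to carry identical audio parameters (codec, sample rate, channel layout). As a rough sketch rather than a verified fix, with file names and the 48 kHz stereo target chosen only for illustration, those parameters can be checked with ffprobe and pinned during the per-clip transcode:

    # Inspect the audio stream of an intermediate clip; every clip should
    # report the same codec_name, sample_rate and channel layout.
    ffprobe -v error -select_streams a:0 \
        -show_entries stream=codec_name,sample_rate,channels,channel_layout \
        -of default=noprint_wrappers=1 tmpdir/video_1.mp4

    # Force a common audio sample rate and channel count while producing each
    # intermediate clip, so that "-f concat ... -c copy" sees matching streams.
    ffmpeg -y -i video_1.mov \
        -vcodec libx264 \
        -acodec aac -ar 48000 -ac 2 \
        tmpdir/video_1.mp4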


  • How do I convert a .wav file to 16-bit 44.1 kHz using ffmpeg or another utility [closed]

    26 May 2023, by Seth Edwards

    A preface: I am building an environment for my own streaming box. While building the UI, I turned to the now-obsolete MSNTV box for its UI sound effects.


    I found the dump on GitHub, downloaded it, and located where the sounds were stored.


    I listened to them one by one and noticed that they are WAV files, but they sound low quality and may have been compressed before being turned into WAV files.


    I was using the Apple Files app on an iPhone 6s running iOS 15.7.1.


    They play back fine.


    I tried importing them into GarageBand for iOS, and it gave me an error saying that it only allows 16-bit 44.1 kHz files. This confirmed my suspicion that they are low quality.


    I then tried playing them on a Dell Chromebook 3100 running ChromeOS. Chrome’s player would also not play the files.


    I need to find out how to convert them to 16-bit 44.1 kHz WAV files.


    My guess is that since the MSNTV had a small amount of storage space, they compressed the audio.


    I tried converting the files to MP3, and they are noticeably worse.


    Does anyone know how to convert these files so they can be played back normally?


    In the end I plan to use these files and play them using the pygame library.


    I have tried changing the metadata and converting to MP3.
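
    For reference, a conversion like this is normally a single ffmpeg command; the file names below are placeholders:

    # Decode the source WAV (whatever its original encoding) and re-encode it
    # as 16-bit signed PCM at a 44.1 kHz sample rate.
    ffmpeg -i msntv_sound.wav -c:a pcm_s16le -ar 44100 sound_16bit_44k.wav

    A 16-bit 44.1 kHz WAV produced this way should also load cleanly with pygame.mixer.Sound.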