Other articles (74)

  • Frequent problems

    10 March 2010, by

    PHP with safe_mode enabled
    One of the main sources of problems stems from the configuration of PHP, in particular having safe_mode enabled.
    The solution would be either to disable safe_mode, or to place the script in a directory accessible to Apache for the site.

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Other interesting software

    13 April 2011, by

    We don’t claim to be the only ones doing what we do, nor do we claim to be the best. We just try to do it well, and to keep getting better.
    The following list covers software that is more or less similar to MediaSPIP, or that tries to do more or less the same thing.
    We don’t know them, we didn’t try them, but you can take a peek.
    Videopress
    Website: http://videopress.com/
    License: GNU/GPL v2
    Source code: (...)

On other sites (10886)

  • Revisiting Nosefart and Discovering GME

    30 May 2011, by Multimedia Mike — Game Hacking

    I found the following screenshot buried deep in an old directory structure of mine:

    [screenshot: an apparent KDE frontend for Nosefart]
    I tried to recall how this screenshot came to exist. Had I actually created a functional KDE frontend to Nosefart yet neglected to release it? I think it’s more likely that I used some designer tool (possibly KDevelop) to prototype a frontend. This would have been sometime in 2000.

    However, this screenshot prompted me to revisit Nosefart.

    Nosefart Background
    Nosefart is a program that can play Nintendo Sound Format (NSF) files. NSF files contain components that were surgically separated from Nintendo Entertainment System (NES) ROM dumps; these components are the music playback engines for various games. An NSF player is a stripped-down emulation system that simulates the NES’s 6502 CPU along with the custom audio hardware (2 square waves, 1 triangle wave, 1 noise generator, and 1 limited digital channel).

    Nosefart was written by Matt Conte and eventually imported into a Sourceforge project, though it has not seen any development since then. The distribution contains standalone command line players for Linux and DOS, a GTK frontend for the Linux command line version, and plugins for Winamp, XMMS, and CL-Amp.

    The Sourceforge project page notes that Nosefart is also part of XBMC. Let the record show that Nosefart is also incorporated into xine (I did that in 2002, I think).

    Upgrading the API
    When I tried running the command line version of Nosefart under Linux, I hit hard against the legacy audio API: OSS. Remember that?

    In fairly short order, I was able to upgrade the CL program to use PulseAudio. The program is not especially sophisticated. It’s a single-threaded affair which checks for a keypress, processes an audio frame, and sends the frame out to the OSS file interface. All that was needed was to rewrite open_hardware() and close_hardware() for PA and then replace the write statement in play(). The only quirk that stood out is that including <pulse/pulseaudio.h> is insufficient for programming PA’s simple API. <pulse/simple.h> must be included separately.
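
    As a rough sketch of what that rewrite looks like (this is not Nosefart’s actual code; the sample spec and the play_frame() signature are assumptions), PA’s simple API reduces the job to three small functions:

        /* Sketch only: mono 16-bit 44.1 kHz is an assumed sample spec.
         * Build: gcc pa_sketch.c $(pkg-config --cflags --libs libpulse-simple) */
        #include <pulse/pulseaudio.h>
        #include <pulse/simple.h>   /* the simple API must be included separately */
        #include <stddef.h>
        #include <stdint.h>

        static pa_simple *pa;

        static int open_hardware(void)
        {
            pa_sample_spec ss = { .format = PA_SAMPLE_S16LE, .rate = 44100, .channels = 1 };
            int error;
            pa = pa_simple_new(NULL, "nosefart", PA_STREAM_PLAYBACK, NULL,
                               "playback", &ss, NULL, NULL, &error);
            return pa ? 0 : -1;
        }

        /* replaces the write() to the OSS device file in the playback loop */
        static void play_frame(const int16_t *buf, size_t bytes)
        {
            int error;
            pa_simple_write(pa, buf, bytes, &error);
        }

        static void close_hardware(void)
        {
            pa_simple_drain(pa, NULL);
            pa_simple_free(pa);
        }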

    For extra credit, I adapted the program to ALSA. The program uses the simplest audio output API possible — just keep filling a buffer and sending it out to the DAC.
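
    The ALSA adaptation has the same shape. A sketch under the same assumed parameters, using alsa-lib’s snd_pcm_set_params() convenience call:

        /* Sketch of the ALSA version: same keep-the-buffer-full pattern.
         * Build: gcc alsa_sketch.c -lasound */
        #include <alsa/asoundlib.h>

        static snd_pcm_t *pcm;

        static int open_hardware(void)
        {
            if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
                return -1;
            /* mono, 16-bit, 44.1 kHz, 0.5 s of buffering -- assumed parameters */
            return snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                                      SND_PCM_ACCESS_RW_INTERLEAVED,
                                      1, 44100, 1, 500000);
        }

        static void play_frame(const short *buf, snd_pcm_uframes_t frames)
        {
            long r = snd_pcm_writei(pcm, buf, frames);
            if (r < 0)
                snd_pcm_recover(pcm, r, 0);   /* e.g. resync after an underrun */
        }

        static void close_hardware(void)
        {
            snd_pcm_drain(pcm);
            snd_pcm_close(pcm);
        }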

    Discovering GME
    I’m not sure what to do with the program now since, during my research into bringing Nosefart up to date, I became aware of a software library named Game Music Emu, or GME. It’s a pure C++ library that can play essentially any classic video game music format you can possibly name. Wow. A lot can happen in 10 years when you’re not paying attention.

    It’s such a well-written library that I didn’t need any tutorial or documentation to come up to speed. Just a quick read of the main gme.h header file enabled me, in short order, to whip up a quick C program that could play NSF and SPC files. Path of least resistance: the client program asks the library to open a hardcoded file, synthesize 10 seconds of audio, and dump it into a file; ask the FLAC command line program to transcode the raw data to a .flac file; use ffplay to verify the results.
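
    Such a client program can be sketched in a few lines of C (the file names and track number are placeholder assumptions):

        /* Sketch of a minimal GME client: open a hardcoded file, synthesize 10
         * seconds of audio, dump raw 16-bit stereo samples to a file.
         * Build: gcc gme_sketch.c -lgme */
        #include <gme/gme.h>
        #include <stdio.h>

        int main(void)
        {
            const int sample_rate = 44100;
            Music_Emu *emu;
            gme_err_t err = gme_open_file("game.nsf", &emu, sample_rate);
            if (err) { fprintf(stderr, "%s\n", err); return 1; }
            gme_start_track(emu, 0);

            FILE *out = fopen("out.raw", "wb");
            short buf[1024];                        /* interleaved stereo samples */
            long remaining = sample_rate * 10 * 2;  /* 10 seconds x 2 channels */
            while (remaining > 0) {
                int count = remaining < 1024 ? (int)remaining : 1024;
                gme_play(emu, count, buf);
                fwrite(buf, sizeof(short), count, out);
                remaining -= count;
            }
            fclose(out);
            gme_delete(emu);
            return 0;
        }

    From there, flac’s raw-input mode (--force-raw-format plus the matching endianness, sign, channel count, bit depth, and sample rate) produces the .flac file, and ffplay verifies it.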

    I might develop some other uses for this library.

  • How do I sync 4 videos in a grid to play the same frame at the same time?

    28 December 2022, by PirateApp

    • 4 of us have recorded ourselves playing a game and want to create a 2 x 2 video grid
    • The game has cutscenes at the beginning, followed by each person having their unique part for the rest of the video
    • I am looking to synchronize the grid so that it starts at the same place in the cutscene for everyone
    • Kindly take a look at what is happening currently: the cutscene is off by a few seconds for everyone
    • Imagine time offsets a, b, c, d such that when I add each offset to its video, the entire video grid will be in sync
    • How do I find a, b, c, d, and more importantly, how do I apply them in filter_complex?

    I used the ffmpeg command below to generate a 2 x 2 video grid and it seems to work:

    ffmpeg
        -i nano_prologue.mkv -i macko_nimble_guardian.mkv -i nano_nimble_guardian.mkv -i ghost_nimble_guardian_subtle_arrow_1.mp4
        -filter_complex "
            nullsrc=size=1920x1080 [base];
            [0:v] setpts=PTS-STARTPTS, scale=960x540 [upperleft];
            [1:v] setpts=PTS-STARTPTS, scale=960x540 [upperright];
            [2:v] setpts=PTS-STARTPTS, scale=960x540 [lowerleft];
            [3:v] setpts=PTS-STARTPTS, scale=960x540 [lowerright];
            [base][upperleft] overlay=shortest=1 [tmp1];
            [tmp1][upperright] overlay=shortest=1:x=960 [tmp2];
            [tmp2][lowerleft] overlay=shortest=1:y=540 [tmp3];
            [tmp3][lowerright] overlay=shortest=1:x=960:y=540
        "
        -c:v libx264 output.mkv

    My problem, though, is that since each of us started recording at a slightly different time, the cutscenes are out of sync.

    As per the screenshot below, you can see that each video has the same scene starting at a slightly different time.

    Is there a way to find where the same frame occurs in all the videos and then sync each video to start from that frame, or from 20 seconds before it?

    [screenshot]

    UPDATE 1

    I have figured out the offset for each video with millisecond precision, using the following technique:

    Take a screenshot of the first video at a particular point in the cutscene, save the image as a PNG, and run the command below against the remaining 3 videos to find out where this screenshot appears in each of them:

        ffmpeg -i "video2.mp4" -r 1 -loop 1 -i screenshot.png -an -filter_complex "blend=difference:shortest=1,blackframe=90:32" -f null -

    Use the command above to search each video for the offset of that cutscene.

    It gave me this:

    VIDEO 3 OFFSET

    [Parsed_blackframe_1 @ 0x600003af00b0] frame:3144 pblack:92 pts:804861 t:52.399805 type:P last_keyframe:3120
    [Parsed_blackframe_1 @ 0x600003af00b0] frame:3145 pblack:96 pts:805117 t:52.416471 type:P last_keyframe:3120

    VIDEO 2 OFFSET

    [Parsed_blackframe_1 @ 0x6000014dc0b0] frame:3629 pblack:91 pts:60483 t:60.483000 type:P last_keyframe:3500

    VIDEO 4 OFFSET

    [Parsed_blackframe_1 @ 0x600002f84160] frame:2885 pblack:93 pts:48083 t:48.083000 type:P last_keyframe:2880
    [Parsed_blackframe_1 @ 0x600002f84160] frame:2886 pblack:96 pts:48100 t:48.100000 type:P last_keyframe:2880

    Now how do I use filter_complex to start each video at either the frame or the timestamp above? I would like to include, say, 10 seconds before that frame in each video so that the cutscene plays from the beginning.

    UPDATE 2

    This command currently gives me a 100% synced video. How do I make it start 15 seconds before the specified frame numbers, and how do I make it use the audio track from video 2 instead?

    ffmpeg
        -i v_nimble_guardian.mkv -i macko_nimble_guardian.mkv -i ghost_nimble_guardian_subtle_arrow_1.mp4 -i nano_nimble_guardian.mkv
        -filter_complex "
            nullsrc=size=1920x1080 [base];
            [0:v] trim=start_pts=49117,setpts=PTS-STARTPTS, scale=960x540 [upperleft];
            [1:v] trim=start_pts=50483,setpts=PTS-STARTPTS, scale=960x540 [upperright];
            [2:v] trim=start_pts=795117,setpts=PTS-STARTPTS, scale=960x540 [lowerleft];
            [3:v] trim=start_pts=38100,setpts=PTS-STARTPTS, scale=960x540 [lowerright];
            [base][upperleft] overlay=shortest=1 [tmp1];
            [tmp1][upperright] overlay=shortest=1:x=960 [tmp2];
            [tmp2][lowerleft] overlay=shortest=1:y=540 [tmp3];
            [tmp3][lowerright] overlay=shortest=1:x=960:y=540
        "
        -c:v libx264 output.mkv
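
    One possible way to handle both follow-up questions (a sketch, not verified against these files): trim and atrim also accept start= in seconds, so each stream can be cut at its blackframe t: value minus 15 without doing per-input timebase arithmetic on start_pts, and the audio of input 1 (video 2) can be trimmed by the same offset and mapped explicitly. T0 through T3 below are placeholders for those per-stream second values:

        ffmpeg
            -i v_nimble_guardian.mkv -i macko_nimble_guardian.mkv -i ghost_nimble_guardian_subtle_arrow_1.mp4 -i nano_nimble_guardian.mkv
            -filter_complex "
                nullsrc=size=1920x1080 [base];
                [0:v] trim=start=T0,setpts=PTS-STARTPTS, scale=960x540 [upperleft];
                [1:v] trim=start=T1,setpts=PTS-STARTPTS, scale=960x540 [upperright];
                [2:v] trim=start=T2,setpts=PTS-STARTPTS, scale=960x540 [lowerleft];
                [3:v] trim=start=T3,setpts=PTS-STARTPTS, scale=960x540 [lowerright];
                [1:a] atrim=start=T1,asetpts=PTS-STARTPTS [aud];
                [base][upperleft] overlay=shortest=1 [tmp1];
                [tmp1][upperright] overlay=shortest=1:x=960 [tmp2];
                [tmp2][lowerleft] overlay=shortest=1:y=540 [tmp3];
                [tmp3][lowerright] overlay=shortest=1:x=960:y=540 [vid]
            "
            -map "[vid]" -map "[aud]" -c:v libx264 -c:a aac output.mkv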

  • To get OpenCV VideoWriter to work consistently across platforms for MP4 container with H264 encoding

    28 March 2019, by Moh

    I am trying to get OpenCV VideoWriter to work consistently across platforms for the MP4 container with H264 encoding.

    Target platforms, in order of importance: Ubuntu, Raspbian, OSX.

    Basically, my shortcoming at this point is not understanding the relationship of the FourCC code (as a parameter to OpenCV VideoWriter) to the FFMPEG backend and its requirements. I am interested in understanding the game in play rather than discussing a piece of code.

    What I want to know is: when I specify ’X264’ as the FourCC code while trying to write an x.MP4 file (FFMPEG backend) and the request is marshalled to FFMPEG, what requirements/dependencies need to be satisfied by the OS for it to succeed?

    So far I have got my python stack writing MP4 video files across Raspbian/Ubuntu/OSX, with a hack.

    On my Raspbian stretch installation, I use 0x00000021 as the fourCC code.
    On Ubuntu (VM on OSX) and on OSX, AVC1 works.

    Days of Googling only delivered those hacks, not a good understanding of the problem.

    Using x264 as the FourCC code leads to one of: a failure, or a non-portable video file plus an annoying FFMPEG warning.

    I am trying to get to the bottom of it.

    The code:

       # 'x264' triggers the failure / warning described above on some builds
       #self.__fourCC = cv2.VideoWriter_fourcc('x', '2', '6', '4')
       self.__fourCC = cv2.VideoWriter_fourcc('a', 'v', 'c', '1')  # works on Ubuntu and OSX
       if PlatformUtils.isRunningOnRaspberryPi():
           self.__fourCC = 0x00000021  # numeric FourCC that selects H264 on Raspbian

    I have control over the versions of both OpenCV and FFMPEG (and GStreamer too, if required). I can build them, and have built them, for Ubuntu/Raspbian.
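
    For reference, a self-contained sketch of the hack described above; platform.machine() stands in here for the question’s own PlatformUtils helper, and the file name, fps, and frame size are arbitrary:

        import platform

        import cv2
        import numpy as np

        def make_writer(path, fps=30.0, size=(640, 480)):
            """Pick a FourCC that yields H264-in-MP4 on each target platform."""
            if platform.machine().startswith("arm"):       # Raspbian / Raspberry Pi
                fourcc = 0x00000021                        # the numeric code from above
            else:                                          # Ubuntu / OSX
                fourcc = cv2.VideoWriter_fourcc('a', 'v', 'c', '1')
            return cv2.VideoWriter(path, fourcc, fps, size)

        writer = make_writer("out.mp4")
        frame = np.zeros((480, 640, 3), dtype=np.uint8)    # one black test frame
        for _ in range(30):                                # one second of video
            writer.write(frame)
        writer.release()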