Other articles (104)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Use, discuss, criticize

    13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Websites built with MediaSPIP

    2 May 2011

    This page presents some of the websites running MediaSPIP.
    You can of course add yours using the form at the bottom of the page.

On other sites (7424)

  • aarch64: NEON asm for integral init

    14 August 2014, by Janne Grunau
    aarch64: NEON asm for integral init
    

    integral_init4h_neon and integral_init8h_neon are 3-4 times faster than
    C. integral_init8v_neon is 6 times faster and integral_init4v_neon is 10
    times faster.

    • [DH] common/aarch64/mc-a.S
    • [DH] common/aarch64/mc-c.c
  • ffmpeg pipe blocks while capturing

    26 June 2013, by Marco Vasapollo

    I have this code:

    public InputStream getInputStream() throws Exception {
        try {
            process = Runtime.getRuntime().exec("ffmpeg -f dshow -i video=\"" + query + "\":audio=\"" + microPhoneName + "\" -r 25 -vcodec mpeg4 -acodec mp3 -f avi -");
        } catch (Exception e) {
        }
        return process.getInputStream();
    }

    When I call inputStream.read(b), it works only a limited number of times (180 to 400 reads, depending on the formats and codecs I use); then the InputStream blocks on read and the application hangs.

    What's the problem? Memory saturation (the ffmpeg process uses at least 14 MB)?
    Is there a way to get out of this situation (free memory, or use a file as a bridge to avoid the blocking)?

    Of course I need something close to real time, not post-processing.
    I'm not tied to ffmpeg; I can change it if necessary.
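
    One common cause of exactly this behaviour is that ffmpeg writes its banner and progress log to stderr: with Runtime.exec, nothing consumes that stream, so once the OS pipe buffer fills up ffmpeg blocks on a write, and the reads from stdout stall with it. A minimal sketch of that idea (not from the original post; the class name and the query / microPhoneName parameters are illustrative) passes the arguments as a list and drains stderr in a background thread:

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Arrays;

    public class FfmpegCapture {
        private Process process;

        public InputStream getInputStream(String query, String microPhoneName) throws IOException {
            // Pass the arguments as a list: dshow device names with spaces
            // survive without shell-style quoting.
            ProcessBuilder pb = new ProcessBuilder(Arrays.asList(
                    "ffmpeg", "-f", "dshow",
                    "-i", "video=" + query + ":audio=" + microPhoneName,
                    "-r", "25", "-vcodec", "mpeg4", "-acodec", "mp3",
                    "-f", "avi", "-"));
            process = pb.start();

            // ffmpeg logs continuously to stderr; if nobody reads it, the pipe
            // buffer fills and ffmpeg stalls, so drain it in a background thread.
            Thread drainer = new Thread(() -> {
                try (InputStream err = process.getErrorStream()) {
                    byte[] buf = new byte[4096];
                    while (err.read(buf) != -1) {
                        // discard (or log) ffmpeg's diagnostic output
                    }
                } catch (IOException ignored) {
                }
            });
            drainer.setDaemon(true);
            drainer.start();

            return process.getInputStream();
        }
    }

    If the stream still stalls after that, the next thing to check would be how quickly the AVI bytes are consumed on the Java side, since a full stdout pipe blocks ffmpeg in the same way.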

  • Trimming videos with ffmpeg and ffprobe

    10 August 2022, by adeshina Ibrahim

    I am working on an ETL process, and I'm now in the final stage of preprocessing my videos. I used the script below (reference: @FarisHijazi) to first auto-detect black-screen frames with ffprobe and then trim them out with ffmpeg.

    The script worked for me, but the problems are (a possible approach to 1 and 3 is sketched after the script):

    1. It cuts off all the other good frames together with the first bad frames. For example, if gBgBgBgB represents a sequence of good and BAD frames of 5 s each, the script only returns the first g (5 s) and cuts off the BgBgBgB after it. I want to keep only g g g g, with all the B B B B removed.

    2. I also want to detect other colors besides black screens, e.g. green screens, red screens, or blurry parts of the video.

    3. The script doesn't work if the video has no audio in it.

import argparse
import os
import shlex
import subprocess

parser = argparse.ArgumentParser(
    __doc__, formatter_class=argparse.ArgumentDefaultsHelpFormatter
)
parser.add_argument("input", type=str, help="input video file")
parser.add_argument(
    "--invert",
    action="store_true",
    help="remove nonblack instead of removing black",
)
args = parser.parse_args()

##FIXME: sadly you must chdir so that the ffprobe command will work
os.chdir(os.path.split(args.input)[0])
args.input = os.path.split(args.input)[1]

spl = args.input.split(".")
outpath = (
    ".".join(spl[:-1])
    + "."
    + ("invert" if args.invert else "")
    + "out."
    + spl[-1]
)


def delete_back2back(l):
    # collapse runs of identical consecutive lines, keeping one per run
    from itertools import groupby

    return [x[0] for x in groupby(l)]


def construct_ffmpeg_trim_cmd(timepairs, inpath, outpath):
    # build a single ffmpeg call: one trim/atrim pair per (start, end) segment,
    # then concat all the kept segments back together
    cmd = f'ffmpeg -i "{inpath}" -y -r 20 -filter_complex '
    cmd += '"'
    for i, (start, end) in enumerate(timepairs):
        cmd += (
            f"[0:v]trim=start={start}:end={end},setpts=PTS-STARTPTS,format=yuv420p[{i}v]; "
            + f"[0:a]atrim=start={start}:end={end},asetpts=PTS-STARTPTS[{i}a]; "
        )
    for i, (start, end) in enumerate(timepairs):
        cmd += f"[{i}v][{i}a]"
    cmd += f"concat=n={len(timepairs)}:v=1:a=1[outv][outa]"
    cmd += '"'
    cmd += f' -map [outv] -map [outa] "{outpath}"'
    return cmd


def get_blackdetect(inpath, invert=False):
    # run ffprobe with the blackdetect filter and collect the
    # lavfi.black_start / lavfi.black_end timestamps it reports
    ffprobe_cmd = f'ffprobe -f lavfi -i "movie={inpath},blackdetect[out0]" -show_entries tags=lavfi.black_start,lavfi.black_end -of default=nw=1 -v quiet'
    print("ffprobe_cmd:", ffprobe_cmd)
    lines = (
        subprocess.check_output(shlex.split(ffprobe_cmd))
        .decode("utf-8")
        .split("\n")
    )
    times = [
        float(x.split("=")[1].strip()) for x in delete_back2back(lines) if x
    ]
    assert len(times), "no black scene detected"

    if not invert:
        times = [0] + times[:-1]
    # pair the collected timestamps into (start, end) segments; NB: the range
    # stops at len(times) // 2, so only the first part of the video can ever
    # end up in timepairs
    timepairs = [
        (times[i], times[i + 1]) for i in range(0, len(times) // 2, 2)
    ]
    return timepairs


if __name__ == "__main__":
    timepairs = get_blackdetect(args.input, invert=args.invert)
    cmd = construct_ffmpeg_trim_cmd(timepairs, args.input, outpath)

    print(cmd)
    os.system(cmd)
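
    The script above is quoted from the question. Below is a rough sketch, not from the original post, of how problems 1 and 3 could be approached: keep every segment that lies between black intervals (closing the last one at the file's duration), and concatenate video only (a=0) so that inputs without an audio stream still work. Problem 2 would need a different detector, since blackdetect only looks for black frames. The function names here are illustrative.

from itertools import groupby
import shlex
import subprocess


def get_keep_pairs(inpath):
    """Return (start, end) pairs for the non-black segments of inpath."""
    probe = (
        f'ffprobe -f lavfi -i "movie={inpath},blackdetect[out0]" '
        "-show_entries tags=lavfi.black_start,lavfi.black_end "
        "-of default=nw=1 -v quiet"
    )
    lines = subprocess.check_output(shlex.split(probe)).decode("utf-8").split("\n")
    # one float per distinct tag line: black_start, black_end, black_start, ...
    times = [float(x.split("=")[1]) for x, _ in groupby(lines) if "=" in x]

    # close the last good segment at the end of the file
    duration = float(
        subprocess.check_output(
            shlex.split(
                "ffprobe -v quiet -show_entries format=duration "
                f'-of default=nw=1:nk=1 "{inpath}"'
            )
        )
    )

    # good segments sit between the black intervals: (0, black_start_1),
    # (black_end_1, black_start_2), ..., (black_end_n, duration)
    edges = [0.0] + times + [duration]
    return [
        (edges[i], edges[i + 1])
        for i in range(0, len(edges) - 1, 2)
        if edges[i + 1] > edges[i]
    ]


def construct_video_only_cmd(timepairs, inpath, outpath):
    # same shape as construct_ffmpeg_trim_cmd above, but without the [0:a]
    # branches and with concat ...:a=0, so it also works when the input has
    # no audio stream (problem 3)
    cmd = f'ffmpeg -i "{inpath}" -y -filter_complex "'
    for i, (start, end) in enumerate(timepairs):
        cmd += f"[0:v]trim=start={start}:end={end},setpts=PTS-STARTPTS[{i}v]; "
    cmd += "".join(f"[{i}v]" for i in range(len(timepairs)))
    cmd += f'concat=n={len(timepairs)}:v=1:a=0[outv]" -map [outv] "{outpath}"'
    return cmd

    These two helpers would stand in for get_blackdetect and construct_ffmpeg_trim_cmd when the input has no audio; for inputs that do have audio, the atrim branches would still be needed so that sound stays in sync with the kept segments.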