Other articles (55)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administrer" (administer) section of the site.
    From there, in the navigation menu, you can reach a "Gestion des langues" (language management) section that lets you enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language. Once one has, it becomes greyed out in the configuration and (...)

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    - implementation costs to be shared between several different projects / individuals
    - rapid deployment of multiple unique sites
    - creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Contributing to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
    Currently MediaSPIP is only available in French and (...)

On other sites (6232)

  • Revision dd88f48296: Set the maximum decode threads to be 8. This will fix the frame parallel decode

    5 February 2015, by hkuang

    Changed Paths:
     Modify /vp9/common/vp9_thread.h
     Modify /vp9/vp9_dx_iface.c

    Set the maximum decode threads to be 8.

    This will fix the frame parallel decode hang on windows
    due to not enough semaphores.

    This also makes frame parallel decode safer, as the
    number of frame buffers can only support a maximum
    of 8 threads.

    Change-Id: Id9ef50692819dcbebbd74a0aabffbfb3f39a4309

  • Record sound with ffmpeg on ubuntu 12.04 [closed]

    27 June 2012, by vzybilly

    I have been working for a few days on trying to get ffmpeg to record sound. A short list of what I've tried:

    #Crappy screen grab
    #ffmpeg -f x11grab -s "1366x768" -r "24" -i :0.0 -f mp4 ./out
    #awesome screen grab, grabbing sound but none out.
    #ffmpeg -f x11grab -s "1366x768" -r "24" -i :0.0 -f alsa -ac 2 -i pulse -vcodec libx264 -s "1366x768" -acodec libmp3lame -ab 128k -threads 0 -f mp4 ~/Desktop/vid
    #audio test, no audio in file.
    #ffmpeg -f alsa -ac 2 -i pulse -acodec libmp3lame -ab 128k -threads 0 -f mp3 ./test.mp3
    #awesome screen grab.
    #ffmpeg -f x11grab -s "1366x768" -r "24" -i :0.0 -threads 0 -sameq -an -f mp4 ~/Desktop/vid

    I'm running Ubuntu 12.04 from beta(ish).

    It would be awesome if someone could help me get this to work all in one line, or (the way I'm going) with multiple instances of ffmpeg (screen grab, microphone, program).

    I have also tried monitoring with pavucontrol while recording audio, but that does not help either.

    Thanks for all of your help, vzybilly 

    EDIT:
    This one crashed.

    $ ffmpeg -f alsa -ac 2 -i plughw:0,0 -f x11grab -r 100 -s 1366x768 -i :0.0 -acodec pcm_s16le -vcodec libx264 -preset ultrafast -threads 3 testVid.mkv
    ffmpeg version 0.8.3-4:0.8.3-0ubuntu0.12.04.1, Copyright (c) 2000-2012 the Libav developers
     built on Jun 12 2012 16:37:58 with gcc 4.6.3
    *** THIS PROGRAM IS DEPRECATED ***
    This program is only provided for compatibility and will be removed in a future release. Please use avconv instead.
    [alsa @ 0x8fce240] capture with some ALSA plugins, especially dsnoop, may hang.
    [alsa @ 0x8fce240] Estimating duration from bitrate, this may be inaccurate
    Input #0, alsa, from 'plughw:0,0':
     Duration: N/A, start: 433.999945, bitrate: N/A
       Stream #0.0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s
    [x11grab @ 0x8fde820] device: :0.0 -> display: :0.0 x: 0 y: 0 width: 1366 height: 768
    [x11grab @ 0x8fde820] shared memory extension  found
    [x11grab @ 0x8fde820] Estimating duration from bitrate, this may be inaccurate
    Input #1, x11grab, from ':0.0':
     Duration: N/A, start: 1340805516.368518, bitrate: N/A
       Stream #1.0: Video: rawvideo, bgra, 1366x768, -2147483 kb/s, 100 tbr, 1000k tbn, 100 tbc
    File 'testVid.mkv' already exists. Overwrite ? [y/N] y
    Incompatible pixel format 'bgra' for codec 'libx264', auto-selecting format 'yuv420p'
    [buffer @ 0x8fde700] w:1366 h:768 pixfmt:bgra
    [avsink @ 0x8fcdf20] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out'
    [scale @ 0x8ff3ce0] w:1366 h:768 fmt:bgra -> w:1366 h:768 fmt:yuv420p flags:0x4
    [libx264 @ 0x8fdd920] lookaheadless mb-tree requires intra refresh or infinite keyint
    [libx264 @ 0x8fdd920] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2
    [libx264 @ 0x8fdd920] profile Constrained Baseline, level 4.2
    [libx264 @ 0x8fdd920] 264 - core 120 r2151 a3f4407 - H.264/MPEG-4 AVC codec - Copyleft 2003-2011 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=250 keyint_min=25 scenecut=0 intra_refresh=0 rc=crf mbtree=0 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.25 aq=0
    Output #0, matroska, to 'archinstall4.mkv':
     Metadata:
       encoder         : Lavf53.21.0
       Stream #0.0: Video: libx264, yuv420p, 1366x768, q=-1--1, 1k tbn, 100 tbc
       Stream #0.1: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s
    Stream mapping:
     Stream #1.0 -> #0.0
     Stream #0.0 -> #0.1
    Press ctrl-c to stop encoding
    [alsa @ 0x8fce240] ALSA buffer xrun.
    [matroska @ 0x8fcd980] Application provided invalid, non monotonically increasing dts to muxer in stream 1: 213 >= 213
    av_interleaved_write_frame(): Invalid argument

    Any thoughts?

    EDIT & ANSWER:
    Got it all working with a script:

    #!/bin/bash
    #vzybilly
    #these are temp files
    aud="aud.mp3"
    vid="vid.mp4"
    #grab audio & pid
    ffmpeg -f alsa -ac 2 -i plughw:0,0 $aud &
    audPID=$!
    #grab screen & pid
    ffmpeg -f x11grab -s "1366x768" -r "24" -i :0.0 -threads 0 -sameq -an -f mp4 $vid &
    vidPID=$!
    #wait, till name given (that means stop)
    read -p "Stop by giving an Output video name?" out
    #stop audio and video with pids
    kill -n 2 $audPID
    kill -n 2 $vidPID
    echo "$out"
    #combine to the target output file
    ffmpeg -i $aud -i $vid -acodec copy -vcodec copy "$out"
    #purge the temp files
    rm $aud
    rm $vid

  • Processing Big Data Problems

    8 January 2011, by Multimedia Mike — Big Data

    I’m becoming more interested in big data problems, i.e., extracting useful information out of absurdly sized sets of input data. I know it’s a growing field and there is a lot to read on the subject. But you know how I roll— just think of a problem to solve and dive right in.

    Here’s how my adventure unfolded.

    The Corpus
    I need to run a command line program on a set of files I have collected. This corpus is on the order of 350,000 files. The files range from 7 bytes to 175 MB. Combined, they occupy around 164 GB of storage space.

    Oh, and said storage space resides on an external, USB 2.0-connected hard drive. Stop laughing.

    A file is named according to the SHA-1 hash of its data. The files are organized in a directory hierarchy according to the first 6 hex digits of the SHA-1 hash (e.g., a file named a4d5832f... is stored in a4/d5/83/a4d5832f...). All of this file hash, path, and size information is stored in an SQLite database.
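
    As an illustration of that layout, here is a short Python sketch (corpus_path and the root directory are hypothetical names of mine, not from the post) that derives a file's storage path from its SHA-1 hash:

    import hashlib
    import os

    def corpus_path(root, sha1_hex):
        # a file named a4d5832f... lives under a4/d5/83/a4d5832f...
        return os.path.join(root, sha1_hex[0:2], sha1_hex[2:4],
                            sha1_hex[4:6], sha1_hex)

    with open("some_input_file", "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()
    print(corpus_path("/mnt/corpus", digest))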

    First Pass
    I wrote a Python script that read all the filenames from the database, fed them into a pool of worker processes using Python's multiprocessing module, and wrote some resulting data for each file back to the SQLite database. My Eee PC has a single-core, hyperthreaded Atom which presents 2 CPUs to the system. Thus, 2 worker threads crunched the corpus. It took a while. It took somewhere on the order of 9 or 10 or maybe even 12 hours. It took long enough that I'm in no hurry to re-run the test and get more precise numbers.
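
    The post doesn't include that script, but a minimal sketch of the first pass, assuming a files(path, result) table in the SQLite database, might look like this:

    import sqlite3
    from multiprocessing import Pool

    def process_file(path):
        # stand-in for running the real command line program on the file
        with open(path, "rb") as f:
            return (len(f.read()), path)

    if __name__ == "__main__":
        db = sqlite3.connect("corpus.sqlite")
        paths = [row[0] for row in db.execute("SELECT path FROM files")]
        with Pool(processes=2) as pool:  # 2 workers for the Atom's 2 CPUs
            for result, path in pool.imap_unordered(process_file, paths):
                db.execute("UPDATE files SET result = ? WHERE path = ?",
                           (result, path))
        db.commit()
        db.close()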

    At least I extracted my initial set of data from the corpus. Or did I?

    Think About The Future

    A few days later, I went back to revisit the data only to notice that the SQLite database was corrupted. To add insult to that bit of injury, the script I had written to process the data was also completely corrupted (overwritten with something unrelated to Python code). BTW, this was on a RAID brick configured for redundancy. So that's strike 3 in my personal dealings with RAID technology.

    I moved the corpus to a different external drive and also verified the files after writing (easy to do since I already had the SHA-1 hashes on record).

    The corrupted script was pretty simple to rewrite, even a little better than before. Then I got to re-run it. However, this run was on a faster machine, a hyperthreaded, quad-core beast that exposes 8 CPUs to the system. The reason I wasn't too concerned about the poor performance with my Eee PC is that I knew I was going to be able to run it on this monster later.

    So I let the rewritten script rip. The script gave me little updates regarding its progress. As it did so, I ran some rough calculations and realized that it wasn’t predicted to finish much sooner than it would have if I were running it on the Eee PC.

    Limiting Factors
    It had been suggested to me that I/O bandwidth of the external USB drive might be a limiting factor. This is when I started to take that idea very seriously.

    The first idea I had was to move the SQLite database to a different drive. The script records data to the database for every file processed, though it only commits once every 100 UPDATEs, so at least it's not constantly syncing the disc. I ran before and after tests with a small subset of the corpus and noticed a substantial speedup thanks to this policy change.
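
    That commit-batching policy is simple to express; a sketch, with the same assumed schema as before:

    import sqlite3

    def record_results(db_path, results, batch=100):
        # results: an iterable of (path, result) pairs
        db = sqlite3.connect(db_path)
        for i, (path, result) in enumerate(results, start=1):
            db.execute("UPDATE files SET result = ? WHERE path = ?",
                       (result, path))
            if i % batch == 0:
                db.commit()  # sync once per 100 UPDATEs instead of per row
        db.commit()  # flush the final partial batch
        db.close()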

    Then I remembered hearing something about "atime", which is access time. Linux filesystems, by default, record the time that a file was last accessed. You can watch this in action by running 'stat <file> ; cat <file> > /dev/null ; stat <file>' and observing that the "Access" field has been updated to NOW(). This also means that every single file that gets read from the external drive still causes an additional write. To avoid this, I started mounting the external drive with '-o noatime', which instructs Linux not to record "last accessed" times for files.

    On the limited subset test, this more than doubled script performance. I then wondered about mounting the external drive as read-only. This had the same performance as noatime. I thought about using both options together, but verified that access times are not updated for a read-only filesystem anyway.

    A Note On Profiling
    Once you start accessing files in Linux, those files start getting cached in RAM. Thus, if you profile, say, reading a gigabyte file from a disk and get 31 MB/sec, and then repeat the same test, you’re likely to see the test complete instantaneously. That’s because the file is already sitting in memory, cached. This is useful in general application use, but not if you’re trying to profile disk performance.

    Thus, in between runs, do (as root) 'sync; echo 3 > /proc/sys/vm/drop_caches' in order to wipe caches (explained here).

    Even Better ?
    I re-ran the test using these little improvements. Now it takes somewhere around 5 or 6 hours to run.

    I contrived an artificially large file on the external drive and did some 'dd' tests to measure what the drive could really do. The drive consistently measured a bit over 31 MB/sec. If I could read and process the data at 30 MB/sec, the script would be done in about 95 minutes.

    But it's probably rather unreasonable to expect that kind of transfer rate for lots of smaller files scattered around a filesystem. And it certainly can't help to have 8 different processes constantly asking the HD for 8 different files at any one time.

    So I wrote a script called stream-corpus.py which simply fetched all the filenames from the database and loaded the contents of each in turn, leaving the data to be garbage-collected at Python’s leisure. This test completed in 174 minutes, just shy of 3 hours. I computed an average read speed of around 17 MB/sec.
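
    stream-corpus.py itself isn't reproduced in the post, but as described it amounts to something like this sketch:

    import sqlite3

    db = sqlite3.connect("corpus.sqlite")
    for (path,) in db.execute("SELECT path FROM files ORDER BY path"):
        with open(path, "rb") as f:
            f.read()  # contents discarded; warming the OS cache is the point
    db.close()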

    Single-Reader Script
    I began to theorize that if I only have one thread reading, performance should improve greatly. To test this hypothesis without having to do a lot of extra work, I cleared the caches and ran stream-corpus.py until 'top' reported that about half of the real memory had been filled with data. Then I let the main processing script loose on the data. As both scripts were using sorted lists of files, they iterated over the filenames in the same order.

    Result: The processing script tore through the files that had obviously been cached thanks to stream-corpus.py, then degraded drastically once it caught up to the streaming script.

    Thus, I was motivated to reorganize the processing script just slightly. Now, there is a reader thread which reads each file and stuffs the name of the file into an IPC queue that one of the worker threads can pick up and process. Note that no file data is exchanged between threads. No need; the operating system is already implicitly holding onto the file data, waiting in case someone asks for it again before something needs that bit of RAM. Technically, this approach accesses each file multiple times. But it makes little practical difference thanks to caching.
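
    A sketch of that single-reader arrangement (the structure is as described in the post; the names and the placeholder corpus are mine):

    from multiprocessing import Process, Queue, cpu_count

    def reader(paths, queue, num_workers):
        for path in paths:
            with open(path, "rb") as f:
                f.read()     # pull the file into the OS page cache
            queue.put(path)  # hand only the *name* to a worker
        for _ in range(num_workers):
            queue.put(None)  # sentinels tell the workers to exit

    def worker(queue):
        while True:
            path = queue.get()
            if path is None:
                break
            with open(path, "rb") as f:
                data = f.read()  # served from cache, not the slow drive
            # ... real per-file processing would go here ...

    if __name__ == "__main__":
        paths = sorted(["a.bin", "b.bin"])  # placeholder corpus
        q = Queue()
        workers = [Process(target=worker, args=(q,))
                   for _ in range(cpu_count())]
        for w in workers:
            w.start()
        reader(paths, q, len(workers))
        for w in workers:
            w.join()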

    Result: About 183 minutes to process the complete corpus (which works out to a little over 16 MB/sec).

    Why Multiprocess
    Is it even worthwhile to bother multithreading this operation? Monitoring the whole operation via 'top', most instances of the processing script are barely using any CPU time. Indeed, it's likely that only one of the worker threads is doing any work most of the time, pulling a file out of the IPC queue as soon as the reader thread triggers its load into cache. Right now, the processing is usually pretty quick. There are cases where the processing (external program) might hang (one of the reasons I'm running this project is to find those cases); the multiprocessing architecture at least allows other processes to take over until a hanging process is timed out and killed by its monitoring process.

    Further, the processing is pretty simple now but is likely to get more intensive in future iterations. Plus, there’s the possibility that I might move everything onto a more appropriately-connected storage medium which should help alleviate the bottleneck bravely battled in this post.

    There's also the theoretical possibility that the reader thread could read too far ahead of the processing threads. Obviously, that's not too much of an issue in the current setup. But to guard against it, the processes could share a variable that tracks the total number of bytes that have been processed. The reader thread adds file sizes to the count while the processing threads subtract them. The reader thread would delay reading more if the number got above a certain threshold.
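
    That guard could be a shared byte counter, along these lines (purely a sketch of the idea; the post doesn't implement it):

    import time
    from multiprocessing import Value

    LIMIT = 512 * 1024 * 1024  # assumed cap on bytes read but not yet processed

    def throttled_reader(paths, sizes, queue, outstanding):
        # outstanding: a multiprocessing.Value('q', 0) shared with the workers
        for path in paths:
            while outstanding.value > LIMIT:
                time.sleep(0.1)  # let the workers catch up
            with open(path, "rb") as f:
                f.read()
            with outstanding.get_lock():
                outstanding.value += sizes[path]  # reader adds the file size
            queue.put(path)
        # workers subtract sizes[path] from outstanding after each file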

    Leftovers
    I wondered if the order of accessing the files mattered. I didn’t write them to the drive in any special order. The drive is formatted with Linux ext3. I ran stream-corpus.py on all the filenames sorted by filename (remember the SHA-1 naming convention described above) and also by sorting them randomly.

    Result: It helps immensely for the filenames to be sorted. The sorted variant was a little more than twice as fast as the random variant. Maybe it has to do with accessing all the files in a single directory before moving on to another directory.

    Further, I have long been under the impression that the best read speed you can expect from USB 2.0 is 27 Mbytes/sec (even though 480 Mbit/sec is bandied about in relation to the spec). This comes from profiling I performed with an external enclosure that supports both USB 2.0 and FireWire-400 (and eSATA). FW-400 was able to read at nearly 40 Mbytes/sec the same file that USB 2.0 could only read at 27 Mbytes/sec. Other sources I have read corroborate this number. But this test (using different hardware) achieved over 31 Mbytes/sec.