
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (33)
-
List of compatible distributions
26 April 2011, by
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

Distribution name | Version name | Version number
Debian | Squeeze | 6.x.x
Debian | Wheezy | 7.x.x
Debian | Jessie | 8.x.x
Ubuntu | The Precise Pangolin | 12.04 LTS
Ubuntu | The Trusty Tahr | 14.04

If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)
-
Final creation of the channel
12 March 2010, by
Once your request has been approved, you can then proceed with the actual creation of the channel. Each channel is a fully fledged site placed under your responsibility. The platform administrators have no access to it.
Upon approval, you receive an email inviting you to create your channel.
To do so, simply go to its address, in our example "http://votre_sous_domaine.mediaspip.net".
At that point you are asked for a password; you simply need to (...)
-
The farm's regular Cron tasks
1 December 2010, by
Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of all the instances of the shared-hosting farm on a regular basis. Combined with a system Cron on the farm's central site, this makes it easy to generate regular visits to the various sites and to keep the tasks of rarely visited sites from being too (...)
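For illustration, the system-side piece could be as simple as a crontab entry on the central server that generates a visit every minute; the URL below is a placeholder, not the farm's real address:

* * * * * wget -q -O /dev/null "http://central.example.org/" # hypothetical central-site URL; each visit lets SPIP trigger its scheduled tasks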
On other sites (4595)
-
Dreamcast Serial Extractor
31 December 2017, by Multimedia Mike — Sega Dreamcast
It has not been a very productive year for blogging. But I started the year by describing an unfinished project that I developed for the Sega Dreamcast, so I may as well end the year the same way. The previous project was a media player. That initiative actually met with some amount of success and could have developed into something interesting if I had kept at it.
By contrast, this post describes an effort that was ultimately a fool’s errand that I spent way too much time trying to make work.
Problem Statement
In my neverending quest to analyze the structure of video games while also hoarding a massive collection of them (though I’m proud to report that I did play at least a few of them this past year), I wanted to be able to extract the data from my many Dreamcast titles, both games and demo discs. I had a tool called the DC Coder’s Cable, a serial cable that enables communication between a Dreamcast and a PC. With the right software, you could dump an entire Dreamcast GD-ROM, which contained a gigabyte worth of sectors.

Problem: The dumping software (named ‘dreamrip’ and written by noted game hacker BERO) operated in a very basic mode, methodically dumping sector after sector and sending it down the serial cable. This meant that it took about 28 hours to extract all the data on a single disc by running at the maximum speed of 115,200 bits/second, or about 11 kilobytes/second. I wanted to create a faster method.
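As a rough sanity check on those figures (assuming standard 8N1 framing, i.e. roughly 10 bits on the wire per payload byte):

115,200 bits/s ÷ 10 bits/byte ≈ 11,520 bytes/s
1 GB ≈ 1,000,000,000 bytes ÷ 11,520 bytes/s ≈ 86,800 s ≈ 24 hours

which is in the same ballpark as the quoted 28 hours; the remainder is presumably read latency and protocol overhead.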
The Pitch
I formed a mental model of dreamrip’s operation that looked like this:
As an improvement, I envisioned this beautiful architecture:
Architectural Assumptions
My proposed architecture was predicated on the assumption that the disc reading and serial output functions were both I/O-bound operations and that the CPU would be idle much of the time. My big idea was to use that presumably idle CPU time to compress the sectors before sending them over the wire. As long as the CPU can compress the data faster than 11 kbytes/sec, it should be a win. In order to achieve this, I broke the main program into 3 threads:
- The first thread reads the sectors; more specifically, it asks the drive firmware to please read the sectors and make the data available in system RAM
- The second thread waits for sector data to appear in memory and then compresses it
- The third thread takes the compressed data when it is ready and shuffles it out through the serial cable
Simple and elegant, right?
For data track compression, I wanted to start with zlib in order to prove the architecture, but then also try bzip2 or lzma. As long as they could compress data faster than the serial port could write it, then it should be a win. For audio track compression, I wanted to use the Flake FLAC encoder. According to my notes, I did get both bzip2 compression and the Flake compressor working on the Dreamcast. I recall choosing Flake over the official FLAC encoder because it was much simpler and had fewer dependencies, always an important consideration for platforms such as this.
Problems
I worked for quite a while on this project. I have a lot of notes recorded, but a lot of the problems I ran into remain a bit vague in my memory. However, there was one problem I discovered that eventually sank the entire initiative:
The serial output operation is CPU-bound.
My initial mental model was that a buffer could be “handed off” to the serial subsystem and the CPU could go back to doing other work. Nope. Turns out that the CPU was participating at every step of the serial transfer.
Further, I eventually dug into the serial driver code and learned that there was already some compression taking place via the miniLZO library.
Lessons Learned
- Recognize the assumptions that you’re making up front at the start of the project.
- Prototype in order to ensure plausibility.
- Profile to make sure you’re optimizing the right thing (this is something I have learned again and again).
Another interesting tidbit from my notes: it doesn’t matter how many sectors you read at a time; the overall speed is roughly the same. I endeavored to read 1000 2048-byte data sectors, 1 or 10 or 100 at a time, or all 1000 at once. My results:
- 1: 19442 ms
- 10: 19207 ms
- 100: 19194 ms
- 1000: 19320 ms
No difference. That surprised me.
Side Benefits
At one point, I needed to understand how BERO’s dreamrip software was operating. I knew I used to have the source code but I could no longer find it. Instead, I decided to try to reverse engineer what I needed from the SH-4 binary image that I had. It wasn’t an ELF image; rather, it was a raw binary meant to be loaded at a particular memory location, which makes it extra challenging for ‘objdump’. This led to me asking my most viewed and upvoted question on Stack Overflow: “Disassembling A Flat Binary File Using objdump”. The next day, it also led me to post one of my most upvoted answers when I found the solution elsewhere.

Strangely, I have since tried out the command line shown in my answer and have been unable to make it work. But people keep upvoting both the question and the answer.
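For orientation, the general shape of such an invocation looks roughly like the line below; the architecture name and load address are illustrative, the file name is a placeholder, and it only works with a binutils build that knows the SH-4 target (e.g. a cross toolchain):

objdump -D -b binary -m sh4 --adjust-vma=0x8c010000 dreamrip.bin # hypothetical file name and load address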
Eventually this all became moot when I discovered a misplaced copy of the source code on one of my computers.
I strongly recall binging through the Alias TV show while I was slogging away on this project, so I guess that’s a positive association since I got so many fun screenshots out of it.
The Final Resolution
Strangely, I was still determined to make this project work even though the Dreamcast SD adapter arrived for me about halfway through the effort. Part of this was just stubbornness, but part of it was my assumptions about serial port speeds; in particular, my assumption that there was a certain speed-of-light type of limitation on serial port speeds, so that the SD adapter, operating over the DC’s serial port, would not be appreciably faster than the serial cable.

This turned out to be very incorrect. In fact, the SD adapter is capable of extracting an entire gigabyte disc image in 35-40 minutes. This is the method I have since been using to extract Dreamcast disc images.
The post Dreamcast Serial Extractor first appeared on Breaking Eggs And Making Omelettes.
-
PC Video Conferencing in the Year 1999
21 June 2011, by Multimedia Mike — General
Remember Intel’s custom flavor of H.263, cleverly named I.263? I think I have finally found an application that used it, thanks to a recent thrift shop raid — Intel Video Phone:
The root directory of the disc has 2 copies of an intro.avi video. One copy uses Intel Indeo 3 video and PCM audio. The other uses I.263 video and an undetermined (presumably Intel-proprietary) audio codec — RIFF id 0x0402 at a bitrate of 88 kbits/sec for stereo, 22 kHz audio. The latter video looks awful but is significantly smaller (like 4 MB vs. 25 MB).
This is the disc marked as "Send it to a friend...". Here’s the way this concept was supposed to operate:
- You buy an Intel Video Phone Camera Pack (forgotten page courtesy of the Internet Archive) which includes a camera and 2 CDs.
- You install the camera and video phone software on your computer.
- You send the other CD to the person whom you want to be able to see your face when you’re teleconferencing with them.
- The other party installs the software.
- The 2 of you may make an internet phone call, presumably using commodity PC microphones for the voice component; the person who doesn’t have a camera is able to see the person who does have a camera.
- In a cunning viral/network marketing strategy, Intel encourages the other party to buy the physical hardware as well so that they may broadcast their own visage back to the other person.
If you need further explanation, the intro lady does a great job:
I suspect I.263 was the video codec driving this since Indeo 3 would probably be inappropriate for real time video applications due to its vector quantizing algorithm.
-
ffmpeg status & quality / cuda (CPU/GPU)
18 June 2024, by cocco
ffmpeg, am I doing it right?


A lot of time has passed since I started using ffmpeg to convert clips on my home web server. Now that mp4 (h264 & aac) is the current overall standard (it works on every console, smartphone, smart TV and PC), I decided to convert my old clips from various digital cameras to this new container/codec combination.


- less space & the same quality
- compatibility
- support for tags (subler for mac)

After some research I opted for ffmpeg for various reasons:


- commandline (I made my simple web interface with default settings, which I execute with PHP's exec)
- the quality/size ratio

I read that many expensive video conversion programs cannot handle low-bitrate videos properly. I also tested some of them, and personally I could not find the proper export settings or I was not impressed by the results... some had fixed default export settings, and most produced lower video quality at the same file size. ffmpeg lets me set the -crf (18-24 usually) and the -preset (veryslow, fast...), which lets me reduce the file size drastically while maintaining the same visible quality.


That said, I'm using the veryslow preset (there is also placebo, but the final video file is only 1% smaller).


And here is the command I use


ffmpeg
-y // overwrite the output file if it exists

-i INPUTFILE // replace with the input file

-metadata title=THETITLE // set a nice title, visible on modern devices
-metadata date=THEDATE // set the date, visible on modern devices

-c:v libx264 // use the h264 codec
 -crf 21 // try different values between 18 and 26
 -preset veryslow // placebo, slow, fast, ultrafast == big file
 -tune film // tune it a little
 -pix_fmt yuv420p // preferred on most modern devices
 -profile:v main // preferred on most modern devices
 -level 3.1 // preferred on most modern devices
 -refs 4 // preferred on most modern devices

-c:a libfdk_aac // use aac
 -metadata:s:a language=eng // set a language, visible on modern devices
 -b:a 128k // audio bitrate; 128k aac is roughly like 192k mp3
 -ar 48000 // 44100 ... whatever
 -ac 2 // audio channels
 -movflags +faststart // move the metadata to the front of the file so playback can start sooner

OUTPUTFILE



Some camcorder clips in m2ts already use the compatible avc/h264 codec, so I just copy the video stream.

Some have ac3/Dolby surround audio. I convert the audio to aac but keep the ac3 as a second audio track by mapping the ffmpeg streams. This lets me watch the mp4 in browsers and on mobile devices while keeping the surround sound for playback on some TVs, advanced media players or devices like the Apple TV.
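For reference, a stream mapping along those lines could look like the annotated sketch below. The file names are placeholders, the source is assumed to carry a single AC-3 track, and aac could equally be libfdk_aac as in the command above:

ffmpeg -i INPUT.m2ts
-map 0:v -map 0:a:0 -map 0:a:0 // keep the video and feed the source audio track into two outputs
-c:v copy // the avc/h264 stream is already compatible, so it is copied as-is
-c:a:0 aac -b:a:0 128k // first output audio track: re-encoded to aac
-c:a:1 copy // second output audio track: the original ac3, untouched
-movflags +faststart
OUTPUT.mp4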

Not that I'm unhappy with the speed (I'm using quad cores), but I recently read again about CUDA and OpenCL, and there is also the simple fact that I haven't used any converter other than ffmpeg for a long time.


Is ffmpeg (with the settings I use) a good converter for keeping the same video quality as the source while reducing the space used by an average of 30-40%?


Is GPU conversion really that bad (CUDA... testing a GTX 970)?
It would be nice to speed up the conversions by using both the GPU and the CPU, but as far as I understand they cannot work together??? And using only the GPU means a drastic quality loss... the CPU is more precise, while the GPU is faster but its calculations are too imprecise, from what I read... so expensive software programs use CUDA only for preview purposes... right?


Is ffmpeg, or any other software, capable of combined CPU+GPU encoding?
I really don't remember where, but I read that ffmpeg is not a good video converter.
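For what it's worth, many ffmpeg builds expose NVIDIA's dedicated hardware encoder as h264_nvenc (whether it is available depends on how ffmpeg was compiled and on the driver). A minimal sketch of a GPU-encoded conversion, with placeholder file names and an arbitrary bitrate, could look like:

ffmpeg -i INPUTFILE -c:v h264_nvenc -preset slow -b:v 5M -c:a copy OUTPUTFILE

Note that this is not CPU and GPU encoding combined; it simply moves the whole video encode onto the GPU's dedicated encoder block.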


I'm really happy with the size/quality: I gained an average of 30% in space with no visible quality loss. With some extra parameters I can even fix some really old analog videos that were deinterlaced in a really bad way.

Maybe I could gain more size/quality with different software???

Note: I like ffmpeg. It's free and it has a command line, so I can create my own interface with PHP, HTML & JS and use it on various machines without having to install it on every device I use. I upload the iDevice clips directly to the ffmpeg server.


EDIT:

@talonmies ... cuda tag removed:


http://www.nvidia.com/object/cuda_home_new.html

CUDA® is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU). With millions of CUDA-enabled GPUs sold to date, software developers, scientists and researchers are finding broad-ranging uses for GPU computing with CUDA.

Isn't CUDA the programming model that a hypothetical ffmpeg library would have to support in order to handle GPU encoding on NVIDIA cards like the GTX 970?? Like the Badaboom software: http://www.geforce.com/games-applications/pc-applications/badaboom-media-converter.