
Other articles (96)
-
Emballe médias: what is it for?
4 February 2011
This plugin is designed to manage sites that host uploaded documents of all types.
It creates "media" items: a "media" is an article in the SPIP sense, created automatically when a document (audio, video, image or text) is uploaded; a single document can only be linked to one "media" article; (...)
-
Submit bugs and patches
13 April 2011
Unfortunately, no software is ever perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise a description of the problem as possible; if possible, the steps that led to the problem; and a link to the site/page in question.
If you think you have solved the bug, open a ticket and attach a corrective patch to it.
You may also (...) -
Farm management
2 March 2010
The farm as a whole is managed by "super admins".
Certain settings can be adjusted to regulate the needs of the different channels.
Initially, it uses the "Gestion de mutualisation" plugin.
On other sites (5506)
-
Progress with rtc.io
12 August 2014, by silvia
At the end of July, I gave a presentation about WebRTC and rtc.io at the WDCNZ Web Dev Conference in beautiful Wellington, NZ.
Putting that talk together reminded me about how far we have come in the last year both with the progress of WebRTC, its standards and browser implementations, as well as with our own small team at NICTA and our rtc.io WebRTC toolbox.
One of the most exciting opportunities is still under-exploited: the data channel. When I talked about the above slide and pointed out Bananabread, PeerCDN, Copay, PubNub and also later WebTorrent, that's where I really started to get Web developers excited about WebRTC. They can totally see the paradigm shift towards peer-to-peer applications and away from the server-based architecture of the current Web.
Many were also excited to learn more about rtc.io, our own npm-modules-based approach to a JavaScript API for WebRTC.
We believe that the world of JavaScript has reached a critical stage where we can no longer code by copy-and-pasting JavaScript snippets from all over the Web. We need a more structured approach to JavaScript module reuse. Node, with JavaScript on the back end, really motivated this development. However, we've needed it for a long time on the front end, too. One big library (jQuery, anyone?) that does everything anyone could ever need on the front end isn't going to work any longer with the amount of functionality that we now expect Web applications to support. Just look at the insane growth of npm compared to other module collections:
Packages per day across popular platforms (shamelessly copied from: http://blog.nodejitsu.com/npm-innovation-through-modularity/)
For those that, like myself, found it difficult to understand how to tap into the sheer power of npm modules as a front-end developer: simply use browserify. npm modules are prepared following the CommonJS module definition spec. Browserify works natively with that and "compiles" all the dependencies of an npm module into a single bundle.js file that you can use on the front end through a script tag, just as you would in plain HTML. You can learn more about browserify and module definitions and how to use browserify.
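The workflow is small enough to show here; a minimal sketch, assuming Node and npm are installed and that main.js is a hypothetical CommonJS entry file that require()s the npm modules it needs:

# install the bundler once
npm install -g browserify

# resolve every require() starting from main.js and emit a single
# bundle.js that a plain <script src="bundle.js"> tag can load
browserify main.js -o bundle.js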
For those of you not quite ready to dive in with browserify, we have prepared the rtc module, which exposes the most commonly used packages of rtc.io through an "RTC" object from a browserified JavaScript file. You can also directly download the JavaScript file from GitHub.
Using the rtc.io rtc JS library
So, I hope you enjoy rtc.io, I hope you enjoy my slides and the large collection of interesting links inside the deck, and of course: enjoy WebRTC! Thanks to Damon, Jeff, Cathy, Pete and Nathan: you're an awesome team!
On a side note, I was really excited to meet the author of browserify, James Halliday (@substack), at WDCNZ. His talk on "building your own tools" seemed to take me back to the times when everything was done on the command line. I think James is using Node and the Web in a way that would appeal to a Linux kernel developer. Fascinating!
-
Methods For Retaining State
I jump around between projects. A lot. Over the years, I have employed various methods for retaining state or context as I switch to a different project. Here’s a quick survey and a general classification of their effectiveness.
Good
- Evernote: This is a cloud-based note-taking service that has a web client, Mac and Windows clients, and clients for just about every mobile platform out there. I have an account and access it via the web interface as well as the Windows, iOS, and Android clients. I really like it.
Okay
- Series of text files: I have been doing this for a very long time. I have many little note-filled directories here and there that are consistently migrated to new machines but generally forgotten about. This isn't a terrible method but can be unwieldy when you work on lots of different machines. I'm still tracking down all these directories and importing them into Evernote.
Bad
- Layout of desktop windows: I have a habit of working on one project in a set of windows on one desktop space and another project in a second set of windows in another space, etc. Oh, this makes me shudder just thinking about it, mostly because I live in constant fear of a power failure or some other inadvertent reset (darn you, default-config'd Windows Update) that wipes the state clean (sure, all of the work might have been saved, but I was relying on those windows being set up in just the right manner to remind me of all the things I was working on). These days, I force myself to reboot at least once a week so I can't get too deep into this habit. When it's time to change projects, I write up exactly what I was doing and where I left off and stick it in Evernote.
- Open browser windows: I guess it's common to have many, many tabs open in one's web browser in this day and age. Like many, I use open tabs as a stack of items to read. The state problem comes when a few of the open tabs represent TODO items. Then I start living in fear that the browser might crash or be restarted in an unexpected way, and I struggle to recall what the 3-5 important TODO items were that I had open in separate tabs (on top of a stack of less important items). Again, I try to shut down the browser frequently in order to break this tendency. TODO items are better filed in Evernote.
- Unsaved data in a text editor: Okay, this is just sloppy on my part: shoving temporary data into a text editor window thinking it's supremely ephemeral. The problem comes when it's linked to one of the many tasks on my desktop that might be bumped down a few priority levels; when I finally return to the context-free data, I'm at a loss to explain what it's for. Evernote gets it, once more, with a more thorough description of what was going on.
- Email inbox: I make an effort to ensure that my email inbox has the fewest number of messages possible. Once things are dealt with, they get filed away elsewhere. This implies that things in my inbox require action. Some things have a habit of hanging around, though. Longer items now get described in better detail and filed away in Evernote.
- Classic paper: Thanks to Derek in the comments for reminding me of this one. Paper is a reliable standby, but it can get unwieldy when Post-It Notes litter your work area. Further, it can be problematic when you have multiple physical work areas.
- Shell history: Another method I rely on entirely too often. This is when I count on a recipe of command-line incantations living on in the history buffer of my Unix shell (generally Bash). What sequence of git commands allowed me to do XYZ? Let's check the shell history; I sure hope it's still in there.
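When I do go digging, it is usually some variation of this (a sketch, assuming Bash with its default history settings):

# scan the current session's history for past git incantations
history | grep git

# or search older sessions in the history file
grep 'git rebase' ~/.bash_history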
Conclusion
I guess what I'm trying to say here is that I really like Evernote. If you have similar troubles with retaining state, try it out. I hear there are many other services similar to it with slightly varying feature sets (people rave about Microsoft OneNote). So there are plenty of options and something out there is surely a fit.
Evernote has a free tier and a premium tier. For my meager note-taking needs, I don't come anywhere close to the free tier's limit, but I decided to pay for a premium subscription simply because I feel like I derive so much value from the service.
One downside, however, is that I seem to be doing a lot less blogging since I got on Evernote earlier this year (though it is where I author most of these posts nowadays; I especially like that I have a notebook labeled "Posted" whose incrementing count reminds me that I am getting some stuff out there). I originally started this blog as a sort of technical journal in order to organize notes and projects in a central location. It's strange to think that if Evernote existed in 2005, I might never have had a reason to start this blog.
-
ffmpeg status & quality / cuda (CPU/GPU)
18 June 2024, by cocco
ffmpeg, am I doing it right?

So much time has passed since I started using ffmpeg to convert clips on my home web server. Now that mp4 (h264 & aac) is the current overall standard (it works on every console, smartphone, smart TV and PC), I decided to convert my old clips from various digital cameras to this new container/codec combination:


- less space at the same quality
- compatibility
- support for tags (Subler for Mac)
After some research I opted for ffmpeg for various reasons:


- command line (I made a simple web interface with default settings, which I execute with PHP's exec)
- the quality/size ratio
I read that many expensive video conversion programs cannot handle low-bitrate video properly. I also tested some of them; personally, I either could not find the proper export settings or was not impressed by the results. Some had fixed default export settings, and most produced lower video quality at the same file size. ffmpeg lets me set -crf (usually 18-24) and -preset (veryslow, fast, ...), which allows me to reduce the file size drastically while maintaining the same visible quality.


That said, I am using the veryslow preset (there is also placebo, but the final video file is only about 1% smaller).


And here is the command I use:


# build the argument list as a bash array so each option can carry a comment
args=(
  -y                          # overwrite the output file if it exists

  -i INPUTFILE                # replace with the input file

  -metadata title=THETITLE    # set a nice title, visible on modern devices
  -metadata date=THEDATE      # set the date, visible on modern devices

  -c:v libx264                # use the h264 codec
  -crf 21                     # try different values between 18 and 26
  -preset veryslow            # placebo, slow, fast; ultrafast == big file
  -tune film                  # tune it a little
  -pix_fmt yuv420p            # preferred on most modern devices
  -profile:v main             # preferred on most modern devices
  -level 3.1                  # preferred on most modern devices
  -refs 4                     # preferred on most modern devices

  -c:a libfdk_aac             # use aac
  -metadata:s:a language=eng  # set a language, visible on modern devices
  -b:a 128k                   # audio bitrate; 128k aac is like 192k mp3
  -ar 48000                   # sample rate, 44100 ... whatever
  -ac 2                       # audio channels
  -movflags +faststart        # move the metadata to the front of the file so it starts playing faster
)
ffmpeg "${args[@]}" OUTPUTFILE



Some camcorder clips (m2ts) already contain compatible avc/h264 video, so I just copy the stream.

Some have AC-3/Dolby surround audio. I convert the audio but keep the AC-3 as a second audio track by mapping the ffmpeg streams. This lets me watch the mp4 in browsers and on mobile devices while keeping the surround sound for playback on some TVs, advanced media players and devices like the Apple TV.
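A minimal sketch of that mapping, assuming the source's first audio stream is the AC-3 track (file names are placeholders):

# copy the h264 video untouched, encode the first audio track to aac,
# and keep the original ac3 as a second track for surround-capable players
ffmpeg -i INPUT.m2ts \
  -map 0:v:0 -map 0:a:0 -map 0:a:0 \
  -c:v copy \
  -c:a:0 libfdk_aac -b:a:0 128k \
  -c:a:1 copy \
  -movflags +faststart \
  OUTPUT.mp4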

Not that I'm unhappy with the speed (I'm using quad cores), but I recently read again about CUDA and OpenCL, and there is also the simple fact that I haven't used any converter other than ffmpeg in a long time.


Is ffmpeg (with the settings I use) a good converter for keeping the same video quality as the source while reducing the space occupied by an average of 30-40%?


Is GPU conversion really that bad (CUDA, testing a GTX 970)?
It would be nice to speed up conversions by using both the GPU and the CPU, but as far as I understand they cannot work together? And GPU-only encoding means a drastic quality loss: the CPU is more precise, while the GPU is faster but its calculations are too imprecise, from what I read. So expensive software programs use CUDA only for preview purposes, right?
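For what it's worth, ffmpeg builds can offload H.264 encoding to NVENC, the dedicated hardware encoder on cards like the GTX 970 (it sits apart from the CUDA cores); a minimal sketch, assuming an ffmpeg build compiled with NVENC support and placeholder file names:

# hardware-encode on the GPU's NVENC block instead of libx264 on the CPU
ffmpeg -y -i INPUTFILE \
  -c:v h264_nvenc -preset slow -b:v 4M \
  -c:a copy \
  OUTPUTFILE

NVENC is driven by rate control rather than -crf, which is part of why its quality per bit usually trails a slow libx264 encode.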


Is ffmpeg, or any other software, capable of combined CPU+GPU encoding?
I really don't remember where, but I read that ffmpeg is not a good video converter.


I'm really happy with the size/quality, I gained an average of 30% in space with no visible quality loss. With some extra parameters i can adjust some really old analog videos that are deinterlaced in a really bad way.




Maybe I could gain more size/quality with other software?




Note: I like ffmpeg. It's free and it has a command line, so I can create my own interface with PHP, HTML & JS and use it from various machines without needing to install it on every device I use. I upload the iDevice clips directly to the ffmpeg server.


EDIT:

@talonmies ... cuda tag removed:


http://www.nvidia.com/object/cuda_home_new.html




CUDA® is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU). With millions of CUDA-enabled GPUs sold to date, software developers, scientists and researchers are finding broad-ranging uses for GPU computing with CUDA.




isn't cuda the programming model that a theoretical ffmpeg library should support to handle GPU encoding on nvidia cards like the gtx 970 ?? like the badaboom software http://www.geforce.com/games-applications/pc-applications/badaboom-media-converter.