
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (38)
-
MediaSPIP version 0.1 Beta
16 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all of the software dependencies on the server.
If you want to use this archive for a "farm mode" installation, you will also need to make other modifications (...)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...)
-
Update from version 0.1 to 0.2
24 June 2013, by
An explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.3. What is new?
Software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
On other sites (1723)
-
DXGI Desktop Duplication: encoding frames to send them over the network
31 August 2018, by prazuber
I'm trying to write an app which will capture a video stream of the screen and send it to a remote client. I've found out that the best way to capture a screen on Windows is to use the DXGI Desktop Duplication API (available since Windows 8). Microsoft provides a neat sample which streams duplicated frames to screen. Now, I've been wondering what is the easiest, but still relatively fast, way to encode those frames and send them over the network.
The frames come from AcquireNextFrame with a surface that contains the desktop bitmap, plus metadata describing the dirty and move regions that were updated. From here, I have a couple of options:
- Extract a bitmap from the DirectX surface and then use an external library like ffmpeg to encode the series of bitmaps to H.264 and send it over RTSP. While straightforward, I fear that this method will be too slow, as it isn't taking advantage of any native Windows methods. Converting a D3D texture to an ffmpeg-compatible bitmap seems like unnecessary work.
- From this answer: convert the D3D texture to an IMFSample and use Media Foundation's SinkWriter to encode the frame. I found a tutorial on video encoding, but I haven't yet found a way to immediately get the encoded frame and send it, instead of dumping all of them to a video file.
Since I haven't done anything like this before, I'm asking if I'm moving in the right direction. In the end, I want a simple, preferably low-latency, desktop capture video stream that I can view from a remote device.
Also, I'm wondering if I can make use of the dirty and move regions provided by Desktop Duplication. Instead of encoding the whole frame, I could send them over the network and do the processing on the client side, but this means that my client would have to have DirectX 11.1 or higher available, which is impossible if I want to stream to a mobile platform.
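For the Media Foundation route mentioned above, the duplicated texture can be wrapped into an IMFSample without a CPU readback. A minimal sketch of that step, assuming MFStartup has been called, the sink writer was configured for H.264 with a D3D device manager, and `texture`, `sinkWriter`, and `streamIndex` already exist (all names here are illustrative, not a complete program):

```cpp
// Sketch: wrap a duplicated D3D11 texture in an IMFSample and hand it
// to an IMFSinkWriter. Error handling is minimal for brevity.
#include <mfapi.h>
#include <mfreadwrite.h>
#include <d3d11.h>

HRESULT EncodeDuplicatedFrame(ID3D11Texture2D *texture,
                              IMFSinkWriter *sinkWriter,
                              DWORD streamIndex,
                              LONGLONG timestamp100ns,
                              LONGLONG duration100ns)
{
    IMFMediaBuffer *buffer = nullptr;
    IMFSample *sample = nullptr;

    // Wrap the DXGI surface in a media buffer (no copy to system memory).
    HRESULT hr = MFCreateDXGISurfaceBuffer(__uuidof(ID3D11Texture2D),
                                           texture, 0, FALSE, &buffer);
    if (SUCCEEDED(hr)) hr = MFCreateSample(&sample);
    if (SUCCEEDED(hr)) hr = sample->AddBuffer(buffer);
    if (SUCCEEDED(hr)) hr = sample->SetSampleTime(timestamp100ns);
    if (SUCCEEDED(hr)) hr = sample->SetSampleDuration(duration100ns);
    // The sink writer drives the H.264 encoder MFT internally.
    if (SUCCEEDED(hr)) hr = sinkWriter->WriteSample(streamIndex, sample);

    if (sample) sample->Release();
    if (buffer) buffer->Release();
    return hr;
}
```

To get encoded frames back for network transmission instead of writing a file, one option is to drive the H.264 encoder MFT directly via ProcessInput/ProcessOutput rather than going through the sink writer, which is file-oriented.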
-
How to use scale filter in doc/examples/transcoding.c
19 January 2018, by siods333333
I tried to replace the "null" filter with the "scale=320x180" filter in the doc/examples/transcoding.c file, but it only resulted in this error message:
[libx264 @ 03303ee0] Input picture width (640) is greater than stride (256)
Error occurred: Generic error in an external library
What is wrong? Note that init_filters happens after open_output_file, so the encoder is already set up before it even knows the resolution of the output. How do I do this properly?
Look at this piece of code; I don't get what it's talking about, filters aren't going to magically set the correct resolution:
/* In this example, we transcode to same properties (picture size,
* sample rate etc.). These properties can be changed for output
* streams easily using filters */
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
    enc_ctx->height = dec_ctx->height;
    enc_ctx->width = dec_ctx->width;
I found this in ffmpeg.c itself:
enc_ctx->width = av_buffersink_get_w(ost->filter->filter);
enc_ctx->height = av_buffersink_get_h(ost->filter->filter);
-
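Following the ffmpeg.c lines quoted above, one way to resolve the ordering problem is to configure the filter graph first and open the encoder only afterwards, taking its geometry from the buffersink. A sketch against the libavfilter API (the function names come from the FFmpeg headers; the surrounding transcoding.c plumbing is assumed):

```c
/* Sketch: after the filter graph for this stream has been configured
 * (init_filters), read the output properties from the buffersink and
 * apply them to the encoder context *before* avcodec_open2(), so the
 * encoder is opened with the post-scale 320x180 geometry instead of
 * the decoder's input size. */
#include <libavcodec/avcodec.h>
#include <libavfilter/buffersink.h>

static int open_encoder_with_filtered_size(AVCodecContext *enc_ctx,
                                           AVFilterContext *buffersink_ctx,
                                           const AVCodec *encoder)
{
    /* Dimensions as they come out of "scale=320x180", not the input. */
    enc_ctx->width     = av_buffersink_get_w(buffersink_ctx);
    enc_ctx->height    = av_buffersink_get_h(buffersink_ctx);
    enc_ctx->pix_fmt   = av_buffersink_get_format(buffersink_ctx);
    enc_ctx->time_base = av_buffersink_get_time_base(buffersink_ctx);

    return avcodec_open2(enc_ctx, encoder, NULL);
}
```

This mirrors what ffmpeg.c does: the encoder's parameters are derived from the filter graph's sink, not copied from the decoder, so any size-changing filter is reflected automatically.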
Background version of uploaded video creation in Elixir with Arc and FFmpeg
12 December 2017, by Levit
I tried to make video uploading work in an Elixir Phoenix project using Arc. I need different size versions of the video, so I used FFmpeg to do this:
def transform(:large, {file, scope}) do
{:ffmpeg, fn(input, output) -> "-i #{input} -c:a copy -s 1920x1080 -f mp4 #{output}" end, :mp4}
end
However, it takes a lot of time to create these versions, even for small videos. So I decided to move version creation into the background: I want Arc to upload the video, then return a response with the uploaded original while the other versions are being created.
I was hoping to find such an option at https://github.com/stavro/arc but I didn't succeed. I only found "To disable asynchronous processing, add @async false to your upload definition", but I don't want to disable it, right?
I tried to use Exq https://github.com/akira/exq for background processing, but I didn't manage to use it in the uploader.
Could anybody tell me how this should be done properly, or give me some dirty-hack advice to make it work? Thanks.
I tried Task, as was advised in the comments, but I am not sure how to use it in this case. When I try
{:ffmpeg, fn(input, output) -> Task.async(fn -> "-i #{input} -c:a copy -s 1920x1080 -f mp4 #{output}" end) end, :mp4}
or
{:ffmpeg, Task.async(fn -> fn(input, output) -> "-i #{input} -c:a copy -s 1920x1080 -f mp4 #{output}" end end), :mp4}
I got "protocol String.Chars not implemented for %Task".
When I try
{:ffmpeg, Task.async(fn(input, output) -> "-i #{input} -c:a copy -s 1920x1080 -f mp4 #{output}" end), :mp4}
I got "#Function<20.83953603/2 in MyWebSite.Content.transform/2> with arity 2 called with no arguments". I tried to pass the function as an argument with "&", but that fails as well.
My uploader:
defmodule MyWebSite.Content do
  use Arc.Definition
  use Arc.Ecto.Definition

  @acl :public_read
  @versions [:original, :huge]

  def transform(:huge, {file, scope}) do
    {:ffmpeg, Task.async(fn(input, output) -> "-i #{input} -c:a copy -s 1920x1080 -f mp4 #{output}" end), :mp4}
  end

  def s3_object_headers(version, {file, scope}) do
    [timeout: 3_000_00, content_type: Plug.MIME.path(file.file_name)]
  end
end
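Note that Arc's transform tuple expects a plain two-argument function returning the ffmpeg argument string; wrapping it in Task.async gives Arc a %Task{} struct (hence the String.Chars error) or an arity mismatch. One pattern that matches what the question asks, sketched outside of Arc entirely (the module names, supervisor, and paths below are illustrative assumptions, not Arc API): let Arc store only :original synchronously, then start the slow transcode under a Task.Supervisor after the upload succeeds, so the response is not blocked.

```elixir
# Sketch: build the large version in the background after responding
# with the original upload. MyWebSite.TaskSupervisor is assumed to be
# started in the application's supervision tree.
defmodule MyWebSite.VideoVersions do
  def create_large_async(input_path, output_path) do
    Task.Supervisor.start_child(MyWebSite.TaskSupervisor, fn ->
      # Runs outside the request process; a crash here does not
      # take down the caller.
      System.cmd("ffmpeg", [
        "-i", input_path,
        "-c:a", "copy",
        "-s", "1920x1080",
        "-f", "mp4", output_path
      ])
      # Attach or upload the finished file here, e.g. by storing it
      # as the :huge version once transcoding completes.
    end)
  end
end
```

The controller can call create_large_async/2 right after the Arc upload of :original returns, which gives the "respond now, transcode later" behaviour described above.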