
Media (91)
-
DJ Z-trip - Victory Lap: The Obama Mix Pt. 2
15 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Matmos - Action at a Distance
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
DJ Dolores - Oslodum 2004 (includes (cc) sample of “Oslodum” by Gilberto Gil)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Danger Mouse & Jemini - What U Sittin’ On? (starring Cee Lo and Tha Alkaholiks)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Cornelius - Wataridori 2
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
The Rapture - Sister Saviour (Blackstrobe Remix)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (97)
-
Requesting the creation of a channel
12 March 2010
Depending on how the platform is configured, the user may have two different ways of requesting the creation of a channel. The first is at the moment of registration; the second, after registration, by filling in a request form.
Both methods ask for the same information and work in much the same way: the prospective user must fill in a series of form fields that first of all give the administrators information about (...)
-
Accepted formats
28 January 2010
The following commands give information about the formats and codecs supported by the local ffmpeg installation (a short example of running these checks is sketched after this list):
ffmpeg -codecs
ffmpeg -formats
Accepted input video formats
This list is not exhaustive; it highlights the main formats used: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
Possible output video formats
To begin with, we (...)
-
Automated installation script of MediaSPIP
25 avril 2011, parTo overcome the difficulties mainly due to the installation of server side software dependencies, an "all-in-one" installation script written in bash was created to facilitate this step on a server with a compatible Linux distribution.
You must have SSH access to your server and a root account to use it; the script will install the required dependencies. Contact your provider if you do not have that.
The documentation of the use of this installation script is available here.
The code of this (...)
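As a short illustration of the codec and format checks mentioned in the "Accepted formats" entry above, the following shell sketch narrows the output of those two commands down to a few entries; the grep patterns are only examples and not part of the original article.

# List every codec and container format known to the local ffmpeg build,
# then filter for the ones discussed above.
ffmpeg -codecs | grep -iE 'h264|theora|wmv'
ffmpeg -formats | grep -iE 'flv|m4v'
# ffmpeg -encoders and ffmpeg -decoders give the same codec information split by direction.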
On other sites (6791)
-
Thread safety of FFmpeg when using av_lockmgr_register
12 August 2013, by Stocastico
My application uses FFmpeg to read video streams. So far, I have ensured thread safety by defining my own global lock and looking for all the methods inside the FFmpeg libraries that are not thread safe.
This makes the code a bit messy, so while looking for better ideas I found this answer, but apparently I couldn't make use of the suggestions.
I tried testing it in my own environment, but I always get a critical heap error. Here's the test code:

// Functor run by each boost::thread: opens a video and reads 500 frames from it.
class TestReader
{
public:
    TestReader( std::string sVid )
    {
        m_sVid = sVid;
        m_cVidPtr.reset( new VideoReader() );
    }
    ~TestReader()
    {}
    void operator() ()
    {
        readVideoThread();
    }
private:
    int readVideoThread()
    {
        m_cVidPtr->init( m_sVid.c_str() );
        MPEGFrame::pointer cFramePtr;
        for ( int i = 0; i < 500; i++ )
        {
            cFramePtr = m_cVidPtr->getNextFrame();
        }
        return 0;
    }
    boost::shared_ptr<VideoReader> m_cVidPtr;
    std::string m_sVid;
};
/*****************************************************************************/
// Lock manager callback registered with av_lockmgr_register(): FFmpeg calls it
// to create, obtain, release and destroy the mutexes it needs internally.
int lockMgrCallback(void** cMutex, enum AVLockOp op)
{
    if (nullptr == cMutex)
        return -1;

    switch (op)
    {
    case AV_LOCK_CREATE:
    {
        *cMutex = nullptr;
        boost::mutex* m = new boost::mutex();
        *cMutex = static_cast<void*>(m);
        break;
    }
    case AV_LOCK_OBTAIN:
    {
        boost::mutex* m = static_cast<boost::mutex*>(*cMutex);
        m->lock();
        break;
    }
    case AV_LOCK_RELEASE:
    {
        boost::mutex* m = static_cast<boost::mutex*>(*cMutex);
        m->unlock();
        break;
    }
    case AV_LOCK_DESTROY:
    {
        boost::mutex* m = static_cast<boost::mutex*>(*cMutex);
        delete m;
        break;
    }
    default:
        break;
    }
    return 0;
}
int testFFmpegMultiThread( std::string sVideo )
{
    if ( ::av_lockmgr_register( &lockMgrCallback ) )
    {
        std::cout << "Could not initialize lock manager!" << std::endl;
        return -1;
    }
    TestReader c1(sVideo);
    TestReader c2(sVideo);
    boost::thread t1( c1 );
    boost::thread t2( c2 );
    t1.join();
    t2.join();
    return 0;
}
The classes VideoReader and MPEGFrame are just wrappers and have always worked perfectly in single-threaded scenarios, or in a multi-threaded scenario managed using my own global lock.
Am I missing something obvious? Can anybody point me to some working code? Thanks in advance.
-
Rails 5 - Video streaming using Carrierwave uploaded video size constraint on the server
21 March 2020, by Milind
I have a working Rails 5 app that uses Reactjs for the frontend and React dropzone uploader to upload video files using carrierwave.
So far, what is working great is listed below:
- The user can upload videos, and the videos are encoded based on the selection made by the user: HLS or MPEG-DASH for online streaming.
- Once a video is uploaded to the server, it starts streaming it by:
  - First, copying the video into the /tmp folder.
  - Running a bash script that uses ffmpeg to transcode the uploaded video with predefined commands, producing the new video fragments inside the /tmp folder.
  - Once the background job is done, all the videos are uploaded to AWS S3, which is how the default carrierwave works.
- So, when multiple videos are uploaded, they are all copied into the /tmp folder, then transcoded and eventually uploaded to S3.
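For reference, here is a minimal sketch of the kind of ffmpeg HLS segmenting command such a background bash script might run; the paths, codecs, bitrates and segment length are illustrative assumptions, not the actual script used by this application.

# Transcode the uploaded file to H.264/AAC and cut it into HLS segments plus a playlist;
# everything written below /tmp/hls_output would then be uploaded to S3.
ffmpeg -i /tmp/uploaded_video.mp4 \
  -c:v libx264 -preset veryfast -b:v 1500k \
  -c:a aac -b:a 128k \
  -hls_time 6 -hls_playlist_type vod \
  -hls_segment_filename "/tmp/hls_output/segment_%03d.ts" \
  /tmp/hls_output/playlist.m3u8

Because the segments are written to local disk before the S3 upload, peak disk and memory usage still grows with the number of concurrent jobs, which is exactly the concern in the questions below.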
My questions, where I am looking for some help, are listed below:
1- The above process is good for small videos, but what if there are many concurrent users uploading 2 GB videos? I know this will kill my server, as my /tmp folder will keep growing and consume all the memory, making it die hard. How can I allow concurrent video uploads without affecting my server's memory consumption?
2- Is there a way I can upload the videos directly to AWS S3 first, and then use another proxy server/child application to encode the videos from S3: download them to the child server, convert them and upload them again to the destination? But this is almost the same thing, only done in the cloud, where memory consumption can be on demand but will not be cost-effective.
3- Is there some easy and cost-effective way by which I can upload large videos, transcode them and upload them to AWS S3, without affecting my server's memory? Am I missing some technical architecture here?
4- How do Youtube/Netflix work? I know they do the same thing in a smart way, but can someone help me improve this?
Thanks in advance.
-
RaspberryPi HLS streaming with nginx and ffmpeg; v4l2 error: ioctl(VIDIOC_STREAMON): Protocol error
22 January 2021, by Mirco Weber
I'm trying to set up baby monitoring with a Raspberry Pi (Model 4B, 4 GB RAM) and an ordinary webcam (with integrated mic).
I followed this tutorial: https://github.com/DeTeam/webcam-stream/blob/master/Tutorial.md


In short:


- I installed and configured an nginx server with the rtmp module enabled.
- I installed ffmpeg built with this configuration: --enable-gpl --enable-nonfree --enable-mmal --enable-omx-rpi
- I tried to stream ;)
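A quick way to double-check that this build really exposes the Raspberry Pi hardware encoder used further down is to inspect its configuration and encoder list; the grep pattern here is just an example.

# Print the configure flags the ffmpeg binary was built with,
# and confirm that the h264_omx encoder is available.
ffmpeg -buildconf
ffmpeg -encoders | grep omx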








The nginx configuration seems to be working (streaming sometimes works, the server starts without complications, and when the server is up and running, the web page is displayed).
The ffmpeg configuration seems to be fine as well, since streaming sometimes works...


I was trying a couple of different ffmpeg commands; all of them sometimes work and sometimes result in an error.
The command looks like the following:


ffmpeg -re
-f v4l2
-i /dev/video0
-f alsa
-ac 1
-thread_queue_size 4096
-i hw:CARD=Camera,DEV=0
-profile:v high
-level:v 4.1
-vcodec h264_omx
-r 10
-b:v 512k
-s 640x360
-acodec aac
-strict
-2
-ac 2
-ab 32k
-ar 44100
-f flv
rtmp://localhost/show/stream;



Note: I rearranged the code to make it easier to read. In the terminal, it is all in one line.
Note: There is no difference when using -f video4linux2 instead of -f v4l2


The camera is recognized by the system:


pi@raspberrypi:~ $ v4l2-ctl --list-devices
bcm2835-codec-decode (platform:bcm2835-codec):
 /dev/video10
 /dev/video11
 /dev/video12

bcm2835-isp (platform:bcm2835-isp):
 /dev/video13
 /dev/video14
 /dev/video15
 /dev/video16

HD Web Camera: HD Web Camera (usb-0000:01:00.0-1.2):
 /dev/video0
 /dev/video1



When only using -i /dev/video0, audio transmission never worked.
The output of arecord -L was:

pi@raspberrypi:~ $ arecord -L
default
 Playback/recording through the PulseAudio sound server
null
 Discard all samples (playback) or generate zero samples (capture)
jack
 JACK Audio Connection Kit
pulse
 PulseAudio Sound Server
usbstream:CARD=Headphones
 bcm2835 Headphones
 USB Stream Output
sysdefault:CARD=Camera
 HD Web Camera, USB Audio
 Default Audio Device
front:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 Front speakers
surround21:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 2.1 Surround output to Front and Subwoofer speakers
surround40:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 4.0 Surround output to Front and Rear speakers
surround41:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 4.1 Surround output to Front, Rear and Subwoofer speakers
surround50:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 5.0 Surround output to Front, Center and Rear speakers
surround51:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 5.1 Surround output to Front, Center, Rear and Subwoofer speakers
surround71:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
iec958:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 IEC958 (S/PDIF) Digital Audio Output
dmix:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 Direct sample mixing device
dsnoop:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 Direct sample snooping device
hw:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 Direct hardware device without any conversions
plughw:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 Hardware device with all software conversions
usbstream:CARD=Camera
 HD Web Camera
 USB Stream Output



That's why I added -i hw:CARD=Camera,DEV=0.
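A quick way to confirm that this ALSA device name works outside of ffmpeg is to record a few seconds with arecord directly and play them back; the file name and duration in this sketch are arbitrary.

# Record 5 seconds of mono 44.1 kHz audio from the webcam's microphone, then play it back.
arecord -D hw:CARD=Camera,DEV=0 -f S16_LE -r 44100 -c 1 -d 5 /tmp/mic_test.wav
aplay /tmp/mic_test.wav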

As mentioned above, it worked very well a couple of times with this configuration and these commands.
But very often, I get the following error message when starting to stream:


pi@raspberrypi:~ $ ffmpeg -re -f video4linux2 -i /dev/video0 -f alsa -ac 1 -thread_queue_size 4096 -i hw:CARD=Camera,DEV=0 -profile:v high -level:v 4.1 -vcodec h264_omx -r 10 -b:v 512k -s 640x360 -acodec aac -strict -2 -ac 2 -ab 32k -ar 44100 -f flv rtmp://localhost/show/stream
ffmpeg version N-100673-g553eb07737 Copyright (c) 2000-2021 the FFmpeg developers
 built with gcc 8 (Raspbian 8.3.0-6+rpi1)
 configuration: --enable-gpl --enable-nonfree --enable-mmal --enable-omx-rpi --extra-ldflags=-latomic
 libavutil 56. 63.101 / 56. 63.101
 libavcodec 58.117.101 / 58.117.101
 libavformat 58. 65.101 / 58. 65.101
 libavdevice 58. 11.103 / 58. 11.103
 libavfilter 7. 96.100 / 7. 96.100
 libswscale 5. 8.100 / 5. 8.100
 libswresample 3. 8.100 / 3. 8.100
 libpostproc 55. 8.100 / 55. 8.100
[video4linux2,v4l2 @ 0x2ea4600] ioctl(VIDIOC_STREAMON): Protocol error
/dev/video0: Protocol error



And when I'm switching to /dev/video1 (since this was also in the output of v4l2-ctl --list-devices), I get the following error message:

pi@raspberrypi:~ $ ffmpeg -re -f v4l2 -i /dev/video1 -f alsa -ac 1 -thread_queue_size 4096 -i hw:CARD=Camera,DEV=0 -profile:v high -level:v 4.1 -vcodec h264_omx -r 10 -b:v 512k -s 640x360 -acodec aac -strict -2 -ac 2 -ab 32k -ar 44100 -f flv rtmp://localhost/show/stream
ffmpeg version N-100673-g553eb07737 Copyright (c) 2000-2021 the FFmpeg developers
 built with gcc 8 (Raspbian 8.3.0-6+rpi1)
 configuration: --enable-gpl --enable-nonfree --enable-mmal --enable-omx-rpi --extra-ldflags=-latomic
 libavutil 56. 63.101 / 56. 63.101
 libavcodec 58.117.101 / 58.117.101
 libavformat 58. 65.101 / 58. 65.101
 libavdevice 58. 11.103 / 58. 11.103
 libavfilter 7. 96.100 / 7. 96.100
 libswscale 5. 8.100 / 5. 8.100
 libswresample 3. 8.100 / 3. 8.100
 libpostproc 55. 8.100 / 55. 8.100
[video4linux2,v4l2 @ 0x1aa4610] ioctl(VIDIOC_G_INPUT): Inappropriate ioctl for device
/dev/video1: Inappropriate ioctl for device



When using the video0 input, the webcam's access LED is constantly on. When using video1, it is not.
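Modern UVC webcams typically register a second /dev/video node for metadata rather than for video capture, which would explain the "Inappropriate ioctl for device" on /dev/video1. Listing the formats of each node makes this visible; this sketch assumes v4l2-ctl from the v4l-utils package is installed.

# The node used for actual video capture lists pixel formats such as YUYV or MJPG;
# a metadata node does not.
v4l2-ctl -d /dev/video0 --list-formats-ext
v4l2-ctl -d /dev/video1 --list-formats-ext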

After hours and days of googling and tears and whiskey, for the sake of my liver, my marriage and my physical and mental health, I'm very sincerely asking for your help...
What the f**k is happening and what can I do to make it work???


Thanks everybody :)


UPDATE 1:

- Using the full path to ffmpeg does not change anything...
- /dev/video0 and /dev/video1 have access rights for everybody.
- sudo ffmpeg ... does not change anything as well.
- The problem seems to be at an "early stage": stripping the command down to ffmpeg -i /dev/video0 results in the same problem.










UPDATE 2:

It seems that everything is working when I first start another application that needs access to the webcam and then ffmpeg...
It might be some driver issue, but when I'm looking for loaded modules with lsmod, there is absolutely no change before and after I start the application...
Any help still appreciated...

UPDATE 3:

I was checking the output of dmesg.

When I started the first application, I received this message:

uvcvideo: Failed to query (GET_DEF) UVC control 12 on unit 2: -32 (exp. 4).

And when I started ffmpeg, nothing happened, but everything worked...
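The VIDIOC_STREAMON "Protocol error" together with the uvcvideo GET_DEF complaint points at the UVC negotiation between the kernel driver and the camera rather than at ffmpeg itself. Two things commonly tried in this situation are sketched below: forcing an explicit input format and size so ffmpeg does not have to probe the device, and reloading the uvcvideo module with the FIX_BANDWIDTH quirk. Both are assumptions to experiment with, not a confirmed fix, and the MJPEG format is only valid if the camera offers it (see the v4l2-ctl --list-formats-ext check above).

# 1) Ask the v4l2 input for a fixed format instead of letting ffmpeg renegotiate it.
ffmpeg -f v4l2 -input_format mjpeg -video_size 640x360 -framerate 10 -i /dev/video0 -t 5 /tmp/probe.mkv

# 2) Reload the uvcvideo driver with the bandwidth quirk (0x80 = UVC_QUIRK_FIX_BANDWIDTH).
sudo rmmod uvcvideo
sudo modprobe uvcvideo quirks=0x80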