
Other articles (34)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match the chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...)
-
Support for all media types
10 April 2011
Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database.
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document will automatically be attached; objet, the type of object to which (...)
On other sites (7519)
-
Get image from direct show device [duplicate]
13 August 2014, by user2663781
This question already has an answer here:
-
Capturing image from webcam in java?
15 answers
I want to get an image from a DirectShow device in Java, but I can't find a good solution for it.
My device is screen-capture-recorder, which is easy to capture with ffmpeg, so my first idea was to use Xuggler.
I tried this code:

String driverName = "dshow";
String deviceName = "screen-capture-recorder";

// Let's make sure that we can actually convert video pixel formats.
if (!IVideoResampler.isSupported(IVideoResampler.Feature.FEATURE_COLORSPACECONVERSION))
    throw new RuntimeException("you must install the GPL version of Xuggler (with IVideoResampler support) for this demo to work");

// Create a Xuggler container object
IContainer container = IContainer.make();

// Tell Xuggler about the device format
IContainerFormat format = IContainerFormat.make();
if (format.setInputFormat(driverName) < 0)
    throw new IllegalArgumentException("couldn't open webcam device: " + driverName);

// devices, unlike most files, need to have parameters set in order
// for Xuggler to know how to configure them; for a webcam, these
// parameters make sense
IMetaData params = IMetaData.make();
params.setValue("framerate", "30/1");
params.setValue("video_size", "320x240");

// Open up the container
int retval = container.open(deviceName, IContainer.Type.READ, format,
        false, true, params, null);

But when I try to open the device (by its device name) it does not work:
0 [main] ERROR org.ffmpeg - [dshow @ 05168820] Malformed dshow input string.
0 [main] DEBUG com.xuggle.xuggler - Could not open output url: screen-capture-recorder (../../../../../../../csrc/com/xuggle/xuggler/Container.cpp:436)
Exception in thread "main" java.lang.IllegalArgumentException: could not open file: screen-capture-recorder; Error: Input/output error
    at DisplayWebcamVideo.main(DisplayWebcamVideo.java:99)
Java Result: 1

After searching a bit I found DSJ, but it is for non-commercial use only and a registered version is required, so it is impossible for me to use.
I also found LTI-CIVIL, but it could not detect the "screen-capture-recorder" device.
I tried FMJ and JMF, but they do not find the device either. I tried VLCj, but the container with the video must be open if I want to get an image, and that is not what I need.
I tried webcam-recorder (github.sarxos.webcam); it detects the device, but I get this error when I try to open it:
#
# A fatal error has been detected by the Java Runtime Environment:
#
# EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x5faf910d, pid=2552, tid=9780
#
I am a bit stuck now and don't know how to solve this problem. Can someone help me?
Or point me to a simple DLL that I can use through JNA to get an image from a DirectShow device...
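For reference, the "Malformed dshow input string" message is produced by FFmpeg's dshow demuxer, which Xuggler wraps: it expects the input string in the form video=<device name> (optionally combined with :audio=<device name>), not a bare device name. A minimal sketch of the equivalent open call against the raw FFmpeg C API, with an illustrative helper name and option values, not taken from the original post:

#include <libavdevice/avdevice.h>
#include <libavformat/avformat.h>
#include <libavutil/dict.h>

/* Hypothetical helper: open the screen-capture-recorder device through
   libavdevice's dshow demuxer. Note the "video=" prefix on the input
   string; a bare device name is rejected as malformed. */
static int open_capture_device(AVFormatContext **fmt_ctx)
{
    avdevice_register_all();                      /* registers the dshow demuxer */

    AVInputFormat *dshow = av_find_input_format("dshow");
    if (!dshow)
        return -1;                                /* dshow support not available */

    AVDictionary *opts = NULL;
    av_dict_set(&opts, "framerate", "30", 0);     /* same parameters as the Xuggler attempt */
    av_dict_set(&opts, "video_size", "320x240", 0);

    int ret = avformat_open_input(fmt_ctx, "video=screen-capture-recorder",
                                  dshow, &opts);
    av_dict_free(&opts);
    return ret;                                   /* 0 on success, negative AVERROR otherwise */
}

Under that assumption, the Xuggler container.open call would be given "video=screen-capture-recorder" rather than the bare name.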
-
how to play audio from a video file in c#
10 August 2014, by Ivan Lisovich
To read the video file I use the ffmpeg libraries (http://ffmpeg.zeranoe.com/builds/), build ffmpeg-2.2.3-win32-dev.7z.
Managed C++ code to read the video file:

void VideoFileReader::Read( String^ fileName, System::Collections::Generic::List<System::Drawing::Bitmap^>^ imageData, System::Collections::Generic::List<array<byte>^>^ audioData )
{
char *nativeFileName = ManagedStringToUnmanagedUTF8Char(fileName);
libffmpeg::AVFormatContext *pFormatCtx = NULL;
libffmpeg::AVCodec *pCodec = NULL;
libffmpeg::AVCodec *aCodec = NULL;
libffmpeg::av_register_all();
if(libffmpeg::avformat_open_input(&pFormatCtx, nativeFileName, NULL, NULL) != 0)
{
throw gcnew System::Exception( "Couldn't open file" );
}
if(libffmpeg::avformat_find_stream_info(pFormatCtx, NULL) < 0)
{
throw gcnew System::Exception( "Couldn't find stream information" );
}
libffmpeg::av_dump_format(pFormatCtx, 0, nativeFileName, 0);
int videoStream = libffmpeg::av_find_best_stream(pFormatCtx, libffmpeg::AVMEDIA_TYPE_VIDEO, -1, -1, &pCodec, 0);
int audioStream = libffmpeg::av_find_best_stream(pFormatCtx, libffmpeg::AVMEDIA_TYPE_AUDIO, -1, -1, &aCodec, 0);
if(videoStream == -1)
{
throw gcnew System::Exception( "Didn't find a video stream" );
}
if(audioStream == -1)
{
throw gcnew System::Exception( "Didn't find a audio stream" );
}
libffmpeg::AVCodecContext *aCodecCtx = pFormatCtx->streams[audioStream]->codec;
libffmpeg::avcodec_open2(aCodecCtx, aCodec, NULL);
m_channels = aCodecCtx->channels;
m_sampleRate = aCodecCtx->sample_rate;
m_bitsPerSample = aCodecCtx->bits_per_coded_sample;
libffmpeg::AVCodecContext *pCodecCtx = pFormatCtx->streams[videoStream]->codec;
if(libffmpeg::avcodec_open2(pCodecCtx, pCodec, NULL) < 0)
{
throw gcnew System::Exception( "Could not open codec" );
}
m_width = pCodecCtx->width;
m_height = pCodecCtx->height;
m_framesCount = pFormatCtx->streams[videoStream]->nb_frames;
if (pFormatCtx->streams[videoStream]->r_frame_rate.den == 0)
{
m_frameRate = 25;
}
else
{
m_frameRate = pFormatCtx->streams[videoStream]->r_frame_rate.num / pFormatCtx->streams[videoStream]->r_frame_rate.den;
if (m_frameRate == 0)
{
m_frameRate = 25;
}
}
libffmpeg::AVFrame *pFrame = libffmpeg::av_frame_alloc();
int numBytes = libffmpeg::avpicture_get_size(libffmpeg::PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);
libffmpeg::uint8_t *buffer = (libffmpeg::uint8_t *)libffmpeg::av_malloc(numBytes*sizeof(libffmpeg::uint8_t));
struct libffmpeg::SwsContext *sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height, libffmpeg::PIX_FMT_RGB24, SWS_BILINEAR, NULL, NULL, NULL);
libffmpeg::AVPacket packet;
libffmpeg::AVFrame *filt_frame = libffmpeg::av_frame_alloc();
while(av_read_frame(pFormatCtx, &packet) >= 0)
{
if(packet.stream_index == videoStream)
{
System::Drawing::Bitmap ^bitmap = nullptr;
int frameFinished;
avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
if(frameFinished)
{
bitmap = gcnew System::Drawing::Bitmap( pCodecCtx->width, pCodecCtx->height, System::Drawing::Imaging::PixelFormat::Format24bppRgb );
System::Drawing::Imaging::BitmapData^ bitmapData = bitmap->LockBits( System::Drawing::Rectangle( 0, 0, pCodecCtx->width, pCodecCtx->height ), System::Drawing::Imaging::ImageLockMode::ReadOnly, System::Drawing::Imaging::PixelFormat::Format24bppRgb );
libffmpeg::uint8_t* ptr = reinterpret_cast<libffmpeg::uint8_t*>( static_cast<void*>( bitmapData->Scan0 ) );
libffmpeg::uint8_t* srcData[4] = { ptr, NULL, NULL, NULL };
int srcLinesize[4] = { bitmapData->Stride, 0, 0, 0 };
libffmpeg::sws_scale( sws_ctx, (libffmpeg::uint8_t const * const *)pFrame->data, pFrame->linesize, 0, pCodecCtx->height, srcData, srcLinesize );
bitmap->UnlockBits( bitmapData );
}
imageData->Add(bitmap);
}
else if(packet.stream_index == audioStream)
{
int b = av_dup_packet(&packet);
if(b >= 0) {
int audio_pkt_size = packet.size;
libffmpeg::uint8_t* audio_pkt_data = packet.data;
while(audio_pkt_size > 0)
{
int got_frame = 0;
int len1 = libffmpeg::avcodec_decode_audio4(aCodecCtx, pFrame, &got_frame, &packet);
if(len1 < 0)
{
audio_pkt_size = 0;
break;
}
audio_pkt_data += len1;
audio_pkt_size -= len1;
if (got_frame)
{
int data_size = libffmpeg::av_samples_get_buffer_size ( NULL, aCodecCtx->channels, pFrame->nb_samples, aCodecCtx->sample_fmt, 1 );
array<byte>^ managedBuf = gcnew array<byte>(data_size);
System::IntPtr iptr = System::IntPtr( pFrame->data[0] );
System::Runtime::InteropServices::Marshal::Copy( iptr, managedBuf, 0, data_size );
audioData->Add(managedBuf);
}
}
}
}
libffmpeg::av_free_packet(&packet);
}
libffmpeg::av_free(buffer);
libffmpeg::av_free(pFrame);
libffmpeg::avcodec_close(pCodecCtx);
libffmpeg::avformat_close_input(&pFormatCtx);
delete [] nativeFileName;
}
This function returns my images in the imageData list and the audio in the audioData list.
I can draw the images normally in my C# code, but I cannot play the audio data.
I tried playing the audio with the NAudio library, but I hear crackling in the speakers instead of sound.
C# code playing the audio:

var WaveFormat = new WaveFormat(m_sampleRate, 16, m_channels);
var _waveProvider = new BufferedWaveProvider(WaveFormat) { DiscardOnBufferOverflow = true, BufferDuration = TimeSpan.FromMilliseconds(_fileReader.Length) };
var _waveOut = new DirectSoundOut();
_waveOut.Init(_waveProvider);
_waveOut.Play();
foreach (var data in audioData)
{
_waveProvider.AddSamples(data, 0, data.Length);
}

What am I doing wrong?
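One common cause of crackling with this decode path, offered here as an assumption since the post never prints aCodecCtx->sample_fmt, is that avcodec_decode_audio4 often produces planar and/or float samples (for example AV_SAMPLE_FMT_FLTP for AAC), while WaveFormat(m_sampleRate, 16, m_channels) tells NAudio to expect interleaved 16-bit PCM; copying only pFrame->data[0] then hands NAudio data in the wrong layout. A minimal libswresample sketch (C, helper name illustrative) that converts each decoded frame to packed S16 before it is copied into the managed buffer:

#include <libavcodec/avcodec.h>
#include <libavutil/channel_layout.h>
#include <libavutil/samplefmt.h>
#include <libswresample/swresample.h>

/* Hypothetical helper: convert one decoded audio frame to interleaved
   signed 16-bit samples so that a 16-bit PCM WaveFormat receives the
   layout it announces. Returns the number of bytes written to out. */
static int frame_to_s16(SwrContext **swr, AVCodecContext *actx,
                        AVFrame *frame, uint8_t *out, int out_capacity)
{
    if (!*swr) {
        int64_t layout = actx->channel_layout
                       ? actx->channel_layout
                       : av_get_default_channel_layout(actx->channels);
        *swr = swr_alloc_set_opts(NULL,
                layout, AV_SAMPLE_FMT_S16, actx->sample_rate,   /* output */
                layout, actx->sample_fmt,  actx->sample_rate,   /* input  */
                0, NULL);
        if (!*swr || swr_init(*swr) < 0)
            return -1;
    }

    uint8_t *dst[1] = { out };
    int max_samples = out_capacity / (2 * actx->channels);      /* 2 bytes per S16 sample */
    int converted = swr_convert(*swr, dst, max_samples,
                                (const uint8_t **)frame->extended_data,
                                frame->nb_samples);
    return converted < 0 ? converted : converted * 2 * actx->channels;
}

With the samples converted this way, the bytes handed to BufferedWaveProvider match the 16-bit interleaved format its WaveFormat declares.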
-
Get RGB values from AVPicture and change to grey-scale in FFMPEG
22 October 2014, by user2742299
The main goal of my code is to change the RGB values of the AVPicture in FFmpeg.
I have been able to get the image data "data[0]" by following the article: http://blog.tomaka17.com/2012/03/libavcodeclibavformat-tutorial/
I would like to know how I can access the 3 bytes of pic.data[0], which is in RGB format. I have been trying to access pic.data[i][j] with a for loop in a 2D-matrix fashion, but the jth element goes beyond 3.
Any guidance in this regard will be helpful.
The code is here:
AVPicture pic;
avpicture_alloc(&pic, PIX_FMT_RGB24, mpAVFrameInput->width,mpAVFrameInput->height);
auto ctxt = sws_getContext(mpAVFrameInput->width, mpAVFrameInput->height, static_cast<PixelFormat>(mpAVFrameInput->format),
mpAVFrameInput->width, mpAVFrameInput->height, PIX_FMT_RGB24, SWS_BILINEAR, nullptr, nullptr, nullptr);
if (ctxt == nullptr)
throw std::runtime_error("Error while calling sws_getContext");
sws_scale(ctxt, mpAVFrameInput->data, mpAVFrameInput->linesize, 0, mpAVFrameInput->height, pic.data,
pic.linesize);
for (int i = 0; i < (mpAVFrameInput->height-1); i++) {
for (int j = 0; j < (mpAVFrameInput->width-1); j++) {
printf("\n value: %d",pic.data[0][j]);
}
}
The pseudocode I have in mind is:

For each pixel in image {
    Red = pic.data[i][j].pixel.RED;
    Green = pic.data[i][j].pixel.GREEN;
    Blue = pic.data[i][j].pixel.BLUE;
    GRAY = (Red + Green + Blue) / 3;
    Red = GRAY;
    Green = GRAY;
    Blue = GRAY;
    Save Frame;
}

I am quite new to FFmpeg, therefore any guidance and help will be highly appreciated.
Many Thanks
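For what it is worth, with PIX_FMT_RGB24 the converted picture is packed rather than a 2D array of pixel structs: pic.data[0] is a single byte buffer in which row y begins at y * pic.linesize[0] (the stride can be wider than width * 3) and pixel x occupies the three bytes R, G, B at offset 3 * x inside that row. A minimal sketch of the grey-scale pass under that layout, with a hypothetical helper name and using the width and height from the post:

#include <stdint.h>
#include <libavcodec/avcodec.h>   /* AVPicture, for this sketch */

/* Hypothetical helper: grey-scale a packed RGB24 picture in place. */
static void grey_scale_rgb24(AVPicture *pic, int width, int height)
{
    for (int y = 0; y < height; y++) {
        /* row y starts at data[0] + y * linesize[0]; the stride may exceed width * 3 */
        uint8_t *row = pic->data[0] + y * pic->linesize[0];
        for (int x = 0; x < width; x++) {
            uint8_t *px = row + 3 * x;                        /* 3 bytes per pixel: R, G, B */
            uint8_t grey = (uint8_t)((px[0] + px[1] + px[2]) / 3);
            px[0] = px[1] = px[2] = grey;                     /* write the average back */
        }
    }
}

Called as grey_scale_rgb24(&pic, mpAVFrameInput->width, mpAVFrameInput->height) right after sws_scale, this also shows why the posted loop's j index is not bounded by 3: data[0] is indexed by byte, not by colour channel.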