
Other articles (27)
-
List of compatible distributions
26 April 2011
The table below lists the Linux distributions compatible with MediaSPIP's automated installation script.

Distribution name    Version name            Version number
Debian               Squeeze                 6.x.x
Debian               Wheezy                  7.x.x
Debian               Jessie                  8.x.x
Ubuntu               The Precise Pangolin    12.04 LTS
Ubuntu               The Trusty Tahr         14.04
If you want to help us improve this list, you can give us access to a machine whose distribution is not listed above, or send us the fixes needed to add (...)
-
Support for all types of media
10 April 2011
Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others); audio (MP3, Ogg, Wav and others); video (Avi, MP4, Ogv, mpg, mov, wmv and others); or textual content, code and more (open office, microsoft office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was built specifically for MediaSPIP: its look is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
On other sites (4772)
-
avcodec/thread: Don't use ThreadFrame when unnecessary
6 February 2022, by Andreas Rheinhardt

avcodec/thread: Don't use ThreadFrame when unnecessary

The majority of frame-threaded decoders (mainly the intra-only ones)
need exactly one part of ThreadFrame: the AVFrame. They need
neither the owners nor the progress, yet they had to use ThreadFrame
because ff_thread_(get|release)_buffer() required it.

This commit changes this and makes these functions work with ordinary
AVFrames; the decoders that need the extra fields for progress
use ff_thread_(get|release)_ext_buffer(), which work exactly
as ff_thread_(get|release)_buffer() used to.

This also avoids some unnecessary allocations of progress AVBuffers,
namely for H.264 and HEVC film-grain frames: these frames are not
used for synchronization and therefore don't need a ThreadFrame.

Also move the ThreadFrame structure as well as ff_thread_ref_frame()
to threadframe.h, the header for frame-threaded decoders with
inter-frame dependencies.

Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

- [DH] libavcodec/aic.c
- [DH] libavcodec/alac.c
- [DH] libavcodec/av1dec.c
- [DH] libavcodec/av1dec.h
- [DH] libavcodec/bitpacked_dec.c
- [DH] libavcodec/cfhd.c
- [DH] libavcodec/cllc.c
- [DH] libavcodec/cri.c
- [DH] libavcodec/dnxhddec.c
- [DH] libavcodec/dvdec.c
- [DH] libavcodec/dxtory.c
- [DH] libavcodec/dxv.c
- [DH] libavcodec/dxva2_av1.c
- [DH] libavcodec/error_resilience.h
- [DH] libavcodec/exr.c
- [DH] libavcodec/ffv1.h
- [DH] libavcodec/ffv1dec.c
- [DH] libavcodec/flacdec.c
- [DH] libavcodec/fraps.c
- [DH] libavcodec/h264_picture.c
- [DH] libavcodec/h264_slice.c
- [DH] libavcodec/h264dec.c
- [DH] libavcodec/h264dec.h
- [DH] libavcodec/hapdec.c
- [DH] libavcodec/hevc_refs.c
- [DH] libavcodec/hevcdec.c
- [DH] libavcodec/hevcdec.h
- [DH] libavcodec/hqx.c
- [DH] libavcodec/huffyuvdec.c
- [DH] libavcodec/jpeg2000dec.c
- [DH] libavcodec/lagarith.c
- [DH] libavcodec/lcldec.c
- [DH] libavcodec/libopenjpegdec.c
- [DH] libavcodec/magicyuv.c
- [DH] libavcodec/mdec.c
- [DH] libavcodec/mpegpicture.h
- [DH] libavcodec/notchlc.c
- [DH] libavcodec/nvdec_av1.c
- [DH] libavcodec/photocd.c
- [DH] libavcodec/pixlet.c
- [DH] libavcodec/proresdec2.c
- [DH] libavcodec/pthread_frame.c
- [DH] libavcodec/rv34.c
- [DH] libavcodec/sheervideo.c
- [DH] libavcodec/takdec.c
- [DH] libavcodec/thread.h
- [DH] libavcodec/threadframe.h
- [DH] libavcodec/tiff.c
- [DH] libavcodec/tta.c
- [DH] libavcodec/utils.c
- [DH] libavcodec/utvideodec.c
- [DH] libavcodec/v210dec.c
- [DH] libavcodec/v410dec.c
- [DH] libavcodec/vaapi_av1.c
- [DH] libavcodec/vble.c
- [DH] libavcodec/vp8.h
- [DH] libavcodec/vp9shared.h
- [DH] libavcodec/webp.c
- [DH] libavcodec/ylc.c
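To make the shape of this change concrete, here is a deliberately simplified, self-contained C++ mock of the call pattern the commit describes. None of this is FFmpeg's actual code: the real ff_thread_get_buffer()/ff_thread_get_ext_buffer() live in libavcodec's private headers and take an AVCodecContext. The mock only illustrates why intra-only decoders can now get by with a plain frame, while decoders with inter-frame dependencies keep the wrapper that carries synchronization state.

#include <cstdio>

// Mock stand-ins, for illustration only. The real AVFrame/ThreadFrame and
// the ff_thread_* functions are FFmpeg-internal (libavcodec/thread.h and
// libavcodec/threadframe.h) and look different.
struct AVFrame { int width = 0, height = 0; };

// ThreadFrame = an AVFrame plus the synchronization state that only
// decoders with inter-frame dependencies actually need.
struct ThreadFrame {
    AVFrame *f = nullptr;
    const void *owner[2] = { nullptr, nullptr }; // mock owner contexts
    int progress = -1;                           // mock decode-progress marker
};

// After the commit: the ordinary getter takes a plain AVFrame and
// allocates no progress state at all.
int thread_get_buffer(AVFrame *f) {
    f->width = 16;
    f->height = 16;              // stands in for the real buffer allocation
    return 0;
}

// The "_ext_" variant behaves the way the old getter used to:
// it also sets up the progress/synchronization fields in the wrapper.
int thread_get_ext_buffer(ThreadFrame *tf) {
    int ret = thread_get_buffer(tf->f);
    if (ret < 0)
        return ret;
    tf->progress = 0;            // stands in for the progress AVBuffer
    return 0;
}

int main() {
    AVFrame plain;               // intra-only decoder: the frame is enough
    thread_get_buffer(&plain);

    AVFrame ref;                 // inter-dependent decoder: frame + sync state
    ThreadFrame tf;
    tf.f = &ref;
    thread_get_ext_buffer(&tf);

    std::printf("plain %dx%d, ext progress=%d\n",
                plain.width, plain.height, tf.progress);
    return 0;
}

In the real tree, the intra-only decoders in the file list above switched to the plain-AVFrame path, while decoders with inter-frame dependencies (H.264, HEVC, VP8/9 and others) moved to the _ext_ variant declared in threadframe.h.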
-
using libav instead of ffmpeg
21 January 2015, by n00bie

I want to stream video over HTTP. I am using Ogg (Theora + Vorbis). I have a sender and a receiver, and I can run them from the command line:
Sender:
ffmpeg -f video4linux2 -s 320x240 -i /dev/mycam -codec:v libtheora -qscale:v 5 -f ogg http://127.0.0.1:8080
Receiver:
sudo gst-launch-0.10 tcpserversrc port=8080 ! oggdemux ! theoradec ! autovideosink
The sender sends both audio and video, but the receiver plays only the video.
This works fine, but now I want to stop using the ffmpeg binary and use only the libav* libraries instead.
Here's my class for streaming:
class VCORE_LIBRARY_EXPORT VVideoWriter : private boost::noncopyable
{
public:
VVideoWriter( );
~VVideoWriter( );
bool openFile( const std::string& name,
int fps, int videoBitrate, int width, int height,
int audioSampleRate, bool stereo, int audioBitrate );
void close( );
bool writeVideoFrame( const uint8_t* image, int64_t timestamp );
bool writeAudioFrame( const int16_t* data, int64_t timestamp );
int audioFrameSize( ) const;
private:
AVFrame *m_videoFrame;
AVFrame *m_audioFrame;
AVFormatContext *m_context;
AVStream *m_videoStream;
AVStream *m_audioStream;
int64_t m_startTime;
};

Initialization:
bool VVideoWriter::openFile( const std::string& name,
int fps, int videoBitrate, int width, int height,
int audioSampleRate, bool stereo, int audioBitrate )
{
if( ! m_context )
{
// initialize the AV context
m_context = avformat_alloc_context( );
assert( m_context );
// get the output format
m_context->oformat = av_guess_format( "ogg", name.c_str( ), nullptr );
if( m_context->oformat )
{
strcpy( m_context->filename, name.c_str( ) );
auto codecID = AV_CODEC_ID_THEORA;
auto codec = avcodec_find_encoder( codecID );
if( codec )
{
m_videoStream = avformat_new_stream( m_context, codec );
assert( m_videoStream );
// initialize codec
auto codecContext = m_videoStream->codec;
bool globalHeader = m_context->oformat->flags & AVFMT_GLOBALHEADER;
if( globalHeader )
codecContext->flags |= CODEC_FLAG_GLOBAL_HEADER;
codecContext->codec_id = codecID;
codecContext->codec_type = AVMEDIA_TYPE_VIDEO;
codecContext->width = width;
codecContext->height = height;
codecContext->time_base.den = fps;
codecContext->time_base.num = 1;
codecContext->bit_rate = videoBitrate;
codecContext->pix_fmt = PIX_FMT_YUV420P;
codecContext->flags |= CODEC_FLAG_QSCALE;
codecContext->global_quality = FF_QP2LAMBDA * 5;
int res = avcodec_open2( codecContext, codec, nullptr );
if( res >= 0 )
{
auto codecID = AV_CODEC_ID_VORBIS;
auto codec = avcodec_find_encoder( codecID );
if( codec )
{
m_audioStream = avformat_new_stream( m_context, codec );
assert( m_audioStream );
// initialize codec
auto codecContext = m_audioStream->codec;
bool globalHeader = m_context->oformat->flags & AVFMT_GLOBALHEADER;
if( globalHeader )
codecContext->flags |= CODEC_FLAG_GLOBAL_HEADER;
codecContext->codec_id = codecID;
codecContext->codec_type = AVMEDIA_TYPE_AUDIO;
codecContext->sample_fmt = AV_SAMPLE_FMT_FLTP;
codecContext->bit_rate = audioBitrate;
codecContext->sample_rate = audioSampleRate;
codecContext->channels = stereo ? 2 : 1;
codecContext->channel_layout = stereo ? AV_CH_LAYOUT_STEREO : AV_CH_LAYOUT_MONO;
res = avcodec_open2( codecContext, codec, nullptr );
if( res >= 0 )
{
// try to open the file
if( avio_open( &m_context->pb, m_context->filename, AVIO_FLAG_WRITE ) >= 0 )
{
m_audioFrame->nb_samples = codecContext->frame_size;
m_audioFrame->format = codecContext->sample_fmt;
m_audioFrame->channel_layout = codecContext->channel_layout;
boost::posix_time::ptime time_t_epoch( boost::gregorian::date( 1970, 1, 1 ) );
m_context->start_time_realtime = ( boost::posix_time::microsec_clock::universal_time( ) - time_t_epoch ).total_microseconds( );
m_startTime = -1;
// write the header
if( avformat_write_header( m_context, nullptr ) >= 0 )
{
return true;
}
else std::cerr << "VVideoWriter: failed to write video header" << std::endl;
}
else std::cerr << "VVideoWriter: failed to open video file " << name << std::endl;
}
else std::cerr << "VVideoWriter: failed to initialize audio codec" << std::endl;
}
else std::cerr << "VVideoWriter: requested audio codec is not supported" << std::endl;
}
else std::cerr << "VVideoWriter: failed to initialize video codec" << std::endl;
}
else std::cerr << "VVideoWriter: requested video codec is not supported" << std::endl;
}
else std::cerr << "VVideoWriter: requested video format is not supported" << std::endl;
avformat_free_context( m_context );
m_context = nullptr;
m_videoStream = nullptr;
m_audioStream = nullptr;
}
return false;
}

Writing video:
bool VVideoWriter::writeVideoFrame( const uint8_t* image, int64_t timestamp )
{
if( m_context ) {
auto codecContext = m_videoStream->codec;
avpicture_fill( reinterpret_cast<AVPicture*>( m_videoFrame ),
const_cast<uint8_t*>( image ),
codecContext->pix_fmt, codecContext->width, codecContext->height );
AVPacket pkt;
av_init_packet( & pkt );
pkt.data = nullptr;
pkt.size = 0;
int gotPacket = 0;
if( ! avcodec_encode_video2( codecContext, &pkt, m_videoFrame, & gotPacket ) ) {
if( gotPacket == 1 ) {
pkt.stream_index = m_videoStream->index;
int res;
{
pkt.pts = AV_NOPTS_VALUE;
pkt.dts = AV_NOPTS_VALUE;
pkt.stream_index = m_videoStream->index;
res = av_write_frame( m_context, &pkt );
}
av_free_packet( & pkt );
return res >= 0;
}
assert( ! pkt.size );
return true;
}
}
return false;
}

Writing audio (for now I write dummy test audio):
bool VVideoWriter::writeAudioFrame( const int16_t* data, int64_t timestamp )
{
if( m_context ) {
auto codecContext = m_audioStream->codec;
int buffer_size = av_samples_get_buffer_size(nullptr, codecContext->channels, codecContext->frame_size, codecContext->sample_fmt, 0);
float *samples = (float*)av_malloc(buffer_size);
for (int i = 0; i < buffer_size / sizeof(float); i++)
samples[i] = 1000. * sin((double)i/2.);
int ret = avcodec_fill_audio_frame( m_audioFrame, codecContext->channels, codecContext->sample_fmt, (const uint8_t*)samples, buffer_size, 0);
assert( ret >= 0 );
(void)(ret);
AVPacket pkt;
av_init_packet( & pkt );
pkt.data = nullptr;
pkt.size = 0;
int gotPacket = 0;
if( ! avcodec_encode_audio2( codecContext, &pkt, m_audioFrame, & gotPacket ) ) {
if( gotPacket == 1 ) {
pkt.stream_index = m_audioStream->index;
int res;
{
pkt.pts = AV_NOPTS_VALUE;
pkt.dts = AV_NOPTS_VALUE;
pkt.stream_index = m_audioStream->index;
res = av_write_frame( m_context, &pkt );
}
av_free_packet( & pkt );
return res >= 0;
}
assert( ! pkt.size );
return true;
}
return false;
}
return false;
}

Here's a test example (I send video from the webcam plus dummy audio):
class TestVVideoWriter : public sigslot::has_slots<>
{
public:
TestVVideoWriter( ) :
m_fileOpened( false )
{
}
void onCapturedFrame( cricket::VideoCapturer*, const cricket::CapturedFrame* capturedFrame )
{
if( m_fileOpened ) {
m_writer.writeVideoFrame( reinterpret_cast<const uint8_t*>( capturedFrame->data ),
capturedFrame->time_stamp / 1000 );
m_writer.writeAudioFrame( nullptr , 0 );
} else {
m_fileOpened = m_writer.openFile( "http://127.0.0.1:8080",
15, 40000, capturedFrame->width, capturedFrame->height,
16000, false, 64000 );
}
}
public:
vcore::VVideoWriter m_writer;
bool m_fileOpened;
};
TestVVideoWriter testWriter;
BOOST_AUTO_TEST_SUITE(TEST_VIDEO_WRITER)
BOOST_AUTO_TEST_CASE(testWritingVideo)
{
cricket::LinuxDeviceManager deviceManager;
std::vector<cricket::Device> devs;
if( deviceManager.GetVideoCaptureDevices( &devs ) ) {
if( devs.size( ) ) {
boost::shared_ptr<cricket::VideoCapturer> camera( deviceManager.CreateVideoCapturer( devs[ 0 ] ) );
if( camera ) {
cricket::VideoFormat format( 320, 240, cricket::VideoFormat::FpsToInterval( 30 ),
camera->GetSupportedFormats( )->front( ).fourcc );
cricket::VideoFormat best;
if( camera->GetBestCaptureFormat( format, &best ) ) {
camera->SignalFrameCaptured.connect( &testWriter, &TestVVideoWriter::onCapturedFrame );
if( camera->Start( best ) != cricket::CS_FAILED ) {
boost::this_thread::sleep( boost::posix_time::seconds( 10 ) );
return;
}
}
}
}
}
std::cerr << "Problem has occured with camera" << std::endl;
}
BOOST_AUTO_TEST_SUITE_END() // TEST_VIDEO_WRITER
But in this case, gstreamer starts playing the video only when my test program stops executing (after 10 seconds here). That does not suit me; I want gstreamer to start playing immediately after my test program starts.
Could someone help me?
P.S. Sorry for my English.
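An editorial note on the symptom above, offered as a guess rather than a confirmed diagnosis: every packet is written with pts/dts forced to AV_NOPTS_VALUE, and nothing ever flushes libavformat's I/O buffer, so the muxed Ogg pages can sit in the output buffer until the context is closed. Below is a minimal sketch of the kind of change that usually helps. av_rescale_q() and avio_flush() are standard libavutil/libavformat calls; m_startTime, m_videoStream and m_context reuse the question's members, and the timestamp parameter is assumed to be in microseconds, as in the question.

// Sketch: give each packet a real timestamp and push the bytes out
// immediately instead of leaving them in the AVIO buffer.
AVRational microseconds = { 1, 1000000 };

if( m_startTime < 0 )
    m_startTime = timestamp;                 // remember t0 at the first frame

pkt.stream_index = m_videoStream->index;
pkt.pts = av_rescale_q( timestamp - m_startTime,
                        microseconds,
                        m_videoStream->time_base );
pkt.dts = pkt.pts;                           // Theora produces no B-frames here

if( av_write_frame( m_context, &pkt ) >= 0 )
    avio_flush( m_context->pb );             // force the Ogg pages onto the wire

Whether this alone removes the ten-second delay also depends on how aggressively the Ogg muxer batches pages, so treat it as a starting point rather than a guaranteed fix.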
-
Exceeded GA's 10M hits data limit, now what?
21 June 2019, by Joselyn Khor