
Other articles (72)
-
Other interesting software
12 April 2011
We don't claim to be the only ones doing what we do ... and we certainly don't claim to be the best either ... What we do, we simply try to do well, and better and better...
The following list corresponds to software that more or less tries to do what MediaSPIP does, or that MediaSPIP more or less tries to do likewise; it doesn't really matter ...
We don't know them and we haven't tried them, but you might want to take a look.
Videopress
Website: (...) -
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...) -
Keeping control of your media in your hands
13 April 2011
The vocabulary used on this site and around MediaSPIP in general aims to avoid reference to Web 2.0 and the companies that profit from media-sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)
On other sites (5935)
-
ffplay cannot play more than one song
5 February 2020, by Bernie gach
I have taken the ffplay.c file from http://ffmpeg.org/doxygen/trunk/ffplay_8c-source.html and re-edited it into a .cpp file to embed in my Win32 GUI application. I have made the following changes to it.
- made the int main function into a member function, as follows, so I can pass in the HWND to embed the player:
void Ffplay::play_song(string file, HWND parent, bool* successfull)
{
int flags;
VideoState* is;
input_filename = file;
/* register all codecs, demux and protocols */
#if CONFIG_AVDEVICE
avdevice_register_all();
#endif
//avformat_network_init();
//check whether the filename is valid
if (input_filename.empty())
{
logger.log(logger.LEVEL_ERROR, "filename %s is not valid\n", file);
return;
}
if (display_disable)
{
video_disable = 1;
}
flags = SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER;
if (audio_disable)
flags &= ~SDL_INIT_AUDIO;
else
{
/* Try to work around an occasional ALSA buffer underflow issue when the
* period size is NPOT due to ALSA resampling by forcing the buffer size. */
if (!SDL_getenv("SDL_AUDIO_ALSA_SET_BUFFER_SIZE"))
SDL_setenv("SDL_AUDIO_ALSA_SET_BUFFER_SIZE", "1", 1);
}
if (display_disable)
flags &= ~SDL_INIT_VIDEO;
SDL_SetMainReady();
if (SDL_Init(flags))
{
logger.log(logger.LEVEL_ERROR, "Could not initialize SDL - %s\n", SDL_GetError());
logger.log(logger.LEVEL_ERROR, "(Did you set the DISPLAY variable?)\n");
return;
}
//Initialize optional fields of a packet with default values.
//Note, this does not touch the data and size members, which have to be initialized separately.
av_init_packet(&flush_pkt);
flush_pkt.data = (uint8_t*)&flush_pkt;
if (!display_disable)
{
int flags = SDL_WINDOW_HIDDEN;
if (alwaysontop)
#if SDL_VERSION_ATLEAST(2,0,5)
flags |= SDL_WINDOW_ALWAYS_ON_TOP;
#else
logger.log(logger.LEVEL_INFO, "SDL version doesn't support SDL_WINDOW_ALWAYS_ON_TOP. Feature will be inactive.\n");
#endif
if (borderless)
flags |= SDL_WINDOW_BORDERLESS;
else
flags |= SDL_WINDOW_RESIZABLE;
SDL_InitSubSystem(flags);
ShowWindow(parent, true);
//window = SDL_CreateWindow(program_name, SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, default_width, default_height, flags);
window = SDL_CreateWindowFrom(parent);
SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "linear");
if (window) {
renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
if (!renderer)
{
logger.log(logger.LEVEL_ERROR, "Failed to initialize a hardware accelerated renderer: %s\n", SDL_GetError());
renderer = SDL_CreateRenderer(window, -1, 0);
}
if (renderer)
{
if (!SDL_GetRendererInfo(renderer, &renderer_info))
{
logger.log(logger.LEVEL_INFO, "Initialized %s renderer.\n", renderer_info.name);
}
}
}
if (!window || !renderer || !renderer_info.num_texture_formats)
{
logger.log(logger.LEVEL_ERROR, "Failed to create window or renderer: %s\n", SDL_GetError());
return;
}
}
is = stream_open(input_filename.c_str(), file_iformat);
if (!is)
{
logger.log(logger.LEVEL_ERROR, "Failed to initialize VideoState!\n");
return;
}
//the song is playing now
*successfull = true;
event_loop(is);
//the song has quit;
*successfull = false;
}

- changed the callback functions, as the original static C-style ones couldn't be used directly from C++, e.g.:
void Ffplay::static_sdl_audio_callback(void* opaque, Uint8* stream, int len)
{
static_cast<Ffplay*>(opaque)->sdl_audio_callback(opaque, stream, len);
}
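For context, here is a hedged sketch (not the poster's code) of how such a trampoline is typically wired up where ffplay's audio_open() fills in the SDL_AudioSpec; the freq/channels variables are assumptions, and the important lines are the callback and userdata assignments:
SDL_AudioSpec wanted_spec = {};
wanted_spec.freq = wanted_sample_rate;          // assumed variable name
wanted_spec.format = AUDIO_S16SYS;
wanted_spec.channels = wanted_nb_channels;      // assumed variable name
wanted_spec.samples = 512;
wanted_spec.callback = &Ffplay::static_sdl_audio_callback;
wanted_spec.userdata = this;                    // recovered by the static_cast above
SDL_AudioSpec obtained;
audio_dev = SDL_OpenAudioDevice(nullptr, 0, &wanted_spec, &obtained,
                                SDL_AUDIO_ALLOW_FREQUENCY_CHANGE | SDL_AUDIO_ALLOW_CHANNELS_CHANGE);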
The cleanup does not change from the original file; it closes the audio and the SDL framework:
void Ffplay::do_exit(VideoState* is)
{
abort = true;
if(is)
{
stream_close(is);
}
if (renderer)
SDL_DestroyRenderer(renderer);
if (window)
SDL_DestroyWindow(window);
#if CONFIG_AVFILTER
av_freep(&vfilters_list);
#endif
avformat_network_deinit();
SDL_Quit();
}

I call the function as follows from the main GUI:
ft=std::async(launch::async, &Menu::play_song, this, songs_to_play.at(0));
The Menu::play_song function is:
void Menu::play_song(wstring song_path)
{
ready_to_play_song = false;
OutputDebugString(L"\nbefore song\n");
using std::future;
using std::async;
using std::launch;
string input{ song_path.begin(),song_path.end() };
Ffplay ffplay;
ffplay.play_song(input, h_sdl_window, &song_opened);
OutputDebugString(L"\nafter song\n");
ready_to_play_song = true;
}

THE PROBLEM is I can only play one song. If I call the Menu::play_song function again, the sound is missing and the video/cover art is occasionally missing as well. It seems some resources are not being released, or something like that. I have localised the problem to this function:
int Ffplay::packet_queue_get(PacketQueue* q, AVPacket* pkt, int block, int* serial)
{
MyAVPacketList* pkt1;
int ret;
int count=0;
SDL_LockMutex(q->mutex);
for (;;)
{
if (q->abort_request)
{
ret = -1;
break;
}
pkt1 = q->first_pkt;
if (pkt1) {
q->first_pkt = pkt1->next;
if (!q->first_pkt)
q->last_pkt = NULL;
q->nb_packets--;
q->size -= pkt1->pkt.size + sizeof(*pkt1);
q->duration -= pkt1->pkt.duration;
*pkt = pkt1->pkt;
if (serial)
*serial = pkt1->serial;
av_free(pkt1);
ret = 1;
break;
}
else if (!block) {
ret = 0;
break;
}
else
{
logger.log(logger.LEVEL_INFO, "packet_queue before");
SDL_CondWait(q->cond, q->mutex);
logger.log(logger.LEVEL_INFO, "packet_queue after");
}
}
SDL_UnlockMutex(q->mutex);
return ret;
}

The call to SDL_CondWait(q->cond, q->mutex); never returns.
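Since ffplay.c was written as a one-shot program, one thing worth checking (a hedged sketch, not a confirmed fix) is that every piece of per-run state that do_exit() tears down or flips is reset before the next play_song() call; a stale flag can leave the read thread idle while the audio callback blocks in packet_queue_get() on a condition nobody will ever signal:
// Hedged sketch with assumed member names: reset per-run state before
// reusing the Ffplay object for another song.
void Ffplay::reset_for_next_song()
{
    abort = false;               // do_exit() set this for the previous song
    window = nullptr;            // destroyed by do_exit(); play_song() recreates it
    renderer = nullptr;
    renderer_info = SDL_RendererInfo{};
    // do_exit() also calls SDL_Quit() and avformat_network_deinit(); either
    // drop those calls while the application keeps running, or make sure
    // play_song() really re-initialises SDL and the network layer each time.
}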
-
Start of video is not labeled as "0" in QuickTime Video from GoPro
26 March 2020, by John Terragnoli
I'm trying to combine four GoPro videos into a single video and then rotate it 90 degrees. However, the time scales at the bottom of the videos are all wrong. The videos are 17 minutes and 42 seconds long, but the beginning time is labeled as 5:15:20:32 and the ending time as 5:33:01:32. It just looks really weird and I'd like to fix it. After I use ffmpeg to rotate and concatenate the videos, the problem persists. Could it possibly be fixed with ExifTool?
ffmpeg -safe 0 -f concat -i list.txt -vcodec copy -acodec copy merged_videos.MP4
ffmpeg -i input.mov -vf "transpose=1" output.mov
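The 5:15:20:32 start time normally comes from the timecode (tmcd) track the GoPro writes, visible as "Time Code" and "Other Format : tmcd" in the ExifTool dump below. A hedged sketch of two things that may reset it while remuxing (untested against these particular files; the output names are placeholders):
# Reset the starting timecode to zero while stream-copying.
ffmpeg -i merged_videos.MP4 -c copy -timecode 00:00:00:00 merged_tc_zero.MP4
# Or ask the mov/mp4 muxer not to write a timecode track at all.
ffmpeg -i merged_videos.MP4 -c copy -write_tmcd 0 merged_no_tc.MP4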
Here is the exiftool information on one of the videos:
File Name : GOPR3023.MP4
Directory : .
File Size : 3.7 GB
File Modification Date/Time : 2018:04:12 14:56:16-05:00
File Access Date/Time : 2020:03:25 12:17:18-05:00
File Inode Change Date/Time : 2020:03:25 17:57:04-05:00
File Permissions : rwxrwxrwx
File Type : MP4
File Type Extension : mp4
MIME Type : video/mp4
Major Brand : MP4 v1 [ISO 14496-1:ch13]
Minor Version : 2013.10.18
Compatible Brands : mp41
Movie Data Size : 4001979951
Movie Data Offset : 28
Movie Header Version : 0
Create Date : 2018:04:12 14:38:32
Modify Date : 2018:04:12 14:38:32
Time Scale : 60000
Duration : 0:17:42
Preferred Rate : 1
Preferred Volume : 100.00%
Preview Time : 0 s
Preview Duration : 0 s
Poster Time : 0 s
Selection Time : 0 s
Selection Duration : 0 s
Current Time : 0 s
Next Track ID : 6
Firmware Version : HD5.03.02.51.00
Lens Serial Number : NAH6092300301117
Camera Serial Number Hash : 34676f2cdf49b86a1514817a93377bf7
Track Header Version : 0
Track Create Date : 2018:04:12 14:38:32
Track Modify Date : 2018:04:12 14:38:32
Track ID : 1
Track Duration : 0:17:42
Track Layer : 0
Track Volume : 0.00%
Image Width : 1920
Image Height : 1080
Graphics Mode : srcCopy
Op Color : 0 0 0
Compressor ID : avc1
Source Image Width : 1920
Source Image Height : 1080
X Resolution : 72
Y Resolution : 72
Compressor Name : GoPro AVC encoder
Bit Depth : 24
Color Representation : nclx 1 1 1
Video Frame Rate : 59.94
Time Code : 3
Balance : 0
Audio Format : mp4a
Audio Channels : 2
Audio Bits Per Sample : 16
Audio Sample Rate : 48000
Text Font : Unknown (21)
Text Face : Plain
Text Size : 10
Text Color : 0 0 0
Background Color : 65535 65535 65535
Font Name : Helvetica
Other Format : tmcd
Warning : [minor] The ExtractEmbedded option may find more tags in the movie data
Matrix Structure : 1 0 0 0 1 0 0 0 1
Media Header Version : 0
Media Create Date : 2018:04:12 14:38:32
Media Modify Date : 2018:04:12 14:38:32
Media Time Scale : 60000
Media Duration : 0:17:42
Handler Class : Media Handler
Handler Type : NRT Metadata
Handler Description : GoPro SOS
Gen Media Version : 0
Gen Flags : 0 0 0
Gen Graphics Mode : srcCopy
Gen Op Color : 0 0 0
Gen Balance : 0
Meta Format : fdsc
Image Size : 1920x1080
Megapixels : 2.1
Avg Bitrate : 30.1 Mbps
Rotation : 0

Part 2
There is a pretty obvious "stutter" at the 17:42 mark where the two clips are combined. I've tried using ffmpeg and iMovie, but both give the same results. The GoPro broke the event up into multiple clips on its own, so it seems weird that there would be any information missing. Is there any way to get rid of this stutter? Thanks!
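One thing that sometimes removes that kind of boundary glitch (a hedged sketch, not a guaranteed fix) is re-encoding through the concat filter instead of stream-copying with the concat demuxer, so the timestamps at the joins are regenerated; the clip names below are placeholders for the four GoPro files, and re-encoding costs time and a little quality:
ffmpeg -i clip1.MP4 -i clip2.MP4 -i clip3.MP4 -i clip4.MP4 \
 -filter_complex "[0:v][0:a][1:v][1:a][2:v][2:a][3:v][3:a]concat=n=4:v=1:a=1[v][a]" \
 -map "[v]" -map "[a]" -c:v libx264 -crf 18 -c:a aac merged_reencoded.MP4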
-
FFMpeg access AVFoundation usb subdevice camera on OSX Mojave
20 August 2020, by Retiarius
I have a dual USB camera for VR: two cameras, one USB connection. On Linux, they appear as /dev/video0 and /dev/video1 and I can capture using ffmpeg -i /dev/video0.



On Mojave, I can see both devices in the USB hub:



USB 2.0 Hub:

Product ID: 0x0101
Vendor ID: 0x1a40 (TERMINUS TECHNOLOGY INC.)
Version: 1.11
Speed: Up to 480Mb/sec
Location ID: 0x14200000 / 8
Current Available (mA): 500
Current Required (mA): 100
Extra Operating Current (mA): 0

 Stereo Vision 2:

 Product ID: 0x9901
 Vendor ID: 0x0ac8 (Z-Star Microelectronics Corporation)
 Version: 27.02
 Serial Number: SN0099
 Speed: Up to 480Mb/sec
 Manufacturer: SHENZHEN RERVISION TECHNOLOGY
 Location ID: 0x14220000 / 10
 Current Available (mA): 500
 Current Required (mA): 500
 Extra Operating Current (mA): 0

 Stereo Vision 2:

 Product ID: 0x9902
 Vendor ID: 0x0ac8 (Z-Star Microelectronics Corporation)
 Version: 27.02
 Serial Number: SN0100
 Speed: Up to 480Mb/sec
 Manufacturer: SHENZHEN RERVISION TECHNOLOGY
 Location ID: 0x14210000 / 9
 Current Available (mA): 500
 Current Required (mA): 500
 Extra Operating Current (mA): 0




But when I list devices, I can see only one of them ([0]):



ffmpeg -f avfoundation -list_devices true -i ""
 [AVFoundation input device @ 0x7fae5b501a80] AVFoundation video devices:
 [AVFoundation input device @ 0x7fae5b501a80] [0] Stereo Vision 2
 [AVFoundation input device @ 0x7fae5b501a80] [1] FaceTime HD Camera
 [AVFoundation input device @ 0x7fae5b501a80] [2] Capture screen 0




Capturing from this device captures from just one of the cameras.



How can I get ffmpeg to detect the second USB device as well?
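For reference, a hedged sketch of how an avfoundation device is opened by index once it appears in the list above; the frame rate, size, duration and output name are placeholders and may need adjusting to what the camera actually supports:
# Capture ten seconds from avfoundation video device 0, no audio ("0:none").
ffmpeg -f avfoundation -framerate 30 -video_size 1280x720 -i "0:none" -t 10 stereo_test.mov
If macOS exposed the second sensor as its own capture device, it would simply show up as another index in -list_devices and could be opened the same way by that index; as listed above, only one of the two subdevices is currently surfaced by AVFoundation.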