
Other articles (66)
-
HTML5 audio and video support
10 April 2011 — MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer flash player is used.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...)
-
HTML5 audio and video support
13 April 2011 — MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player used has been created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
Submitting bugs and patches
10 April 2011 — Unfortunately, no piece of software is ever perfect...
If you think you have found a bug, report it in our ticket system, taking care to include the relevant information: the exact type and version of the browser with which you encountered the problem; as precise a description of the problem as possible; if possible, the steps to reproduce it; and a link to the site/page in question.
If you think you have fixed the bug yourself (...)
On other sites (5200)
-
How to implement multiple video resolutions on front and back-end
25 February 2021, by LanGuu — I need a solution or a hint on how I should handle multiple resolutions on the front-end and back-end. I have been reading about HLS, DASH, and MSE for the last few days, but the more I read, the more lost I am.


Right now I only have a microservice that downscales video using FFmpeg. FFmpeg returns plain mp4 files (no HLS or DASH): 4 video resolutions with no sound, and 2 different audio qualities.
On the front-end I use react-player. The hardest part is that I need to merge audio and video, keep them synchronized, and handle quality-change events.
I found Media Source Extensions, but I could not get them to work, probably because I am using raw mp4 files.


I would like to ask you a few questions:

- Is it possible to combine video and audio sources without MSE and still keep video and audio separate?
- Do I need HLS or DASH to use MSE?
- What is the difference between progressive download and progressive streaming?
- If I have to choose, which is better right now, HLS or DASH? Which is easier to implement?
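For the back-end side of this setup, here is a minimal sketch, not the asker's actual microservice: it assumes Java, a hypothetical input.mp4 and ffmpeg available on the PATH, and illustrates the kind of single ffmpeg invocation that packages two video renditions plus one audio rendition as an HLS master playlist with audio kept in a separate group, so a standard HLS-capable player can combine audio with video and switch quality without hand-written MSE code. The ladder values (720p/360p, bitrates) are placeholders.

import java.util.List;

public class HlsPackager {
    public static void main(String[] args) throws Exception {
        List<String> cmd = List.of(
            "ffmpeg", "-i", "input.mp4",
            // the same source mapped twice for two video renditions, once for audio
            "-map", "0:v:0", "-map", "0:v:0", "-map", "0:a:0",
            "-c:v", "libx264", "-c:a", "aac", "-b:a", "128k",
            // per-rendition scaling and bitrate
            "-filter:v:0", "scale=-2:720", "-b:v:0", "3000k",
            "-filter:v:1", "scale=-2:360", "-b:v:1", "800k",
            // HLS muxer: both video variants reference the shared audio group "aud"
            "-f", "hls", "-hls_time", "6", "-hls_playlist_type", "vod",
            "-var_stream_map", "v:0,agroup:aud v:1,agroup:aud a:0,agroup:aud,default:yes",
            "-master_pl_name", "master.m3u8",
            "-hls_segment_filename", "v%v_seg%03d.ts",
            "v%v.m3u8");
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        System.exit(p.waitFor());
    }
}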

-
Flutter record front facing camera at exact same time as playing video
27 November 2020, by xceed — I've been playing around with Flutter, trying to record the front-facing camera (using the camera plugin [https://pub.dev/packages/camera]) while at the same time playing a video to the user (using the video_player plugin [https://pub.dev/packages/video_player]).


Next I use ffmpeg to horizontally stack the two videos together. This all works fine, but when I play back the final output there is a slight delay in the audio. I'm calling
Future.wait([cameraController.startVideoRecording(filePath), videoController.play()]);
but there is a slight delay before these tasks actually start. I don't even need them to fire at exactly the same time (which I'm realising is probably impossible); instead, if I knew exactly when each task began, I could use the time difference to sync the audio using ffmpeg or similar.

I've tried adding a listener on the videoController to see when isPlaying first returns true, and also watching the output directory for when the recorded video appears on the filesystem:


listener = () {
  if (videoController.value.isPlaying) {
    isPlaying = DateTime.now().microsecondsSinceEpoch;
    log('isPlaying ' + isPlaying.toString());
  }
  videoController.removeListener(listener);
};
videoController.addListener(listener);

var watcher = DirectoryWatcher('${extDir.path}/');
watcher.events.listen((event) {
  if (event.type == ChangeType.ADD) {
    fileAdded = DateTime.now().microsecondsSinceEpoch;
    log('added ' + fileAdded.toString());
  }
});



Then likewise for checking if the camera is recording:


var listener;
listener = () {
  if (cameraController.value.isRecordingVideo) {
    log('isRecordingVideo ' + DateTime.now().microsecondsSinceEpoch.toString());
    //cameraController.removeListener(listener);
  }
};
cameraController.addListener(listener);



This results in (for example) the following order and microseconds for each event:


is playing: 1606478994797247
is recording: 1606478995492889 (695642 microseconds later)
added: 1606478995839676 (346787 microseconds later)
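If those two start timestamps could be trusted, the measured gap would be applied before stacking; here is a minimal sketch of that step, not the asker's Flutter code. It assumes Java on the tooling side, hypothetical file names, and that the camera recording's (mic) audio is the track to keep; the idea is simply to skip the head of the input that started earlier by the measured gap so both inputs begin at the same wall-clock instant.

import java.util.List;
import java.util.Locale;

public class SyncAndStack {
    public static void main(String[] args) throws Exception {
        long playingStartUs   = 1606478994797247L;  // playback actually started (from the log)
        long recordingStartUs = 1606478995492889L;  // camera recording actually started
        double gapSec = (recordingStartUs - playingStartUs) / 1_000_000.0;  // 0.695642 s

        List<String> cmd = List.of(
            "ffmpeg",
            // trim the head of the earlier-starting input (the played video) by the gap
            "-ss", String.format(Locale.US, "%.6f", gapSec),
            "-i", "played_video.mp4",
            "-i", "camera_recording.mp4",
            "-filter_complex", "[0:v][1:v]hstack=inputs=2[v]",
            "-map", "[v]", "-map", "1:a",   // keep the camera's (mic) audio: an assumption
            "-c:v", "libx264", "-c:a", "aac",
            "output.mp4");
        new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }
}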



However, when I play back the video, the syncing is off by approximately 0.152 seconds, so it doesn't match the time differences reported above.


Does anyone have any idea how I could accomplish near-perfect syncing when combining the 2 videos? Thanks.


-
How do I know the time left in order to convert to an audio file, to use that to display the remainder for downloading in front of the user
6 November 2020, by Kerols afifi — I got this code from GitHub, and it is better for me than using the FFmpeg library, because that library only works from a minimum of API 24. But with the code I got, I don't know exactly how to tell when the conversion is finished, nor how much time is left, so that I can display the progress to the user.


@SuppressLint("NewApi")
 public void genVideoUsingMuxer(Context context,String srcPath, String dstPath, int startMs, int endMs, boolean useAudio, boolean useVideo,long time) throws IOException {
 // Set up MediaExtractor to read from the source.
 extractor = new MediaExtractor();
 extractor.setDataSource(srcPath);
 int trackCount = extractor.getTrackCount();
 // Set up MediaMuxer for the destination.

 mediaMuxer = new MediaMuxer(dstPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
 // Set up the tracks and retrieve the max buffer size for selected
 // tracks.
 HashMap indexMap = new HashMap(trackCount);
 int bufferSize = -1;
 for (int i = 0; i < trackCount; i++) {
 MediaFormat format = extractor.getTrackFormat(i);
 String mime = format.getString(MediaFormat.KEY_MIME);
 boolean selectCurrentTrack = false;
 if (mime.startsWith("audio/") && useAudio) {
 selectCurrentTrack = true;
 } else if (mime.startsWith("video/") && useVideo) {
 selectCurrentTrack = true;
 }
 if (selectCurrentTrack) {
 extractor.selectTrack(i);
 int dstIndex = mediaMuxer.addTrack(format);
 indexMap.put(i, dstIndex);
 if (format.containsKey(MediaFormat.KEY_MAX_INPUT_SIZE)) {
 int newSize = format.getInteger(MediaFormat.KEY_MAX_INPUT_SIZE);
 bufferSize = newSize > bufferSize ? newSize : bufferSize;
 }
 }
 }
 if (bufferSize < 0) {
 bufferSize = DEFAULT_BUFFER_SIZE;
 }
 // Set up the orientation and starting time for extractor.
 MediaMetadataRetriever retrieverSrc = new MediaMetadataRetriever();
 retrieverSrc.setDataSource(srcPath);
 String degreesString = retrieverSrc.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_ROTATION);
 if (degreesString != null) {
 int degrees = Integer.parseInt(degreesString);
 if (degrees >= 0) {
 mediaMuxer.setOrientationHint(degrees);
 }
 }
 if (startMs > 0) {
 extractor.seekTo(startMs * 1000, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
 }
 // Copy the samples from MediaExtractor to MediamediaMuxer. We will loop
 // for copying each sample and stop when we get to the end of the source
 // file or exceed the end time of the trimming.
 int offset = 0;
 int trackIndex = -1;
 time = bufferSize;
 ByteBuffer dstBuf = ByteBuffer.allocate(bufferSize);
 mediaMuxer.start();
 while (true) {
 bufferInfo.offset = offset;
 bufferInfo.size = extractor.readSampleData(dstBuf, offset);
 if (bufferInfo.size < 0) {
 Log.d(TAG, "Saw input EOS.");
 bufferInfo.size = 0;
 break;
 } else {
 bufferInfo.presentationTimeUs = extractor.getSampleTime();
 if (endMs > 0 && bufferInfo.presentationTimeUs > (endMs * 1000)) {
 Log.d(TAG, "The current sample is over the trim end time.");
 break;
 } else {
 bufferInfo.flags = extractor.getSampleFlags();
 trackIndex = extractor.getSampleTrackIndex();
 mediaMuxer.writeSampleData(indexMap.get(trackIndex), dstBuf, bufferInfo);
 extractor.advance();
 }
 }
 }

 mediaMuxer.stop();
 mediaMuxer.release();

 return;
}
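The code above gives no callback, but the copy loop already reads everything needed to report progress: the conversion is finished once the while-loop breaks and mediaMuxer.release() returns, and the percentage done can be estimated from extractor's sample timestamps against the trimmed range. Here is a minimal sketch of that idea; the Listener interface and the percent() helper are hypothetical additions, not part of the original GitHub code, and they assume endMs is known (otherwise the source duration can be read with MediaMetadataRetriever.METADATA_KEY_DURATION, the same way the rotation already is).

public final class ConversionProgress {

    /** Hypothetical callback to surface progress in the UI. */
    public interface Listener {
        void onProgress(int percent);  // 0..100
        void onFinished();             // fire after mediaMuxer.release() has returned
    }

    /** Maps the current sample timestamp to a 0..100 percentage of the trimmed range. */
    public static int percent(long sampleTimeUs, long startMs, long endMs) {
        long startUs = startMs * 1000L;
        long endUs = endMs * 1000L;
        if (endUs <= startUs) {
            return 0;  // range unknown; fall back to the source duration from
                       // MediaMetadataRetriever.METADATA_KEY_DURATION
        }
        long done = Math.max(0, sampleTimeUs - startUs);
        return (int) Math.min(100, done * 100 / (endUs - startUs));
    }

    private ConversionProgress() {}
}

// Usage inside the copy loop, right after presentationTimeUs is assigned:
//   listener.onProgress(ConversionProgress.percent(bufferInfo.presentationTimeUs, startMs, endMs));
// and once mediaMuxer.release() has returned, the conversion is done:
//   listener.onFinished();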