
Media (1)
-
Ogg detection bug
22 March 2013
Updated: April 2013
Language: French
Type: Video
Other articles (3)
-
Encoding and conversion into formats playable on the Internet
10 April 2011
MediaSPIP converts and re-encodes uploaded documents to make them playable on the Internet and automatically usable without any intervention from the content creator.
Videos are automatically encoded into the formats supported by HTML5: MP4, OGV and WebM. The "MP4" version is also used by the Flash fallback player required for older browsers.
Audio documents are likewise re-encoded into the two formats usable with HTML5: MP3 and Ogg. The "MP3" version (...) -
Media quality after processing
21 June 2013
Configuring the software that processes media correctly matters for balancing the interests involved (the host's bandwidth, media quality for the author and the visitor, accessibility for the visitor). How should you tune the quality of your media?
The higher the media quality, the more bandwidth is used, and visitors on a low-bandwidth Internet connection will have to wait longer. Conversely, the lower the quality, the more degraded the media becomes (...) -
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows media playback on major mobile platforms with the above (...)
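The transcoding matrix described above (MP4/OGV/WebM for video, MP3/Ogg for audio) can be sketched as a set of ffmpeg invocations. This is a hedged illustration only: the codec choices below are the conventional ones for these containers, not MediaSPIP's actual configuration, and the file names are placeholders.

```python
# Hypothetical sketch of the HTML5 transcode targets the article describes.
# Codec options are standard pairings for each container, not MediaSPIP's config.
VIDEO_TARGETS = {
    "mp4":  ["-c:v", "libx264", "-c:a", "aac"],
    "ogv":  ["-c:v", "libtheora", "-c:a", "libvorbis"],
    "webm": ["-c:v", "libvpx", "-c:a", "libvorbis"],
}
AUDIO_TARGETS = {
    "mp3": ["-c:a", "libmp3lame"],
    "ogg": ["-c:a", "libvorbis"],
}

def transcode_cmds(src, stem, targets):
    """Build one ffmpeg command per HTML5 target format."""
    return [["ffmpeg", "-i", src, *opts, f"{stem}.{ext}"]
            for ext, opts in targets.items()]
```

Each uploaded source file would be run through every command in the relevant target set, producing one output per HTML5 format.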
On other sites (3589)
-
VP9 encoding limited to 4 threads?
14 July 2020, by kellerkindt
I am considering using VP9 to encode my Blu-rays in the future, since it is an open-source codec. But I cannot get HandBrake or ffmpeg to use more than 50% (4) of my (8) cores. The encoding time is therefore much worse than with x264/x265, which use all cores.


In HandBrake I just set the encoder to VP9 and CQ 19. There is no difference if I add threads 8, threads 16 or threads 64 in the parameters field.

Testing ffmpeg on the command line (-c:v libvpx-vp9 -crf 19 -threads 16 -tile-columns 6 -frame-parallel 1 -speed 0) also does not use any more CPU threads.

Is the current encoder not capable of encoding on more than 4 threads, or am I doing something wrong?


- Linux Mint 18
- HandBrake 0.10.2+ds1-2build1
- ffmpeg 2.8.10-0ubuntu0.16.04.1
- libvpx3 1.5.0-2ubuntu1

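A likely explanation: in libvpx versions as old as the one listed above (1.5), VP9 threading is tied to tiling, so the usable thread count is effectively capped by -tile-columns and the video's width rather than by -threads. Newer builds (roughly libvpx 1.7+ with a correspondingly newer ffmpeg) add row-based multithreading via -row-mt 1, which lets the encoder scale past that cap. A hedged sketch of such an invocation, built in Python; the file names are placeholders:

```python
import shlex

def vp9_cmd(src, dst, crf=19, threads=8, tile_columns=2):
    """Build an ffmpeg libvpx-vp9 command that can actually use `threads`
    worker threads. -row-mt 1 (newer libvpx/ffmpeg only) enables row-based
    multithreading; without it the thread count is capped by tiling."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libvpx-vp9",
        "-crf", str(crf), "-b:v", "0",
        "-threads", str(threads),
        "-tile-columns", str(tile_columns),
        "-row-mt", "1",
        dst,
    ]

print(shlex.join(vp9_cmd("input.mkv", "output.webm")))
```

On an ffmpeg build that does not recognise -row-mt, the option would have to be dropped, and the tile-based cap applies.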
-
Pipe opencv images to ffmpeg using python
16 April 2022, by jlarsch
How can I pipe OpenCV images to ffmpeg (running ffmpeg as a subprocess)?
(I am using Spyder/Anaconda.)



I am reading frames from a video file and doing some processing on each frame.



import cv2

cap = cv2.VideoCapture(self.avi_path)
ret, img = cap.read()               # read() returns (success flag, frame)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
bgDiv = gray / vidMed               # background division (vidMed computed earlier)




Then, to pipe the processed frame to ffmpeg, I found this command in a related question:



sys.stdout.write( bgDiv.tostring() )




Next, I am trying to run ffmpeg as a subprocess:



cmd='ffmpeg.exe -f rawvideo -pix_fmt gray -s 2048x2048 -r 30 -i - -an -f avi -r 30 foo.avi'
sp.call(cmd,shell=True)




(This is also from the post mentioned above.)
However, this fills my IPython console with cryptic characters and then crashes it. Any advice?



Ultimately, I would like to pipe out four streams and have ffmpeg encode those four streams in parallel.
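A likely cause of the crash is that sys.stdout.write sends the raw frame bytes to the IPython console instead of to ffmpeg. The usual fix is to start ffmpeg with subprocess.Popen and write each frame to its stdin. A minimal sketch, assuming ffmpeg is on the PATH and the frames are 8-bit grayscale numpy arrays; the sizes and file names are placeholders:

```python
import subprocess
import numpy as np

def open_ffmpeg(width, height, fps, out_path):
    """Start ffmpeg reading raw grayscale frames from its stdin."""
    cmd = [
        "ffmpeg", "-y",
        "-f", "rawvideo", "-pix_fmt", "gray",
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", "-",                    # "-" means: read input from stdin
        "-an", out_path,
    ]
    return subprocess.Popen(cmd, stdin=subprocess.PIPE)

def write_frame(proc, frame):
    """Send one frame to the encoder; tobytes() replaces the deprecated tostring()."""
    proc.stdin.write(np.asarray(frame, dtype=np.uint8).tobytes())

# Usage sketch:
# proc = open_ffmpeg(2048, 2048, 30, "foo.avi")
# for frame in frames:
#     write_frame(proc, frame)
# proc.stdin.close()
# proc.wait()
```

For the four-stream case, the same pattern would be repeated with four Popen objects, writing each processed frame to the matching process.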


-
Actual duration of video and audio files
17 April 2019, by Liidia
At the moment I am working on an application that records video files. In parallel with the video file, an audio file is recorded, so eventually there are two files: .mp4 and .wav. My goal is to merge those two files, and to do so they should be of approximately the same duration. To determine whether the video and audio files are of equal duration, I ran a batch of experiments: I recorded videos 5 times for 15 minutes, 5 times for 30 minutes, 5 times for 45 minutes and 5 times for 60 minutes.
During the experiments I captured the following values:
- File creation time (accurate to the ms):
IDateTime videoCreated = _fileWrapper.GetCreationTime(videoPath);
IDateTime audioCreated = _fileWrapper.GetCreationTime(audioPath);
- File last modification time (accurate to the ms):
IDateTime videoModified = _fileWrapper.GetLastWriteTime(videoPath);
IDateTime audioModified = _fileWrapper.GetLastWriteTime(audioPath);
- The duration of the files, measured with the help of ffprobe.
- The calculated duration: the length of each file obtained by subtracting the creation time from the last modification time.
Next, I'll describe the results I got.
If we look at the calculated duration, the video and audio files are approximately the same. Audio files are slightly shorter than video files (on average 0.415 s, which is insignificant).
Let's move on to the duration reported by ffprobe. Audio files are significantly longer than video files. The following is the average difference between audio and video files per group:
15 min: 12.426 s;
30 min: 16.942 s;
45 min: 31.403 s;
60 min: 34.702 s.
Results for audio files only: the ffprobe duration is slightly smaller than the calculated duration. Below is the average difference per group:
15 min: 0.424 s;
30 min: 1.129 s;
45 min: 1.816 s;
60 min: 2.292 s.
Results for video files only: the ffprobe duration is significantly smaller than the calculated duration. Group results:
15 min: 13.171 s;
30 min: 18.630 s;
45 min: 33.666 s;
60 min: 37.326 s.
Why is the duration reported by ffprobe less than the calculated duration? How can all these observations be explained? What is the actual duration of the files?
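For the .wav side at least, part of this is explainable: ffprobe derives a PCM file's duration from the sample data in the file itself (frame count divided by sample rate), while the "calculated duration" from file timestamps also includes time spent opening, buffering and flushing the file, so the two need not match. A small standard-library sketch that computes a WAV duration essentially the same way:

```python
import wave

def wav_duration(path):
    """Duration in seconds of a PCM .wav file, computed from its own
    sample data: number of sample frames / sample rate. This is the
    figure a media probe reports, independent of filesystem timestamps."""
    with wave.open(path, "rb") as w:
        return w.getnframes() / w.getframerate()
```

Comparing this value against the timestamp-based estimate for each recording would show how much of the gap comes from file-handling overhead rather than from the recorded audio itself.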
Updated.
Formats: video - .mp4, audio - .wav
ffprobe command example: ffprobe exampleFileName -v info -hide_banner -show_entries stream=duration -of xml
Video recording. The Vlc.DotNet.Core library is used.
public void Execute(object parameter)
{
if (_videoRecorder != null && _videoRecorder.IsPlaying())
{
_videoRecorder.Stop(); //!!!
_videoRecorder = null;
TimeSpan videoRecordingTime = TimeSpan.Zero;
OnRecordingStopped?.Invoke(sessionService.CurrentDevice.Device.Id, videoRecordingTime);
return;
}
_videoRecorder = new VlcMediaPlayer(_vlcLibDirectory.DirectoryInfo);
_filenameGenerator.Folder = sessionService.PatientVideoDirectory;
_filenameGenerator.GenerateNextFileName();
string fileName = _filenameGenerator.GetName();
string fileDestination = Path.Combine(_filenameGenerator.FolderPath, fileName);
string[] mediaOptions =
{
":sout=#file{dst='" + fileDestination + "'}",
// ":live-caching=0",// TODO: check what that parameter does!
":sout-keep" /*Keep sout open (default disabled) : use the same sout instance across the various playlist items, if possible.*/
// ":file-logging", ":vvv", ":extraintf=logger", ":logfile=VlcLogs.log"
};
_videoRecorder.SetMedia(sessionService.CurrentDevice.MediaStreamUri, mediaOptions);
_videoRecorder.Play();
OnRecordingStarted?.Invoke(fileName);
}
Audio recording. The NAudio framework is used.
private void StartRecording()
{
try
{
string patientFolder = Patient.GetString();
_waveFileDestination = string.Empty;
if (IsForVideo)
{
_waveFileDestination = Path.Combine(_systemSetting.VideoServerPath, patientFolder,
_videoFileNameGenerator.GetCurrentNameWithExtension("wav"));
}
else
{
_audioFileNameGenerator.Folder = Path.Combine(_systemSetting.AudioPath, patientFolder);
_audioFileNameGenerator.GenerateNextFileName();
_waveFileDestination = Path.Combine(_audioFileNameGenerator.FolderPath,
_audioFileNameGenerator.GetName());
}
_waveSource = new WaveInEvent
{
DeviceNumber = CapturingDevice.Number,
WaveFormat = new WaveFormat(44100, 1)
};
_waveSource.DataAvailable += WaveSourceOnDataAvailable;
_waveSource.RecordingStopped += WaveSourceOnRecordingStopped;
_waveFile = new WaveFileWriter(_waveFileDestination, _waveSource.WaveFormat);
_waveSource.StartRecording();
if (!IsForVideo)
{
// Save audio media entity
_recordingMediaId = Media.AddToLocation(Patient.Id,
ExaminationId, _audioFileNameGenerator.GetName(),
MediaType.Audio, 0, _workstation.LocationId, DateTime.Now);
Media newMedia = Media.Get(_recordingMediaId);
MediaItem mediaItem = new MediaItem(newMedia, _systemSetting, _mediaManager)
{
PatientSsn = newMedia.AccessionNumber
};
OnAudioAdded(mediaItem);
}
}
catch (Exception exception)
{
}
finally
{
}
}
private void StopRecording()
{
try
{
_waveSource.StopRecording();
if (!IsForVideo)
{
Media media = Media.Get(_recordingMediaId);
media.Duration = Duration.Seconds;
Media.Update(media);
_recordingMediaId = 0;
}
else
{
IsForVideo = false;
}
}
catch (Exception exception)
{
}
OnStateChanged();
} -