
Other articles (18)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or later. If needed, contact your MediaSPIP administrator to find out.
-
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files into internet-compatible formats.
Video files are encoded as MP4, OGV and WebM (supported by HTML5), with MP4 also supported by Flash.
Audio files are encoded as MP3 and Ogg (supported by HTML5), with MP3 also supported by Flash.
Where possible, text is analyzed to extract the data needed for search-engine indexing, and is then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
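MediaSPIP's actual encoding pipeline isn't shown in this excerpt. Purely to illustrate the kind of conversion described above, a minimal Java sketch that shells out to ffmpeg for one WebM rendition might look like the following (the paths, codecs and bitrate are assumptions, not MediaSPIP's real settings):

import java.io.IOException;

public class WebEncodeSketch {
    // Re-encode an uploaded video into a WebM rendition; ffmpeg must be on the PATH.
    static int encodeToWebm(String input, String output) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "ffmpeg", "-y", "-i", input,
                "-c:v", "libvpx", "-b:v", "1M",   // VP8 video at an illustrative bitrate
                "-c:a", "libvorbis",              // Vorbis audio
                output);
        pb.inheritIO();                           // show ffmpeg's progress in the console
        return pb.start().waitFor();              // 0 means ffmpeg finished successfully
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical paths; the original upload is kept alongside the new rendition.
        int code = encodeToWebm("uploads/original.mov", "uploads/original.webm");
        System.out.println("ffmpeg exit code: " + code);
    }
}

An OGV or MP4 rendition would follow the same pattern with different codec flags (for example libtheora/libvorbis or libx264/aac).
-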
Other interesting software
13 April 2011
We don't claim to be the only ones doing what we do, and certainly not the best; we just try to do it well and to keep getting better.
The following list covers software that is more or less similar to MediaSPIP, or that MediaSPIP more or less tries to emulate.
We don't know these projects and haven't tried them, but you can take a look.
Videopress
Website: http://videopress.com/
License: GNU/GPL v2
Source code: (...)
On other sites (4962)
-
fluent-ffmpeg processing is very slow in Cloud Run
18 August 2021, by Nazarii Kahaniak
I am trying to merge multiple short videos in a container on Cloud Run, using the node.js fluent-ffmpeg package. Merging the videos locally takes 20 seconds at most. When I send the request to Cloud Run, it processes them very slowly and then just stops after 30 minutes (I assume because the 1800-second timeout is reached).
I tried allocating 2 CPUs with 8 GB each, but that didn't help. Judging by the Cloud Run logs, only about 30% of the video is processed within those 30 minutes.
Any feedback is appreciated!
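The post doesn't include the fluent-ffmpeg code itself, so purely as an illustration, here is a minimal Java sketch of the ffmpeg step such a merge usually boils down to: a concat-demuxer merge with stream copy, which avoids re-encoding entirely (the clip names, list file and output path are assumptions):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

public class ConcatSketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Hypothetical clip names; the concat demuxer reads its inputs from a list file.
        Path list = Files.write(Paths.get("clips.txt"),
                List.of("file 'clip1.mp4'", "file 'clip2.mp4'", "file 'clip3.mp4'"));

        // -c copy concatenates without re-encoding, which keeps CPU usage low;
        // it requires all clips to share the same codecs and stream parameters.
        ProcessBuilder pb = new ProcessBuilder(
                "ffmpeg", "-y", "-f", "concat", "-safe", "0",
                "-i", list.toString(), "-c", "copy", "merged.mp4");
        pb.inheritIO();
        System.out.println("ffmpeg exited with " + pb.start().waitFor());
    }
}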


-
ffmpeg / Audacity channel splitting differences
14 March 2018, by Adrian Chromenko
I'm working on a speech-to-text project using Python and Google Cloud services (for phone calls). The mp3s I receive have one voice in the left channel and the other voice in the right channel.
So during testing, I manually split the original mp3 file into two WAV files (one for each channel, converted to mono). I did this splitting through Audacity. The accuracy was about 80-90%, which was perfect for my purposes.
However, once I tried to automate the splitting using ffmpeg (more specifically: ffmpeg -i input_filename.mp3 -map_channel 0.0.0 left.wav -map_channel 0.0.1 right.wav), the accuracy dropped drastically.
I’ve been experimenting for about a week now but I can’t get the accuracy up. For what it’s worth, the audio files sound identical to the human ear. I found that when I increase the volume of the output files, the accuracy gets better, but never as good as when I did the splitting with Audacity.
I guess what I'm trying to ask is, what does Audacity do differently?
Here are the sox -n stat results for each file:
Split with ffmpeg (20-30% accuracy):
Samples read: 1690560
Length (seconds): 211.320000
Scaled by: 2147483647.0
Maximum amplitude: 0.433350
Minimum amplitude: -0.475739
Midline amplitude: -0.021194
Mean norm: 0.014808
Mean amplitude: -0.000037
RMS amplitude: 0.028947
Maximum delta: 0.333557
Minimum delta: 0.000000
Mean delta: 0.009001
RMS delta: 0.017949
Rough frequency: 789
Volume adjustment: 2.102
Split with Audacity (80-90% accuracy):
Samples read: 1689984
Length (seconds): 211.248000
Scaled by: 2147483647.0
Maximum amplitude: 0.217194
Minimum amplitude: -0.238373
Midline amplitude: -0.010590
Mean norm: 0.007423
Mean amplitude: -0.000018
RMS amplitude: 0.014510
Maximum delta: 0.167175
Minimum delta: 0.000000
Mean delta: 0.004515
RMS delta: 0.008998
Rough frequency: 789
Volume adjustment: 4.195
Original mp3:
Samples read: 3379968
Length (seconds): 211.248000
Scaled by: 2147483647.0
Maximum amplitude: 1.000000
Minimum amplitude: -1.000000
Midline amplitude: -0.000000
Mean norm: 0.014124
Mean amplitude: -0.000030
RMS amplitude: 0.047924
Maximum delta: 1.015332
Minimum delta: 0.000000
Mean delta: 0.027046
RMS delta: 0.067775
Rough frequency: 1800
Volume adjustment: 1.000
One thing that stands out to me is that the durations aren't the same, and neither are the amplitudes. Can I tell ffmpeg what the duration should be when it does the splitting? And can I change the amplitudes to match the Audacity file? I'm not sure how to get to the 80% accuracy rate, but increasing the volume seems to be the most promising lead so far.
Any help would be greatly appreciated. I don’t have to use ffmpeg, but it seems like my only option, as Audacity isn’t scriptable.
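For what it's worth, the 576-sample length difference between the two splits is well under a tenth of a second at this sample rate, so it is more likely harmless MP3 decoder padding than the cause of the accuracy gap.
Not an answer to what Audacity does internally, but a hedged experiment that matches the "more volume helps" observation: split with ffmpeg's channelsplit filter and loudness-normalize each channel in the same pass. A minimal Java sketch wrapping that ffmpeg call (the file names are the ones from the question; the choice of the loudnorm filter is an assumption, not what Audacity does):

import java.io.IOException;

public class SplitChannelsSketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Split the stereo MP3 into two mono WAVs and loudness-normalize each channel.
        // loudnorm works internally at a high sample rate, so resample back to the
        // source's 8 kHz (inferred from the sox stats: 211.32 s * 8000 = 1690560 samples).
        ProcessBuilder pb = new ProcessBuilder(
                "ffmpeg", "-y", "-i", "input_filename.mp3",
                "-filter_complex",
                "[0:a]channelsplit=channel_layout=stereo[l][r];"
                        + "[l]loudnorm,aresample=8000[lo];[r]loudnorm,aresample=8000[ro]",
                "-map", "[lo]", "left.wav",
                "-map", "[ro]", "right.wav");
        pb.inheritIO();
        System.exit(pb.start().waitFor());
    }
}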
-
Create video chunks with given start and end times using ffmpeg, and play those chunks sequentially with an expiry time for each chunk, using Java
4 October 2018, by JAVA Coder
I did it like this: I found the code at the link below and just modified it for the start and end times, hard-coding the values.
Split video into smaller timed segments in Java

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

// Imports and the class wrapper were not shown in the post; they are added here so the fragment compiles.
@Controller
public class VideoChunkController {

    @RequestMapping(value = "/playVideo", method = RequestMethod.GET)
    @ResponseBody
    public void playVideo() {
        System.out.println("controller is working");
        int videoDurationSecs = 1800;
        int numberOfChunks = 5; // we can define this dynamically according to the video duration
        int chunkSize = videoDurationSecs / numberOfChunks;
        int startSecs = 0;
        for (int i = 0; i < numberOfChunks; i++) {
            // Create one video chunk
            String startTime = convertSecsToTimeString(startSecs);
            int endSecs = startSecs + chunkSize;
            if (endSecs > videoDurationSecs) {
                // make sure rounding does not take us beyond the end of the video
                endSecs = videoDurationSecs;
            }
            String endTime = convertSecsToTimeString(endSecs);
            System.out.println("start time for-------------------->>>> " + startTime);
            System.out.println("end time for------------------->>>> " + endTime);
            /*
             * how to do this means send times for chunk and
             * getting chunks and play them one by one like one video
             * with expiry time for each
             */
            // Call ffmpeg to create this chunk of the video using an ffmpeg wrapper
            // (videoPath, segmentVideoPath and ffmpegWrapper come from the linked answer and are not defined here)
            /* String argv[] = {"ffmpeg", "-i", videoPath,
                   "-ss", startTime, "-t", endTime,
                   "-c", "copy", segmentVideoPath[i]};
               int ffmpegWrapperReturnCode = ffmpegWrapper(argv); */
            startSecs = endSecs + 1; // next chunk starts right after this one ends
        }
    }

    private String convertSecsToTimeString(int timeSeconds) {
        // Convert a number of seconds into an hours:mins:seconds string
        int hours = timeSeconds / 3600;
        int mins = (timeSeconds % 3600) / 60;
        int secs = timeSeconds % 60;
        return String.format("%02d:%02d:%02d", hours, mins, secs);
    }
}
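One ffmpeg detail worth flagging in the commented-out call above: -t expects a duration, not an end timestamp, so passing endTime there would give each chunk a length equal to its end time instead of stopping at it. A minimal, hedged replacement for that call, reusing the snippet's own variables (videoPath and segmentVideoPath[i] are still assumed to exist, as in the linked answer) and running ffmpeg through ProcessBuilder rather than the unspecified ffmpegWrapper:

// Sketch only: -ss seeks to the chunk start, -t is the chunk LENGTH in seconds,
// and -c copy splits without re-encoding (so cuts land on the nearest keyframes).
// start() and waitFor() throw IOException/InterruptedException; declare or catch them.
String[] argv = {"ffmpeg", "-y",
        "-ss", startTime,
        "-i", videoPath,
        "-t", String.valueOf(endSecs - startSecs),
        "-c", "copy",
        segmentVideoPath[i]};
int ffmpegReturnCode = new ProcessBuilder(argv).inheritIO().start().waitFor();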