
Other articles (65)
-
User profiles
12 April 2011 — Each user has a profile page for editing their personal information. By default, a menu item is created automatically in the top-of-page menu when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
The user can reach the profile editor from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)
-
The plugin: Podcasts
14 July 2010 — Podcasting is yet another problem that exposes the state of standardisation of data transport on the Internet.
Two interesting formats exist: the one developed by Apple, strongly tied to iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "open" and is notably backed by Yahoo and the Miro software.
File types supported in feeds
Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)
-
Configuring language support
15 November 2010 — Accessing the configuration and adding supported languages
To enable support for new languages, you need to go to the "Administrer" (administer) section of the site.
From there, in the navigation menu, you can reach a "Gestion des langues" (language management) section where support for new languages can be activated.
Each newly added language can still be deactivated as long as no object has been created in that language; once one exists, the language is greyed out in the configuration and (...)
On other sites (7067)
-
Encountered an "ffmpeg.wasm can only run one command at a time" exception
2 March 2023, by Itay113 — I want to make a video chat using ffmpeg.wasm (I know the standard is WebRTC, but my assignment is to do this with ffmpeg.wasm and a server connecting the two clients), and with the following code I get an "ffmpeg.wasm can only run one command at a time" exception on the ffmpegWorker.run line:


import { useEffect } from 'react';
import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';

function App() {
  const ffmpegWorker = createFFmpeg({
    log: true
  })

  // Must complete before transcode() is called.
  async function initFFmpeg() {
    await ffmpegWorker.load();
  }

  async function transcode(webcamData) {
    const name = 'record.webm';
    await ffmpegWorker.FS('writeFile', name, await fetchFile(webcamData));
    // The exception is thrown here: a new run() starts while the previous one is still executing.
    ffmpegWorker.run('-i', name, '-preset', 'ultrafast', '-c:v', 'h264', '-crf', '28', '-b:v', '0', '-row-mt', '1', '-f', 'mp4', 'output.mp4')
      .then(() => {
        const data = ffmpegWorker.FS('readFile', 'output.mp4');
        const video = document.getElementById('output-video');
        video.src = URL.createObjectURL(new Blob([data.buffer], { type: 'video/mp4' }));
        ffmpegWorker.FS('unlink', 'output.mp4');
      })
  }

  function requestMedia() {
    const webcam = document.getElementById('webcam');
    const chunks = []
    navigator.mediaDevices.getUserMedia({ video: true, audio: true })
      .then(async (stream) => {
        webcam.srcObject = stream;
        await webcam.play();
        const mediaRecorder = new MediaRecorder(stream);
        // A 0 ms timeslice makes ondataavailable fire continuously.
        mediaRecorder.start(0);
        mediaRecorder.onstop = function(e) {
          stream.getTracks().forEach((track) => track.stop());
        }
        mediaRecorder.ondataavailable = async function(e) {
          chunks.push(e.data);
          // Each chunk kicks off another transcode, overlapping the previous run().
          await transcode(new Uint8Array(await (new Blob(chunks)).arrayBuffer()));
        }
      })
  }

  useEffect(() => {
    requestMedia();
  }, [])

  return (
    <div className="App">
      <div>
        <video id="webcam" width="320px" height="180px"></video>
        <video id="output-video" width="320px" height="180px"></video>
      </div>
    </div>
  );
}



I have tried messing around with the timeslice argument of the MediaRecorder start method, but it didn't help.
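ffmpeg.wasm rejects a second run() while one is still executing, and mediaRecorder.start(0) makes ondataavailable fire continuously, so transcode() piles up overlapping run() calls. One way to avoid that (a minimal sketch, not from the original post; runQueued is a hypothetical helper) is to serialize the commands behind a promise chain:

let queue = Promise.resolve();

// Hypothetical helper: append each command to the tail of a promise chain,
// so ffmpeg.wasm only ever sees one run() in flight at a time.
function runQueued(ffmpegInstance, ...args) {
  const next = queue.then(() => ffmpegInstance.run(...args));
  queue = next.catch(() => {}); // keep the chain alive if a command fails
  return next;
}

// Inside transcode(), the direct call would become:
// await runQueued(ffmpegWorker, '-i', name, '-preset', 'ultrafast', '-c:v', 'h264',
//                 '-crf', '28', '-b:v', '0', '-row-mt', '1', '-f', 'mp4', 'output.mp4');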


-
Is there a way to program the (Download) button to save a group of images as one video?
9 February 2024, by Lina Al-fawzan — This is my entire code. Its function is that everything the user writes or says is matched to images, which are shown to them one at a time; the next image appears after they press "Close", and each image can be saved separately. I want to make a simple modification: first, instead of a Close button, each image should be displayed for 3 seconds before the next one appears, all in one window; second, the Download button should appear when the last image is shown, and it should save all of the images as one video.


import 'package:flutter/material.dart';
import 'package:flutter/services.dart' show rootBundle;
import 'dart:convert';
import 'dart:typed_data';
import 'package:image_gallery_saver/image_gallery_saver.dart';
import 'package:speech_to_text/speech_to_text.dart' as stt;

void main() {
 runApp(MyApp());
}

class MyApp extends StatelessWidget {
 @override
 Widget build(BuildContext context) {
 return MaterialApp(
 home: MyHomePage(),
 );
 }
}

class MyHomePage extends StatefulWidget {
 @override
 _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
 TextEditingController _textEditingController = TextEditingController();
 late stt.SpeechToText _speech;
 bool _isListening = false;

 @override
 void initState() {
 super.initState();
 _speech = stt.SpeechToText();
 }

 void _listen() async {
 if (!_isListening) {
 bool available = await _speech.initialize(
 onStatus: (val) => print('onStatus: $val'),
 onError: (val) => print('onError: $val'),
 );
 if (available) {
 setState(() => _isListening = true);
 _speech.listen(
 onResult: (val) => setState(() {
 _textEditingController.text = val.recognizedWords;
 if (val.hasConfidenceRating && val.confidence > 0) {
 _showImages(val.recognizedWords);
 }
 }),
 );
 }
 } else {
 setState(() => _isListening = false);
 _speech.stop();
 }
 }

 @override
 Widget build(BuildContext context) {
 return Scaffold(
 appBar: AppBar(
 title: Text('Image Viewer'),
 ),
 body: Padding(
 padding: const EdgeInsets.all(16.0),
 child: Column(
 mainAxisAlignment: MainAxisAlignment.center,
 children: [
 TextField(
 controller: _textEditingController,
 decoration: const InputDecoration(
 labelText: 'Enter a word',
 ),
 ),
 SizedBox(height: 16.0),
 ElevatedButton(
 onPressed: () {
 String userInput = _textEditingController.text;
 _showImages(userInput);
 },
 child: Text('Show Images'),
 ),
 SizedBox(height: 16.0),
 ElevatedButton(
 onPressed: _listen,
 child: Text(_isListening ? 'Stop Listening' : 'Start Listening'),
 ),
 ],
 ),
 ),
 );
 }

Future<void> _showImages(String userInput) async {
 String directoryPath = 'assets/output_images/';
 print("User Input: $userInput");
 print("Directory Path: $directoryPath");

 List<String> assetFiles = await rootBundle
 .loadString('AssetManifest.json')
 .then((String manifestContent) {
 final Map manifestMap = json.decode(manifestContent);
 return manifestMap.keys
 .where((String key) => key.startsWith(directoryPath))
 .toList();
 });

 List<String> imageFiles = assetFiles.where((String assetPath) =>
 assetPath.toLowerCase().endsWith('.jpg') ||
 assetPath.toLowerCase().endsWith('.gif')).toList();

 List<String> words = userInput.split(' '); // Tokenize the sentence into words

 for (String word in words) {
 String wordImagePath = '$directoryPath$word.gif';

 if (imageFiles.contains(wordImagePath)) {
 await _showDialogWithImage(wordImagePath);
 } else {
 for (int i = 0; i < word.length; i++) {
 String letter = word[i];
 String letterImagePath = imageFiles.firstWhere(
 (assetPath) => assetPath.toLowerCase().endsWith('$letter.jpg'),
 orElse: () => '',
 );
 if (letterImagePath.isNotEmpty) {
 await _showDialogWithImage(letterImagePath);
 } else {
 print('No image found for $letter');
 }
 }
 }
 }
}

 

 Future<void> _showDialogWithImage(String imagePath) async {
 await showDialog<void>(
 context: context,
 builder: (BuildContext context) {
 return AlertDialog(
 content: Image.asset(imagePath),
 actions: [
 TextButton(
 onPressed: () {
 Navigator.of(context).pop();
 },
 child: Text('Close'),
 ),
 TextButton(
 onPressed: () async {
 await _downloadImage(imagePath);
 Navigator.of(context).pop();
 },
 child: Text('Download'),
 ),
 ],
 );
 },
 );
 }

 Future<void> _downloadImage(String assetPath) async {
 try {
 final ByteData data = await rootBundle.load(assetPath);
 final List<int> bytes = data.buffer.asUint8List();

 final result = await ImageGallerySaver.saveImage(Uint8List.fromList(bytes));

 if (result != null) {
 ScaffoldMessenger.of(context).showSnackBar(
 SnackBar(
 content: Text('Image saved to gallery.'),
 ),
 );
 } else {
 ScaffoldMessenger.of(context).showSnackBar(
 SnackBar(
 content: Text('Failed to save image to gallery.'),
 ),
 );
 }
 } catch (e) {
 print('Error downloading image: $e');
 }
 }
}

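Combining the images into one video is really an ffmpeg job; in a Flutter app that typically means an ffmpeg binding such as ffmpeg_kit_flutter, while on the web the same command can run through ffmpeg.wasm. As a sketch of the ffmpeg side only (JavaScript with ffmpeg.wasm to match the other examples here; the img0.jpg naming and the three-seconds-per-image rate are assumptions, not from the original post):

import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';

const ffmpeg = createFFmpeg({ log: true });

// Hypothetical helper: write the images as img0.jpg, img1.jpg, ... into
// ffmpeg's virtual filesystem and encode them into a single MP4.
async function imagesToVideo(imageUrls) {
  await ffmpeg.load();
  for (let i = 0; i < imageUrls.length; i++) {
    ffmpeg.FS('writeFile', `img${i}.jpg`, await fetchFile(imageUrls[i]));
  }
  // -framerate 1/3 holds each input image for 3 seconds;
  // -r 30 resamples the output to a normal frame rate;
  // -pix_fmt yuv420p keeps the MP4 playable in most players.
  await ffmpeg.run('-framerate', '1/3', '-start_number', '0', '-i', 'img%d.jpg',
                   '-c:v', 'libx264', '-r', '30', '-pix_fmt', 'yuv420p', 'out.mp4');
  return ffmpeg.FS('readFile', 'out.mp4');
}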


-
Capturing audio data (using JavaScript) and uploading it to a server as MP3
4 September 2018, by Michel — Following a number of resources on the Internet, I am trying to build a simple web page where I can record something (my voice), then make an MP3 file out of the recording, and finally upload that file to a server.
At this point I can record and also play back, but I have not gotten as far as uploading; it seems I cannot even make an MP3 file locally.
Can someone tell me what I am doing wrong, or doing in the wrong order? Below is all the code I have at this point.
<div>
 <h2>Audio record and playback</h2>
 <p>
 <button id="startRecord"><h3>Start</h3></button>
 <button id="stopRecord" disabled="disabled"><h3>Stop</h3></button>
 <audio id="player" controls="controls"></audio>
 <a id="audioDownload"></a>
 </p>
</div>
<script>
var player = document.getElementById('player');

var handleSuccess = function(stream) {
  rec = new MediaRecorder(stream);

  rec.ondataavailable = e => {
    audioChunks.push(e.data);
    if (rec.state == "inactive") {
      let blob = new Blob(audioChunks, {type: 'audio/x-mpeg-3'});
      player.src = URL.createObjectURL(blob);
      player.controls = true;
      player.autoplay = true;
      // audioDownload.href = player.src;
      // audioDownload.download = 'sound.data';
      // audioDownload.innerHTML = 'Download';
      mp3Build();
    }
  }

  player.src = stream;
};

navigator.mediaDevices.getUserMedia({audio: true /*, video: false */})
  .then(handleSuccess);

startRecord.onclick = e => {
  startRecord.disabled = true;
  stopRecord.disabled = false;
  audioChunks = [];
  rec.start();
}

stopRecord.onclick = e => {
  startRecord.disabled = false;
  stopRecord.disabled = true;
  rec.stop();
}

var ffmpeg = require('ffmpeg');

function mp3Build() {
  try {
    var process = new ffmpeg('sound.data');
    process.then(function (audio) {
      // Callback mode.
      audio.fnExtractSoundToMP3('sound.mp3', function (error, file) {
        if (!error) {
          console.log('Audio file: ' + file);
          audioDownload.href = player.src;
          audioDownload.download = 'sound.mp3';
          audioDownload.innerHTML = 'Download';
        } else {
          console.log('Error-fnExtractSoundToMP3: ' + error);
        }
      });
    }, function (err) {
      console.log('Error: ' + err);
    });
  } catch (e) {
    console.log(e.code);
    console.log(e.msg);
  }
}
</script>

When I try to investigate what is happening using the debugger inside the web console, on the line:
var process = new ffmpeg('sound.data');
I get this message:
Paused on exception
TypeError: ffmpeg is not a constructor.
And on the line:
var ffmpeg = require('ffmpeg');
I get this message:
Paused on exception
ReferenceError: require is not defined.
Besides, when I watch the expression ffmpeg, I can see:
ffmpeg: undefined
After some further investigation, and using Browserify, I use the following code:
<div>
 <h2>Audio record and playback</h2>
 <p>
 <button id="startRecord"><h3>Start</h3></button>
 <button id="stopRecord" disabled="disabled"><h3>Stop</h3></button>
 <audio id="player" controls="controls"></audio>
 <a id="audioDownload"></a>
 </p>
</div>
<script src='http://stackoverflow.com/feeds/tag/bundle.js'></script>
<script>
var player = document.getElementById('player');

var handleSuccess = function(stream) {
  rec = new MediaRecorder(stream);

  rec.ondataavailable = e => {
    if (rec.state == "inactive") {
      let blob = new Blob(audioChunks, {type: 'audio/x-mpeg-3'});
      //player.src = URL.createObjectURL(blob);
      //player.srcObject = URL.createObjectURL(blob);
      //player.srcObject = blob;
      player.srcObject = stream;
      player.controls = true;
      player.autoplay = true;
      // audioDownload.href = player.src;
      // audioDownload.download = 'sound.data';
      // audioDownload.innerHTML = 'Download';
      mp3Build();
    }
  }

  //player.src = stream;
  player.srcObject = stream;
};

navigator.mediaDevices.getUserMedia({audio: true /*, video: false */})
  .then(handleSuccess);

startRecord.onclick = e => {
  startRecord.disabled = true;
  stopRecord.disabled = false;
  audioChunks = [];
  rec.start();
}

stopRecord.onclick = e => {
  startRecord.disabled = false;
  stopRecord.disabled = true;
  rec.stop();
}

var ffmpeg = require('ffmpeg');

function mp3Build() {
  try {
    var process = new ffmpeg('sound.data');
    process.then(function (audio) {
      // Callback mode.
      audio.fnExtractSoundToMP3('sound.mp3', function (error, file) {
        if (!error) {
          console.log('Audio file: ' + file);
          //audioDownload.href = player.src;
          audioDownload.href = player.srcObject;
          audioDownload.download = 'sound.mp3';
          audioDownload.innerHTML = 'Download';
        } else {
          console.log('Error-fnExtractSoundToMP3: ' + error);
        }
      });
    }, function (err) {
      console.log('Error: ' + err);
    });
  } catch (e) {
    console.log(e.code);
    console.log(e.msg);
  }
}
</script>

That solved the problem of the expression ffmpeg being undefined.
But playback is no longer working. I may not be doing the right thing with player.srcObject, and maybe some other things too.
When I use this line:
player.srcObject = URL.createObjectURL(blob);
I get this message:
Paused on exception
TypeError: Value being assigned to HTMLMediaElement.srcObject is not an object.
And when I use this line:
player.srcObject = blob;
I get this message:
Paused on exception
TypeError: Value being assigned to HTMLMediaElement.srcObject does not implement interface MediaStream.
Finally, if I use this:
player.srcObject = stream;
I do not get any error message, but the voice recording still does not work.
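For context, the ffmpeg npm package pulled in by require('ffmpeg') is a Node.js wrapper around a native ffmpeg binary, so it cannot run inside a browser even after bundling with Browserify. One browser-side route (a sketch, not from the original post, assuming the @ffmpeg/ffmpeg package, i.e. ffmpeg.wasm, and a hypothetical blobToMp3 helper) is to transcode the recorded blob to MP3 entirely in the page and only then upload it:

import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';

const ffmpeg = createFFmpeg({ log: true });

// Hypothetical helper: turn the MediaRecorder blob into an MP3 blob in the browser.
async function blobToMp3(recordedBlob) {
  if (!ffmpeg.isLoaded()) await ffmpeg.load();
  ffmpeg.FS('writeFile', 'sound.webm', await fetchFile(recordedBlob));
  // Assumes the ffmpeg.wasm core build ships the libmp3lame encoder.
  await ffmpeg.run('-i', 'sound.webm', 'sound.mp3');
  const data = ffmpeg.FS('readFile', 'sound.mp3');
  return new Blob([data.buffer], { type: 'audio/mpeg' });
}

// The result can feed both the download link and a FormData upload:
// const mp3 = await blobToMp3(blob);
// audioDownload.href = URL.createObjectURL(mp3);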