
Other articles (88)
-
Supporting all media types
13 April 2011 — Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
Managing creation and editing rights for objects
8 February 2011 — By default, many features are restricted to administrators but remain individually configurable to change the minimum status required to use them, notably: writing content on the site, adjustable in the form template management; adding notes to articles; adding captions and annotations to images;
-
Keeping control of your media in your hands
13 April 2011 — The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media-sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)
On other sites (4236)
-
How to make customizable subtitles with FFmpeg?
6 August 2022, by Alex Rypun — I need to generate videos with text (aka subtitles) and styles provided by users.
The styles are a few fonts, text border, text background, text shadow, colors, position on the video, etc.


As I understand it, there are two filters I can use: drawtext and subtitles.

subtitles is easier to use but not fully customizable. For example, I can't add both a shadow and a background for the same text.

drawtext is more customizable but quite problematic.

I've implemented almost everything I need with drawtext but have one problem: multiline text with a background.

The boxborderw parameter adds the specified number of pixels on all four sides, measured from the extreme points of the text. I need the backgrounds to touch each other (pixel perfect), which means I have to position the text lines with equal spacing between their extreme points (not between baselines). With some crazy workarounds I solved that.
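
A minimal sketch of that stacking logic, with hypothetical variable names; the per-line glyph extent (text_h) has to be measured beforehand, and the value 47 below is only what the command further down implies for 'qqq' at fontsize 58:

boxborderw=15
line1_y=105
line1_text_h=47  # assumed: highest-to-lowest glyph extent of line 1, in pixels
line2_y=$((line1_y + line1_text_h + 2*boxborderw))  # 105 + 47 + 30 = 182, so the two boxes touch
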
Everything almost works, but sometimes a strange border appears between the lines:



After spending ages investigating, I figured out that it depends on the actual line height and the position in the video.
Each letter has its own distance between the baseline and its highest point, and between the baseline and its lowest point.
The whole line height is the difference between the highest point of the tallest symbol and the lowest point of the deepest symbol:




Now the most interesting part.


For lines whose height is an even number of pixels:


If the line is positioned at an odd pixel (y-axis) and has an odd boxborderw (or is positioned at an even pixel and has an even boxborderw), then it's rendered as expected (without any additional borders).
In all other cases, a thin dark line is noticeable along the contact line (it's rendered either at the top or at the bottom of each text block):




For lines whose height is an odd number of pixels:


In all cases the thin dark line is noticeable. Depending on whether the y coordinate is odd or even and whether the boxborderw value is odd or even, that magic line appears at the top or at the bottom:



I can make the text lines overlap a bit. This solves the problem but creates another one.
I use fade-in/out by smoothly changing alpha (transparency). When the text becomes semi-transparent, the overlapped area has a different color:



Here is the command I use:


ffmpeg "-y" "-f" "lavfi" "-i" "color=#ffffff" "-filter_complex" \
"[0:v]loop=-1:1:0,trim=duration=2.00,format=yuv420p,scale=960:540,setdar=16/9[video];\
[video]drawtext=text='qqq':\
fontfile=ComicSansMS.ttf:\
fontcolor=#ffffff:\
fontsize=58:\
bordercolor=#000000:\
borderw=2:\
box=1:\
boxcolor=#ff0000:\
boxborderw=15:\
x=(w-text_w)/2:\
y=105:\
alpha='if(lt(t,0),0,if(lt(t,0.5),(t-0)/0.5,if(lt(t,1.49),1,if(lt(t,1.99),(0.5-(t-1.49))/0.5,0))))',\
drawtext=text='qqq':\
fontfile=ComicSansMS.ttf:\
fontcolor=#ffffff:\
fontsize=58:\
bordercolor=#000000:\
borderw=2:\
box=1:\
boxcolor=#ff0000:\
boxborderw=15:\
x=(w-text_w)/2:\
y=182:\
alpha='if(lt(t,0),0,if(lt(t,0.5),(t-0)/0.5,if(lt(t,1.49),1,if(lt(t,1.99),(0.5-(t-1.49))/0.5,0))))'[video]" \
"-vsync" "2" "-map" "[video]" "-r" "25" "output_multiline.mp4"



I tried to draw rectangles with drawbox, following the same logic (changing the position and height), and there were no problems.
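
For reference, a minimal sketch of that drawbox variant for the first text line of the command above (the geometry values are assumed: the box starts boxborderw above the text and its height is the glyph extent plus twice boxborderw; drawbox fills the rectangle and drawtext then draws with its own box disabled):

[video]drawbox=x=(iw-w)/2:y=90:w=200:h=77:color=#ff0000:t=fill,\
drawtext=text='qqq':fontfile=ComicSansMS.ttf:fontcolor=#ffffff:fontsize=58:\
bordercolor=#000000:borderw=2:x=(w-text_w)/2:y=105[video]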

Does anybody know what the nature of that dark line is?
How can it depend on whether the height and position are even or odd?
What should I investigate to figure out this behavior?


UPD:


Just accidentally figured out that changing the pixel format (from format=yuv420p to format=uyvy422 or many others) solved the problem, at least in my test commands.
Now I will learn what a pixel format is.
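
That observation is consistent with chroma subsampling: yuv420p stores chroma at half resolution both horizontally and vertically, so box edges at odd y positions or with odd heights land between chroma samples, while formats without vertical chroma subsampling (uyvy422, yuv444p, rgba, ...) don't have that constraint. The only change needed in the command above is the format filter at the start of the chain, for example:

[0:v]loop=-1:1:0,trim=duration=2.00,format=uyvy422,scale=960:540,setdar=16/9[video];\
...

Since most MP4/H.264 players expect yuv420p, it may be worth adding a final format=yuv420p after all the drawtext filters; the drawing then happens before the final 4:2:0 conversion, which should avoid the alignment issue while keeping a widely compatible output.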

-
Is there a way to program the (Download) button to save a group of images as one video?
9 February 2024, by Lina Al-fawzan — This is my entire code. Its function is that everything the user writes or says will have images returned according to what they wrote/said, the next image is shown after they press "Close", and each image can be saved separately. I want to make a simple modification to it. First, instead of a Close button, I want each image to be displayed for 3 seconds before the next one is shown, and so on, all in one window; and I want the Download button to appear when the last image is displayed and to save them all as one video (a sketch of that last step follows the code below).


import 'package:flutter/material.dart';
import 'package:flutter/services.dart' show rootBundle;
import 'dart:convert';
import 'dart:typed_data';
import 'package:image_gallery_saver/image_gallery_saver.dart';
import 'package:speech_to_text/speech_to_text.dart' as stt;

void main() {
 runApp(MyApp());
}

class MyApp extends StatelessWidget {
 @override
 Widget build(BuildContext context) {
 return MaterialApp(
 home: MyHomePage(),
 );
 }
}

class MyHomePage extends StatefulWidget {
 @override
 _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
 TextEditingController _textEditingController = TextEditingController();
 late stt.SpeechToText _speech;
 bool _isListening = false;

 @override
 void initState() {
 super.initState();
 _speech = stt.SpeechToText();
 }

 void _listen() async {
 if (!_isListening) {
 bool available = await _speech.initialize(
 onStatus: (val) => print('onStatus: $val'),
 onError: (val) => print('onError: $val'),
 );
 if (available) {
 setState(() => _isListening = true);
 _speech.listen(
 onResult: (val) => setState(() {
 _textEditingController.text = val.recognizedWords;
 if (val.hasConfidenceRating && val.confidence > 0) {
 _showImages(val.recognizedWords);
 }
 }),
 );
 }
 } else {
 setState(() => _isListening = false);
 _speech.stop();
 }
 }

 @override
 Widget build(BuildContext context) {
 return Scaffold(
 appBar: AppBar(
 title: Text('Image Viewer'),
 ),
 body: Padding(
 padding: const EdgeInsets.all(16.0),
 child: Column(
 mainAxisAlignment: MainAxisAlignment.center,
 children: [
 TextField(
 controller: _textEditingController,
 decoration: const InputDecoration(
 labelText: 'Enter a word',
 ),
 ),
 SizedBox(height: 16.0),
 ElevatedButton(
 onPressed: () {
 String userInput = _textEditingController.text;
 _showImages(userInput);
 },
 child: Text('Show Images'),
 ),
 SizedBox(height: 16.0),
 ElevatedButton(
 onPressed: _listen,
 child: Text(_isListening ? 'Stop Listening' : 'Start Listening'),
 ),
 ],
 ),
 ),
 );
 }

Future<void> _showImages(String userInput) async {
 String directoryPath = 'assets/output_images/';
 print("User Input: $userInput");
 print("Directory Path: $directoryPath");

 List<String> assetFiles = await rootBundle
 .loadString('AssetManifest.json')
 .then((String manifestContent) {
 final Map manifestMap = json.decode(manifestContent);
 return manifestMap.keys
 .where((String key) => key.startsWith(directoryPath))
 .toList();
 });

 List<String> imageFiles = assetFiles.where((String assetPath) =>
 assetPath.toLowerCase().endsWith('.jpg') ||
 assetPath.toLowerCase().endsWith('.gif')).toList();

 List<String> words = userInput.split(' '); // Tokenize the sentence into words

 for (String word in words) {
 String wordImagePath = '$directoryPath$word.gif';

 if (imageFiles.contains(wordImagePath)) {
 await _showDialogWithImage(wordImagePath);
 } else {
 for (int i = 0; i < word.length; i++) {
 String letter = word[i];
 String letterImagePath = imageFiles.firstWhere(
 (assetPath) => assetPath.toLowerCase().endsWith('$letter.jpg'),
 orElse: () => '',
 );
 if (letterImagePath.isNotEmpty) {
 await _showDialogWithImage(letterImagePath);
 } else {
 print('No image found for $letter');
 }
 }
 }
 }
}

 

 Future<void> _showDialogWithImage(String imagePath) async {
 await showDialog<void>(
 context: context,
 builder: (BuildContext context) {
 return AlertDialog(
 content: Image.asset(imagePath),
 actions: [
 TextButton(
 onPressed: () {
 Navigator.of(context).pop();
 },
 child: Text('Close'),
 ),
 TextButton(
 onPressed: () async {
 await _downloadImage(imagePath);
 Navigator.of(context).pop();
 },
 child: Text('Download'),
 ),
 ],
 );
 },
 );
 }

 Future<void> _downloadImage(String assetPath) async {
 try {
 final ByteData data = await rootBundle.load(assetPath);
 final List<int> bytes = data.buffer.asUint8List();

 final result = await ImageGallerySaver.saveImage(Uint8List.fromList(bytes));

 if (result != null) {
 ScaffoldMessenger.of(context).showSnackBar(
 SnackBar(
 content: Text('Image saved to gallery.'),
 ),
 );
 } else {
 ScaffoldMessenger.of(context).showSnackBar(
 SnackBar(
 content: Text('Failed to save image to gallery.'),
 ),
 );
 }
 } catch (e) {
 print('Error downloading image: $e');
 }
 }
}

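
Regarding the video part of the question, FFmpeg's image-sequence input can turn the matched images into a single clip. A minimal sketch, assuming the matched images have first been copied or decoded into same-sized numbered frames (frame_0001.png, frame_0002.png, ... are hypothetical names) and each should stay on screen for 3 seconds:

ffmpeg -framerate 1/3 -i frame_%04d.png \
  -vf "scale=720:720:force_original_aspect_ratio=decrease,pad=720:720:(ow-iw)/2:(oh-ih)/2" \
  -c:v libx264 -r 25 -pix_fmt yuv420p slideshow.mp4

The scale/pad filter only normalizes differently sized images to one frame size (720x720 is an arbitrary choice). On the Flutter side, a package such as ffmpeg_kit_flutter can run a command like this on-device, and the resulting file can then be saved from the Download button much as _downloadImage does for a single image.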


-
avutil/timestamp : introduce av_ts_make_time_string2 for better precision
17 March 2024, by Marton Balint — avutil/timestamp: introduce av_ts_make_time_string2 for better precision
av_ts_make_time_string() used the "%.6g" format, but this format was losing
precision even when the timestamp to be printed was not that large. For example,
for 3 hours (10800 seconds), only 1 decimal digit was printed, which made this
format inaccurate when it was used in e.g. the silencedetect filter. Other
detection filters printing timestamps had similar issues. Also, the time base
parameter of the function was *AVRational instead of AVRational.

Resolve these problems by introducing a new function, av_ts_make_time_string2().
We change the used format to "%.*f", with a precision of 6, except when printing
values near 0, in which case we calculate the precision dynamically to aim for
a precision in normal form similar to that of %.6g. No longer using scientific
representation can make parsing the timestamp easier for users; we can
safely do this because the theoretical maximum of INT64_MAX*INT32_MAX still
fits into the string buffer in normal form.

We somewhat imitate %g by trimming trailing zeroes and the potential decimal
point characters. In order not to trim "inf" as well, we assume that the
decimal point string does not contain the letter "f". Note that depending on
the printf %f implementation, we might trim "infinity" to "inf".

Thanks to Allan Cady for bringing up this issue.
Signed-off-by: Marton Balint <cus@passwd.hu>
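
As a rough illustration of the difference (the value is chosen only for the example): for a timestamp of 10800.123456 seconds, the old "%.6g" format prints "10800.1" (six significant digits), while the new "%.*f" path with a precision of 6 prints "10800.123456"; a value such as 2.500000 is trimmed to "2.5".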