Advanced search

Media (0)

Word: - Tags -/interaction

No media matching your criteria is available on this site.

Other articles (88)

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Managing creation and editing rights for objects

    8 February 2011, by

    By default, many features are limited to administrators, but the minimum user status required to use them can be configured independently, notably: writing content on the site, adjustable in the form-template management; adding notes to articles; adding captions and annotations to images;

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid reference to Web 2.0 and the companies that profit from media-sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (4236)

  • How to make customizable subtitles with FFmpeg?

    6 August 2022, by Alex Rypun

    I need to generate videos with text (aka subtitles) and styles provided by users.
The styles are a few fonts, text border, text background, text shadow, colors, position on the video, etc.

    


    As I understand it, there are 2 filters I can use: drawtext and subtitles.

    


    subtitles is easier to use, but it's not fully customizable. For example, I can't add both a shadow and a background to the same text.

    


    drawtext is more customizable but quite problematic.

    


    I've implemented almost everything I need with drawtext but have one problem: multiline text with a background.

    


    The boxborderw parameter adds the specified number of pixels on all 4 sides, measured from the extreme points of the text. I need the backgrounds to touch each other (pixel perfect), which means I have to position the text lines with equal spacing between extreme text points (not between baselines). With some crazy workarounds I solved that.
Everything almost works, but sometimes a strange border appears between the lines:

    


    [screenshot: the strange border between the two text lines]
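
    As a side note, here is a back-of-envelope sketch of the spacing rule described above (my formalization, not from the post; the 47 px text height is inferred from the numbers in the command further down and is an assumption, as is the shell arithmetic):

    # With box=1, drawtext draws the background from y - boxborderw to y + text_h + boxborderw,
    # so two backgrounds touch exactly when y2 = y1 + text_h + 2*boxborderw.
    y1=105; text_h=47; boxborderw=15        # text_h=47 assumed for "qqq" at fontsize 58
    echo $(( y1 + text_h + 2*boxborderw ))  # prints 182, the y used for the second drawtext below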

    


    After spending ages investigating, I figured out that it depends on the actual line height and on the position in the video.
Each letter has its own distance between the baseline and its highest point, and between the baseline and its lowest point.
The whole line height is the difference between the highest point of the tallest symbol and the lowest point of the deepest symbol:

    


    [screenshot: per-letter extents above and below the baseline]

    


    Now for the most interesting part.

    


    For lines whose height is an even number of pixels:

    


    If the line is positioned at an odd y coordinate and has an odd boxborderw (or is positioned at an even y coordinate and has an even boxborderw), then it's rendered as expected (without any additional border).
In the other cases, a thin dark line is noticeable along the contact line (it's rendered either at the top or at the bottom of each text block):

    


    [screenshot: the thin dark line at the block boundary, even line height]

    

    


    For lines whose height is an odd number of pixels:

    


    In all cases the thin dark line is noticeable. Depending on whether the y coordinate and the boxborderw value are odd or even, that magic line appears at the top or at the bottom:

    


    [screenshot: the thin dark line, odd line height]

    


    I can make the text lines overlap a bit. This solves the problem but creates another one.
I use fade-in/fade-out by smoothly changing the alpha (transparency). When the text becomes semi-transparent, the overlapped area has a different color:

    


    [screenshot: the differently coloured overlap during the fade]

    


    Here is the command I use:

    


    ffmpeg "-y" "-f" "lavfi" "-i" "color=#ffffff" "-filter_complex" \
"[0:v]loop=-1:1:0,trim=duration=2.00,format=yuv420p,scale=960:540,setdar=16/9[video];\
[video]drawtext=text='qqq':\
fontfile=ComicSansMS.ttf:\
fontcolor=#ffffff:\
fontsize=58:\
bordercolor=#000000:\
borderw=2:\
box=1:\
boxcolor=#ff0000:\
boxborderw=15:\
x=(w-text_w)/2:\
y=105:\
alpha='if(lt(t,0),0,if(lt(t,0.5),(t-0)/0.5,if(lt(t,1.49),1,if(lt(t,1.99),(0.5-(t-1.49))/0.5,0))))',\
drawtext=text='qqq':\
fontfile=ComicSansMS.ttf:\
fontcolor=#ffffff:\
fontsize=58:\
bordercolor=#000000:\
borderw=2:\
box=1:\
boxcolor=#ff0000:\
boxborderw=15:\
x=(w-text_w)/2:\
y=182:\
alpha='if(lt(t,0),0,if(lt(t,0.5),(t-0)/0.5,if(lt(t,1.49),1,if(lt(t,1.99),(0.5-(t-1.49))/0.5,0))))'[video]" \
"-vsync" "2" "-map" "[video]" "-r" "25" "output_multiline.mp4"


    


    I tried drawing rectangles with drawbox following the same logic (changing the position and height), and there were no such problems.

    


    Does anybody know what the nature of that dark line is?
How can it depend on whether the height and position are even or odd?
What should I investigate to figure out this behavior?

    


    UPD:

    


    I just accidentally figured out that changing the pixel format (from format=yuv420p to format=uyvy422 or many others) solves the problem, at least with my test commands.
Now I'm off to learn what a pixel format actually is.
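
    A plausible explanation for the parity dependence (my note, not from the original post): in yuv420p the chroma planes are stored at half resolution in both directions, so one chroma sample is shared by a 2x2 block of pixels; a box edge that lands on an odd y coordinate therefore shares chroma samples with the neighbouring row, which is consistent with a seam that appears or disappears with the even/odd position. Below is a trimmed-down variant of the command above that keeps full-resolution chroma (only the format argument differs; the loop, fade and text-border options are omitted for brevity, and ComicSansMS.ttf is assumed to be present as in the question):

    ffmpeg "-y" "-f" "lavfi" "-i" "color=#ffffff" "-filter_complex" \
"[0:v]trim=duration=2.00,format=yuv444p,scale=960:540[bg];\
[bg]drawtext=text='qqq':\
fontfile=ComicSansMS.ttf:\
fontcolor=#ffffff:\
fontsize=58:\
box=1:\
boxcolor=#ff0000:\
boxborderw=15:\
x=(w-text_w)/2:\
y=105,\
drawtext=text='qqq':\
fontfile=ComicSansMS.ttf:\
fontcolor=#ffffff:\
fontsize=58:\
box=1:\
boxcolor=#ff0000:\
boxborderw=15:\
x=(w-text_w)/2:\
y=182[out]" \
"-map" "[out]" "-r" "25" "output_multiline_444.mp4"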

    


  • Is there a way to program the (Download) button to save a group of images as one video?

    9 February 2024, by Lina Al-fawzan

    This is my entire code. It works like this: for everything the user writes or says, matching images are returned, and the next image is shown after the user presses "Close"; each image can be saved separately. I want to make a simple modification. First, instead of a Close button, each image should be displayed for 3 seconds and then the next one shown, and so on, all in one window. Second, the "Download" button should appear when the last image is displayed, and it should save all of the images as one video.

    


import 'package:flutter/material.dart';
import 'package:flutter/services.dart' show rootBundle;
import 'dart:convert';
import 'dart:typed_data';
import 'package:image_gallery_saver/image_gallery_saver.dart';
import 'package:speech_to_text/speech_to_text.dart' as stt;

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: MyHomePage(),
    );
  }
}

class MyHomePage extends StatefulWidget {
  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  TextEditingController _textEditingController = TextEditingController();
  late stt.SpeechToText _speech;
  bool _isListening = false;

  @override
  void initState() {
    super.initState();
    _speech = stt.SpeechToText();
  }

  void _listen() async {
    if (!_isListening) {
      bool available = await _speech.initialize(
        onStatus: (val) => print('onStatus: $val'),
        onError: (val) => print('onError: $val'),
      );
      if (available) {
        setState(() => _isListening = true);
        _speech.listen(
          onResult: (val) => setState(() {
            _textEditingController.text = val.recognizedWords;
            if (val.hasConfidenceRating && val.confidence > 0) {
              _showImages(val.recognizedWords);
            }
          }),
        );
      }
    } else {
      setState(() => _isListening = false);
      _speech.stop();
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text('Image Viewer'),
      ),
      body: Padding(
        padding: const EdgeInsets.all(16.0),
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: [
            TextField(
              controller: _textEditingController,
              decoration: const InputDecoration(
                labelText: 'Enter a word',
              ),
            ),
            SizedBox(height: 16.0),
            ElevatedButton(
              onPressed: () {
                String userInput = _textEditingController.text;
                _showImages(userInput);
              },
              child: Text('Show Images'),
            ),
            SizedBox(height: 16.0),
            ElevatedButton(
              onPressed: _listen,
              child: Text(_isListening ? 'Stop Listening' : 'Start Listening'),
            ),
          ],
        ),
      ),
    );
  }

  Future<void> _showImages(String userInput) async {
    String directoryPath = 'assets/output_images/';
    print("User Input: $userInput");
    print("Directory Path: $directoryPath");

    List<String> assetFiles = await rootBundle
        .loadString('AssetManifest.json')
        .then((String manifestContent) {
      final Map<String, dynamic> manifestMap = json.decode(manifestContent);
      return manifestMap.keys
          .where((String key) => key.startsWith(directoryPath))
          .toList();
    });

    List<String> imageFiles = assetFiles.where((String assetPath) =>
        assetPath.toLowerCase().endsWith('.jpg') ||
        assetPath.toLowerCase().endsWith('.gif')).toList();

    List<String> words = userInput.split(' '); // Tokenize the sentence into words

    for (String word in words) {
      String wordImagePath = '$directoryPath$word.gif';

      if (imageFiles.contains(wordImagePath)) {
        await _showDialogWithImage(wordImagePath);
      } else {
        for (int i = 0; i < word.length; i++) {
          String letter = word[i];
          String letterImagePath = imageFiles.firstWhere(
            (assetPath) => assetPath.toLowerCase().endsWith('$letter.jpg'),
            orElse: () => '',
          );
          if (letterImagePath.isNotEmpty) {
            await _showDialogWithImage(letterImagePath);
          } else {
            print('No image found for $letter');
          }
        }
      }
    }
  }

  Future<void> _showDialogWithImage(String imagePath) async {
    await showDialog<void>(
      context: context,
      builder: (BuildContext context) {
        return AlertDialog(
          content: Image.asset(imagePath),
          actions: [
            TextButton(
              onPressed: () {
                Navigator.of(context).pop();
              },
              child: Text('Close'),
            ),
            TextButton(
              onPressed: () async {
                await _downloadImage(imagePath);
                Navigator.of(context).pop();
              },
              child: Text('Download'),
            ),
          ],
        );
      },
    );
  }

  Future<void> _downloadImage(String assetPath) async {
    try {
      final ByteData data = await rootBundle.load(assetPath);
      final List<int> bytes = data.buffer.asUint8List();

      final result = await ImageGallerySaver.saveImage(Uint8List.fromList(bytes));

      if (result != null) {
        ScaffoldMessenger.of(context).showSnackBar(
          SnackBar(
            content: Text('Image saved to gallery.'),
          ),
        );
      } else {
        ScaffoldMessenger.of(context).showSnackBar(
          SnackBar(
            content: Text('Failed to save image to gallery.'),
          ),
        );
      }
    } catch (e) {
      print('Error downloading image: $e');
    }
  }
}
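
    For the "save them all as one video" part, one common approach is to do the images-to-video step with ffmpeg on the exported files (a hedged sketch, not from the question; the img%03d.jpg naming and the output file name are assumptions, while the 3 seconds per image matches the description above):

    # Each input image is shown for 3 seconds (-framerate 1/3); the output plays at 25 fps.
    ffmpeg -framerate 1/3 -i img%03d.jpg -c:v libx264 -r 25 -pix_fmt yuv420p slideshow.mp4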


  • avutil/timestamp: introduce av_ts_make_time_string2 for better precision

    17 March 2024, by Marton Balint
    avutil/timestamp: introduce av_ts_make_time_string2 for better precision
    

    av_ts_make_time_string() used "%.6g" format, but this format was losing
    precision even when the timestamp to be printed was not that large. For example
    for 3 hours (10800) seconds, only 1 decimal digit was printed, which made this
    format inaccurate when it was used in e.g. the silencedetect filter. Other
detection filters printing timestamps had similar issues. Also, the time base
parameter of the function was *AVRational instead of AVRational.
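
    To illustrate the precision loss described above (a shell sketch of the underlying printf semantics, not code from this commit; a locale with a "." decimal point is assumed): "%.6g" keeps six significant digits in total, so a five-digit integer part leaves room for only one fractional digit, while a fixed "%.6f" always keeps six fractional digits.

    printf '%.6g\n' 10800.123456   # prints 10800.1
    printf '%.6f\n' 10800.123456   # prints 10800.123456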

    Resolve these problems by introducing a new function, av_ts_make_time_string2().

    We change the used format to "%.*f", use a precision of 6, except when printing
    values near 0, in which case we calculate the precision dynamically to aim for
    a similar precision in normal form as with %.6g. No longer using scientific
    representation can make parsing the timestamp easier for the users, we can
    safely do this because the theoretical maximum of INT64_MAX*INT32_MAX still
    fits into the string buffer in normal form.

    We somewhat imitate %g by trimming ending zeroes and the potential decimal
    point characters. In order not to trim "inf" as well, we assume that the
    decimal point string does not contain the letter "f". Note that depending on
    printf %f implementation, we might trim "infinity" to "inf".

    Thanks to Allan Cady for bringing up this issue.

    Signed-off-by: Marton Balint <cus@passwd.hu>

    • [DH] doc/APIchanges
    • [DH] libavutil/Makefile
    • [DH] libavutil/timestamp.c
    • [DH] libavutil/timestamp.h
    • [DH] libavutil/version.h