
Media (1)
-
Ogg detection bug
22 March 2013
Updated: April 2013
Language: French
Type: Video
Other articles (9)
-
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out. -
XMP PHP
13 May 2011. According to Wikipedia, XMP means:
Extensible Metadata Platform (XMP) is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being based on XML, it manages a set of dynamic tags for use in the Semantic Web.
XMP makes it possible to store information about a file as an XML document: title, author, history (...) -
Customizable form
21 June 2013. This page presents the fields available in the media publication form and lists the fields that can be added. Media creation form
For a media-type document, the default fields are: Text, Enable/Disable the forum (the comment prompt can be disabled for each article), Licence, Add/remove authors, Tags
This form can be modified under:
Administration > Configuration des masques de formulaire. (...)
On other sites (2920)
-
ffmpeg drawtext Arabic fonts don't render correctly [closed]
11 February 2024, by Mahmoud Abdellatief
What I'm trying to achieve:
Loop an image into a video and overlay Arabic text from the Qur'an on it, including the diacritical marks, using a custom font.


Example of the text to be rendered:


بِسْمِ ٱللَّهِ ٱلرَّحْمَـٰنِ ٱلرَّحِيمِ



The font used:
https://fonts.qurancomplex.gov.sa/wp02/wp-content/uploads/2024/01/UthmanicHafs_v22.zip


The font's Unicode module:


Unicode Module

The Research and Development Unit in the Computer Department at the King Fahd Glorious Qur'an Printing Complex relied on the Unicode system to create the Hafs font in the Uthmanic script, because this system is followed globally by computer and systems manufacturers around the world.

Unicode is a global encoding that defines the codes and letters used in most of the world's languages, gathered into one code to facilitate the presentation and delivery of information regardless of the language used. This encoding uses 1 to 4 bytes (byte = 8 bits) per letter, and so far only about a third of the available code points have been used to encode the letters of these languages.

The Hafs font in the Uthmanic script was built entirely on the Unicode system; the basic letters were formed according to the following figure:

The font was developed over the code range (0600) to (06FF). Several encoded letters have not been used at all, so they were replaced with the code []. The copy displayed above was developed from the basic Arabic block (0600-06FF), which was updated by the Unicode consortium in 2009.



Expected result:
An exported video with correctly rendered text using the given font.


Actual result:
The rendered text contains only the diacritical marks (the accents above the letters), without the actual letters.



What I tried:
This is my test command, which exports just an image for faster results:


ffmpeg -loop 1 -i "image-2.jpg" -vf "drawtext=text='بِسْمِ ٱللَّهِ ٱلرَّحْمَـٰنِ ٱلرَّحِيمِ':fontsize=124:fontcolor=white:fontfile='UthmanicHafsV22.ttf':x=(w-text_w)/2:y=(h-text_h)/2" -frames:v 1 "output.png"



- Tried adding ft_load_flags; tried almost all of them.
- Tried text_shaping=1, with no success.
- Tried textfile instead of text.
- Tried changing the font; any font I try has different problems, such as squares instead of the diacritical marks.

P.S. I'm getting the same results both with the latest ffmpeg that I compiled myself on macOS (with all required libraries enabled) and with the Flutter FFmpegKit full-gpl package.
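
One avenue worth checking, as a sketch rather than a confirmed fix: drawtext's Arabic shaping depends on ffmpeg being compiled with libfribidi (and, in recent releases, libharfbuzz), so a build without them tends to emit unshaped or mark-only glyphs. The file basmala.txt below is a hypothetical name for a UTF-8 file holding the verse.

# Check whether the shaping libraries were compiled in (assumes a POSIX shell).
ffmpeg -hide_banner -buildconf | grep -E 'libfribidi|libharfbuzz'

# Same filter as above, but reading the verse from a UTF-8 text file
# (basmala.txt is a hypothetical name) so the shell never re-encodes it.
ffmpeg -loop 1 -i "image-2.jpg" -vf "drawtext=textfile=basmala.txt:fontfile='UthmanicHafsV22.ttf':fontsize=124:fontcolor=white:text_shaping=1:x=(w-text_w)/2:y=(h-text_h)/2" -frames:v 1 "output.png"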


-
Incrementality Testing: Quick-Start Guide (With Calculations)
26 March 2024, by Erin -
ffmpeg GRAY16 stream over network
28 November 2023, by Norbert P.
I'm working on a school project where we need to use depth cameras. The camera produces color and depth (in other words, a 16-bit grayscale image). We decided to use ffmpeg, as compression could be very useful later on. For now we have a basic stream running from one PC to another. These settings include:


- rtmp
- flv as container
- pixel format AV_PIX_FMT_YUV420P
- codec AV_CODEC_ID_H264

The problem we are having is with the grayscale image. Not every codec can cope with this pixel format, just as not every protocol works with a given codec. I got some settings "working", but the receiver side just gets stuck in avformat_open_input().
I have also tested it from the command line, with ffmpeg listening for the connection, and the same thing happens.
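
A minimal sketch of how one could ask the encoder which pixel formats it accepts before building the pipeline, assuming an FFmpeg version where AVCodec::pix_fmts is still populated (newer releases expose the same data via avcodec_get_supported_config()):

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/pixdesc.h>
}
#include <cstdio>

// List every pixel format an encoder advertises, so GRAY16BE/GRAY16LE support
// can be confirmed before wiring up the whole streaming pipeline.
static void list_encoder_pix_fmts(AVCodecID id) {
    const AVCodec* enc = avcodec_find_encoder(id);
    if (!enc || !enc->pix_fmts) {
        std::printf("no encoder or no pixel-format list for %s\n", avcodec_get_name(id));
        return;
    }
    for (const AVPixelFormat* p = enc->pix_fmts; *p != AV_PIX_FMT_NONE; ++p)
        std::printf("%s supports %s\n", enc->name, av_get_pix_fmt_name(*p));
}

int main() {
    list_encoder_pix_fmts(AV_CODEC_ID_APNG); // the codecs mentioned in this post
    list_encoder_pix_fmts(AV_CODEC_ID_H264);
    return 0;
}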


I include a minimal "working" example of the client code. The server can be tested with "ffmpeg.exe -f apng -listen 1 -i rtmp://localhost:9999/stream/stream1 -c copy -f apng -listen 1 rtmp://localhost:2222/live/l" or with the code below. I get no warnings; ffmpeg is the newest version, installed with "vcpkg install --triplet x64-windows ffmpeg[ffmpeg,ffprobe,zlib]" on Windows or via the package manager on Linux.


The question: Did I miss something? How do I get it to work? If you have any better ideas I would gladly consider them. In the end I need 16 bits of lossless transmission; it could be split between channels etc., which I also tried with the same effect.


Client code that would have the camera and connect to the server:


extern "C" {
#include <libavutil></libavutil>opt.h>
#include <libavcodec></libavcodec>avcodec.h>
#include <libavutil></libavutil>channel_layout.h>
#include <libavutil></libavutil>common.h>
#include <libavformat></libavformat>avformat.h>
#include <libavcodec></libavcodec>avcodec.h>
#include <libavutil></libavutil>imgutils.h>
}

int main() {

 std::string container = "apng";
 AVCodecID codec_id = AV_CODEC_ID_APNG;
 AVPixelFormat pixFormat = AV_PIX_FMT_GRAY16BE;

 AVFormatContext* format_ctx;
 AVCodec* out_codec;
 AVStream* out_stream;
 AVCodecContext* out_codec_ctx;
 AVFrame* frame;
 uint8_t* data;

 std::string server = "rtmp://localhost:9999/stream/stream1";

 int width = 1280, height = 720, fps = 30, bitrate = 1000000;

 //initialize format context for output with flv and no filename
 avformat_alloc_output_context2(&format_ctx, nullptr, container.c_str(), server.c_str());
 if (!format_ctx) {
 return 1;
 }

 //AVIOContext for accessing the resource indicated by url
 if (!(format_ctx->oformat->flags & AVFMT_NOFILE)) {
 int avopen_ret = avio_open(&format_ctx->pb, server.c_str(),
 AVIO_FLAG_WRITE);// , nullptr, nullptr);
 if (avopen_ret < 0) {
 fprintf(stderr, "failed to open stream output context, stream will not work\n");
 return 1;
 }
 }


 const AVCodec* tmp_out_codec = avcodec_find_encoder(codec_id);
 //const AVCodec* tmp_out_codec = avcodec_find_encoder_by_name("hevc");
 out_codec = const_cast<AVCodec*>(tmp_out_codec);
 if (!(out_codec)) {
 fprintf(stderr, "Could not find encoder for '%s'\n",
 avcodec_get_name(codec_id));

 return 1;
 }

 out_stream = avformat_new_stream(format_ctx, out_codec);
 if (!out_stream) {
 fprintf(stderr, "Could not allocate stream\n");
 return 1;
 }

 out_codec_ctx = avcodec_alloc_context3(out_codec);

 const AVRational timebase = { 60000, fps };
 const AVRational dst_fps = { fps, 1 };
 av_log_set_level(AV_LOG_VERBOSE);
 //codec_ctx->codec_tag = 0;
 //codec_ctx->codec_id = codec_id;
 out_codec_ctx->codec_type = AVMEDIA_TYPE_VIDEO;
 out_codec_ctx->width = width;
 out_codec_ctx->height = height;
 out_codec_ctx->gop_size = 1;
 out_codec_ctx->time_base = timebase;
 out_codec_ctx->pix_fmt = pixFormat;
 out_codec_ctx->framerate = dst_fps;
 out_codec_ctx->time_base = av_inv_q(dst_fps);
 out_codec_ctx->bit_rate = bitrate;
 //if (fctx->oformat->flags & AVFMT_GLOBALHEADER)
 //{
 // codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
 //}

 out_stream->time_base = out_codec_ctx->time_base; //will be set afterwards by avformat_write_header to 1/1000

 int ret = avcodec_parameters_from_context(out_stream->codecpar, out_codec_ctx);
 if (ret < 0)
 {
 fprintf(stderr, "Could not initialize stream codec parameters!\n");
 return 1;
 }

 AVDictionary* codec_options = nullptr;
 av_dict_set(&codec_options, "tune", "zerolatency", 0);

 // open video encoder
 ret = avcodec_open2(out_codec_ctx, out_codec, &codec_options);
 if (ret < 0)
 {
 fprintf(stderr, "Could not open video encoder!\n");
 return 1;
 }
 av_dict_free(&codec_options);

 out_stream->codecpar->extradata_size = out_codec_ctx->extradata_size;
 out_stream->codecpar->extradata = static_cast<uint8_t*>(av_mallocz(out_codec_ctx->extradata_size));
 memcpy(out_stream->codecpar->extradata, out_codec_ctx->extradata, out_codec_ctx->extradata_size);

 av_dump_format(format_ctx, 0, server.c_str(), 1);

 frame = av_frame_alloc();

 int sz = av_image_get_buffer_size(pixFormat, width, height, 32);
#ifdef _WIN32
 data = (uint8_t*)_aligned_malloc(sz, 32);
 if (data == NULL)
 return ENOMEM;
#else
 ret = posix_memalign(reinterpret_cast<void**>(&data), 32, sz);
#endif
 av_image_fill_arrays(frame->data, frame->linesize, data, pixFormat, width, height, 32);
 frame->format = pixFormat;
 frame->width = width;
 frame->height = height;
 frame->pts = 1;
 if (avformat_write_header(format_ctx, nullptr) < 0) //Header making problems!!!
 {
 fprintf(stderr, "Could not write header!\n");
 return 1;
 }

 printf("stream time base = %d / %d \n", out_stream->time_base.num, out_stream->time_base.den);

 double inv_stream_timebase = (double)out_stream->time_base.den / (double)out_stream->time_base.num;
 printf("Init OK\n");
 /* Init phase end*/
 int dts = 0;
 int frameNo = 0;

 while (true) {
 //Fill dummy frame with something
 for (int y = 0; y < height; y++) {
 uint16_t color = ((y + frameNo) * 256) % (256 * 256);
 for (int x = 0; x < width; x++) {
 reinterpret_cast<uint16_t*>(data)[x + y * width] = color; // write the full 16-bit sample, not a truncated byte
 }
 }

 memcpy(frame->data[0], data, 1280 * 720 * sizeof(uint16_t));
 AVPacket* pkt = av_packet_alloc();

 int ret = avcodec_send_frame(out_codec_ctx, frame);
 if (ret < 0)
 {
 fprintf(stderr, "Error sending frame to codec context!\n");
 return ret;
 }
 while (ret >= 0) {
 ret = avcodec_receive_packet(out_codec_ctx, pkt);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 break;
 else if (ret < 0) {
 fprintf(stderr, "Error during encoding\n");
 break;
 }
 pkt->dts = dts;
 pkt->pts = dts;
 dts += 33;
 av_write_frame(format_ctx, pkt);
 frameNo++;
 av_packet_unref(pkt);
 }
 printf("Streamed %d frames\n", frameNo);
 }
 return 0;
}



And the part of the server that should receive; this is the code where it stops and waits:


extern "C" {
#include <libavcodec></libavcodec>avcodec.h>
#include <libavformat></libavformat>avformat.h>
#include <libavformat></libavformat>avio.h>
}

int main() {
 AVFormatContext* fmt_ctx = NULL;
 av_log_set_level(AV_LOG_VERBOSE);
 AVDictionary* options = nullptr;
 av_dict_set(&options, "protocol_whitelist", "file,udp,rtp,tcp,rtmp,rtsp,hls", 0);
 av_dict_set(&options, "timeout", "500000", 0); // Timeout in microseconds 

//Next Line hangs 
 int ret = avformat_open_input(&fmt_ctx, "rtmp://localhost:9999/stream/stream1", NULL, &options);
 if (ret != 0) {
 fprintf(stderr, "Could not open RTMP stream\n");
 return -1;
 }

 // Find the first video stream
 ret = avformat_find_stream_info(fmt_ctx, nullptr);
 if (ret < 0) {
 return ret;
 }
 //...
} 




Edit:
I tried to just create an animated PNG and stream it from one console window to another, to rule out any programming mistakes on my side. The result was the same: I just could not get a 16-bit PNG-encoded stream to work. The receiver hung while waiting, and closed when the file ended, with zero frames received in total.


I managed to get something else working:
To avoid encoding the gray frames as YUV420, I installed ffmpeg with libx264 support (I thought it was the same as H264, which in the code it is, but it adds support for more pixel formats). I then used H264 again, but with GRAY8 and a doubled image width, reconstructing the image on the other side.
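
For illustration, a minimal sketch of that byte-packing trick, assuming the 16-bit samples are simply viewed as two adjacent 8-bit columns (the function names are mine, not from the project):

#include <cstdint>
#include <cstring>

// Pack a row-major GRAY16 buffer into a GRAY8 buffer with doubled width,
// honouring the destination line stride (e.g. an AVFrame's linesize[0]).
static void pack_gray16_as_gray8(const uint16_t* src, int width, int height,
                                 uint8_t* dst, int dst_linesize) {
    for (int y = 0; y < height; ++y)
        std::memcpy(dst + y * dst_linesize,
                    reinterpret_cast<const uint8_t*>(src + y * width),
                    static_cast<size_t>(width) * 2);
}

// Receiving side: view each decoded row of 2*width GRAY8 pixels as width
// 16-bit samples again.
static void unpack_gray8_to_gray16(const uint8_t* src, int src_linesize,
                                   int width, int height, uint16_t* dst) {
    for (int y = 0; y < height; ++y)
        std::memcpy(dst + y * width,
                    src + y * src_linesize,
                    static_cast<size_t>(width) * 2);
}

Note that this only round-trips exactly when the encoder is run losslessly; any lossy step mixes the high and low bytes of neighbouring samples.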


Maybe as a side note: I could not get any other formats to work. Is "flv" the only option here? Could I get more performance if I changed it to... what?
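
On the container question, a small sketch of how one could probe which muxers accept a given codec using avformat_query_codec(); the container short names below are just examples, not a recommendation:

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}
#include <cstdio>

// Ask a muxer whether it can store a codec: 1 = yes, 0 = no, negative = unknown.
static void probe_container(const char* muxer_name, AVCodecID codec_id) {
    const AVOutputFormat* ofmt = av_guess_format(muxer_name, nullptr, nullptr);
    if (!ofmt) {
        std::printf("muxer '%s' is not available in this build\n", muxer_name);
        return;
    }
    int r = avformat_query_codec(ofmt, codec_id, FF_COMPLIANCE_NORMAL);
    std::printf("%s + %s -> %d\n", muxer_name, avcodec_get_name(codec_id), r);
}

int main() {
    for (const char* name : { "flv", "matroska", "mpegts", "nut" })
        probe_container(name, AV_CODEC_ID_H264);
    return 0;
}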