
Media (1)
-
1 000 000 (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
Other articles (75)
-
Emballe médias: what is it for?
4 February 2011
This plugin is designed to manage sites for publishing uploaded documents of all types.
It creates "media" items, meaning: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether it is audio, video, image or text; only one document can be linked to a so-called "media" article;
-
Installation in farm mode
4 February 2011
Farm mode makes it possible to host several MediaSPIP-type sites while installing the functional core only once.
This is the method we use on this very platform.
Using farm mode requires some knowledge of how SPIP works, unlike the standalone version, which does not really require any specific knowledge since the usual SPIP private area is no longer used.
To begin with, you must have installed the same files as the (...) installation
-
Creating farms of unique websites
13 April 2011
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (4202)
-
Get ffmpeg information in a friendly way
17 December 2020, by JBernardo
Every time I try to get some information about my video files with ffmpeg, it spits out a lot of useless information mixed in with the things I actually need.



I'm using:
ffmpeg -i name_of_the_video.mpg


Is there any way to get that in a friendlier format? JSON would be great (and even ugly XML is fine).



For now, I have my application parse the data with regexes, but there are lots of nasty corner cases that show up on specific video files. I fixed all the ones I encountered, but there may be more.



I wanted something like:



{
    "Stream 0": {
        "type": "Video",
        "codec": "h264",
        "resolution": "720x480"
    },
    "Stream 1": {
        "type": "Audio",
        "bitrate": "128 kbps",
        "channels": 2
    }
}
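
One option, not mentioned in the question itself: ffprobe, which ships alongside ffmpeg, can print this kind of stream information as JSON directly, for example:

ffprobe -v quiet -print_format json -show_format -show_streams name_of_the_video.mpg

The output can then be consumed without any regex parsing, and a JSON tool such as jq (e.g. jq '.streams[0].codec_name') can pull out individual fields.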



-
Blurry picture when decoding h264 mpegts udp stream with ffmpeg
10 June 2019, by Fredrik Axling
I'm using ffmpeg to read a UDP stream (it contains only video) and to decode frames, which I then want to encode again, but during decoding or demuxing I get blurry pictures, especially in the lower part of the frame.
I have a video player that also uses ffmpeg and displays the video perfectly; I have looked at that code but I don't see any differences.
In the log I see things like:
Invalid NAL unit 8, skipping.
nal_unit_type : 1(Coded slice of a non-IDR picture), nal_ref_idc : 3
Invalid NAL unit 7, skipping.
bytestream overread td
error while decoding MB 109 49, bytestream td
The main parts of the code look like:
av_register_all();
AVFormatContext *fmt_ctx = 0;
AVDictionary *options = 0;
av_dict_set(&options, "analyzeduration", "500000", 0);
av_dict_set(&options, "probesize", "500000", 0);

// Open the multicast UDP input and probe it.
const char* url = "udp://239.0.0.3:8081";
avformat_open_input(&fmt_ctx, url, 0, &options);
avformat_find_stream_info(fmt_ctx, &options);
int nRet = 0;
av_dump_format(fmt_ctx, 0, url, 0);

// The stream only carries video, so stream 0 is used directly.
AVStream *pStream = fmt_ctx->streams[0];
AVCodecID nCodecid = pStream->codec->codec_id;
AVCodec* pCodec = avcodec_find_decoder(nCodecid);
AVCodecContext* pCodecCtx = pStream->codec;
nRet = avcodec_open2(pCodecCtx, pCodec, NULL);

// Decoded frames are scaled down to a quarter of the input size and
// converted to RGB24 so they can be copied into a QImage.
int nInH = pStream->codec->height;
int nInW = pStream->codec->width;
int nOutW = nInW / 4;
int nOutH = nInH / 4;
SwsContext* pSwsCtx = sws_getContext(nInW, nInH, AV_PIX_FMT_YUV420P,
                                     nOutW, nOutH, AV_PIX_FMT_RGB24,
                                     SWS_BICUBIC, NULL, NULL, NULL);
m_pFilmWdg->m_img = QImage(nOutW, nOutH, QImage::Format_RGB888);
int linesizes[4];
av_image_fill_linesizes(linesizes, AV_PIX_FMT_RGB24, nOutW);

AVPacket pkt;
for (;;)
{
    av_init_packet(&pkt);
    pkt.data = NULL;
    pkt.size = 0;

    // Read one demuxed packet and hand it to the decoder.
    nRet = av_read_frame(fmt_ctx, &pkt);
    nRet = avcodec_send_packet(pCodecCtx, &pkt);
    av_packet_unref(&pkt);

    AVFrame* picture = av_frame_alloc();
    nRet = avcodec_receive_frame(pCodecCtx, picture);
    if (AVERROR(EAGAIN) == nRet)
    {
        // The decoder needs more input before it can emit a frame.
        av_frame_free(&picture);
        continue;
    }

    // Scale/convert into the QImage buffer and trigger a repaint.
    uint8_t* p[] = { m_pFilmWdg->m_img.bits() };
    nRet = sws_scale(pSwsCtx, picture->data, picture->linesize, 0, nInH, p, linesizes);
    av_frame_free(&picture);
    m_pFilmWdg->update();
}
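
One possible cause, not discussed in the post: log messages such as "Invalid NAL unit ..., skipping" and "bytestream overread", together with smearing in the lower part of the frame, are often a sign of dropped UDP packets on the receiving side rather than a problem in the decoding code itself. One way to test that is to play the same stream with a larger receive buffer via FFmpeg's udp protocol options, for example:

ffplay "udp://239.0.0.3:8081?buffer_size=4194304&overrun_nonfatal=1"

If the artifacts disappear, the same buffer_size option can be appended to the url passed to avformat_open_input.
-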
Learning Shell - creating a script with parameters that runs two separate cli apps
16 May 2013, by GuilhermeNagatomo
I want to learn shell scripting, so I'm trying to download a YouTube video using youtube-dl and then convert it to mp3 using ffmpeg.
I do it manually by running
youtube-dl http://youtube.com/watch?v=...
and then
ffmpeg -i downloadedFile -ab 256000 -ar 44100 audioFile.mp3
I know I need to pass two arguments to my script, one for the video URL and another for the audio file, to keep things as simple as possible, but I don't know how to start. Maybe grep the video id from the URL and use it to know which file to convert to mp3? (since youtube-dl saves the video named after its id)
Can someone recommend an article or documentation that can help me?
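
A minimal sketch of such a script (a rough take assuming a POSIX shell and youtube-dl's -o output option; the script name and the intermediate filename downloaded_video are made up for illustration):

#!/bin/sh
# yt2mp3.sh - download a YouTube video and convert it to mp3.
# Usage: ./yt2mp3.sh <video_url> <output.mp3>
set -e

URL="$1"   # first argument: the video URL
OUT="$2"   # second argument: the mp3 file to create

# Force a fixed download filename so the script never has to guess the video id.
youtube-dl -o downloaded_video "$URL"

# Convert with the same settings as the manual command.
ffmpeg -i downloaded_video -ab 256000 -ar 44100 "$OUT"

# Remove the intermediate download.
rm -f downloaded_video

Called as ./yt2mp3.sh http://youtube.com/watch?v=... song.mp3 it keeps the two arguments mentioned in the question and sidesteps parsing the video id entirely.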