
Other articles (54)
-
Sites built with MediaSPIP
2 May 2011 — This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page. -
Adding notes and captions to images
7 February 2011 — To add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, editing, and deleting notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...) -
Automated installation script of MediaSPIP
25 April 2011 — To overcome the difficulties mainly due to the installation of server-side software dependencies, an "all-in-one" installation script written in bash was created to facilitate this step on a server running a compatible Linux distribution.
You must have SSH access to your server and a root account to use it, since the script installs the dependencies. Contact your provider if you do not have that.
Documentation on how to use this installation script is available here.
The code of this (...)
On other sites (5890)
-
FFMPEG audio transcoding using libav* libraries
10 February 2014, by vinvinod — I am writing an audio transcoding application using the ffmpeg libraries.
Here is my code:
/*
* File: main.cpp
* Author: vinod
* Compile with "g++ -std=c++11 -o audiotranscode main.cpp -lavformat -lavcodec -lavutil -lavfilter"
*
*/
#if !defined PRId64 || PRI_MACROS_BROKEN
#undef PRId64
#define PRId64 "lld"
#endif
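/* The guard above redefines PRId64 for C libraries whose PRI* format macros
 * are missing or broken, so that int64_t values can be printed portably. */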
#define __STDC_FORMAT_MACROS
#ifdef __cplusplus
extern "C" {
#endif
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <inttypes.h>
#include <libavutil/imgutils.h>
#include <libavutil/samplefmt.h>
#include <libavutil/frame.h>
#include <libavutil/timestamp.h>
#include <libavformat/avformat.h>
#include <libavfilter/avfilter.h>
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
#include <libswscale/swscale.h>
#include <libavutil/opt.h>
#ifdef __cplusplus
}
#endif
#include <iostream>
using namespace std;
int select_stream, got_frame, got_packet;
AVFormatContext *in_fmt_ctx = NULL, *out_fmt_ctx = NULL;
AVCodec *dec_codec = NULL, * enc_codec = NULL;
AVStream *audio_st = NULL;
AVCodecContext *enc_ctx = NULL, *dec_ctx = NULL;
AVFrame *pFrame = NULL, * pFrameFiltered = NULL;
AVFilterGraph *filter_graph = NULL;
AVFilterContext *buffersrc_ctx = NULL;
AVFilterContext *buffersink_ctx = NULL;
AVPacket packet;
string inFileName = "/home/vinod/vinod/Media/univac.webm";
string outFileName = "audio_extracted.m4a";
int target_bit_rate = 128000,
sample_rate = 22050,
channels = 1;
AVSampleFormat sample_fmt = AV_SAMPLE_FMT_S16;
string filter_description = "aresample=22050,aformat=sample_fmts=s16:channel_layouts=mono";
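/* The filter chain above resamples the decoded audio to 22050 Hz and converts
 * it to signed 16-bit mono, matching the sample_rate, sample_fmt and channels
 * values chosen for the encoder. */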
int log_averror(int errcode)
{
    char *errbuf = (char *) calloc(AV_ERROR_MAX_STRING_SIZE, sizeof(char));
    av_strerror(errcode, errbuf, AV_ERROR_MAX_STRING_SIZE);
    std::cout << "Error - " << errbuf << std::endl;
    free(errbuf); /* allocated with calloc(), so release with free(), not delete[] */
    return -1;
}
/**
 * Initialize the conversion filter graph.
 */
int initialize_audio_filter()
{
    char args[512];
    int ret;
    AVFilter *buffersrc = avfilter_get_by_name("abuffer");
    AVFilter *buffersink = avfilter_get_by_name("abuffersink");
    AVFilterInOut *outputs = avfilter_inout_alloc();
    AVFilterInOut *inputs = avfilter_inout_alloc();
    filter_graph = avfilter_graph_alloc();
    const enum AVSampleFormat out_sample_fmts[] = {sample_fmt, AV_SAMPLE_FMT_NONE};
    const int64_t out_channel_layouts[] = {av_get_default_channel_layout(out_fmt_ctx->streams[0]->codec->channels), -1};
    const int out_sample_rates[] = {out_fmt_ctx->streams[0]->codec->sample_rate, -1};

    if (!dec_ctx->channel_layout)
        dec_ctx->channel_layout = av_get_default_channel_layout(dec_ctx->channels);

    snprintf(args, sizeof(args), "time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%" PRIx64,
             in_fmt_ctx->streams[select_stream]->time_base.num, in_fmt_ctx->streams[select_stream]->time_base.den,
             dec_ctx->sample_rate,
             av_get_sample_fmt_name(dec_ctx->sample_fmt),
             dec_ctx->channel_layout);
    ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in", args, NULL, filter_graph);
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
        return -1;
    }
    ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out", NULL, NULL, filter_graph);
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
        return ret;
    }
    ret = av_opt_set_int_list(buffersink_ctx, "sample_fmts", out_sample_fmts, -1,
                              AV_OPT_SEARCH_CHILDREN);
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot set output sample format\n");
        return ret;
    }
    ret = av_opt_set_int_list(buffersink_ctx, "channel_layouts", out_channel_layouts, -1,
                              AV_OPT_SEARCH_CHILDREN);
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot set output channel layout\n");
        return ret;
    }
    ret = av_opt_set_int_list(buffersink_ctx, "sample_rates", out_sample_rates, -1,
                              AV_OPT_SEARCH_CHILDREN);
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot set output sample rate\n");
        return ret;
    }

    /* Endpoints for the filter graph. */
    outputs->name = av_strdup("in");
    outputs->filter_ctx = buffersrc_ctx;
    outputs->pad_idx = 0;
    outputs->next = NULL;
    inputs->name = av_strdup("out");
    inputs->filter_ctx = buffersink_ctx;
    inputs->pad_idx = 0;
    inputs->next = NULL;

    string filter_desc = filter_description;
    if ((ret = avfilter_graph_parse_ptr(filter_graph, filter_desc.c_str(), &inputs, &outputs, NULL)) < 0) {
        log_averror(ret);
        exit(1);
    }
    if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0) {
        log_averror(ret);
        exit(1);
    }

    /* Print a summary of the sink buffer.
     * Note: the args buffer is reused to store the channel layout string. */
    AVFilterLink *outlink = buffersink_ctx->inputs[0];
    av_get_channel_layout_string(args, sizeof(args), -1, outlink->channel_layout);
    av_log(NULL, AV_LOG_INFO, "Output: srate:%dHz fmt:%s chlayout:%s\n",
           (int) outlink->sample_rate,
           (char *) av_x_if_null(av_get_sample_fmt_name((AVSampleFormat) outlink->format), "?"),
           args);
    return 0;
}
int main(int argc, char **argv)
{
    int ret;
    cout << "Hello World" << endl;
    printf("abcd");
    avcodec_register_all();
    av_register_all();
    avfilter_register_all();

    /* open input file, and allocate format context */
    if (avformat_open_input(&in_fmt_ctx, inFileName.c_str(), NULL, NULL) < 0) {
        std::cout << "error opening input file - " << inFileName << std::endl;
        return -1;
    }
    /* retrieve stream information */
    if (avformat_find_stream_info(in_fmt_ctx, NULL) < 0) {
        std::cerr << "Could not find stream information in the input file " << inFileName << std::endl;
    }
    /* Dump format details */
    printf("\n ---------------------------------------------------------------------- \n");
    av_dump_format(in_fmt_ctx, 0, inFileName.c_str(), 0);
    printf("\n ---------------------------------------------------------------------- \n");
    /* Choose an audio stream */
    select_stream = av_find_best_stream(in_fmt_ctx, AVMEDIA_TYPE_AUDIO, -1, -1, &dec_codec, 0);
    if (select_stream == AVERROR_STREAM_NOT_FOUND) {
        std::cerr << "No audio stream found" << std::endl;
        return -1;
    }
    if (select_stream == AVERROR_DECODER_NOT_FOUND) {
        std::cerr << "No suitable decoder found" << std::endl;
        return -1;
    }
    dec_ctx = in_fmt_ctx->streams[select_stream]->codec;
    av_opt_set_int(dec_ctx, "refcounted_frames", 1, 0);
    /* init the audio decoder */
    if ((ret = avcodec_open2(dec_ctx, dec_codec, NULL)) < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot open audio decoder\n");
        return ret;
    }
    /* allocate output context */
    ret = avformat_alloc_output_context2(&out_fmt_ctx, NULL, NULL,
                                         outFileName.c_str());
    if (ret < 0) {
        std::cerr << "Could not create output context for the file " << outFileName << std::endl;
        return -1;
    }
    /* find the encoder */
    enum AVCodecID codec_id = out_fmt_ctx->oformat->audio_codec;
    enc_codec = avcodec_find_encoder(codec_id);
    if (!enc_codec) {
        std::cerr << "Could not find encoder for - " << avcodec_get_name(codec_id) << std::endl;
        return -1;
    }
    /* add a new stream */
    audio_st = avformat_new_stream(out_fmt_ctx, enc_codec);
    if (!audio_st) {
        std::cerr << "Could not add audio stream" << std::endl;
    }
    /* Initialise the audio codec */
    audio_st->id = out_fmt_ctx->nb_streams - 1;
    enc_ctx = audio_st->codec;
    enc_ctx->codec_id = codec_id;
    enc_ctx->codec_type = AVMEDIA_TYPE_AUDIO;
    enc_ctx->bit_rate = target_bit_rate;
    enc_ctx->sample_rate = sample_rate;
    enc_ctx->sample_fmt = sample_fmt;
    enc_ctx->channels = channels;
    enc_ctx->channel_layout = av_get_default_channel_layout(enc_ctx->channels);
    /* Some formats want stream headers to be separate. */
    if (out_fmt_ctx->oformat->flags & AVFMT_GLOBALHEADER) {
        enc_ctx->flags |= CODEC_FLAG_GLOBAL_HEADER;
    }
    ret = avcodec_open2(out_fmt_ctx->streams[0]->codec, enc_codec, NULL);
    if (ret < 0) {
        std::cerr << "Could not open the encoder for the file " << outFileName << std::endl;
        return -1;
    }
    /* Initialize the filter graph */
    initialize_audio_filter();
    if (!(out_fmt_ctx->oformat->flags & AVFMT_NOFILE)) {
        ret = avio_open(&out_fmt_ctx->pb, outFileName.c_str(),
                        AVIO_FLAG_WRITE);
        if (ret < 0) {
            log_averror(ret);
            return -1;
        }
    }
    /* Write header */
    ret = avformat_write_header(out_fmt_ctx, NULL);
    if (ret < 0) {
        log_averror(ret);
        return -1;
    }
    /* Allocate frames */
    pFrame = av_frame_alloc();
    if (!pFrame) {
        std::cerr << "Could not allocate frame\n";
        return -1;
    }
    pFrameFiltered = av_frame_alloc();
    if (!pFrameFiltered) {
        std::cerr << "Could not allocate frame\n";
        return -1;
    }
    av_init_packet(&packet);
    packet.data = NULL;
    packet.size = 0;
    /* Read packets from the input stream */
    while (av_read_frame(in_fmt_ctx, &packet) >= 0) {
        if (packet.stream_index == select_stream) {
            avcodec_get_frame_defaults(pFrame);
            ret = avcodec_decode_audio4(dec_ctx, pFrame, &got_frame, &packet);
            if (ret < 0) {
                log_averror(ret);
                return ret;
            }
            printf("Decoded packet pts : %" PRId64 " ", packet.pts);
            printf("Frame best effort pts : %" PRId64 " \n", pFrame->best_effort_timestamp);
            /* Set frame pts */
            pFrame->pts = av_frame_get_best_effort_timestamp(pFrame);
            if (got_frame) {
                /* push the decoded frame into the filtergraph */
                ret = av_buffersrc_add_frame_flags(buffersrc_ctx, pFrame, AV_BUFFERSRC_FLAG_KEEP_REF);
                if (ret < 0) {
                    log_averror(ret);
                    return ret;
                }
                /* pull filtered frames from the filtergraph */
                while (1) {
                    ret = av_buffersink_get_frame(buffersink_ctx, pFrameFiltered);
                    if ((ret == AVERROR(EAGAIN)) || (ret == AVERROR_EOF)) {
                        break;
                    }
                    if (ret < 0) {
                        printf("Error while getting filtered frames from the filtergraph\n");
                        log_averror(ret);
                        return -1;
                    }
                    /* Initialize the packet */
                    AVPacket encodedPacket = {0};
                    av_init_packet(&encodedPacket);
                    ret = avcodec_encode_audio2(out_fmt_ctx->streams[0]->codec, &encodedPacket, pFrameFiltered, &got_packet);
                    if (!ret && got_packet && encodedPacket.size) {
                        /* Rescale pts/dts from the filter time base to the output stream time base */
                        if (encodedPacket.pts != AV_NOPTS_VALUE) {
                            encodedPacket.pts = av_rescale_q(encodedPacket.pts, buffersink_ctx->inputs[0]->time_base,
                                                             out_fmt_ctx->streams[0]->time_base);
                        }
                        if (encodedPacket.dts != AV_NOPTS_VALUE) {
                            encodedPacket.dts = av_rescale_q(encodedPacket.dts, buffersink_ctx->inputs[0]->time_base,
                                                             out_fmt_ctx->streams[0]->time_base);
                        }
                        printf("Encoded packet pts %" PRId64 "\n", encodedPacket.pts);
                        /* Write the compressed frame to the media file. */
                        ret = av_interleaved_write_frame(out_fmt_ctx, &encodedPacket);
                        if (ret < 0) {
                            log_averror(ret);
                            return -1;
                        }
                    } else if (ret < 0) {
                        log_averror(ret);
                        return -1;
                    }
                    av_frame_unref(pFrameFiltered);
                }
                av_frame_unref(pFrame);
            }
        }
    }
    /* Flush delayed frames from the encoder */
    got_packet = 1;
    while (got_packet) {
        AVPacket encodedPacket = {0};
        av_init_packet(&encodedPacket);
        ret = avcodec_encode_audio2(out_fmt_ctx->streams[0]->codec, &encodedPacket, NULL, &got_packet);
        if (!ret && got_packet && encodedPacket.size) {
            /* Rescale pts/dts from the filter time base to the output stream time base */
            if (encodedPacket.pts != AV_NOPTS_VALUE) {
                encodedPacket.pts = av_rescale_q(encodedPacket.pts, buffersink_ctx->inputs[0]->time_base,
                                                 out_fmt_ctx->streams[0]->time_base);
            }
            if (encodedPacket.dts != AV_NOPTS_VALUE) {
                encodedPacket.dts = av_rescale_q(encodedPacket.dts, buffersink_ctx->inputs[0]->time_base,
                                                 out_fmt_ctx->streams[0]->time_base);
            }
            printf("Encoded packet pts %" PRId64 "\n", encodedPacket.pts);
            /* Write the compressed frame to the media file. */
            ret = av_interleaved_write_frame(out_fmt_ctx, &encodedPacket);
            if (ret < 0) {
                log_averror(ret);
                return -1;
            }
        } else if (ret < 0) {
            log_averror(ret);
            return -1;
        }
    }
    /* Write trailer */
    av_write_trailer(out_fmt_ctx);

    /* Clean up */
    avfilter_graph_free(&filter_graph);
    if (dec_ctx)
        avcodec_close(dec_ctx);
    avformat_close_input(&in_fmt_ctx);
    av_frame_free(&pFrame);
    av_frame_free(&pFrameFiltered);
    if (!(out_fmt_ctx->oformat->flags & AVFMT_NOFILE))
        avio_close(out_fmt_ctx->pb);
    avcodec_close(out_fmt_ctx->streams[0]->codec);
    avformat_free_context(out_fmt_ctx);
    return 0;
}
The audio file after transcoding has the same duration as the input, but it is completely noisy. Can somebody tell me what I am doing wrong here?
-
Our latest improvement to QA: Screenshot Testing
2 October 2013, by benaka — Development
Introduction to QA in Piwik
Like any piece of good software, Piwik comes with a comprehensive QA suite that includes unit and integration tests. The unit tests make sure core components of Piwik work properly. The integration tests make sure Piwik’s tracking, report aggregation, and APIs work properly.
To complete our QA suite, we’ve recently added a new type of test: screenshot tests, which we use to make sure Piwik’s controller and JavaScript code work properly.
This blog post explains how they work and describes our experience setting them up; we hope to show you an example of innovative QA practices in an active open source project.
Screenshot Tests
As the name implies, our screenshot tests (1) capture a screenshot of a URL and (2) compare the result with an expected image. This lets us test the code in Piwik’s controllers and Piwik’s JavaScript simply by specifying a URL.
Contrast this with conventional UI tests that check for page content changes. Such tests require large amounts of test code that, at most, checks for changes in HTML. Our tests, on the other hand, can reveal regressions in CSS and JavaScript rendering logic with a bare minimum of testing code.
Capturing Screenshots
Screenshots are captured using a third-party tool. We tried several tools before settling on PhantomJS. PhantomJS executes a JavaScript file in an environment that allows it to create WebKit-powered web views. When capturing a screenshot, we supply PhantomJS with a script that:
- opens a web page view,
- loads a URL,
- waits for all AJAX requests to complete,
- waits for all images to load, and
- waits for all JavaScript to run.
Then it renders the completed page to a PNG file.
- To see how we use PhantomJS, see capture.js.
- To see how we wait for AJAX requests to complete and images to load, see override.js.
Comparing Screenshots
Once a screenshot is generated, we test for UI regressions by comparing it with an expected image. There is no fuzzy matching involved; we simply check that the images consist of the same bytes.
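To make the idea concrete, here is a minimal sketch of such a byte-for-byte check (illustrative only: Piwik’s actual test harness is written in PHP and JavaScript, and the file names below are hypothetical):
// Byte-for-byte screenshot comparison — a minimal illustrative sketch.
// File names are hypothetical, not Piwik's real paths.
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

static std::vector<char> readAll(const char *path)
{
    // Read the whole file as raw bytes (empty if the file cannot be opened).
    std::ifstream f(path, std::ios::binary);
    return std::vector<char>(std::istreambuf_iterator<char>(f),
                             std::istreambuf_iterator<char>());
}

int main()
{
    // The test passes only if the two PNGs are identical byte sequences:
    // no fuzzy matching, no per-pixel tolerance.
    bool same = readAll("expected-screenshot.png") == readAll("processed-screenshot.png");
    std::cout << (same ? "UI test passed" : "UI test FAILED") << std::endl;
    return same ? 0 : 1;
}
The strictness is deliberate: any rendering difference at all, however small, should fail the test and be inspected by a developer.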
If a screenshot test fails, we use ImageMagick’s compare command-line tool to generate an image diff (the example diff image is not reproduced here).
In that example, a change caused the search box to be hidden in the datatable, which shifted the whole data table report up a few pixels. The differences are shown in red, giving developers rapid feedback about what changed in the last commit.
Screenshot Tests on Travis
We initially had trouble generating identical screenshots on different machines, so our tests were not automated by Travis at first. Once we overcame this hurdle, we created a new GitHub repository to store our UI tests and screenshots and enabled the Travis build for it. We also made sure that every time a commit is pushed to the Piwik repository, our Travis build pushes a commit to the UI test repository to run the UI tests.
We decided to create a separate repository so the main repository wouldn’t be burdened with the large screenshot files (which git would not handle very well). We also made sure the Travis build uploads all the generated screenshots to a server, so debugging failures is easier.
Problems we experienced
Getting generated screenshots to render identically on separate machines was quite a challenge; it took months to get right. Here’s what we learned:
Fonts will render identically on different machines, but different machines can pick the wrong fonts. When we first tried getting these tests to run on Travis, we noticed small differences in the way fonts were rendered on different machines. We thought this was an insurmountable problem caused by the libraries installed on those machines. It turned out the machines were just picking the wrong fonts. After installing certain fonts during our Travis build, everything started working.
Different versions of GD can generate slightly different images. GD is used in Piwik to, among other things, generate sparkline images. Different versions of GD will result in slightly different images. They look the same to the naked eye, but some pixels will have slightly different colors. This is, unfortunately, a problem we couldn’t solve. We couldn’t make sure that everyone who runs the tests uses the same version of GD, so instead we disabled sparklines for UI testing.
What we learned about existing screenshot capturing tools
We tried several screenshot capturing tools before finding one that worked adequately. Here’s what we learned about them:
-
CutyCapt — This is the first screenshot capturing tool we tried. CutyCapt is a C++ program that uses QtWebKit to load and take a screenshot of a page. It can’t capture multiple screenshots in one run, and it can’t wait for all AJAX requests, images, and JavaScript to complete or load (at least not currently).
-
PhantomJS — This is the solution we eventually chose. PhantomJS is a headless, scriptable browser that currently uses WebKit as its rendering engine.
For the most part, PhantomJS is the best solution we found. It reliably renders screenshots, allows JavaScript to be injected into pages it loads, and since it essentially just runs JavaScript code that you provide, it can be made to do whatever you want.
-
SlimerJS — SlimerJS is a clone of PhantomJS that uses Gecko as its rendering engine. It is meant to function similarly to PhantomJS. Unfortunately, due to some limitations hard-coded in Mozilla’s software, we couldn’t use it.
For one, SlimerJS is not headless; there is, apparently, no way to achieve that when embedding Mozilla. You can run it through xvfb, but the fact that it has to create a window means odd things can happen. When using SlimerJS, we would sometimes end up with images where tooltips were displayed as if the mouse were hovering over an element. This inconsistency meant we couldn’t use it for our tests.
One tool we didn’t try was Selenium WebDriver. Although Selenium is traditionally used to create tests that check HTML content, it can also be used to generate screenshots. (Note: PhantomJS supports being driven through a remote WebDriver.)
Our Future Plans for Screenshot Testing
At the moment we render a couple dozen screenshots. We test how our PHP code, JavaScript code, and CSS make Piwik’s UI look, but we don’t yet test how it behaves. That is our next step.
We want to create screenshot unit tests for each UI control Piwik uses (for example, the Data Table View or the Site Selector). These tests would use the Widgetize plugin to load a control by itself, then execute JavaScript that simulates events and user behavior, and finally take a screenshot. This way we can test how our code handles clicks, hovers, and all sorts of other behavior.
Screenshot tests will make Piwik more stable and keep us agile, able to release early and often. Thank you for your support and for spreading the word about Piwik!
-
Evolution #3081: image_reduire on #TEXTE in the dist
24 November 2013, by denisb — in reply to note 10 from (b b):
Perhaps spip_meta.max_taille_vignettes should be checked as soon as an image_ filter is called, exiting gracefully if the limit is exceeded.