
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
Other articles (22)
-
List of compatible distributions
26 April 2011, by
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.
Distribution name | Version name         | Version number
Debian            | Squeeze              | 6.x.x
Debian            | Wheezy               | 7.x.x
Debian            | Jessie               | 8.x.x
Ubuntu            | The Precise Pangolin | 12.04 LTS
Ubuntu            | The Trusty Tahr      | 14.04
If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above, or send us the fixes needed to add it (...)
-
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information:
- the browser you are using, including the exact version
- as precise an explanation of the problem as possible
- if possible, the steps taken that resulted in the problem
- a link to the site / page in question
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...)
-
Final creation of the channel
12 March 2010, by
Once your request has been validated, you can proceed with the actual creation of the channel. Each channel is a fully fledged site placed under your responsibility. The platform administrators have no access to it.
Upon validation, you receive an email inviting you to create your channel.
To do so, simply go to its address, in our example "http://votre_sous_domaine.mediaspip.net".
At that point you will be asked for a password; simply (...)
On other sites (3266)
-
Notes on Linux for Dreamcast
23 February 2011, by Multimedia Mike — Sega Dreamcast, VP8
I wanted to write down some notes about compiling Linux on Dreamcast (which I have yet to follow through to success). But before I do, allow me to follow up on my last post where I got Google's libvpx library decoding VP8 video on the DC. Remember when I said the graphics hardware could only process variations of RGB color formats? I was mistaken. Reading over some old documentation, I noticed that the DC's PowerVR hardware can also handle packed YUV textures (UYVY, specifically):
The video looks pretty sharp in the small photo. Up close, less so, due to the low resolution and high quantization of the test vector combined with the naive chroma upscaling. For the curious, the grey box surrounding the image highlights the 256-square texture that the video frame gets plotted on. Texture dimensions have to be powers of 2.
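For context, the packed-YUV path boils down to interleaving the decoder's planar output into UYVY byte order before handing it to the PVR as a texture. Below is a minimal sketch of that packing step, assuming a planar YUV 4:2:0 source and the same naive chroma upscaling mentioned above; the function and buffer names are illustrative, not taken from the actual code.

#include <stdint.h>

/* Naive packing of a planar YUV 4:2:0 frame into UYVY (4:2:2) for a
 * packed-YUV texture. Chroma rows are simply repeated vertically,
 * which matches the naive chroma upscaling described above.
 * Assumes width and height are even. */
void pack_yuv420_to_uyvy(const uint8_t *y_plane, const uint8_t *u_plane,
                         const uint8_t *v_plane, int width, int height,
                         uint8_t *uyvy_out)
{
    for (int row = 0; row < height; row++) {
        const uint8_t *y = y_plane + row * width;
        const uint8_t *u = u_plane + (row / 2) * (width / 2);
        const uint8_t *v = v_plane + (row / 2) * (width / 2);
        uint8_t *out = uyvy_out + row * width * 2;

        for (int col = 0; col < width; col += 2) {
            *out++ = u[col / 2];   /* U shared by this 2-pixel pair */
            *out++ = y[col];       /* Y0 */
            *out++ = v[col / 2];   /* V shared by this 2-pixel pair */
            *out++ = y[col + 1];   /* Y1 */
        }
    }
}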
Notes on Linux for Dreamcast
I’ve occasionally dabbled with Linux on my Dreamcast. There’s an ancient (circa 2001) distro based around a build of kernel 2.4.5 out there. But I wanted to try to get something more current compiled. Thus far, I have figured out how to cross compile kernels pretty handily but have been unsuccessful in making them run.
Here are notes on the compilation portion:
- kernel.org provides a very useful set of cross compiling toolchains
- get the gcc 4.5.1 cross toolchain for SH-4 (the gcc 4.3.3 one won’t work because the binutils is too old; it will fail to assemble certain instructions as described in this post)
- working off of Linux kernel 2.6.37, edit the top-level Makefile; find the ARCH and CROSS_COMPILE variables and set them appropriately:
ARCH          ?= sh
CROSS_COMPILE ?= /path/to/gcc-4.5.1-nolibc/sh4-linux/bin/sh4-linux-
$ make dreamcast_defconfig
$ make menuconfig
... if any changes to the default configuration are desired
- manually edit arch/sh/Makefile, changing:
cflags-$(CONFIG_CPU_SH4) := $(call cc-option,-m4,) \
    $(call cc-option,-mno-implicit-fp,-m4-nofpu)
to:
cflags-$(CONFIG_CPU_SH4) := $(call cc-option,-m4,) \
    $(call cc-option,-mno-implicit-fp)
I.e., remove the '-m4-nofpu' option. According to the gcc man page, this option will "Generate code for the SH4 without a floating-point unit." Why this is a default is a mystery, since the DC's SH-4 has an FPU and compilation fails when this option is enabled.
- On that note, I was always under the impression that the DC sported an SH-4 CPU with the model number SH7750. According to this LinuxSH wiki page as well as the Linux kernel help, it actually has an SH7091 variant. This photo of the physical DC hardware corroborates the model number.
$ make
... to build a Linux kernel for the Sega Dreamcast
Running
So I can compile the kernel but running the kernel (the resulting vmlinux ELF file) gives me trouble. The default kernel ELF file reports an entry point of 0x8c002000. Attempting to upload this through the serial uploading facility I have available to me triggers a system reset almost immediately, probably because that's the same place that the bootloader calls home. I have attempted to alter the starting address via 'make menuconfig' -> System type -> Memory management options -> Physical memory start address. This allows the upload to complete but it still does not run. It's worth noting that the 2.4.5 vmlinux file from the old distribution can be executed when uploaded through the serial loader, and it begins at 0x8c210000.
-
RoQ on Dreamcast
18 March 2011, by Multimedia Mike — Sega Dreamcast
I have been working on that challenge to play back video on the Sega Dreamcast. To review, I asserted that the RoQ format would be a good fit for the Sega Dreamcast hardware. The goal was to play 640x480 video at 30 frames/second. Short version: I have determined that it is possible to decode such video in real time. However, I ran into certain data rate caveats.
First off: Have you ever wondered if the Dreamcast can read an 80mm optical disc? It can! I discovered this when I only had 60 MB of RoQ samples to burn on a disc and a spindle full of these 210MB-capacity 80mm CD-Rs that I never have occasion to use.
New RoQ Library
There are open source RoQ decoders out there but I decided to write a new one. A few reasons: 1) RoQ is so simple that I didn’t think it would take too long; 2) it would be nice to have a RoQ library that is license-compatible (BSD-like) with the rest of the KallistiOS distribution; 3) the idroq.tar.gz distribution, while license-compatible, has enough issues that I didn’t want to correct it.
Thankfully, I was correct about the task not being too difficult: I put together a new RoQ decoder in short order. I’m a bit embarrassed to admit that the part I had the most trouble with was properly converting YUV -> RGB.
About the approach I took: While the original idroq.tar.gz decoder maintains YUV 4:2:0 codebooks (which led to chroma bugs during motion compensation) and FFmpeg’s decoder maintains YUV 4:4:4 codebooks, this decoder is built to convert the YUV 4:2:0 vectors into RGB565 vectors during the vector unpacking phase. Thus, the entire frame is rendered in RGB565 — no lengthy YUV -> RGB conversion after decoding — and all pixels are shuffled around as 16-bit units (minor speedup vs. shuffling everything as bytes).
I also entertained the idea of maintaining YUYV codebooks (since the DC supports that colorspace as a texture format). But I scrapped that idea when I remembered it would lead to the same chroma bleeding problem seen in the original idroq.tar.gz decoder.
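To make that design concrete, here is a rough sketch of unpacking a single 2x2 RoQ codebook vector (four luma samples plus one shared U and V) directly into RGB565, using a common integer YCbCr-to-RGB approximation. The constants and helper names are assumptions for illustration and may differ from the real decoder.

#include <stdint.h>

static inline uint16_t yuv_to_rgb565(int y, int u, int v)
{
    /* Common integer approximation of the YCbCr -> RGB transform. */
    int c = y - 16, d = u - 128, e = v - 128;
    int r = (298 * c + 409 * e + 128) >> 8;
    int g = (298 * c - 100 * d - 208 * e + 128) >> 8;
    int b = (298 * c + 516 * d + 128) >> 8;

    if (r < 0) r = 0; else if (r > 255) r = 255;
    if (g < 0) g = 0; else if (g > 255) g = 255;
    if (b < 0) b = 0; else if (b > 255) b = 255;

    /* Pack into 16-bit RGB565: 5 bits red, 6 bits green, 5 bits blue. */
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

/* Expand one 2x2 codebook vector (4 Y samples, shared U and V) into four
 * RGB565 pixels, done once at vector-unpack time so the rest of the frame
 * can be shuffled around purely as 16-bit units. */
static void unpack_cb2_to_rgb565(const uint8_t y[4], uint8_t u, uint8_t v,
                                 uint16_t out[4])
{
    for (int i = 0; i < 4; i++)
        out[i] = yuv_to_rgb565(y[i], u, v);
}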
Onto The Dreamcast
I developed the library on a Linux computer, allowing it to output a series of PNM files for visual verification and debugging. Dropping it into a basic DC/KOS-compatible program was trivial and the first order of business was profiling.
At first, I profiled the entire decode operation: open file, then read and decode each chunk while tossing away the results. I was roundly disappointed to see that, e.g., an 8.5-second RoQ sample needed a little more than 20 seconds to complete. Not real time. I performed a series of optimizations on the decoding library that netted notable performance gains when profiling on Linux. When I brought these same optimizations over to the DC, decoding time didn’t improve at all. This was my first suspicion that perhaps my assumptions regarding the DC’s optical drive’s data rate were not correct.
Dreamcast Data Rate Profiling
Let’s start with some definitions: In terms of data rate, an 'X', i.e., 1X, is the minimum data rate needed to read CD quality audio from a disc. At that speed, a drive should be able to stream 75 sectors each second. When reading mode 1/form 1 CD-ROM data, each sector has 2048 bytes (2 kbytes), so a single-speed data rate should achieve 150 kbytes/sec.
The Dreamcast is supposed to possess a 12X optical drive. This would imply a maximum data rate of 150 kbytes/sec * 12 = 1800 kbytes/sec.
Rigging up a trivial experiment using the RoQ samples burned on a few different CD-R discs, the best data rate I can see is about 500-525 kbytes/sec, or around 3.5X.
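The "trivial experiment" amounts to nothing more than timing sequential reads of a large file from the disc and dividing by elapsed time. A rough sketch in plain C follows; the file path (KallistiOS typically mounts the optical drive at /cd, but treat that as an assumption), the chunk size, and the use of coarse standard-library timing are all illustrative choices, not the actual test program.

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Hypothetical test file burned onto the disc. */
    FILE *f = fopen("/cd/sample.roq", "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    enum { CHUNK = 64 * 1024 };
    static char buf[CHUNK];
    size_t total = 0, n;

    time_t start = time(NULL);          /* coarse wall-clock timing */
    while ((n = fread(buf, 1, CHUNK, f)) > 0)
        total += n;
    double secs = difftime(time(NULL), start);
    if (secs < 1.0)
        secs = 1.0;                     /* avoid dividing by zero on tiny files */

    /* 1X = 150 kbytes/sec, so the measured rate divided by 150 gives the "X" figure. */
    double kbps = (total / 1024.0) / secs;
    printf("%lu bytes in %.0f s = %.1f kbytes/sec (%.1fX)\n",
           (unsigned long)total, secs, kbps, kbps / 150.0);

    fclose(f);
    return 0;
}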
Where’s the discrepancy? My first theory has to do with the fact that not all optical media is created equal. This is why optical drives often advertise a slew of numbers which refer to the best theoretical speed for reading a CD vs. writing a CD-R vs. writing a CD-RW, etc. Perhaps the DC drive can’t read CD-Rs very quickly. To test this theory, I tried streaming a large file from a conventionally mastered CD-ROM. This worked well for the closest CD-ROM I had on hand: I was able to stream data at a rate that works out to about 6.5X.
I smell a science project for another evening: profiling read speeds from a mastered CD-ROM, a burned CD-R, and also a mastered GD-ROM, on each of the 3 Dreamcast consoles I possess (I’ve heard that there’s variance between optical drives depending on manufacturing run).
The Good News
I added a little finer-grained code to profile just the video decoding functions. The good news is that the decoder meets my real time goals: that 8.5-second RoQ sample encoded at 640x480x30fps makes its way through the video decoding functions on the DC in a little less than 5 seconds. If the optical drive can supply the data fast enough, the video decoder can take care of the rest.
The RoQ encoder included with FFmpeg does not honor any bitrate parameters, so to cut the data rate I instead encoded the same file at 320x240. It reportedly decoded in real time and can be streamed in real time as well.
I say "reportedly" because I’m simply working from textual output at this point; the next phase is to hook the decoder up to the display hardware.
-
Make better marketing decisions with attribution modeling
19 December 2017, by InnoCraft
Do you suspect some traffic sources are not getting the rewards they deserve? Do you want to know how much credit each of your marketing channels actually gets?
When you look at which referrers contribute the most to your goal conversions or purchases, Matomo (Piwik) shows you only the referrer of the last visit. However, in reality, a visitor often visits a website multiple times from different referrers before they convert a goal. Giving all credit to the referrer of the last visit ignores all other referrers that contributed to a conversion as well.
You can now push your marketing analysis to the next level with attribution modeling and finally discover the true value of all your marketing channels. As a result, you will be able to shift your marketing efforts and spending accordingly to maximize your success and stop wasting resources. In marketing, studying this data is called attribution modeling.
Get the true value of your referrers
Attribution is a premium feature that you can easily purchase from the Matomo (Piwik) marketplace.
Once installed, you will be able to:
- identify valuable referrers that you did not see before
- invest in potential new partners
- attribute a new level of conversion
- set this up very easily by filling in just a couple of form fields
Identify valuable referrers that you did not see before
You probably have hundreds or even thousands of different sources listed within the referrer reports. We also guess that you have the feeling that it is always the same referrers that get credited with conversions.
Guess what: this data is probably biased, or at least not telling you the whole story.
Why? Because by default, Matomo (Piwik) attributes all credit to the last referrer. It is likely that many non-credited sources played a role in the conversion process as well, since people often visit your website several times before converting and they may come from different referrers.
This is exactly where attribution modeling comes into play. With attribution modeling, you can decide which touchpoints you want to study. For example, you can choose to give credit to all the referrers a visitor came from across all of their visits to your website, and not only look at the last one. Without this feature, chances are that you have spent too much money and / or effort on the wrong referrer channels in the past, because many referrers that contributed to conversions were ignored. Based on the insights you get by applying different attribution models, you can make better decisions on where to shift your marketing spending and efforts.
Invest in potential new partners
Once you apply different attribution models, you will find out that you need to consider a new list of referrers which you previously either over- or under-estimated in terms of how much they contributed to your conversions. You probably did not identify those sources before because Matomo (Piwik) shows only the last referrer before a conversion. But you can now also look at what these newly discovered referrers are saying about your company, look for any advertising programs they may offer, get in contact with the owner of the website, and more.
Apply up to 6 different attribution models
By default, Matomo (Piwik) attributes the conversion to the last referrer only. With attribution modeling you can analyze 6 different models:
- Last Interaction: the conversion is attributed to the last referrer, even if it is a direct access.
- Last Non-Direct: the conversion is attributed to the last referrer that is not a direct access.
- First Interaction: the conversion is attributed to the first referrer that brought the visitor to your site.
- Linear: however many referrers contributed to the conversion, they all receive the same share of the credit.
- Position Based: the first and last referrers are each attributed 40% of the conversion value; the remaining 20% is divided among the other referrers.
- Time Decay: the closer a referrer's visit is to the date of the conversion, the more credit that referrer gets.
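To make the arithmetic behind a few of these models concrete, here is a small illustrative sketch (not Matomo's actual implementation) that splits the credit for one conversion across a visitor's chain of referrer touchpoints under the linear, position based, and time decay models; the 7-day half-life in the time decay case is an arbitrary choice for the example.

#include <stdio.h>
#include <math.h>

/* Illustrative only: distribute one conversion across n touchpoints,
 * ordered oldest to newest. */

static void linear_model(double w[], int n)
{
    for (int i = 0; i < n; i++)
        w[i] = 1.0 / n;                 /* every referrer gets the same share */
}

static void position_based(double w[], int n)
{
    if (n == 1) { w[0] = 1.0; return; }
    if (n == 2) { w[0] = w[1] = 0.5; return; }
    w[0] = w[n - 1] = 0.40;             /* 40% each to the first and last referrer */
    for (int i = 1; i < n - 1; i++)
        w[i] = 0.20 / (n - 2);          /* remaining 20% split over the middle ones */
}

static void time_decay(double w[], const double days_before_conv[], int n)
{
    /* Touchpoints closer to the conversion get exponentially more credit. */
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        w[i] = pow(0.5, days_before_conv[i] / 7.0);
        sum += w[i];
    }
    for (int i = 0; i < n; i++)
        w[i] /= sum;
}

int main(void)
{
    double days[] = { 20, 9, 2, 0 };    /* 4 visits; the last one converts */
    double w[4];

    linear_model(w, 4);
    printf("linear:         %.2f %.2f %.2f %.2f\n", w[0], w[1], w[2], w[3]);

    position_based(w, 4);
    printf("position based: %.2f %.2f %.2f %.2f\n", w[0], w[1], w[2], w[3]);

    time_decay(w, days, 4);
    printf("time decay:     %.2f %.2f %.2f %.2f\n", w[0], w[1], w[2], w[3]);
    return 0;
}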
Those attribution models will enable you to analyze all your referrers deeply and increase your conversions.
Let’s look at an example where we are comparing two models: “last interaction” and “first interaction”. Our goal is to identify whether some referrers that we currently consider less important are actually playing a serious role in the total number of conversions:
Comparing Last Interaction model to First Interaction model
Here it is interesting to observe that the website www.hongkiat.com brings almost 90% more conversions under the first interaction model than under the last interaction model.
As a result we can look at this website and take the following actions:
- have a look at the message on this website
- look at opportunities to change the message
- look at opportunities to display extra marketing messages
- get in contact with the owner to identify any other communication opportunities
The Multi Channel Attribution report
Attribution modeling in Matomo (Piwik) does not require you to add any tracking code. The only thing you need is to install the plugin and let the magic happen.
"Simple as pie" is the phrase you should keep in mind for this feature. Once installed, you will find the report within the Goals section, just above the goals you created:
The Multi Attribution menu
There you can select the attribution model you would like to apply or compare.
Attribution modeling is not just about playing with a new report. It is above all an opportunity to increase the number of conversions by identifying referrers that you may have not recognized as valuable in the past. To grow your business, it is crucial to identify the most (and least) successful channels correctly so you can spend your time and money wisely.
The post Make better marketing decisions with attribution modeling appeared first on Analytics Platform - Matomo.