
Media (1)
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (17)
Using and configuring the script
19 January 2011
Information specific to the Debian distribution
If you use this distribution, you will need to enable the "debian-multimedia" repositories, as explained here:
Since version 0.3.1 of the script, this repository can be enabled automatically in response to a prompt.
Retrieving the script
The installation script can be retrieved in two different ways.
Via svn, using the following command to retrieve the up-to-date source code:
svn co (...)
Apache-specific configuration
4 February 2011
Specific modules
For the Apache configuration, it is advisable to enable certain modules that are not specific to MediaSPIP but that improve performance: mod_deflate and mod_headers, to compress pages automatically via Apache (see this tutorial); mod_expires, to handle the expiration of hits correctly (see this tutorial).
It is also advisable to add Apache support for the WebM file mime-type, as described in this tutorial.
Creating a (...)
MediaSPIP Player: potential problems
22 February 2011
The player does not work on Internet Explorer
On Internet Explorer (versions 8 and 7 at least), the plugin uses the Flowplayer Flash player to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module.
If the configuration of that Apache module contains a line resembling the following, try removing it or commenting it out to see whether the player works correctly: (...)
On other sites (4032)
Adventures In NAS
1 January, by Multimedia Mike — General

In my post last year about my out-of-control single-board computer (SBC) collection, which included my meager network attached storage (NAS) solution, I noted that:
I find that a lot of my fellow nerds massively overengineer their homelab NAS setups. I’ll explore this in a future post. For my part, people tend to find my homelab NAS solution slightly underengineered.
So here I am, exploring this in a future post. I've been in the home NAS game a long time, but have never had very elaborate solutions for such. For my part, I tend to take an obsessively reductionist view of what constitutes a NAS: any small computer with a pool of storage and a network connection, running the Linux operating system and the Samba file sharing service.
Many home users prefer to buy turnkey boxes, usually ones that allow you to install the hard drives yourself and then configure the box and its services with a friendly UI. My fellow weird computer nerds often buy cast-off enterprise hardware and set up more resilient, over-engineered solutions, as long as they have strategies to mitigate the noise and dissipate the heat, and don't mind the electricity bills.
If it works, awesome! As an old hand at this, I am rather stuck in my ways, however, preferring to do my own stunts with both the hardware and the software.
My History With Home NAS Setups
In 1998, I bought myself a new computer — a beige box tower PC, as was the style at the time. This was when normal people had one computer at most. It ran Windows, but I was curious about this new thing called "Linux" and learned to dual boot into that. Later that year, it dawned on me that nothing prevented me from buying a second ugly beige box PC and running Linux exclusively on it. Further, it could be a headless Linux box, connected by ethernet, and I could consolidate files into a single place using this file sharing software named Samba.
I remember it being fairly onerous to get Samba working back then, and the internet was not nearly as helpful as it is now. I recall that the thing that blocked me for a while was needing to know that I had to specify an entry for the Samba server machine in the LMHOSTS (Lanman hosts) file on the Windows 95 machine.
However, after I cracked that code, I have pretty much always had some kind of ad-hoc home NAS setup, often combined with a headless Linux development box.
In the early 2000s, I built a new beige box PC for a file server, with a new hard disk, and a coworker tutored me on setting up a (P)ATA UDMA 133 (or was it 150? Anyway, it was (P)ATA's last hurrah before SATA conquered all) expansion card, and I remember profiling that the attached hard drive could read at a full 21 MBytes/s. It was pretty slick. Except I hadn't really thought things through.

You see, I had a cast-off ethernet hub handed down from my job at the time which I wanted to use. It was a 100 Mbps repeater hub, not a switch, so the catch was that all connected machines had to be capable of 100 Mbps. So, after getting all of my machines (3 at the time) upgraded to support 10/100 ethernet (the old off-brand PowerPC machine running Linux was the biggest challenge), I profiled transfers and realized that the best this repeater hub could achieve was about 3.6 MBytes/s. For a long time after that, I just assumed that was the upper limit of what a 100 Mbps network could achieve. Obviously, I now know that the upper limit ought to be around 11.2 MBytes/s, and if I had gamed out that fact in advance, I would have realized it didn't make sense to care about super-fast (for the time) disk performance.
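For the record, a back-of-the-envelope version of that calculation is easy enough; the sketch below assumes a standard 1500-byte MTU and typical Ethernet/IP/TCP overheads, so the exact figure is an approximation:

    # Rough usable throughput of a 100 Mbps link
    # (1500-byte MTU, 20-byte IP + 20-byte TCP headers, 38 bytes of framing/gap per frame)
    awk 'BEGIN {
        raw = 100e6 / 8                 # 12.5 MB/s raw line rate
        eff = 1460 / 1538               # TCP payload bytes vs. bytes on the wire
        printf "~%.1f MiB/s usable\n", raw * eff / (1024 * 1024)
    }'
    # prints ~11.3 MiB/s -- consistent with the 11.2 MBytes/s ceiling mentioned above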
At this time, I was doing a lot of development for MPlayer/xine/FFmpeg. I stored all of my multimedia material on this NAS. I remember being confused when I was working with Y4M data, which is raw frames, which is a lot of data. xine, which employed a pre-buffering strategy, would play fine for a few seconds and then stutter. Eventually, I reasoned out that the files I was working with had a data rate about twice what my awful repeater hub supported, which is probably the first time I came to really understand and respect streaming speeds and their implications for multimedia playback.
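The arithmetic behind that realization is simple; here is a sketch for a hypothetical 640x480 clip at 25 fps in 4:2:0 format (I am not claiming those were the exact parameters of the files in question):

    # Raw 4:2:0 video data rate: width * height * 1.5 bytes per frame, times frame rate
    awk 'BEGIN {
        w = 640; h = 480; fps = 25      # hypothetical clip parameters
        bpf = w * h * 1.5               # full-size Y plane plus quarter-size U and V planes
        printf "~%.1f MBytes/s\n", bpf * fps / 1e6
    }'
    # ~11.5 MBytes/s -- well beyond the ~3.6 MBytes/s the repeater hub could move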
Smaller Solutions
For a period, I didn't have a NAS. Then I got an Apple AirPort Extreme, which I noticed had a USB port. So I bought a dual-drive brick to plug into it and used that for a time. Later (2009), I had this thing called the MSI Wind Nettop, which is the only PC I've ever seen that can use a CompactFlash (CF) card for a boot drive. So I did just that, and installed a large drive so it could function as a NAS as well as a headless dev box. I'm still amazed at what a low-power I/O beast this thing is, at least when compared to all the ARM SoCs I have tried in the intervening 1.5 decades. I've had spinning hard drives in this thing that could read at 160 MBytes/s (the 'dd' method) and have no trouble saturating the gigabit link at 112 MBytes/s, all with its early Intel Atom CPU.

Around 2015, I wanted a more capable headless dev box and discovered Intel's line of NUCs. I got one of the fat models that can hold a conventional 2.5″ spinning drive in addition to the M.2 SATA SSD, and I was off and running. That served me fine for a few years, until I got into the ARM SBC scene. One major limitation here is that 2.5″ drives aren't available in nearly the capacities that make a NAS solution attractive.
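For reference, the 'dd' method mentioned above is nothing more sophisticated than the following, run as root, with /dev/sdX standing in for whatever device node the drive actually gets:

    # Crude sequential read benchmark; drop the page cache first so the number is honest
    sync
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/dev/sdX of=/dev/null bs=1M count=4096 status=progress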
Current Solution
My current NAS solution, chronicled in my last SBC post, is the ODroid-HC2: a highly compact ARM SoC board with an integrated USB3-SATA bridge, so that a SATA drive can be connected directly to it.
I tend to be weirdly proficient at recalling dates, so I'm surprised that I can't recall when I ordered this and put it into service. But I'm pretty sure it was circa 2018. It's only equipped with an 8 TB drive now, but I seem to recall that it started out with only a 4 TB drive. I think I upgraded to the 8 TB drive early in the pandemic in 2020, when ISPs were implementing temporary data cap amnesty and I was doing what an r/DataHoarder does.
The HC2 has served me well, even though it has a number of shortcomings for a hardware set chartered for NAS:
- While it has a gigabit ethernet port, it’s documented that it never really exceeds about 70 MBytes/s, due to the SoC’s limitations
- The specific ARM chip (Samsung Exynos 5422; more than a decade old as of this writing) lacks cryptography instructions, slowing down encryption if that's your thing (e.g., LUKS)
- While the SoC supports USB3, that block is tied up for the SATA interface; the remaining USB port is only capable of USB2 speeds
- 32-bit ARM, which prevented me from running certain bits of software I wanted to try (like Minio)
- Only 1 drive, so no possibility for RAID (again, if that’s your thing)
I also love to brag about the HC2's power usage: I once profiled the unit for a month using a Kill-A-Watt under normal usage (with the drive spinning up only when in active use). The unit consumed 4.5 kWh… in an entire month.
New Solution
Enter the ODroid-HC4 (I purchased mine from Ameridroid, but Hardkernel works with numerous distributors).
I ordered this earlier in the year, and after many months of procrastinating and obsessing over the best approach to take with its general usage, I finally have it in service as my new NAS. Comparing point by point with the HC2:
- The gigabit ethernet runs at full speed (though a few things on my network run at 2.5 GbE now, so I guess I’ll always be behind)
- The ARM chip (Amlogic S905X3) has AES cryptography acceleration and handles all the LUKS stuff without breaking a sweat; "cryptsetup benchmark" reports between 500-600 MBytes/s on all the AES variants (see the sketch just after this list)
- The USB port is still only USB2, so no improvement there
- 64-bit ARM, which means I can run Minio to simulate block storage in a local dev environment for some larger projects I would like to undertake
- Supports 2 drives, if RAID is your thing
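The benchmark quoted in that list is built into cryptsetup itself; it only exercises in-memory crypto, so the attached drive doesn't factor into the numbers:

    # Benchmark the common ciphers and key sizes
    cryptsetup benchmark

    # Or just the XTS variant that LUKS2 uses by default
    cryptsetup benchmark --cipher aes-xts-plain64 --key-size 512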
How I Set It Up
How to set up the drive configuration? As should be apparent from the photo above, I elected for an SSD (500 GB) for speed, paired with a conventional spinning HDD (18 TB) for sheer capacity. I'm not particularly trusting of RAID. I've watched it fail too many times, on systems that I don't even manage, not to mention that aforementioned RAID brick that I had attached to the Apple AirPort Extreme.

I had long been planning to use bcache, the block caching interface for Linux, which can use the SSD as a speedy cache in front of the more capacious disk. There is also LVM cache, which is supposed to achieve something similar. And then I had to evaluate the trade-offs between write-back, write-through, and write-around configurations.
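For what it's worth, the bcache route I had been contemplating would have looked roughly like the sketch below. The device names are placeholders (/dev/sdX for the HDD, /dev/sdY for the SSD), it needs the bcache-tools package, and make-bcache wipes whatever is on those devices:

    make-bcache -B /dev/sdX             # format the backing HDD; shows up as /dev/bcache0 once registered
    make-bcache -C /dev/sdY             # format the SSD as a cache device
    # Attach the cache set to the backing device (UUID from 'bcache-super-show /dev/sdY')
    echo "$CACHE_SET_UUID" > /sys/block/bcache0/bcache/attach
    # Choose the caching policy: writethrough, writeback, or writearound
    echo writeback > /sys/block/bcache0/bcache/cache_mode
    mkfs.ext4 /dev/bcache0              # then put a filesystem on the cached device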
This was all predicated on the assumption that the spinning drive would not be able to saturate the gigabit connection. When I got around to setting up the hardware and trying some basic tests, I found that the conventional HDD had no trouble keeping up with the gigabit data rate, both reading and writing, somewhat obviating the need for SSD acceleration using any elaborate caching mechanisms.
Maybe that's because I sprung for the WD Red Pro series this time, rather than the Red Plus? I'm guessing that conventional drives do deteriorate over the years. I'll find out.
For the operating system, I stuck with my newest favorite Linux distro: DietPi. While HardKernel (parent of ODroid) makes images for the HC units, I have also been using DietPi on the HC2 for the past few years, as it tends to stay more up to date.
Then I rsync’d my data from HC2 -> HC4. It was only about 6.5 TB of total data but it took days as this WD Red Plus drive is only capable of reading at around 10 MBytes/s these days. Painful.
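The transfer itself was a single long-running rsync, something along these lines (the hostname and mount points here are placeholders rather than my actual ones):

    # Preserve permissions, hard links, ACLs and xattrs; show overall progress
    rsync -aHAX --info=progress2 /mnt/nas-main/ hc4:/mnt/nas-main/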
For file sharing, I'm pretty sure most normal folks have nice web UIs in their NAS boxes that allow them to easily configure and monitor the shares. I know there are such applications I could set up. But I've been doing this so long, I just do a bare-bones setup through the terminal. I installed regular Samba and then brought over my smb.conf file from the HC2. One by one, I tested that each of the old shares was activated on the new NAS and deactivated on the old NAS. I also set up a new share for the SSD. I guess that will just serve as fast I/O scratch space on the NAS.
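That bare-bones setup amounts to very little. A minimal sketch on a Debian-family distro like DietPi follows; the share name, path, and user are placeholders, and my real smb.conf carries years of accumulated options rather than this stub:

    apt install samba                       # run as root
    # Then add a minimal share stanza to /etc/samba/smb.conf, e.g.:
    #   [nas-fast]
    #      path = /mnt/ssd-scratch
    #      read only = no
    #      valid users = mike
    testparm                                # sanity-check the config
    smbpasswd -a mike                       # give the user a Samba password
    systemctl restart smbd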
The conventional drive spins up and down. That's annoying when I'm actively working on something but manage not to hit the drive for five minutes or so, and then an application blocks while the drive wakes up. I suppose I could set it up so that it is always spinning. However, I micro-manage this with a custom bash script I wrote a long time ago, which logs into the NAS and runs the "date" command every 2 minutes, appending the output to a file. As a bonus, it also prints data rate up/down stats every 5 seconds. The spinning file ("nas-main/zz-keep-spinning/keep-spinning.txt") has never been cleared and has nearly a quarter million lines. That implies it has kept the drive spinning for roughly half a million minutes, which works out to around 347 total days. I should compare that against the drive's SMART stats, if I can remember how. The earliest timestamp in the file is from March 2018, so I know the HC2 NAS has been in service at least that long.
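The keep-spinning script is nothing more elaborate than a loop like this (the host name and file path are placeholders; the real script also tacks on the data-rate readout mentioned above):

    #!/bin/bash
    # Poke the NAS every 2 minutes so the drive never idles long enough to spin down
    while true; do
        ssh nas 'date >> /mnt/nas-main/zz-keep-spinning/keep-spinning.txt'
        sleep 120
    done
    # ~250,000 appended lines x 2 minutes ~= 500,000 minutes, or roughly 347 days of spinning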
For tasks, vintage cron still does everything I could need. In this case, that means reaching out to websites (like this one) and automatically backing up static files.
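A representative crontab entry for that kind of job might look like this; the schedule, URL, and destination directory are all placeholders:

    # m h dom mon dow   command
    # Nightly at 03:15: mirror a site's static files to the NAS (placeholder URL and path)
    15 3 * * * wget --quiet --mirror --no-parent -P /mnt/nas-main/backups/site https://example.com/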
I also have to have a special script for starting up. Fortunately, I was able to bring this over from the HC2 and tweak it. The data disks (though not the boot disk) are encrypted. Those need to be unlocked, and only then is it safe for the Samba and Minio services to start up. So one script does all that heavy lifting in the rare case of a reboot (this is the type of system that's well worth having on a reliable UPS).
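In outline, that startup script does something like the following; the device nodes, mapper names, mount points, and the Minio service name are placeholders, and the real script prompts for passphrases interactively:

    #!/bin/bash
    # Unlock the encrypted data disks, mount them, then start the services that depend on them
    set -e
    cryptsetup open /dev/sdX1 data-hdd      # prompts for the LUKS passphrase
    cryptsetup open /dev/sdY1 data-ssd
    mount /dev/mapper/data-hdd /mnt/nas-main
    mount /dev/mapper/data-ssd /mnt/ssd-scratch
    systemctl start smbd nmbd               # Samba only after its backing storage exists
    systemctl start minio                   # assumes Minio is installed as a systemd service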
Further Work
I need to figure out how to use the OLED display on the NAS, and how to make it show something more useful than the current time and date, which is what it does in its default configuration with HardKernel's own Linux distro. With DietPi, it does nothing by default. I'm thinking it should be able to show the percent usage of each of the 2 drives, at a minimum (a sketch of the data-gathering half of that idea follows below).

I also need to establish a more responsible backup regimen. I'm way too lazy about this. Fortunately, I reason that I can keep the original HC2 in service, repurposed to accept backups from the main NAS. Again, I'm sort of micro-managing this, since a huge amount of the data isn't worth backing up (remember the whole DataHoarder bit), but the most important stuff will be shipped off.
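As for the OLED idea above: gathering the numbers is the easy half. A sketch of that part (mount points are placeholders), with the actual OLED drawing left to whatever HardKernel's or DietPi's display tooling requires:

    # Percent-used figures for the two data mounts; feeding these to the HC4's OLED is the open question
    df --output=target,pcent /mnt/nas-main /mnt/ssd-scratch | tail -n +2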
The post Adventures In NAS first appeared on Breaking Eggs And Making Omelettes.
Open Banking Security 101: Is open banking safe?
3 December 2024, by Daniel Crough — Banking and Financial Services
Overcoming Fintech and Finserv’s Biggest Data Analytics Challenges
Data powers innovation in financial technology (fintech), from personalized banking services to advanced fraud detection systems. Industry leaders recognize the value of strong security measures and customer privacy. A recent survey highlights this focus, with 72% of finance Chief Risk Officers identifying cybersecurity as their primary concern.
Beyond cybersecurity, fintech and financial services (finserv) companies are bogged down with massive amounts of data spread throughout disconnected systems. Between this, a complex regulatory landscape and an increasingly tech-savvy and sceptical consumer base, fintech and finserv companies have a lot on their plates.
How can marketing teams get the information they need while staying focused on compliance and providing customer value?
This article will examine strategies to address common challenges in the finserv and fintech industries. We’ll focus on using appropriate tools, following effective data management practices, and learning from traditional banks’ approaches to similar issues.
What are the biggest fintech data analytics challenges, and how do they intersect with traditional banking?
Recent years have been tough for the fintech industry, especially after the pandemic. This period has brought new hurdles in data analysis and made existing ones more complex. As the market stabilises, both fintech and finserv companies must tackle these evolving data issues.
Let's examine some of the most significant data analytics challenges facing the fintech industry, starting with an issue that's prevalent across the financial sector:
1. Battling data silos
In a recent survey by InterSystems, 54% of financial institution leaders said data silos are their biggest barrier to innovation, while 62% said removing silos is their priority data strategy for the next year.
Data silos segregate data repositories across departments, products and other divisions. This is a major issue in traditional banking and something fintech companies should avoid inheriting at all costs.
Siloed data makes it harder for decision-makers to view business performance with 360-degree clarity. It’s also expensive to maintain and operationalise and can evolve into privacy and data compliance issues if left unchecked.
To avoid or remove data silos, develop a data governance framework and centralise your data repositories. Next, simplify your analytics stack into as few integrated tools as possible because complex tech stacks are one of the leading causes of data silos.
Use an analytics system like Matomo that incorporates web analytics, marketing attribution and CRO testing into one toolkit.
Matomo’s support plans help you implement a data system to meet the unique needs of your business and avoid issues like data silos. We also offer data warehouse exporting as a feature to bring all of your web analytics, customer data, support data, etc., into one centralised location.
Try Matomo for free today, or contact our sales team to discuss support plans.
2. Compliance with laws and regulations
A survey by Alloy reveals that 93% of fintech companies find it difficult to meet compliance regulations. The cost of staying compliant tops their list of worries (23%), outranking even the financial hit from fraud (21%) – and this in a year marked by cyber threats.
Data privacy laws are constantly changing, and the landscape varies across global regions, making adherence even more challenging for fintechs and traditional banks operating in multiple markets.
In the US market, companies grapple with regulations at both federal and state levels. Here are some of the state-level laws coming into effect for 2024-2026:
- Oregon – Oregon Consumer Privacy Act (July 1, 2024)
- Florida – Florida Digital Bill of Rights (July 1, 2024)
- Texas – Texas Data Privacy and Security Act (July 1, 2024)
- Montana – Montana Consumer Data Privacy Act (October 1, 2024)
- Delaware – Delaware Personal Privacy Act (January 1, 2025)
- Iowa – Iowa Consumer Data Protection Act (January 1, 2025)
- New Jersey – SB 332 (January 15, 2025)
- Tennessee – Tennessee Information Protection Act (July 1, 2025)
- Indiana – Indiana Consumer Data Protection Act (January 1, 2026)
Other countries are also ramping up regional regulations. For instance, Canada has Quebec’s Act Respecting the Protection of Personal Information in the Private Sector and British Columbia’s Personal Information Protection Act (BC PIPA).
Ignorance of country- or region-specific laws will not stop companies from suffering the consequences of violating them.
The only answer is to invest in adherence and manage business growth accordingly. Ultimately, compliance is more affordable than non-compliance – not only in terms of the potential fines but also the potential risks to reputation, consumer trust and customer loyalty.
This is an expensive lesson that fintech and traditional financial companies have had to learn together. GDPR regulators hit CaixaBank S.A., one of Spain's largest banks, with multiple multi-million-euro fines, and fined Klarna Bank AB, a popular Swedish fintech company, €720,000.
To avoid similar fates, companies should:
- Build solid data systems
- Hire compliance experts
- Train their teams thoroughly
- Choose data analytics tools carefully
Remember, even popular tools like Google Analytics aren’t automatically safe. Find out how Matomo helps you gather useful insights while sticking to rules like GDPR.
3. Protecting against data security threats
Cyber threats are increasing in volume and sophistication, with the financial sector becoming the most breached in 2023.
The cybersecurity risks will only worsen, with the WEF estimating annual cybercrime costs of up to USD 10.5 trillion globally by 2025, up from USD 3 trillion in 2015.
While technology brings new security solutions, it also amplifies existing risks and creates new ones. A 2024 McKinsey report warns that the risk of data breaches will continue to increase as the financial industry increasingly relies on third-party data tools and cloud computing services unless they simultaneously improve their security posture.
The reality is that adopting a third-party data system without taking the proper precautions means adopting its security vulnerabilities.
In 2023, the MOVEit data breach affected companies worldwide, including financial institutions using its file transfer system. One hack created a global data crisis, potentially affecting the customer data of every company using this one software product.
The McKinsey report emphasises choosing tools wisely. Why? Because when customer data is compromised, it's your company that takes the heat, not the tool provider. As the report states:
“Companies need reliable, insightful metrics and reporting (such as security compliance, risk metrics and vulnerability tracking) to prove to regulators the health of their security capabilities and to manage those capabilities.”
Don’t put user or customer data in the hands of companies you can’t trust. Work with providers that care about security as much as you do. With Matomo, you own all of your data, ensuring it’s never used for unknown purposes.
4. Protecting users’ privacy
With security threats increasing, fintech companies and traditional banks must prioritise user privacy protection. Users are also increasingly aware of privacy threats and ready to walk away from companies that lose their trust.
Cisco's 2023 Data Privacy Benchmark Study reveals some eye-opening statistics:
- 94% of companies said their customers wouldn’t buy from them if their data wasn’t protected, and
- 95% see privacy as a business necessity, not just a legal requirement.
Modern financial companies must balance data collection and management with increasing privacy demands. This may sound contradictory for companies reliant on dated practices like third-party cookies, but they need to learn to thrive in a cookieless web as customers move to banks and service providers that have strong data ethics.
This privacy protection journey starts with implementing web analytics ethically from the very first session.
The most important elements of ethically sound web analytics in fintech are:
- 100% data ownership: Make sure your data isn't used in other ways by the tools that collect it.
- Respecting user privacy: Only collect the data you absolutely need to do your job and avoid personally identifiable information.
- Regulatory compliance: Stick with solutions built for compliance to stay out of legal trouble.
- Data transparency: Know how your tools use your data and let your customers know how you use it.
Read our guide to ethical web analytics for more information.
5. Comparing customer trust across industries
While fintech companies are making waves in the financial world, they’re still playing catch-up when it comes to earning customer trust. According to RFI Global, fintech has a consumer trust score of 5.8/10 in 2024, while traditional banking scores 7.6/10.
This trust gap isn't just about perception – it's rooted in real issues:
- Security breaches are making headlines more often.
- Privacy regulations like GDPR are making consumers more aware of their rights.
- Some fintech companies are struggling to handle fraud effectively.
According to the UK’s Payment Systems Regulator, digital banking brands Monzo and Starling had some of the highest fraudulent activity rates in 2022. Yet, Monzo only reimbursed 6% of customers who reported suspicious transactions, compared to 70% for NatWest and 91% for Nationwide.
So, what can fintech firms do to close this trust gap?
- Start with privacy-centric analytics from day one. This shows customers you value their privacy from the get-go.
- Build and maintain a long-term reputation free of data leaks and privacy issues. One major breach can undo years of trust-building.
- Learn from traditional banks when it comes to handling issues like fraudulent transactions, identity theft, and data breaches. Prompt, customer-friendly resolutions go a long way.
- Remember: cutting-edge financial technology doesn't make up for poor customer care. If your digital bank won't refund customers who've fallen victim to credit card fraud, they'll likely switch to a traditional bank that will.
The fintech sector has made strides in innovation, but there’s still work to do in establishing trustworthiness. By focusing on robust security, transparent practices, and excellent customer service, fintech companies can bridge the trust gap and compete more effectively with traditional banks.
6. Collecting quality data
Adhering to data privacy regulations, protecting user data and implementing ethical analytics raise another challenge. How can companies do all of these things and still collect reliable, quality data?
Google’s answer is using predictive models, but this replaces real data with calculations and guesswork. The worst part is that Google Analytics doesn’t even let you use all of the data you collect in the first place. Instead, it uses something called data sampling once you pass certain thresholds.
In practice, this means that Google Analytics uses a limited set of your data to calculate reports. We've discussed GA4 data sampling at length before, but there are two key problems for companies here:
- A sample size that’s too small won’t give you a full representation of your data.
- The more visitors that come to your site, the less accurate your reports will become.
For high-growth companies, data sampling simply can’t keep up. Financial marketers widely recognise the shortcomings of big tech analytics providers. In fact, 80% of them say they’re concerned about data bias from major providers like Google and Meta affecting valuable insights.
This is precisely why CRO:NYX Digital approached us after discovering Google Analytics wasn’t providing accurate campaign data. We set up an analytics system to suit the company’s needs and tested it alongside Google Analytics for multiple campaigns. In one instance, Google Analytics failed to register 6,837 users in a single day, approximately 9.8% of the total tracked by Matomo.
In another instance, Google Analytics only tracked 600 visitors over 24 hours, while Matomo recorded nearly 71,000 visitors – an 11,700% discrepancy.
Financial companies need a more reliable, privacy-centric alternative to Google Analytics that captures quality data without putting users at potential risk. This is why we built Matomo and why our customers love having total control and visibility of their data.
Unlock the full power of fintech data analytics with Matomo
Fintech companies face many data-related challenges, so compliant web analytics shouldn’t be one of them.
With Matomo, you get:
- An all-in-one solution that handles traditional web analytics, behavioural analytics and more with strong integrations to minimise the likelihood of data siloing
- Full compliance with GDPR, CCPA, PIPL and more
- Complete ownership of your data to minimise cybersecurity risks caused by negligent third parties
- An abundance of ways to protect customer privacy, like IP address anonymisation and respect for DoNotTrack settings
- The ability to import data from Google Analytics and distance yourself from big tech
- High-quality data that doesn’t rely on sampling
- A tool built with financial analytics in mind
Don’t let big tech companies limit the power of your data with sketchy privacy policies and counterintuitive systems like data sampling.
Start your Matomo free trial or request a demo to unlock the full power of fintech data analytics without putting your customers’ personal information at unnecessary risk.