Other articles (50)

  • Customise by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Frequent problems

    10 March 2010, by

    PHP with safe_mode enabled
    One of the main sources of problems stems from the PHP configuration, in particular having safe_mode enabled.
    The solution would be either to disable safe_mode or to place the script in a directory that Apache can access for the site.

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

On other sites (6299)

  • The use cases for a <main> element in HTML

    1 January 2014, by silvia

    The W3C HTML WG and the WHATWG are currently discussing the introduction of a <main> element into HTML.

    The <main> element has been proposed by Steve Faulkner and is specified in a draft extension spec which is about to be accepted as a FPWD (first public working draft) by the W3C HTML WG. This implies that the W3C HTML WG will be looking for implementations and for feedback by implementers on this spec.

    I am supportive of the introduction of a <main> element into HTML. However, I believe that the current spec and use case list don’t make a good enough case for its introduction. Here are my thoughts.

    Main use case: accessibility

    In my opinion, the main use case for the introduction of <main> is accessibility.

    Like any other user, when blind users want to perceive a Web page/application, they need a quick means of grasping the content of the page. Since they cannot visually scan the layout and thus determine where the main content is, they use assistive technology (AT) to find what are known as “landmarks”.

    “Landmarks” tell the user what semantic content is on a page: a header (such as a banner), a search box, a navigation menu, some asides (also called complementary content), a footer, … and the most important part: the main content of the page. It is this main content that a blind user most often wants to skip to directly.

    In the days of HTML4, a hidden “skip to content” link at the beginning of the Web page was used as a means to help blind users access the main content.

    In the days of ARIA, the ARIA @role=main attribute enables authors to avoid a hidden link and instead mark the element where the main content begins, allowing direct access to the main content. This attribute is supported by AT – in particular screen readers – by making it part of the landmarks that AT can directly skip to.

    Both the hidden link and the ARIA @role=main approaches are, however, band-aids: they are used by those of us who make “finished” Web pages accessible by adding specific extra markup.

    A world where ARIA is not necessary – where the normal markup that everyone writes already creates accessible Web sites/applications, and accessibility developers are out of a job – would be much preferable to the current world of band-aids.

    Therefore, to me, the primary use case for a <main> element is to achieve exactly this better world and not require specialized markup to tell a user (or a tool) where the main content on a page starts.

    An immediate effect would be that pages that have a <main> element will expose a “main” landmark to blind and vision-impaired users that will enable them to directly access that main content on the page without having to wade through other text on the page. Without a <main> element, this functionality can currently only be provided using heuristics to skip other semantic and structural elements and is for this reason not typically implemented in AT.
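
    To make this concrete, here is a minimal page skeleton contrasting the three approaches discussed above (the HTML4-era skip link, the ARIA band-aid and the proposed element); it is a sketch of my own, not markup taken from the article:

    <!-- HTML4-era band-aid: a hidden “skip to content” link -->
    <a class="skip-link" href="#content">Skip to content</a>

    <!-- ARIA-era band-aid: mark the main content with @role=main -->
    <div id="content" role="main">…</div>

    <!-- Proposed: native semantics, exposed to AT as a “main” landmark -->
    <header>Site banner and search box</header>
    <nav>Site navigation</nav>
    <main>
      <h1>Article title</h1>
      <p>The content a blind user most often wants to skip to directly.</p>
    </main>
    <aside>Complementary content</aside>
    <footer>Site footer</footer>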

    Other use cases

    The <main> element is a semantic element not unlike other new semantic elements such as <header>, <footer>, <aside>, <article>, <nav>, or <section>. Thus, it can also serve other uses where the main content on a Web page/Web application needs to be identified.

    Data mining

    For data mining of Web content, the identification of the main content is one of the key challenges. Many scholarly articles have been published on this topic. This stackoverflow article references and suggests a multitude of approaches, but the accepted answer says “there’s no way to do this that’s guaranteed to work”. This is because Web pages are inherently complex and many <div>, <p>, <iframe> and other elements are used to provide markup for styling, notifications, ads, analytics and other use cases that are necessary to make a Web page complete, but don’t contribute to what a user consumes as semantically rich content. A <main> element will allow authors to pro-actively direct data mining tools to the main content.
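
    As an illustration of how such a hint simplifies extraction, a content-mining tool could prefer an explicit <main> element and fall back to heuristics only when it is absent; the function below is a hypothetical sketch, not an existing tool:

    <script>
      // Hypothetical extractor: use the author-provided <main> if present,
      // otherwise fall back to fragile guesses (id="main", id="content", first <article>, …).
      function extractMainContent(doc) {
        var main = doc.querySelector('main, [role="main"]');
        if (main) return main.textContent;                         // explicit author hint
        var guess = doc.querySelector('#main, #content, article'); // heuristic band-aid
        return guess ? guess.textContent : doc.body.textContent;
      }
    </script>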

    Search engines

    One particularly important class of “data mining” tools is search engines. They, too, have a hard time identifying which sections of a Web page are more important than others and employ many heuristics to do so, see e.g. this ACM article. Yet they still disappoint with poor results that point to keyword matches in barely relevant sections of a page rather than ranking Web pages higher when the keywords turn up in the main content area. A <main> element would help search engines give text in main content areas a higher weight and prefer it over other areas of the Web page. It would let them rank Web pages differently depending on where on the page the search words are found. The <main> element will be an additional hint that search engines can digest.

    Visual focus

    On small devices, the display of Web pages designed for Desktop often causes confusion as to where the main content can be found and read, in particular when the text ends up being too small to be readable. It would be nice if browsers on small devices had a functionality (maybe a default setting) where Web pages would start being displayed as zoomed in on the main content. This could alleviate some of the headaches of responsive Web design, where the recommendation is to show high priority content as the first content. Right now this problem is addressed through stylesheets that re-layout the page differently depending on device, but again this is a band-aid solution. Explicit semantic markup of the main content can solve this problem more elegantly.

    Styling

    Finally, naturally, <main> would also be used to style the main content differently from the rest. You can e.g. replace a semantically meaningless <div id="main"> with a semantically meaningful <main> where their position is identical. My analysis below shows that this is not always the case, since oftentimes <div id="main"> is used to group together everything that is not the header – in particular where there are multiple columns. Thus, the ease of styling a <main> element is only a positive side effect and not actually a real use case. It does make it easier, however, to adapt the style of the main content, e.g. with media queries.
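
    In practice the styling use case amounts to little more than swapping the selector; a small sketch (the rules themselves are invented for the example):

    <style>
      /* before: styles hooked onto a semantically meaningless wrapper */
      div#main { max-width: 60em; margin: 0 auto; }

      /* after: the same rules on a semantically meaningful element,
         plus a media query adapting the main content on small screens */
      main { max-width: 60em; margin: 0 auto; }
      @media (max-width: 40em) {
        main { max-width: 100%; padding: 0 1em; }
      }
    </style>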

    Proposed alternative solutions

    It has been proposed that existing markup already satisfies the use cases that <main> has been proposed for. Let’s analyse this on some of the most popular Web sites. First, let’s list the proposed algorithms.

    Proposed solution No 1: Scooby-Doo

    On Sat, Nov 17, 2012 at 11:01 AM, Ian Hickson <ian@hixie.ch> wrote:
    | The main content is whatever content isn’t
    | marked up as not being main content (anything not marked up with <header>,
    | <aside>, <nav>, etc).
    

    This implies that the first element that is not a <header>, <aside>, <nav>, or <footer> will be the element that we want to give to a blind user as the location where they should start reading. The algorithm is implemented in https://gist.github.com/4032962.
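
    The gist is not reproduced here, but the idea can be approximated as follows (my own sketch, not the code from the gist):

    <script>
      // “Scooby-Doo” approximation: the main content is the first element in <body>
      // that is not explicitly marked up as not being main content.
      function scoobyDooMain(doc) {
        var notMain = ['HEADER', 'ASIDE', 'NAV', 'FOOTER'];
        var children = doc.body.children;
        for (var i = 0; i < children.length; i++) {
          if (notMain.indexOf(children[i].tagName) === -1) {
            return children[i];   // first element not marked up as “not main”
          }
        }
        return doc.body;          // nothing qualified: fall back to <body>
      }
    </script>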

    Proposed solution No 2: First article element

    On Sat, Nov 17, 2012 at 8:01 AM, Ian Hickson wrote:
    | On Thu, 15 Nov 2012, Ian Yang wrote :
    | >
    | > That’s a good idea. We really need an element to wrap all the <p>s,
    | > <ul>s, <ol>s, <figure>s, <table>s ... etc of a blog post.
    |
    | That’s called <article>.
    

    This approach identifies the first <article> element on the page as containing the main content. Here’s the algorithm for this approach.
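
    Again as a rough sketch of my own (the algorithm linked above is not reproduced here):

    <script>
      // “First article” approximation: the main content is simply
      // the first <article> element in document order.
      function firstArticleMain(doc) {
        return doc.querySelector('article') || doc.body;  // fall back to <body> if none
      }
    </script>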

    Proposed solution No 3: An example heuristic approach

    The readability plugin has been developed to make Web pages readable by essentially removing all the non-main content from a page. An early version of the readability source code is available. This demonstrates what a heuristic approach can achieve.

    Analysing alternative solutions

    Comparison

    I’ve picked 4 typical Web sites (top on Alexa) to analyse how these three different approaches fare. Ideally, I’d like to simply apply the above three scripts and compare pictures. However, since the semantic HTML5 elements <header>, <aside>, <nav>, and <footer> are not actually used by any of these Web sites, I don’t have that choice.

    So, instead, I decided to make some assumptions about where these semantic elements would be used and what the outcome of applying the first two algorithms would be. I can then compare them to the third, which is an existing product, so we can take screenshots.

    Google.com

    http://google.com – search for “Scooby Doo”.

    The search results page would likely be built with:

    • a <nav> menu for the Google bar
    • a <header> for the search bar
    • another <header> for the login section
    • another <nav> menu for the search types
    • a <div> to contain the rest of the page
    • a <div> for the app bar with the search number
    • a few <aside>s for the left and right column
    • a set of <article>s for the search results

    “Scooby Doo” would find the first element after the headers as the “main content”. This is the element before the app bar in this case. Interestingly, there is a <div @id=main> already in the current Google results page, which “Scooby Doo” would likely also pick. However, there are a nav bar and two asides in this div, which clearly should not be part of the “main content”. Google actually placed a @role=main on a different element, namely the one that encapsulates all the search results.

    “First Article” would find the first search result as the “main content”. While not quite the same as what Google intended – namely all search results – it is close enough to be useful.

    The “readability” result is interesting, since it is not able to identify the main text on the page. It is actually aware of this problem and shows a warning before displaying the page:

    Readability of google.com

    Facebook.com

    https://facebook.com

    A user page would likely be built with:

    • a <header> bar for the search and login bar
    • a <div> to contain the rest of the page
    • an <aside> for the left column
    • a <div> to contain the center and right column
    • an <aside> for the right column
    • a <header> to contain the center column “megaphone”
    • a <div> for the status posting
    • a set of <article>s for the home stream

    “Scooby Doo” would find the first element after the headers as the “main content”. This is the element that contains all three columns. It’s actually a <div @id=content> already in the current Facebook user page, which “Scooby Doo” would likely also pick. However, Facebook selected a different element to place the @role=main on: the center column.

    “First Article” would find the first news item in the home stream. This is clearly not what Facebook intended, since they placed the @role=main on the center column, above the first blog post’s title. “First Article” would miss that title and the status posting.

    The “readability” result again disappoints but warns that it failed:

    YouTube.com

    http://youtube.com

    A video page would likely be built with:

    • a <header> bar for the search and login bar
    • a <nav> for the menu
    • a <div> to contain the rest of the page
    • a <header> for the video title and channel links
    • a <div> to contain the video with controls
    • a <div> to contain the center and right column
    • an <aside> for the right column with an <article> per related video
    • an <aside> for the information below the video
    • an <article> per comment below the video

    “Scooby Doo” would find the first element after the headers as the “main content”. This is the element that contains the rest of the page. It’s actually a <div @id=content> already in the current YouTube video page, which “Scooby Doo” would likely also pick. However, YouTube’s related videos and comments are unlikely to be what the user would regard as “main content” – it’s the video they are after, which generously has a <div id=watch-player>.

    “First Article” would find the first related video or comment in the home stream. This is clearly not what YouTube intends.

    The “readability” result is not quite as unusable, but still very bare:

    Wikipedia.com

    http://wikipedia.com (“Overscan” page)

    A Wikipedia page would likely be built with:

    • a <header> bar for the search, login and menu items
    • a <div> to contain the rest of the page
    • an <article> with title and lots of text
    • inside the <article>, an <aside> with the table of contents
    • several <aside>s for the left column

    Good news: “Scooby Doo” would find the first element after the headers as the “main content”. This is the element that contains the rest of the page. It’s actually a <div id=”content” role=”main”> element on Wikipedia, which “Scooby Doo” would likely also pick.

    “First Article” would find the title and text of the main element on the page, but it would also include an <aside>.

    The “readability” result is also in agreement.

    Results

    In the following table we have summarised the results for the experiments:

    Site Scooby-Doo First article Readability
    Google.com FAIL SUCCESS FAIL
    Facebook.com FAIL FAIL FAIL
    YouTube.com FAIL FAIL FAIL
    Wikipedia.com SUCCESS SUCCESS SUCCESS

    Clearly, Wikipedia is the prime example of a site where even the simple approaches find it easy to determine the main content on the page. WordPress blogs are similarly successful. Almost any other site, including news sites, social networks and search engines, is pretty hopeless with the proposed approaches, because too many elements are used for layout or other purposes (notifications, hidden areas), so the pre-determined list of available semantic elements simply doesn’t suffice to mark up a Web page/application completely.

    Conclusion

    It seems that, in general, it is impossible to determine which element(s) on a Web page should be the “main” piece of content that accessibility tools jump to when requested, that a search engine should focus on, or that should be highlighted to a general user to read. It would be very useful if the author of the Web page provided a hint, through a <main> element, as to where that main content is to be found.

    I think that the <main> element becomes particularly useful when combined with a default keyboard shortcut in browsers as proposed by Steve : we may actually find that non-accessibility users will also start making use of this shortcut, e.g. to get to videos on YouTube pages directly without having to tab over search boxes and other interactive elements, etc. Worthwhile markup indeed.

  • Banking Data Strategies – A Primer to Zero-party, First-party, Second-party and Third-party data

    25 October 2024, by Daniel Crough — Banking and Financial Services, Privacy

    Banks hold some of our most sensitive information. Every transaction, loan application, and account balance tells a story about their customers’ lives. Under GDPR and banking regulations, protecting this information isn’t optional – it’s essential.

    Yet banks also need to understand how customers use their services to serve them better. The solution lies in understanding different types of banking data and how to handle each responsibly. From direct customer interactions to market research, each data source serves a specific purpose and requires its own privacy controls.

    Before diving into how banks can use each type of data effectively, let’s look at the key differences between them:

    Data Type    | What It Is                                                 | Banking Example                              | Legal Considerations
    First-party  | Data from direct customer interactions with your services  | Transaction records, service usage patterns  | Different legal bases apply (contract, legal obligation, legitimate interests)
    Zero-party   | Information customers actively provide                     | Stated preferences, financial goals          | Requires a specific legal basis despite being voluntary; may involve profiling
    Second-party | Data shared through formal partnerships                    | Insurance history from partners              | Must comply with PSD2 and specific data sharing regulations
    Third-party  | Data from external providers                               | Market analysis, demographic data            | Requires due diligence on sources and specific transparency measures

    What is first-party data?

    Person looking at their first party banking data.

    First-party data reveals how customers actually use your banking services. When someone logs into online banking, withdraws money from an ATM, or speaks with customer service, they create valuable information about real banking habits.

    This direct interaction data proves more reliable than assumptions or market research because it shows genuine customer behaviour. Banks need specific legal grounds to process this information. Basic banking services fall under contractual necessity, while fraud detection is required by law. Marketing activities need explicit customer consent. The key is being transparent with customers about what information you process and why.

    Start by collecting only what you need for each specific purpose. Store information securely and give customers clear control through privacy settings. This approach builds trust while helping meet privacy requirements under the GDPR’s data minimisation principle.

    What is zero-party data?

    A person sharing their banking data with their bank to illustrate zero party data in banking.

    Zero-party data emerges when customers actively share information about their financial goals and preferences. Unlike first-party data, which comes from observing customer behaviour, zero-party data comes through direct communication. Customers might share their retirement plans, communication preferences, or feedback about services.

    Interactive tools create natural opportunities for this exchange. A retirement calculator helps customers plan their future while revealing their financial goals. Budget planners offer immediate value through personalised advice. When customers see clear benefits, they’re more likely to share their preferences.

    However, voluntary sharing doesn’t mean unrestricted use. The ICO’s guidance on purpose limitation applies even to freely shared information. Tell customers exactly how you’ll use their data, document specific reasons for collecting each piece of information, and make it simple to update or remove personal data.

    Regular reviews help ensure you still need the information customers have shared. This aligns with both GDPR requirements and customer expectations about data management. By treating voluntary information with the same care as other customer data, banks build lasting trust.

    What is second-party data?

    Two people collaborating by sharing data to illustrate second party data sharing in banking.

    Second-party data comes from formal partnerships between banks and trusted companies. For example, a bank might work with an insurance provider to better understand shared customers’ financial needs.

    These partnerships need careful planning to protect customer privacy. The ICO’s Data Sharing Code provides clear guidelines: both organisations must agree on what data they’ll share, how they’ll protect it, and how long they’ll keep it before any sharing begins.

    Transparency builds trust in these arrangements. Tell customers about planned data sharing before it happens. Explain what information you’ll share and how it helps provide better services.

    Regular audits help ensure both partners maintain high privacy standards. Review shared data regularly to confirm it’s still necessary and properly protected. Be ready to adjust or end partnerships if privacy standards slip. Remember that your responsibility to protect customer data extends to information shared with partners.

    Successful partnerships balance improved service with diligent privacy protection. When done right, they help banks understand customer needs better while maintaining the trust that makes banking relationships work.

    What is third-party data?

    People conducting market research to get third party banking data.

    Third-party data comes from external sources outside your bank and its partners. Market research firms, data analytics companies, and economic research organizations gather and sell this information to help banks understand broader market trends.

    This data helps fill knowledge gaps about the wider financial landscape. For example, third-party data might reveal shifts in consumer spending patterns across different age groups or regions. It can show how customers interact with different financial services or highlight emerging banking preferences in specific demographics.

    But third-party data needs careful evaluation before use. Since your bank didn’t collect this information directly, you must verify both its quality and compliance with privacy laws. Start by checking how providers collected their data and whether they had proper consent. Look for providers who clearly document their data sources and collection methods.

    Quality varies significantly among third-party data providers. Some key questions to consider before purchasing:

    • How recent is the data?
    • How was it collected?
    • What privacy protections are in place?
    • How often is it updated?
    • Which specific market segments does it cover?

    Consider whether third-party data will truly add value beyond your existing information. Many banks find they can gain similar insights by analysing their first-party data more effectively. If you do use third-party data, document your reasons for using it and be transparent about your data sources.

    Creating your banking data strategy

    A team collaborating on a banking data strategy.

    A clear data strategy helps your bank collect and use information effectively while protecting customer privacy. This matters most with first-party data – the information that comes directly from your customers’ banking activities.

    Start by understanding what data you already have. Many banks collect valuable information through everyday transactions, website visits, and customer service interactions. Review these existing data sources before adding new ones. Often, you already have the insights you need – they just need better organization.

    Map each type of data to a specific purpose. For example, transaction data might help detect fraud and improve service recommendations. Website analytics could reveal which banking features customers use most. Each data point should serve a clear business purpose while respecting customer privacy.

    Strong data quality standards support better decisions. Create processes to update customer information regularly and remove outdated records. Check data accuracy often and maintain consistent formats across your systems. These practices help ensure your insights reflect reality.

    Remember that strategy means choosing what not to do. You don’t need to collect every piece of data possible. Focus on information that helps you serve customers better while maintaining their privacy.

    Managing multiple data sources

    An image depicting multiple data sources.

    Banks work with many types of data – from direct customer interactions to market research. Each source serves a specific purpose, but combining them effectively requires careful planning and precise attention to regulations like GDPR and ePrivacy.

    First-party data forms your foundation. It shows how your customers actually use your services and what they need from their bank. This direct interaction data proves most valuable because it reflects real behaviour rather than assumptions. When customers check their balances, transfer money, or apply for loans, they show you exactly how they use banking services.

    Zero-party data adds context to these interactions. When customers share their financial goals or preferences directly, they help you understand the “why” behind their actions. This insight helps shape better services. For example, knowing a customer plans to buy a house helps you offer relevant savings tools or mortgage information at the right time.

    Second-party partnerships can fill specific knowledge gaps. Working with trusted partners might reveal how customers manage their broader financial lives. But only pursue partnerships when they offer clear value to customers. Always explain these relationships clearly and protect shared information carefully.

    Third-party data helps provide market context, but use it selectively. External market research can highlight broader trends or opportunities. However, this data often proves less reliable than information from direct customer interactions. Consider it a supplement to, not a replacement for, your own customer insights.

    Keep these principles in mind when combining data sources:

    • Prioritize direct customer interactions
    • Focus on information that improves services
    • Maintain consistent privacy standards across sources
    • Document where each insight comes from
    • Review regularly whether each source adds value
    • Work with privacy and data experts to ensure customer information is handled properly

    Enhance your web analytics strategy with Matomo

    Users flow report in Matomo analytics

    The financial sector finds powerful and compliant web analytics increasingly valuable as it navigates data management and privacy regulations. Matomo provides a configurable privacy-centric solution that meets the requirements of banks and financial institutions.

    Matomo empowers your organisation to:

    • Collect accurate, GDPR-compliant web data
    • Integrate web analytics with your existing tools and platforms
    • Maintain full control over your analytics data
    • Gain insights without compromising user privacy

    Matomo is trusted by some of the world’s biggest banks and financial institutions. Try Matomo for free for 30 days to see how privacy-focused analytics can get you the insights you need while maintaining compliance and user trust.
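
    As a purely illustrative sketch of what such a deployment can look like, the standard Matomo JavaScript tracker can be combined with privacy options such as cookieless tracking or prior consent; the analytics URL and site ID below are placeholders, and the exact snippet is generated by your own Matomo instance:

    <!-- Matomo (illustrative only) -->
    <script>
      var _paq = window._paq = window._paq || [];
      _paq.push(['disableCookies']);        // optional: track without setting cookies
      // _paq.push(['requireConsent']);     // optional: only track once consent is given
      _paq.push(['trackPageView']);
      _paq.push(['enableLinkTracking']);
      (function() {
        var u = 'https://analytics.example-bank.com/';   // placeholder instance URL
        _paq.push(['setTrackerUrl', u + 'matomo.php']);
        _paq.push(['setSiteId', '1']);                   // placeholder site ID
        var d = document, g = d.createElement('script'), s = d.getElementsByTagName('script')[0];
        g.async = true; g.src = u + 'matomo.js';
        s.parentNode.insertBefore(g, s);
      })();
    </script>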

  • Evolution #4659 (New): Have a generic system of pre-actions requiring confirmation

    11 February 2021, by RastaPopoulos ♥

    Following on from #3247, but this isn’t only about deletions; it applies to any action whatsoever.

    I’ve had this in mind for several years: currently, every time we need a confirmation it’s just a JS dialog box, which is clearly a problem – it doesn’t work for everyone, and if JS is disabled or broken there is no confirmation at all.

    So what we need is a generic system for real confirmation (not just for deletions): a pre-action that leads to a page with an intermediate message and a Confirm button and a Cancel button. Then, as progressive enhancement, we could load the “content” of that page into a box so as not to leave the current page. This exists in other CMSs, such as Drupal and Typo3, and I think it works very well.

    Since the message must be customisable, as must the button labels (if it isn’t just for deleting, it must be possible to use “Migrate”, “Confirm the move”, or anything else), we shouldn’t pass this as visible GET parameters; so my feeling is that the system would have to be a button-form that sends this via a more hidden POST. It would also be good to have a few default sentences/strings so that customisation isn’t always needed (but I don’t yet see how that could be made generic).

    Something like:

    #BOUTON_CONFIRMATION{
       #URL_ACTION{choseafaire, arg, etc},
       #ARRAY{
           message, "Do you really want to dance, grandma?", // no obvious generic default here
           label_confirmer, "Yes, I want to waltz, grandpa", // "Confirm", "I confirm" or "OK" by default?
           label_annuler, "No, soup and off to bed", // "Cancel" by default
           url_retour, #AUTRE_URL, // the originating page by default
       }
    }

    By default this would lead to a dedicated page, which then generates a #BOUTON_ACTION with the action supplied as a parameter, plus the two buttons.
    As an enhancement, we could load this into a box via JS (with zajax) in order to stay on the original page.
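
    Rendered as plain HTML, the intermediate page could boil down to something like this rough sketch (the action URL, field names and labels are placeholders for the illustration; the real form would be produced by the dedicated page from the supplied action):

    <!-- rough sketch only: real markup would be generated by #BOUTON_ACTION -->
    <form method="post" action="spip.php?action=choseafaire&arg=…">
      <p>Do you really want to dance, grandma?</p>
      <button type="submit">Yes, I want to waltz, grandpa</button>
      <a href="…">No, soup and off to bed</a>
    </form>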

    Less urgent, but worth thinking about: all our forms and actions are normally usable elsewhere, so do we allow this system on the public site as well? And if so, how? We could detect where the initial request came from and route differently depending on admin or public context. For instance, an admin page (loaded via zajax if JS is available) when in the admin area, and a page=spip_confirmation when on the public site… minipres by default, but overridable (just as we do in z-core for 404, page=login, etc.). That seems feasible to me.