
Other articles (39)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are carried out in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)

  • Libraries and binaries specific to video and audio processing

    31 January 2010, by

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries FFMpeg: the main encoder, used to transcode almost any type of video or audio file into formats playable on the Internet. See this tutorial for its installation; Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
    Additional and optional binaries flvtool2: (...)

On other sites (5525)

  • Consent Mode v2: Everything You Need to Know

    7 May 2024, by Alex — Analytics Tips

    Confused about Consent Mode v2 and its impact on your website analytics? You’re not the only one.

    Google’s latest update has left many scratching their heads about data privacy and tracking. 

    In this blog, we’re getting straight to the point. We’ll break down what Consent Mode v2 is, how it works, and the impact it has.

    What is Consent Mode?

    What exactly is Google Consent Mode, and why is there so much buzz surrounding it? This question has been frustrating analysts and marketers worldwide since the beginning of this year.

    Consent Mode is the solution from Google designed to manage data collection on websites in accordance with user privacy requirements.

    This mode enables website owners to customise how Google tags respond to users’ consent status for cookie usage. At its core, Consent Mode helps sites comply with privacy regulations such as the GDPR in Europe and the CCPA in California, without significant loss of analytical data.

    [Diagram: how Consent Mode works]

    How does Consent Mode work?

    Consent Mode operates by adjusting the behaviour of tags on a website depending on whether consent for cookie usage is provided or not. If a user does not consent to the use of analytical or advertising cookies, Google tags automatically switch to collecting a limited amount of data, ensuring privacy compliance.

    This approach still allows valuable insights into website traffic and user behaviour, even if users opt out of most tracking cookies.

    What types of consent are available in Consent Mode?

    As of 6 March 2024, Consent Mode v2 has become the current standard (and, for sites using Google advertising services, practically mandatory). It incorporates four consent types:

    1. ad_storage: allows for the collection and storage of data necessary for delivering personalised ads based on user actions.
    2. ad_user_data: pertains to the collection and usage of data that can be associated with the user for ad customisation and optimisation.
    3. ad_personalization: permits the use of user data for ad personalisation and providing more relevant content.
    4. analytics_storage: relates to the collection and storage of data for analytics, enabling websites to analyse user behaviour and enhance user experience.

    Additionally, Consent Mode v2 offers two modes (see the sketch after this list):

    1. Basic Consent Mode: Google tags are not used for personalised advertising or measurement if consent is not obtained.
    2. Advanced Consent Mode: allows Google tags to use anonymised data for personalised advertising campaigns and measurement, even if consent is not obtained.
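
    To make the two modes more concrete, here is a minimal sketch of the difference, assuming a site that loads Google tags via gtag.js; the loadGoogleTag helper and the two boot functions are hypothetical names for whatever your site actually does, and the 'denied' defaults are placeholders rather than a recommendation:

    // Sketch only. gtag() is the usual inline stub that pushes to window.dataLayer;
    // loadGoogleTag() is a hypothetical helper that injects the gtag.js script tag.
    declare function gtag(...args: unknown[]): void;
    declare function loadGoogleTag(): void;

    // Basic Consent Mode: keep Google tags off the page entirely until consent exists.
    function basicModeBoot(consentGranted: boolean): void {
      if (consentGranted) {
        loadGoogleTag();
      }
    }

    // Advanced Consent Mode: always load the tag, but first declare a default state
    // of 'denied' for all four v2 consent types, so only limited, cookieless pings
    // are sent until consent is updated.
    function advancedModeBoot(): void {
      gtag('consent', 'default', {
        ad_storage: 'denied',
        ad_user_data: 'denied',
        ad_personalization: 'denied',
        analytics_storage: 'denied',
        wait_for_update: 500, // give the consent banner time to answer (in ms)
      });
      loadGoogleTag();
    }

    Either way, once the visitor answers the consent banner, the state is switched with a consent update (a sketch of that follows further below).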

    What is Consent Mode v2? (And how does it differ from Consent Mode v1?)

    Consent Mode v2 is an improved version of the original Consent Mode, offering enhanced customisation capabilities and better compliance with privacy requirements. 

    The new version introduces additional consent configuration parameters, allowing for even more precise control over which data is collected and how it’s used. The key difference between Consent Mode v2 and Consent Mode v1 lies in more granular consent management, making this tool even more flexible and powerful in safeguarding personal data.

    In Consent Mode v2, the existing markers (ad_storage and analytics_storage) are accompanied by two new markers:

    1. ad_user_data – does the user agree to their personal data being utilised for advertising purposes?
    2. ad_personalization – does the user agree to their data being employed for remarketing?

    In contrast to ad_storage and analytics_storage, these markers don’t directly affect how the tags operate on the site itself. 

    They serve as additional directives sent alongside the pings to Google services, indicating how user data can be utilised for advertising purposes.

    While ad_storage and analytics_storage serve as upstream qualifiers for data (determining which identifiers are sent with the pings), ad_user_data and ad_personalization serve as downstream instructions for Google services regarding data processing.
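
    As a rough illustration of that split, a consent banner callback might forward the visitor’s choices like this (again assuming gtag.js; the handleConsent function and the BannerChoices shape are made up for this example):

    // Sketch only: `choices` is whatever your consent banner reports;
    // the property names on BannerChoices are hypothetical.
    declare function gtag(...args: unknown[]): void;

    interface BannerChoices {
      analytics: boolean;
      advertising: boolean;
      personalization: boolean;
    }

    function handleConsent(choices: BannerChoices): void {
      const signal = (granted: boolean) => (granted ? 'granted' : 'denied');
      gtag('consent', 'update', {
        // Upstream qualifiers: decide which identifiers the pings carry.
        ad_storage: signal(choices.advertising),
        analytics_storage: signal(choices.analytics),
        // Downstream instructions for Google services (the two new v2 signals).
        ad_user_data: signal(choices.advertising),
        ad_personalization: signal(choices.personalization),
      });
    }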

    How is the implementation of Consent Mode v2 going?

    The implementation of Consent Mode v2 is encountering some issues and bugs (as expected). The most important things to understand:

    1. Advanced Consent Mode v2 is essential if you have traffic and campaigns with Google Ads in the European Union.
    2. If you don’t have substantially large traffic, enabling Advanced Consent Mode v2 will likely result in a traffic drop in GA4 – because this version of Consent Mode (unlike the basic one) applies behavioural modelling to users who haven’t accepted the use of cookies, and that modelling takes time.

    Behavioural modelling in Consent Mode v2 means the following: the data of users who have declined tracking is modelled using machine learning.

    However, training the model requires a suitable data volume. As Google’s documentation states:

    The property should collect at least 1,000 events per day with analytics_storage=’denied’ for at least 7 days. The property should have at least 1,000 daily users submitting events with analytics_storage=’granted’ for at least 7 of the previous 28 days.
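
    To make those thresholds concrete, here is a rough self-check one could run against a daily export, assuming the quoted requirements are evaluated over the previous 28 days; the DailyStats shape is made up for this sketch:

    interface DailyStats {
      deniedEvents: number;  // events collected with analytics_storage='denied'
      grantedUsers: number;  // daily users sending events with analytics_storage='granted'
    }

    // True if the property appears to meet the quoted behavioural-modelling thresholds.
    function meetsModellingThresholds(last28Days: DailyStats[]): boolean {
      const daysWithEnoughDeniedEvents =
        last28Days.filter(day => day.deniedEvents >= 1000).length;
      const daysWithEnoughGrantedUsers =
        last28Days.filter(day => day.grantedUsers >= 1000).length;
      // "at least 7 days" in both cases, per the documentation quoted above
      return daysWithEnoughDeniedEvents >= 7 && daysWithEnoughGrantedUsers >= 7;
    }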

    Largely due to this, the market’s response to the Consent Mode v2 implementation was mixed: many reported a significant drop in traffic in their GA4 and Google Ads reports upon enabling the Advanced mode. Essentially, a portion of the data was lost because Google’s models lacked enough data for training.

    Since the very beginning of the rollout, users have regularly reported exactly that scenario. If your website doesn’t have enough traffic for behavioural modelling, switching to Consent Mode v2 will cause a significant drop in the traffic shown in your Google Ads and GA4 reports. In many reported cases, user and session metrics dropped by 90-95%.

    In a nutshell, you should be prepared for significant data losses if you are planning to switch to Google Consent Mode v2.

    How does Consent Mode v2 impact web analytics?

    The transition to Consent Mode v2 alters the methods of user data collection and processing. The main concerns arise from the potential loss of accuracy and completeness of analytical data due to restrictions on the use of cookies and other identifiers when user consent is absent. 

    With Google Consent Mode v2, the data of visitors who have not agreed to tracking will be modelled and may not accurately reflect your actual visitors’ behaviours and actions. So as an analyst or marketer, you will not have true insights into these visitors and the data acquired will be more generalised and less accurate.

    Google Consent Mode v2 appears to be a kind of compromise band-aid solution. 

    It tries to solve these issues by using data modelling and anonymised data collection. However, it’s critical to note that there are specific limitations inherent to the modelling mechanism.

    This complicates the analysis of visitor behaviour, advertising campaigns, and website optimisation, ultimately impacting decision-making and resulting in poorer website performance and marketing outcomes.

    Wrap up

    Consent Mode v2 is a mechanism of managing Google tag operations based on user consent settings. 

    It’s mandatory if you’re using Google’s advertising services, and optional (at least for Advanced mode) if you don’t advertise on Google Ads. 

    There are particular indications that this technology is unreliable from a GDPR perspective. 

    Using Google Consent Mode will inevitably lead to data losses and inaccuracies in its analysis. 

    In other words, to some extent it puts your business at risk.

  • How can I get matplotlib to show full subplots in an animation?

    12 March 2015, by Matt Stone

    I’m trying to write a simple immune system simulator. I’m modeling infected tissue as a simple grid of cells and various intracellular signals, and I’d like to animate movement of cells in one plot and the intensity of viral presence in another as the infection progresses. I’m doing so with the matshow function provided by matplotlib. However, when I plot the two next to each other, the full grid gets clipped unless I stretch out the window myself. I can’t address the problem at all when saving to an mp4.

    Here’s the default view, which is identical to what I observe when saving to mp4:

    And here’s what it looks like after stretching out the viewer window:

    I’m running Python 2.7.9 with matplotlib 1.4.2 on OS X 10.10.2, using ffmpeg 2.5.2 (installed via Homebrew). Below is the code I’m using to generate the animation. I tried using plt.tight_layout() but it didn’t affect the problem. If anyone has any advice as to how to solve this, I’d really appreciate it! I’d especially like to be able to save it without viewing with plt.show(). Thanks!

    # Imports assumed at module level (not shown in the question):
    #   import matplotlib.pyplot as plt
    #   from matplotlib import animation
    def animate(self, fname=None, frames=100):
        # Two side-by-side panels sharing the y-axis: agents and viral signal
        fig, (agent_ax, signal_ax) = plt.subplots(1, 2, sharey=True)

        agent_ax.set_ylim(0, self.grid.shape[0])
        agent_ax.set_xlim(0, self.grid.shape[1])
        signal_ax.set_ylim(0, self.grid.shape[0])
        signal_ax.set_xlim(0, self.grid.shape[1])

        # Initial images; vmin/vmax pin the colour scale for the whole run
        # (`virus` is defined elsewhere in the simulator, not shown here)
        agent_mat = agent_ax.matshow(self.display_grid(),
                                     vmin=0, vmax=10)
        signal_mat = signal_ax.matshow(self.signal_display(virus),
                                       vmin=0, vmax=20)
        fig.colorbar(signal_mat)

        def anim_update(tick):
            # Advance the simulation one step, then refresh both images
            self.update()
            self.diffuse()
            agent_mat.set_data(self.display_grid())
            signal_mat.set_data(self.signal_display(virus))
            return agent_mat, signal_mat

        anim = animation.FuncAnimation(fig, anim_update, frames=frames,
                                       interval=3000, blit=False)

        if fname:
            anim.save(fname, fps=5, extra_args=['-vcodec', 'libx264'])
        else:
            plt.show()

  • GC and onTouch cause Fatal signal 11 (SIGSEGV) error in app using ffmpeg through ndk

    30 January 2015, by grzebyk

    I am getting a nasty but well-known error while working with FFmpeg and the NDK:

    A/libc(9845): Fatal signal 11 (SIGSEGV), code 1, fault addr 0xa0a9f000 in tid 9921 (AsyncTask #4)

    UPDATE

    After a couple of hours I found out that there might be two sources of the problem. One was related to multithreading; I checked it and fixed it. Now the app crashes ONLY when the video playback (NDK) is on.

    I put a "counter" in the touch event:

     surfaceSterowanieKamera.setOnTouchListener(new View.OnTouchListener() {
         int counter = 0;

         @Override
         public boolean onTouch(View v, MotionEvent event) {
             if (event.getAction() == MotionEvent.ACTION_MOVE) {
                 Log.i(TAG, "counter = " + counter);
                 //cameraMover.setPanTilt(some parameters);
                 counter++;
             }
             // closing braces and return added for completeness; returning true
             // keeps subsequent ACTION_MOVE events coming to this listener
             return true;
         }
     });

    And I started disabling the app’s other functionalities one by one, but not the video. I found that with every functionality removed, it takes the app longer to crash - the counter reaches higher values. After turning off everything besides video playback and the touch interface (with cameraMover.setPanTilt() commented out), the app usually crashes when the counter is between 1600 and 1700.

    In that case logcat shows the above error along with GC-related info. To me it seems like the GC is interfering with the NDK part.

    01-23 12:27:13.163: I/Display Activity(20633): n = 1649
    01-23 12:27:13.178: I/art(20633): Background sticky concurrent mark sweep GC freed 158376(6MB) AllocSpace objects, 1(3MB) LOS objects, 17% free, 36MB/44MB, paused 689us total 140.284ms
    01-23 12:27:13.169: A/libc(20633): Fatal signal 11 (SIGSEGV), code 1, fault addr 0x9bd6ec0c in tid 20734 (AsyncTask #3)

    Why is the GC causing a problem with the NDK part of the application?


    ORIGINAL PROBLEM

    What am I doing?

    I am developing an application that streams a live video feed from a webcam and enables the user to pan and tilt the remote camera. I am using the FFmpeg library, built with the NDK, to achieve smooth playback with little delay.

    I am using the FFmpeg library to connect to the video stream. The NDK part then creates a bitmap, does the image processing and renders frames onto the SurfaceView videoSurfaceView object, which is located in the Android activity (Java part).

    To move the webcam I created a separate class - public class CameraMover implements Runnable {/**/}. This class runs in a separate thread that connects to the remote camera through sockets and handles tasks related ONLY to pan-tilt movement.

    Next, in the main activity I created a touch listener:

    videoSurfaceView.setOnTouchListener(new View.OnTouchListener() {/**/
        cameraMover.setPanTilt(some parameters);
    /**/});

    which reads the user’s finger movements and sends commands to the camera.

    All tasks - moving the camera around, the touch interface and video playback - work perfectly when one of the others is disabled, i.e. when I disable the ability to move the camera, I can watch the video stream and register touch events till the end of time (or at least of the battery). The problem occurs only when the tasks are configured to work simultaneously.

    I am unable to find steps to reproduce the problem. It just happens, but only after the user touches the screen to move the camera. It can be 15 seconds after the first interaction, but sometimes it takes the app 10 or more minutes to crash. Usually it is something around a minute.

    What have I done to fix it?

    • I tried to display millions of logs in logcat to find the error, but the last log was always different.
    • I created a transparent surface, put it over the videoSurfaceView and assigned the touch listener to it. It all ended in the same error.
    • As I mentioned before, I turned off some functionalities to find which one produces the error, but it appears that the error occurs only when everything is running simultaneously.

    Types of the error

    Almost every time the error looks like this:

    A/libc(11528): Fatal signal 11 (SIGSEGV), code 1, fault addr 0x9aa9f00c in tid 11637 (AsyncTask #4)

    The difference between two occurrences of the error is the number right after libc, the addr value and the tid number. Rarely, the AsyncTask number varies - I received #1 a couple of times but was unable to reproduce it.

    Question

    How can I avoid this error? What could be its source?