
Media (1)
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (57)
-
General document management
13 May 2011
MediaSPIP never modifies the original document that is uploaded.
For each uploaded document, it performs two successive operations: it creates an additional version that can easily be viewed online, while keeping the original available for download in case it cannot be read in a web browser; and it retrieves the original document's metadata to describe the file textually.
The tables below explain what MediaSPIP can do (...) -
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page. -
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (4615)
-
Revision 36037: make sure the ffmpeg_movie class is available, otherwise this is not much use ...
9 March 2010, by kent1@… — Log
Make sure the ffmpeg_movie class is available, otherwise it is not of much use.
-
Strategies for Reducing Bank Customer Acquisition Cost [2024]
24 September 2024, by Daniel Crough — Banking and Financial Services
Acquiring new customers is no small feat, regardless of the size of your team. The expenses of various marketing efforts tend to pile up fast, even more so when your business operates in a highly competitive industry like banking. At the same time, marketing budgets continue to shrink, dropping from an average of 9.1% of total company revenue in 2023 to 7.7% in 2024, prompting businesses in the financial services industry to figure out how to do more with less.
That brings us to bank customer acquisition cost (CAC) — a key business metric that can reveal quite a bit about your bank’s long-term profitability and potential for achieving sustainable growth.
This article will cover the ins and outs of bank customer acquisition costs and share actionable tips and strategies you can implement to reduce CAC.
What is customer acquisition cost in banking?
The global market volume of neobanks — fintech companies and digital banking platforms, often referred to as “challenger banks” — was estimated at $4.96 trillion in 2023. It’s expected to continue growing at a compound annual growth rate (CAGR) of 13.15% in the coming years, potentially reaching $10.44 trillion by 2028.
That’s enough of an indicator that the financial services industry is now a highly competitive landscape where companies are often competing for the attention of a relatively limited audience.
Plus, several app-only banks based in Europe have made significant progress in attracting new customers to their financial products.
Unsurprisingly, this flurry of competition is putting upward pressure on customer acquisition and retention costs across the banking sector.
Customer acquisition cost (CAC) — the sum of all costs and resources related to acquiring an additional customer — is one of the key business metrics to keep an eye on when trying to maximise your return on investment (ROI) and profitability, especially if your company operates in the banking industry.
Here's the basic formula you can use to calculate the cost of acquisition in banking:
Customer Acquisition Cost (CAC) = Total Amount Spent (TS) / Total New Customers Acquired (TNC)
In essence, it requires you to divide the total cost of acquiring consumers — including sales and marketing expenses — by the total number of new customers your company has gained within a specific timeframe.
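For illustration, here is a minimal sketch of that calculation in TypeScript; the figures are hypothetical and only meant to show the arithmetic.

// Hypothetical quarterly figures, for illustration only.
const totalSpend = 1_200_000;   // total sales and marketing spend for the quarter
const newCustomers = 4_000;     // customers acquired in the same quarter

const cac = totalSpend / newCustomers;
console.log(`CAC = ${cac} per new customer`); // CAC = 300 per new customer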
There's one thing you need to keep in mind:
The customer acquisition process involves more than just your marketing and sales departments.
While marketing and sales channels play a crucial role in this process, the list of expenses that may contribute to customer acquisition costs in banking goes well beyond that.
Here's a quick breakdown of the customer acquisition cost formula to show you which costs make up the total amount spent:
- All advertising and marketing costs, including traditional (direct mail, billboards, TV and print advertising) and digital channels (email, Google ads, social media and influencer marketing)
- Cost of outsourced marketing services, including any independent contractors involved in the process
- Salaries and commissions for the marketing team and sales representatives
- Software subscriptions, including marketing software and web analytics tools
- Other overhead and operational costs
And until you’ve taken all these expenses into account, you won’t be able to accurately estimate how much it actually costs you to attract potential customers.
Another thing to keep in mind is that there’s no universal definition of “good CAC.”
The average customer acquisition cost varies across different industries and business models. That said, you can generally expect a higher-than-average CAC in highly competitive sectors — namely, the financial, manufacturing and real estate industries.
Importance of tracking customer acquisition cost in banking
Customer acquisition costs are an important indicator of a banking business's potential growth and profitability. Monitoring this fundamental business metric can provide data-driven insights about your current bank customer acquisition strategy — and offers a few notable benefits:
- Measuring the performance and effectiveness of different channels and campaigns and making data-driven decisions regarding future marketing efforts
- Improving return on investment (ROI) by determining the most effective strategies for acquiring new customers
- Improving profitability by assessing the value per customer and improving profit margins
- Benchmarking against industry competitors to see where your business’s CAC stands compared to the banking industry average
At the risk of stating the obvious, acquiring new customers isn’t always easy. That’s true for many highly competitive industries — especially the banking sector, which is currently witnessing the rapid rise of digital disruptors.
Case in point, the fintech market alone is currently valued at $312.98 billion and is expected to reach $556.70 billion by 2030, following a CAGR of 14%.
However, strong competition is only one of the challenges banks face throughout the process of attracting potential customers.
Here are a few other things to keep in mind:
- Ethical business practices and strict compliance requirements when it comes to the privacy and security of customer data, including meeting data protection standards and ensuring regulatory compliance
- Lack of personalisation throughout the customer journey, which today’s customers view as a lack of understanding of — and even interest in — their needs and preferences
- Limited mobile banking capabilities, which further points to a failure to innovate and adapt — one of the leading risks that financial services may face
7 strategies for reducing bank customer acquisition costs
When working on optimising your banking customer acquisition strategy, the key thing to keep in mind is that there are two sides to improving CAC:
On the one hand, you have efforts to decrease the costs associated with acquiring a new customer — and on the other, you have the importance of attracting high-value customers.
1. Eliminate friction points in the customer onboarding process
One of the first things financial institutions should do is examine their existing digital onboarding process and look for friction points that might cause potential customers to drop off. After all, a streamlined onboarding process will minimise barriers to conversion, increasing the number of new customers acquired and improving overall customer satisfaction.
Keep in mind that, at the 30-day mark, finance mobile apps have an average user retention rate of 3%.
That says a lot about the importance of providing a frictionless onboarding experience as a retail bank or any other financial institution.
Granted, a single point of friction is rarely enough to cause customers to churn. It’s typically a combination of several factors — a lengthy sign-up process with complicated password requirements and time-consuming customer identification or poor customer service, for example — that occur during the key moments of the customer journey.
In order to keep tabs on customer experiences across different touchpoints and spot potential barriers in their journey, you’ll need a reliable source of data. Matomo’s Funnels report can show you exactly where your website visitors are dropping off.
2. Get more personalised with your marketing efforts
Generic experiences are rarely the way to go — especially when you’re contending for the attention of prospective customers in such a competitive sector.
Besides, 62% of people who made an online purchase within the last six months said that a brand would lose their loyalty after a non-personalised experience.
What's more shocking is that only a year earlier, that number stood at 45%.
When it comes to improving marketing efficiency and sales strategies, 94% of marketers agree that personalisation is key.
It’s evident that personalised marketing supported by behavioural segmentation can significantly improve conversion rates — and, most importantly, reduce acquisition costs.
Of course, it’s virtually impossible to deliver targeted, personalised marketing messaging without creating audience segments and detailed buyer personas. Matomo’s Segmentation feature can help by allowing you to split website visitors into smaller groups and get much-needed insights for behavioural segmentation.
3. Build an omnichannel marketing strategy
Customer expectations, behaviours and preferences are constantly evolving, making it crucial for financial services to adapt their customer acquisition strategies accordingly. Meeting prospective customers on their preferred channels is a big part of that.
The issue is that modern banking customers tend to move across different channels. That's one of the reasons why it's becoming increasingly difficult to deliver a unified experience throughout the entire customer journey and close the gap between digital and in-person customer interactions.
Omnichannel marketing gives you a way to keep up with customers' ever-evolving expectations.
Adopting this marketing strategy will allow you to meet customers where they are and deliver a seamless experience across a wide range of digital channels and touchpoints, leading to more exposure — and, ultimately, increasing the number of acquired customers.
Matomo can support your omnichannel efforts by providing accurate, unsampled data needed for cross-channel analytics and marketing attribution.
4. Work on your social media presence
Social networks are among the most popular — and successful — digital marketing channels, with millions (even billions, depending on the platform) of active users.
In fact, 89% of marketers report using Facebook as their main platform for social media marketing, while another 80% use Instagram to reach their target audience and promote their business.
And according to The State of Social Media in Banking 2023 report, nine out of ten banks (89%) consider social media important, while another 88% are active on their social media accounts.
That is to say, even traditionally conservative industries — like banking and finance — realise the crucial role of social media in promoting their services and engaging with customers on their preferred channels.
It’s an excellent way for businesses in the financial sector to gain exposure, drive traffic to their website and acquire new customers.
If you’re ready to improve social media visibility as part of your multichannel efforts, Matomo can help you track social media activity across 70 different platforms.
5. Shift the focus to customer loyalty and retention
Up until this point, the focus has mainly been on building new business relationships. However, one thing to keep in mind is that retaining existing customers is generally cheaper than investing in customer acquisition activities to attract new ones.
Of course, customer retention won’t directly impact your CAC. But what it can do is increase customer lifetime value, contributing to your company’s revenue and profits — which, in turn, can “balance out” your acquisition costs in the long run.
That's not to say that you should stop trying to bring in new clients; far from it.
However, focusing on increasing customer loyalty — namely, delivering excellent customer service and building lasting business relationships — could motivate satisfied customers to become brand advocates.
As this survey of customer satisfaction for leading banks in the UK has shown, when clients are satisfied with a bank’s products and services, they’re more likely to recommend it.
Positive word-of-mouth recommendations can be a powerful way to drive customer acquisition. You can leverage that by launching a customer referral program and incentivising loyal customers to refer new ones to your business.
6. A/B test different elements to find ones that work
We've already underlined the importance of understanding your audience; it's the foundation for optimising the customer journey and delivering targeted marketing efforts that will attract more customers.
Another proven method that can be used to refine your customer acquisition strategy is A/B or split testing.
It involves testing different versions of specific elements of your marketing content — such as language, CTAs and visuals — to determine the most effective combinations that resonate with your target audience.
Besides your marketing campaigns, you can also split test different variants of your website or mobile app to see which version gets more visitors to convert.
Matomo's A/B Testing feature can be of huge help here.
7. Track other relevant customer acquisition metrics
To better assess your company’s profitability, you’ll have to go beyond CAC and factor in other critical metrics — namely, customer lifetime value (CLTV), churn rate and return on investment (ROI).
Here are the most important KPIs you should monitor in addition to CAC:
- Customer lifetime value (CLTV), which represents the revenue generated by a single customer throughout the duration of their relationship with your company and is another crucial indicator of customer profitability
- Churn rate — the rate at which your company loses clients within a given timeframe — can indicate how well you’re retaining customers
- Return on investment (ROI) — the revenue generated by new clients compared to the initial costs of acquiring them — can help you identify the most effective customer acquisition channels
These metrics work hand in hand. There needs to be a balance between the revenue the customer generates over their lifetime and the costs related to attracting them.
Ideally, you should be aiming for lower CAC and customer churn and higher CLTV; that's usually a solid indicator of financial health and sustainable growth.
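As a rough sketch of how these metrics fit together, the snippet below computes a CLTV-to-CAC ratio from hypothetical figures; the numbers are illustrative, not benchmarks.

// Illustrative figures only.
const cac = 300;          // cost to acquire one customer
const cltv = 1_050;       // expected revenue from that customer over their lifetime
const annualChurn = 0.04; // share of customers lost per year

const ratio = cltv / cac; // 3.5
console.log(`CLTV:CAC = ${ratio.toFixed(1)}, annual churn = ${(annualChurn * 100).toFixed(0)}%`);
// A ratio comfortably above 1 alongside low churn suggests acquisition spend is paying off.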
Lower bank customer acquisition costs with Matomo
Acquiring new customers will require a lot of time and resources, regardless of the industry you’re working in — but can be even more challenging in the financial sector, where you have to adapt to the ever-changing customer expectations and demands.
The strategies outlined above — combined with a thorough understanding of your customer’s behaviours and preferences — can help you lower the cost of bank customer acquisition.
On that note, you can learn a lot about your customers through web analytics — and use those insights to support your customer acquisition process and ensure you’re delivering a seamless online banking experience.
If you need an alternative to Google Analytics that doesn’t rely on data sampling and ensures compliance with the strictest privacy regulations, all while being easy to use, choose Matomo — the go-to web analytics platform for more than 1 million websites around the globe.
CTA: Start your 21-day free trial today to see how Matomo's all-in-one solution can help you understand and attract new customers — all while respecting their privacy.
-
Open Media Developers Track at OVC 2011
11 October 2011, by silvia
The Open Video Conference that took place on 10-12 September was so overwhelming, I've still not been able to catch my breath! It was a dense three days for me, even though I only focused on the technology sessions of the conference and utterly missed out on all the policy and content discussions.
Roughly 60 people participated in the Open Media Software (OMS) developers track. This was an amazing group of people capable and willing to shape the future of video technology on the Web:
- HTML5 video developers from Apple, Google, Opera, and Mozilla (though we missed the NZ folks),
- codec developers from WebM, Xiph, and MPEG,
- Web video developers from YouTube, JWPlayer, Kaltura, VideoJS, PopcornJS, etc.,
- content publishers from Wikipedia, Internet Archive, YouTube, Netflix, etc.,
- open source tool developers from FFmpeg, gstreamer, flumotion, VideoLAN, PiTiVi, etc,
- and many more.
To provide a summary of all the discussions would be impossible, so I just want to share the key take-aways that I had from the main sessions.
WebRTC: Realtime Communications and HTML5
Tim Terriberry (Mozilla), Serge Lachapelle (Google) and Ethan Hugg (Cisco) moderated this session together (slides). There are activities both at the W3C and at the IETF – the IETF ones are supposed to focus on protocols, while the W3C ones focus on HTML5 extensions.
The current proposal of a PeerConnection API has been implemented in WebKit/Chrome as open source. It is expected that Firefox will have an add-on by Q1 next year. It enables video conferencing, including media capture, media encoding, signal processing (echo cancellation, etc.), secure transmission, and a data stream exchange.
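For orientation, here is a minimal sketch of setting up a call with the API as it was later standardised (RTCPeerConnection); the 2011 PeerConnection proposal discussed here differed in its details, and the signalling channel is left abstract.

// Sketch only: uses the standardised RTCPeerConnection API, not the 2011 proposal.
// sendToPeer stands in for whatever signalling mechanism the application chooses.
async function startCall(sendToPeer: (msg: string) => void): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.example.org" }] });

  // Capture local audio and video, and add the tracks to the connection.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  stream.getTracks().forEach(track => pc.addTrack(track, stream));

  // Forward ICE candidates over the signalling channel.
  pc.onicecandidate = e => { if (e.candidate) sendToPeer(JSON.stringify(e.candidate)); };

  // Create and send the session description (the "offer").
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeer(JSON.stringify(offer));
  return pc;
}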
Current discussions revolve around the signalling protocol and whether SIP needs to be required by the standard. The codec question is also under discussion, in particular whether to mandate VP8 and Opus, since transcoding gateways are not desirable. Another question is how to measure the quality of the connection and how to report errors so as to allow adaptation.
What always amazes me around RTC is the sheer number of specialised protocols that seem to be required to implement this. WebRTC does not disappoint: in fact, the question was asked whether there could be a lighter alternative than to re-use dozens of years of protocol development – is it over-engineered? Can desktop players connect to a WebRTC session?
We are already in a second or third revision of this part of the HTML5 specification and yet it seems the requirements are still being collected. I’m quietly confident that everything is done to make the lives of the Web developer easier, but it sure looks like a huge task.
The Missing Link : Flash to HTML5
Zohar Babin (Kaltura) and myself moderated this session and I must admit that this session was the biggest eye-opener for me amongst all the sessions. There was a large number of Flash developers present in the room and that was great, because sometimes we just don’t listen enough to lessons learnt in the past.
This session gave me one of those aha moments: in the form of the Flash appendBytes() API function.
The appendBytes() function allows a Flash developer to take a byteArray out of a connected video resource and do something with it – such as feed it to a video for display. When I heard that Web developers want that functionality for JavaScript and the video element, too, I instinctively rejected the idea, wondering why on earth a Web developer would want to touch encoded video bytes – why not leave that to the browser?
But as it turns out, this is actually a really powerful enabler of functionality. For example, you can use it to:
- display mid-roll video ads as part of the same video element,
- sequence playlists of videos into the same video element,
- implement DVR functionality (high-speed seeking),
- do mash-ups,
- do video editing,
- implement adaptive streaming.
This totally blew my mind and I am now completely supportive of having such a function in HTML5. Together with media fragment URIs you could even leave all the header download management for resources to the Web browser and just request time ranges from a video through an appendBytes() function. This would be easier on the Web developer than having to deal with byte ranges and making sure that appropriate decoding pipelines are set up.
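As a side note, this capability eventually shipped in browsers as the Media Source Extensions API; here is a rough sketch of the same "hand encoded bytes to the element" pattern using that later API (the codec string and chunk URLs are illustrative).

// Sketch using the later-standardised Media Source Extensions, not the Flash API itself.
async function playFromChunks(video: HTMLVideoElement, chunkUrls: string[]) {
  const mediaSource = new MediaSource();
  video.src = URL.createObjectURL(mediaSource);
  await new Promise<void>(resolve =>
    mediaSource.addEventListener("sourceopen", () => resolve(), { once: true }));

  const buffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8, vorbis"');
  for (const url of chunkUrls) {
    const bytes = new Uint8Array(await (await fetch(url)).arrayBuffer());
    buffer.appendBuffer(bytes);                        // feed encoded bytes to the decoder
    await new Promise(resolve =>
      buffer.addEventListener("updateend", resolve, { once: true }));
  }
  mediaSource.endOfStream();
}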
Standards for Video Accessibility
Philip Jagenstedt (Opera) and myself moderated this session. We focused on the HTML5 track element and the WebVTT file format. Many issues were identified that will still require work.
One particular topic was to find a standard means of rendering the UI for caption, subtitle, and description selection. For example, what icons should be used to indicate that subtitles or captions are available? While this is not part of the HTML5 specification, it's still important to get this right across browsers, since otherwise users will get confused by diverging interfaces.
Chaptering was discussed and a particular need to allow URLs to directly point at chapters was expressed. I suggested the use of named Media Fragment URLs.
The use of WebVTT for descriptions for the blind was also discussed. A suggestion was made to use the voice tag <v> to allow for “styling” (i.e. selection) of the screen reader voice.
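As a small illustration, the sketch below builds a WebVTT descriptions track that uses the <v> voice tag and attaches it to a video element; the cue content and labels are made up.

// Illustrative only: a tiny WebVTT payload using the <v> voice tag for the description voice.
const vtt = `WEBVTT

00:00:05.000 --> 00:00:10.000
<v Describer>The presenter walks to the whiteboard and sketches the pipeline.
`;

function addDescriptionTrack(video: HTMLVideoElement): void {
  const track = document.createElement("track");
  track.kind = "descriptions";
  track.label = "Audio descriptions";
  track.srclang = "en";
  track.src = URL.createObjectURL(new Blob([vtt], { type: "text/vtt" }));
  video.appendChild(track);
}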
Finally, multitrack audio or video resources were also discussed and the @mediagroup attribute was explained. A question about how to identify the language used in different alternative dubs was asked. This is an issue because @srclang is not on audio or video, only on text, so it’s a missing feature for the multitrack API.
Beyond this session, there was also a breakout session on WebVTT and the track element. As a consequence, a number of bugs were registered in the W3C bug tracker.
WebM: Testing, Metrics and New Features
This session was moderated by John Luther and John Koleszar, both of the WebM Project. They started off with a presentation on current work on WebM, which includes quality testing and improvements, and encoder speed improvement. Then they moved on to questions about how to involve the community more.
The community criticised the scarcity of communication about what is happening around WebM. More sharing of information was requested, including a move to open Google+ hangouts instead of Google-internal video conferences. More use of the public bug tracker can also help include the community better.
Another pain point for the community was that code is introduced and removed without much feedback. It was requested that a peer review process be introduced, and that example code snippets be published when new features are announced so others can replicate the claims.
This all indicates to me that the WebM project is becoming increasingly open, but that there is still a lot to learn.
Standards for HTTP Adaptive Streaming
This session was moderated by Frank Galligan and Aaron Colwell (Google), and Mark Watson (Netflix).
Mark started off by giving us an introduction to MPEG DASH, the MPEG file format for HTTP adaptive streaming. MPEG has just finalized the format and he was able to show us some examples. DASH is XML-based and thus rather verbose. It covers all eventualities of which parameters could be switched during transmission, which makes it very broad. These include trick modes (e.g. for fast forwarding), 3D, multi-view and multitrack content.
MPEG have defined profiles – one for live streaming which requires chunking of the files on the server, and one for on-demand which requires keyframe alignment of the files. There are clear specifications for how to do these with MPEG. Such profiles would need to be created for WebM and Ogg Theora, too, to make DASH universally applicable.
Further, the Web case needs a more restrictive adaptation approach, since the video element’s API is already accounting for some of the features that DASH provides for desktop applications. So, a Web-specific profile of DASH would be required.
Then Aaron introduced us to the MediaSource API and in particular the webkitSourceAppend() extension that he has been experimenting with. It is essentially an implementation of the appendBytes() function of Flash, which the Web developers had been asking for just a few sessions earlier. This was likely the biggest announcement of OVC, albeit a quiet and technically focused one.
Aaron explained that he had been trying to find a way to implement HTTP adaptive streaming in WebKit in a way that could be standardised. While doing so, he also came across other requirements around such chunked video handling, in particular around dynamic ad insertion, live streaming, DVR functionality (fast forward), constrained video editing, and mashups. While trying to sort out all these requirements, it became clear that it would be very difficult to implement strategies for stream switching, buffering and delivery of video chunks into the browser when so many different and likely contradictory requirements exist. Also, once an approach is implemented and specified for the browser, it becomes very difficult to innovate on it.
Instead, the easiest way to solve it right now and learn about what would be necessary to implement into the browser would be to actually allow Web developers to queue up a chunk of encoded video into a video element for decoding and display. Thus, the webkitSourceAppend() function was born (specification).
The proposed extension to the HTMLMediaElement is as follows:

partial interface HTMLMediaElement {
  // URL passed to src attribute to enable the media source logic.
  readonly attribute [URL] DOMString webkitMediaSourceURL;

  bool webkitSourceAppend(in Uint8Array data);

  // end of stream status codes.
  const unsigned short EOS_NO_ERROR = 0;
  const unsigned short EOS_NETWORK_ERR = 1;
  const unsigned short EOS_DECODE_ERR = 2;

  void webkitSourceEndOfStream(in unsigned short status);

  // states
  const unsigned short SOURCE_CLOSED = 0;
  const unsigned short SOURCE_OPEN = 1;
  const unsigned short SOURCE_ENDED = 2;

  readonly attribute unsigned short webkitSourceState;
};

The code is already checked into WebKit, but commented out behind a command-line compiler flag.
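A rough usage sketch built on the interface quoted above (chunk URLs are hypothetical; this experimental extension was later superseded by the standardised Media Source Extensions API):

// Sketch only: relies on the experimental webkitSource* extension shown above.
async function streamChunks(video: any /* HTMLMediaElement with the extension */,
                            chunkUrls: string[]) {
  video.src = video.webkitMediaSourceURL;            // enable the media source logic
  for (const url of chunkUrls) {
    const data = new Uint8Array(await (await fetch(url)).arrayBuffer());
    video.webkitSourceAppend(data);                  // hand encoded bytes to the decoder
  }
  video.webkitSourceEndOfStream(0 /* EOS_NO_ERROR */);
}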
Frank then stepped forward to show how webkitSourceAppend() can be used to implement HTTP adaptive streaming. His example uses WebM – there are no examples with MPEG or Ogg yet.
The chunks in Frank's demo were 150 video frames (6.25 s) long, with 5 s long audio chunks. Stream switching only switched video, since audio data uses much less bandwidth and is more important to retain at high quality. Switching was done on multiplexed files.
Every chunk requires an XHR range request – this could be optimised if the connections were kept open per adaptation. Seeking works, too, but since decoding requires download of a whole chunk, seeking latency is determined by the time it takes to download and decode that chunk.
Similar to DASH, when using this approach for live streaming, the server has to produce one file per chunk, since byte range requests are not possible on a continuously growing file.
Frank did not use DASH as the manifest format for his HTTP adaptive streaming demo, but instead used a hacked-up custom XML format. It would be possible to use JSON or any other format, too.
After this session, I was actually completely blown away by the possibilities that such a simple API extension allows. If I wasn't sold on the idea of an appendBytes() function in the earlier session, this one completely changed my mind. While I still believe we need to standardise an HTTP adaptive streaming file format that all browsers will support for all codecs, and I still believe that a native implementation supporting such a file format is necessary, I also believe that this approach of webkitSourceAppend() is what HTML needs – and maybe it needs it faster than native HTTP adaptive streaming support.
Standards for Browser Video Playback Metrics
This session was moderated by Zachary Ozer and Pablo Schklowsky (JWPlayer). Their motivation for the topic was, in fact, also HTTP adaptive streaming. Once you leave the decisions about when to do stream switching to JavaScript (through a function such as webkitSourceAppend()), you have to expose stream metrics to the JS developer so they can make informed decisions. The other use case is, of course, monitoring the quality of video delivery for reporting to the provider, who may then decide to change their delivery environment.
The discussion found that we really care about metrics on three different levels:
- measuring the network performance (bandwidth)
- measuring the decoding pipeline performance
- measuring the display quality
In the end, it seemed that work previously done by Steve Lacey on a proposal for video metrics was generally acceptable, except for the playbackJitter metric, which may be too aggregated to mean much.
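For context, here is a hedged sketch of reading decoding-pipeline metrics with the API that eventually shipped in browsers (HTMLVideoElement.getVideoPlaybackQuality()); it is not the proposal discussed in the session, just an illustration of the kind of data involved.

// Sketch using the later-standardised playback quality API.
function logPlaybackQuality(video: HTMLVideoElement): void {
  const q = video.getVideoPlaybackQuality();
  const dropped = q.totalVideoFrames > 0 ? q.droppedVideoFrames / q.totalVideoFrames : 0;
  console.log(`frames decoded: ${q.totalVideoFrames}, dropped: ${q.droppedVideoFrames} ` +
              `(${(dropped * 100).toFixed(1)}%)`);
}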
Device Inputs / A/V in the Browser
I didn’t actually attend this session held by Anant Narayanan (Mozilla), but from what I heard, the discussion focused on how to manage permission of access to video camera, microphone and screen, e.g. when multiple applications (tabs) want access or when the same site wants access in a different session. This may apply to real-time communication with screen sharing, but also to photo sharing, video upload, or canvas access to devices e.g. for time lapse photography.
Open Video Editors
This was another session that I wasn’t able to attend, but I believe the creation of good open source video editing software and similar video creation software is really crucial to giving video a broader user appeal.
Jeff Fortin (PiTiVi) moderated this session and I was fascinated to later see his analysis of the lifecycle of open source video editors. It is shocking to see how many people/projects have tried to create an open source video editor and how many have stopped their project. It is likely that the creation of a video editor is such a complex challenge that it requires a larger and more committed open source project – single people will just run out of steam too quickly. This may be comparable to the creation of a Web browser (see the size of the Mozilla project) or a text processing system (see the size of the OpenOffice project).
Jeff also mentioned the need to create open video editor standards around playlist file formats etc. Possibly the Open Video Alliance could help. In any case, something has to be done in this space – maybe this would be a good topic to focus next year's OVC on?
Monday’s Breakout Groups
The conference ended officially on Sunday night, but we had a third day of discussions/hackday at the wonderful New York Law School venue. We had collected issues of interest during the two previous days and organised the breakout groups in the morning (Schedule).
In the Content Protection/DRM session, Mark Watson from Netflix explained how their API works and that they believe all we need in browsers is a secure way to exchange keys and an indicator of which protection scheme is used – the actual protection scheme would not be implemented by the browser, but provided by the underlying system (media framework/operating system). I think that until somebody actually implements something in a browser fork and shows how this can be done, we won't have much progress. In my understanding, we may also need to disable part of the video API for encrypted content, because otherwise you can always e.g. grab frames from the video element into a canvas and save them from there.
In the Playlists and Gapless Playback session, there was massive brainstorming about what new cool things can be done with the video element in browsers if playback between snippets can be made seamless. Further discussions were about standard playlist file formats (such as XSPF, MRSS or M3U), media fragment URIs in playlists for mashups, and the need to expose track metadata for HTML5 media elements.
What more can I say? It was an amazing three days, and the complexity of the problems that we're dealing with is a tribute to how far HTML5 and open video have already come, and exciting news for the kind of applications that will be possible (both professional and community) once we've solved the problems of today. It will be exciting to see what progress we will have made by next year's conference.
Thanks go to Google for sponsoring my trip to OVC.
UPDATE: We actually have a mailing list for open media developers who are interested in these and similar topics – do join at http://lists.annodex.net/cgi-bin/mailman/listinfo/foms.