
Intel’s reported exit from connected TVs: long live STBs

I feel like I've had the discussion on whether the future of TV includes STBs a thousand times at least. I seem to conclude yes about half the time, then no the other half.

I joined geekdom in the late eighties, so my technological worldview was built around the Wintel duopoly. Remember when Microsoft brought out Windows 3.1 (the first version that really worked)? Most PCs needed to be replaced. Then, each time Intel came up with a new chip twice as fast as the one from 18 months before, software vendors like Microsoft, Adobe and the game developers would quickly bring out "great" new features to soak up all that power.

Asked whether the TV has an STB in its future, my answers always refer back to those simple old Wintel days: as long as companies like NDS can come up with hungry UIs that require ever more processing power under the hood, then yes. Indeed, NDS's latest Snowflake UI is reputed to be just such a power-hungry killer application.

Upgrading the TV's processing power has always been harder than swapping out an STB. Traditional business models usually have it that the 500€-1000€ TV set belongs to the subscriber, whereas the 50€-200€ STB belongs to the operator. In the days before connected TV and IP, the shelf life of a box was about 7 years. If the accelerated rate of change means this has to be shortened to, say, 3 years, so be it. Falling hardware prices will absorb a good part of the extra cost, and the business model can take on the extra 10-15€ a year that shortened amortization adds.
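
As a back-of-envelope check on that 10-15€ figure, here is the arithmetic with assumed prices (my numbers for illustration, not operator data): a box that gets a bit cheaper each generation, written off over 3 years instead of 7.

```python
# Illustrative STB amortization - assumed prices, not operator figures.
old_cost, old_life = 120, 7   # euros per box, years of shelf life
new_cost, new_life = 90, 3    # cheaper hardware, shorter life

old_annual = old_cost / old_life   # ~17 euros/year
new_annual = new_cost / new_life   # 30 euros/year
print(f"extra cost: {new_annual - old_annual:.0f} euros/year")  # ~13 euros/year
```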

So we had a changing world that I could make some kind of sense of with my Wintel analogy. But then Intel goes and exits connected TVs. How does that fit into the picture?

Despite their vested interest in selling to every part of the value chain, Intel have basically told the market that they believe their future is in the STB and companion devices, not the TV itself. The longer shelf life of TVs could be the culprit here. I'll be looking out for the roadmaps of other silicon vendors to see if they agree. But Intel carries so much weight that their analysis will affect the market even if they are wrong in the long term.

Thank you, Intel, for helping me treat my split-personality disorder. I am no longer wavering and clearly do see an STB in the TV's future, at least for the next couple of years. Other stakeholders were already pointing in this direction, like Microsoft with their big Xbox push in the TV space and Apple with their non-entry into TV sets despite persistent rumors.

Jamie Beach of IPTV News recently pointed out to me how Google's Android strategy seems to be heading towards some kind of convergence with the TV. In the short term, that will probably mean that Google's role in the TV is played out on Android companion devices only. It'll still be a couple of years before they get the lean-back STB or TV OS sorted.

If anybody at Samsung, Philips, LG, Toshiba, Sony or any other set maker was worried about Intel's move, they needn't be. Their connected TV strategies may need to be scaled down, but as soon as I have some spare cash I can now go and buy a new TV based on its screen qualities alone, and stop worrying about its OS, processing power or app store... I'll worry about the connectedness of the TV set itself when I next upgrade, in 2 to 3 years.


Have we finally reached the inexorable end of unlimited data plans? Amdocs thinks we have.

During a business lunch in 2002 I had my first real conversation with my boss's boss's boss at France Telecom. Joining the table late, I found the discussion already heated. Jean-Jacques Damlamian, the longest-standing board member the company has ever had, was then the acting CTO (I say acting because there was no such title; JJD, as he was known internally, is now retired). He was adamant: "We have just made the biggest mistake in the telecoms industry's history since Graham Bell." France Telecom had launched its first commercial ADSL packages for private subscribers just a few years previously. Damlamian was referring to the fact that these packages were limited only in speed, and unlimited in volume of data. It was already becoming clear that some users' requirements were thousands of times higher than the average user's, and France Telecom was budgeting millions in capital expenditure for just a handful of subscribers.

Last week, David Amzallag, Amdocs' new CTO, explained to me how he believes his company's vision might contribute to getting the industry out of the fatal trap it set for itself over a decade ago. I met him in the run-up to the 2011 Broadband World Forum in Paris.

David had just left a four-year stint as BT's chief scientist. Those years spent on capacity planning convinced him that even if networks get smarter and grow in capacity as fast as possible, current usage trends will outstrip our best efforts, leading to major bottlenecks and frustrated subscribers. David went as far as to say that tier-1 and tier-2 operators are in serious trouble as their bread and butter gets commoditized.

MPLS, one of those technologies that was supposed to make capacity management much more flexible, has delivered only part of its promise so far; capacity is running out anyway, however flexibly it's managed. Amdocs see a metered future.

So, based on the assumption that demand is going to exceed capacity, Amdocs commissioned a study on the future of data pricing from Heavy Reading. Amdocs' core business is where operations support systems (OSS) and business support systems (BSS) meet customer experience, so they have a vested interest in the outcome. Reassuringly, the study concludes that operators believe users are willing to pay more for more and will accept some kind of flexibility: over 80% of the operators interviewed said their future plans include data plans shared between several devices (e.g. tablet and smartphone), and over 65% said their future plans include data plans shared between several family members. Heavy Reading appropriately interviewed operators, because they are Amdocs' customers. The research would carry more weight if it also included the opinions of real subscribers.

Amdocs don't believe the problem will be solved with sponsored connectivity, where, for example, Facebook pays the ISP a few dollars to carry its traffic. David went on to say that the only way forward is for the network's OSS to be better linked to the business issues.

He described several use cases built around an overall data quota for the whole family across many devices. Parents might be prepared to pay a premium to be assured that, during their single daily leisure hour, bandwidth is guaranteed. Children could swap their leftover bandwidth allowance amongst themselves. In the more tech-savvy families, the hard-core gamer might even give up some bandwidth in exchange for the better latency that the bandwidth-hungry movie-buff sibling doesn't need. Towards the end of the month, if the operator sends a warning that data limits will probably be exceeded, the family could decide either to extend existing plans for a premium or to enforce lower usage until the next month – perhaps watching a few older movies from the home NAS instead of streaming from the cloud.
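
To make the quota-sharing idea concrete, here is a toy model of such a family plan – my own sketch of the use cases David described, not Amdocs' design; all names and numbers are invented.

```python
# Toy shared family data plan - my sketch, not Amdocs' product.
class FamilyPlan:
    def __init__(self, quota_gb):
        self.quota_gb = quota_gb   # monthly pool shared across members and devices
        self.allowances = {}       # member -> GB carved out of the pool

    def remaining(self):
        return self.quota_gb - sum(self.allowances.values())

    def allocate(self, member, gb):
        assert gb <= self.remaining(), "pool exhausted - extend plan or throttle"
        self.allowances[member] = self.allowances.get(member, 0) + gb

    def swap(self, donor, recipient, gb):
        # e.g. a child hands leftover allowance to a sibling
        assert self.allowances.get(donor, 0) >= gb
        self.allowances[donor] -= gb
        self.allowances[recipient] = self.allowances.get(recipient, 0) + gb

plan = FamilyPlan(quota_gb=50)
plan.allocate("parents", 20)         # premium slice for the daily leisure hour
plan.allocate("gamer", 15)
plan.swap("gamer", "movie_buff", 5)  # leftover gigabytes change hands
print(plan.remaining())              # 15 GB still unallocated
```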

For other customer segments, like single adults, Amdocs sees people wanting to fulfil unique needs at specific times through different devices. Subscribers will be prepared to pay for this, and data plans will need to be so flexible that, David says, the real name of the game will be personalization. He used the expressions "Quality of Service on Demand" and "dynamic customer profiling" to describe such cases.

These quota-based premium packages could co-exist with unlimited ones, but Amzallag insists the latter would suffer from much lower bandwidth. He said that net neutrality wouldn't be completely gone, as prioritization isn't based on packet contents but on whether the customer is paying a premium or not. Of course, net neutrality activists would disagree, saying that the corollary of prioritization is de-prioritization, which means blocking if congestion is too bad.
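
The distinction matters: prioritizing by customer tier can be done without ever inspecting a packet's payload. A minimal sketch of the idea (my illustration, not any operator's actual traffic management):

```python
# Toy tier-based (not content-based) prioritization - illustrative only.
import heapq

queue = []   # (priority, arrival order, packet) - lowest tuple served first
counter = 0

def enqueue(packet, customer_tier):
    """Priority depends only on who is paying, never on what the packet contains."""
    global counter
    priority = 0 if customer_tier == "premium" else 1
    heapq.heappush(queue, (priority, counter, packet))
    counter += 1

enqueue("video chunk A", "unlimited")   # best effort
enqueue("video chunk B", "premium")     # jumps the queue under congestion
print(heapq.heappop(queue)[2])          # -> "video chunk B"
```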

Other detractors can forcefully argue that there will be no turning back from all-you-can-eat data plans. But the Amdocs vision addresses that pretty squarely, saying unlimited data can co-exist with quota-based plans. My remaining doubt is a central one: do subscribers want this? The "Global Tribes" consumer research, conducted by Coleman Parkes, that Amdocs published earlier this year addresses the question "are consumers prepared to pay more for more?" For most segments and markets it concludes, reassuringly, that yes they are. However, having witnessed first-hand how incredibly different markets around the world are, I avoid using patterns from one market to make deductions about another. UK and US customers are clearly being weaned away from unlimited plans. My gut feeling is that subscribers in the rest of Europe, and in places like Russia that benefit from fierce ISP competition, might be harder to transition away from unlimited plans. Despite its recent problems, the Netflix model has proven it can fly; limited data packages could shoot that model straight out of the sky.

The context of the data plan debate will probably evolve rapidly as the boundary between the fixed-line and mobile broadband markets gets fuzzier all the time. Quota-based plans are already becoming the norm on mobile broadband. My daughter left home for a tiny one-room flat in Paris last month, and in looking for an ISP on a very tight budget we concluded that a mobile broadband subscription might be best – she only uses Facebook and email regularly and will keep away from streaming for now (which could incite illicit downloading when she's back home, but that's another story...). So I look forward to talking with David again, maybe at next year's Broadband World Forum, to see how things have panned out.

After writing this, I found out on Twitter that fierce competition around the iPhone 4S launch is pushing the big US operators back towards unlimited... It looks like the market may not be ready after all. Exciting yo-yo times.


IBC 2011 write up on Videonet

This blog entry was first published on Videonet.

Back in 2001 I went to the "Convergence NAB" in Las Vegas, where the Internet and TV were supposed to be merging. IBC 2011 wasn't dubbed "convergence", because that word is out of fashion ten years on; yet convergence is what all the IBC 2011 demos were really about.

Converging the Web and the TV, converging open and closed models, converging live and on-demand content, converging lean-back media with social media… Multiscreen, OTT and Adaptive Bit Rate were all the rage at this year’s IBC.

3D momentum was so strong last year that some things were still happening 12 months on, out of sheer inertia. If the decline in 3D interest year-on-year is anything to go by, next year will be in negative territory for 3D. The only prominent 3D news was the award James Cameron picked up, but awards are always about past achievement, not current trends. As Panasonic pointed out to me, the hype cycle is now in the trough so something real might come out of the next 3D decade.

In many booths, connected TV was used in demos to illustrate points, but the devices were not promoted in their own right, except of course on the set-maker stands. Hey, maybe this is reminding us that a device is a device is a device. I saw lots of industrial-strength demos, none with a real wow effect. I'm told, however, by reliable sources (John Moulding in particular) that the NDS 'Surfaces' demo was the one to see to be "blown away". See his write-up here.

I spoke on the trade floor to Harmonic, which is covered in this first report along with the 3D discussion with Panasonic. In the remaining two parts of this series I will cover the discussions I had with Verimatrix, the in-depth demo of Irdeto's multiscreen offering and briefer conversations with Jinni, SecureMedia, Cognik and Awox, as well as the demos I saw from the aisle at PeerTV, EchoStar and Telia Sonera.

Harmonic

Harmonic’s Thierry Fautier met me on their huge, spanking new booth loaded with demos. As with Jinni, all vendors complain that the sexiest demos are confidential because operators don’t want to share what they’re working on. But there were two novelties of real significance for IBC 2011 on Harmonic’s stand.

The Electra 9000 is a multi-screen-output encoder capable of delivering 2-4 channels, depending on the number of required profiles (i.e. the different screens targeted, and the number of bitrates per screen for Adaptive Bit Rate encoding). The 9000 is based on the previous Electra 8000, the main novelty being new software. This product illustrates the industry trend towards converged headends, which should make multiscreen deployment both easier and more cost-effective for operators. Fautier went on to comment that it's impressive that connected TV has already become a reality despite MPEG-DASH not being quite ready yet. In 2012, when this and some other industry initiatives come through, there should be a further acceleration.
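
To see why channel count depends on profiles, consider a hypothetical multiscreen ladder (illustrative figures of my own, not Electra 9000 specifications):

```python
# Hypothetical ABR encoding ladder - illustrative, not Harmonic's specs.
profiles = {
    "tv_1080p":    [6.0, 4.5, 3.0],   # Mbps rungs for the big screen
    "tablet_720p": [2.5, 1.8, 1.2],
    "phone_360p":  [0.8, 0.5, 0.3],
}

outputs_per_channel = sum(len(rungs) for rungs in profiles.values())
print(outputs_per_channel)  # 9 parallel encodes per channel
# The more screens and bitrate rungs you target, the fewer channels
# a single encoder chassis can carry - hence "2-4 channels".
```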

The other key novelty this year is that Harmonic has now fully jumped on the software encoding bandwagon. The complete product suite is now available as software modules. For the moment, Harmonic servers are still needed for some elements, but commodity servers will soon be able to run the whole suite; there will soon be OTT headends running on commodity blade servers. This will bring down the total cost of ownership, especially for operators who already have server farms.

The Harmonic monitoring features (described in my IBC 2010 report) are now available as hosted services. This illustrates the same trend as at Nagra and Ericsson, and is a step towards a fully integrated customer care centre. I asked Thierry Fautier about the mezzanine-in-the-network concept that rival Envivio is pushing. Thierry answered that an IP backbone with a mezzanine format and decentralized functionality still sounds a bit like science fiction. It looks like I'll have to visit Envivio to find out. Fautier asked a savvy question on this topic: what do I do with my Akamai relationship if there's an unprotected mezzanine format in the network?

I wrapped up my short Harmonic visit with a demo of live HLS on a Samsung Connected TV. The demo was as unimpressive (basically, watching TV on a TV) as it was significant. Industry wisdom has it that OTT services will be enabled by devices like connected TVs, and that the distribution of live TV feeds over unmanaged networks will be enabled by Adaptive Bit Rate technologies such as HLS. Barry Flynn posted a nice write-up of his chat with Thierry about the future AVC codec that will be available in 2014. It's on the connectedtv web site here.
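
For readers who haven't peeked inside HLS: the adaptive trick lives in a plain-text master playlist that lists one variant stream per bitrate. A minimal sketch follows, with invented URIs and bitrates (not the demo's actual setup):

```python
# Generate a minimal HLS master playlist - illustrative values only.
variants = [
    (800_000,   "low/index.m3u8"),    # bits per second, variant playlist URI
    (2_500_000, "mid/index.m3u8"),
    (6_000_000, "high/index.m3u8"),
]

lines = ["#EXTM3U"]
for bandwidth, uri in variants:
    lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth}")
    lines.append(uri)
print("\n".join(lines))
# The client starts on one rung and hops between variants as throughput
# changes - that is all "adaptive bit rate" means at the playlist level.
```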

Panasonic

This year I was courted by Panasonic's PR team (only by phone, I promise), so I set up a meeting with Markus Naegele on their booth. The key 2011 message from Panasonic was "Dreams, Ideas, Reality": the stuff we've been dreaming about for the last few years actually works in affordable products now. I stuck with domains I understand, so we talked just about 3D and new codecs like AVC-Ultra.

I have blogged about my scepticism for 3D in the living room before (see here for example), so we started out exploring Panasonic's take on the issue. Naegele pointed out that until recently, a key difficulty has always been getting together all the different engineers, from all parts of the production cycle, that you need to get 3D right. Panasonic believe the 3DA1, the first all-in-one 3D camcorder, which they brought out last year, helps solve this issue. This year at NAB they announced the 3DP1, which can deliver five different 3D formats, including AVC at 100 Mbps per channel. This camera debuted during the French Open at Roland Garros in a joint project with the French Tennis Federation (FFT), Eurosport and the production company Alphacam. What a challenge to combine all the required tech systems for a multi-day event! Markus told me that in the end the trophy ceremony was too crowded to get a whole rig on site; it could only be covered by a single cameraman – hence the key advantage of the new Panasonic integrated cameras.

If Roland Garros was a 3D production challenge, the 2012 London Olympics will be fun for the Panasonic teams! Some sporting events will be covered in 3D, as will the opening and closing ceremonies, and Panasonic is doing it all. I asked Markus whether sports are really appropriate for 3D, as they are so unpredictable (if you plan for full 3D effect during a goal and there are ten, people will get headaches; if there are none, they will be frustrated). Naegele astutely answered that it's more about the added emotion one gets from the more immersive 3D experience.

Most sports coverage addresses mainly fans, so if a camera set inside a goal doesn't get used, so be it – the frustration at the lack of goals will only be mirrored by the frustration at the absence of any 3D effect there. Hey, that's what sport is all about, isn't it? Markus did agree, though, that Avatar had set expectations so high that we're not yet out of the hype phase, and thus some frustration is inevitable. The whole point of Panasonic's current 3D drive is to show just how much the industry has already caught up with those expectations: Markus believes we are now in the realistic phase. Indeed, on the display front, 3D is now getting standardized.

Panasonic believes that although 3D without glasses will be part of the future, there's still work to do. I was shown a very light dual-lens 3D camera that was announced at IFA. With the introduction of the HDC-Z10000, Panasonic is now addressing everything from the high end of the consumer market up to the entry level of the pro market.

On the codec front, Panasonic were showing off their latest wares. AVC-Intra is the current codec family name, and class-50 and class-100 are currently in use (the number referring to bandwidth in Mbps). IBC saw the launch of class-200 and class-400. AVC-Ultra is Panasonic's new codec initiative that will cater, amongst other things, for 1080/50p and 4K formats.

But the main novelty, answering the current multi-screen bonanza, is that the codec can scale down as well as up, so there will also be class-50 and class-25 versions. Also, to answer customer demand, a new long-GOP version is in the works. To illustrate the point of this new long-GOP format, consider 50 Mbps MPEG-2 using 8 bits per sample: the new AVC long-GOP version, despite increasing samples to 10 bits, will fit within half the bandwidth, at just 25 Mbps.
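
Put as back-of-envelope arithmetic (my calculation from the figures above, not Panasonic's), that claim implies the codec itself is working roughly two and a half times harder:

```python
# Rough arithmetic behind the MPEG-2 vs AVC long-GOP comparison above.
mpeg2_rate, mpeg2_bits = 50.0, 8    # Mbps, bits per sample
avc_rate, avc_bits = 25.0, 10

# AVC carries 10/8 = 1.25x the raw sample data in half the bandwidth,
# so the effective compression gain is 2 x 1.25 = 2.5x.
gain = (mpeg2_rate / avc_rate) * (avc_bits / mpeg2_bits)
print(f"effective compression gain = {gain:.1f}x")
```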


The Connected Home is almost here, not quite

The exhibition
For once I got to see all that was on show in the booths, as there were only four to visit. This was surprising, as the show is already in its third year, but it also gave a feeling of being part of a closely guarded secret.

Motorola was showing their 4HOME acquisition from October last year: a software platform for service providers, including energy companies. The service provides a home-control and monitoring platform residing in the cloud.
The message to telecoms operators is that as VoIP gets commoditized, this approach can add value. The idea was echoed during the conference by Telco speakers like Ann Shaub of Verizon.
Use cases revolve around triggering cameras and door locks, for example to let a delivery man in, or checking whether the kids have left the lights on or have arrived home after school.
4Home call this "assisted living" and hope to improve energy management too. The platform's concept is to create "scenes", like going-to-bed or watching-a-movie, and 4Home claim their unique selling point is being able to link those scenes.
The demo on site was of an extremely basic web interface for creating such scenes. Clearly such a system can be useful, but it needs to be integrated into an overall offering to really shine. I expect the Moto execs will put some compelling bundles together from their portfolio.

I was surprised to see T&W in another of the sparse booths. They are a Chinese hardware manufacturer of powerline adapters (CPL), modems and the like; founded in 1991, they expect a $600M turnover for 2011 and have 7,000 employees, 1,000 of whom are engineers. They were showing optical gear: GPON (mainly for Europe), EPON (for Asia) and what they called point-to-point (for the connection to the home, to "replace the last mile"), but their core business is in DSL routers (over 20M units shipped per year). The message I got from talking to them was that optical is the future; I suppose that message comes from DSL margins having been commoditized to smithereens.

Twonky has a far sexier name than anyone else at the show, and I had never seen a booth with quite so many goodies on display, but they didn't give me much in the end. Twonky is a US company with development teams in Berlin and a presence in Basel and Tampere; they are the part of PacketVideo that Alcatel-Lucent strangely didn't buy almost a decade ago. They now belong to NTT DoCoMo. To position their home connectivity products, think of them as competing with Awox, whom I have written about a few times.
Like Awox, they are DLNA zealots and have software in the field; Western Digital media players constitute their largest installed base. They also equip some routers, but they came to the show to target operators of all shapes and sizes.
I was impressed with Leon Chicheporte's honesty when I told him about the inconclusive DLNA tests I had done in my home (see here). He said that two years ago we were only halfway to usability for the end user, and that now we're almost there. Note the 'almost'. That kind of honesty makes me want to do business with a supplier.

3view is a new kid on the block in the STB-with-a-service-behind-it neighbourhood. Founded in 2009 and run out of central London, they are still independent and privately funded.
They claim to have the first Z-Wave-enabled STB. The platform runs on the SD8655 chipset with a 500GB hard disk and twin DVB-T2 tuners. In the UK they are a consumer brand delivering a YouView-style experience before YouView is ready, competing with the likes of FetchTV. The motto on the booth was just three words – Watch, Search, Interact – which I thought was pretty slick marketing, although I didn't get a chance to check whether they deliver on the promise.
The BBC iPlayer is already on board, and they plan to have Sky Player by Q3 this year. Adaptive Bit Rate is supported over IP. The current retail price is £299 and the device has been shipping since Christmas from John Lewis and Amazon. They can deliver a white-label box to this spec for a BOM (bill of materials) of around £170.

The conference

I listened in to almost all of day one of this two-day conference and tweeted live from my @nebul2 account. The conference gave me the impression that we may not have peaked yet on the hype cycle i.e. there might still be more hype before this becomes really real. We also still need some more convincing use-cases to see the true value of a connected home.

Operators are taking the connected home seriously. It's another way to avoid commoditization, especially where content-based services are not delivering. Energy control is a promising feature for Verizon; it helps the operator onto the feel-good green bandwagon. But Ann Shaub underwhelmed me with the use cases Verizon is working on for home automation. Her favourite was "sleep mode", where you "just hit one switch for all doors to be locked and lights turned off, no more fumbling around in the dark…". It sounded a bit like using a sledgehammer to push a drawing pin into a soft wall. She did, however, make excellent business sense when pointing out that STBs are still very expensive and difficult to keep relevant over the full (accounting) lifespan of the device. Convergence between the STB and other devices still remains to be clarified.
Ann went on to give us the key marketing messages for Verizon's connected home initiative: it's about "peace of mind" and "ease of use", not about "saving money". The current pricing model is a $9.95 subscription with devices slightly subsidised; Verizon doesn't want to get into the CPE business. The current trial will turn into a soft launch very soon, with a hard launch still a bit further away.

Paul Berriman, the industry veteran from PCCW in Hong Kong, gave a broad talk about the overall strategy there. The journey from fixed line to TV to PC to mobile phones to tablets and games consoles now has to go through the connected home, so as to get it all working together. I agree when he says that getting all the devices to interoperate is going to be a real challenge, but that interoperability will become a USP once it does work. He spoke of a special focus at PCCW on enabling the free flow of content throughout the home, but I think that may still be out of reach for premium content while rights owners stay so picky about DRM. "NOW 360" tries to capture ALL the screen usage of PCCW customers, whatever the screen. Paul finished with one of those simple "why didn't I think of that?" questions: health has been targeted by operators for years and they have already spent a lot on it, but as fitness requires the same kind of infrastructure and has an early-adopter target market to boot, why not start there? Makes sense, so maybe we'll see Runkeeper types of apps from operators…

The conference organisers then did a strange thing: they showed a video of a presentation from the IPTV World Forum a few months earlier. In the audience we all looked at each other, wondering what was going on. I doubt anyone listened to much of the Telecom New Zealand presentation, we were so bemused by this strange process. I am green, but this was pushing recycling too far.

Raoul Wijgergangs of Sigma Designs came on next to say that the key to the connected home is already here in the form of the G.hn standard. He promised products with bandwidth 2.5 times that of today's HomePlug AV before the end of the year. The prototype devices he showed looked so small that even if the STB remains part of the ecosystem, we'll have to stop calling it a box. I agree with his view that home automation is the "low-hanging fruit" of the connected home, but I wish he'd told us what Sigma Designs think lies on the higher branches, out of sight.

Thomas Kleist of Native made an interesting point during the next panel session: exclusivity is an OK entry point for operators, but over time openness (i.e. apps) will be the winning strategy.
On the same panel, Steve Koenig of the Consumer Electronics Association in the US illustrated how energy is creeping into CE purchasing decisions (85% of buyers care about it, compared to 95% for price and only 57% for brand). He went on to say that Americans are driven to reduce energy consumption only because of cost. Doh! That got me wondering whether taxing energy, as we do in parts of Europe, will slow down the connected home…

The next panel, on "Monetizing the digitized home", had just two panellists, which begs the question of whether the subject is addressable yet. Swisscom was pessimistic on the connected home business model: the telecoms budget is fixed for most homes, so revenues have to come out of an existing spend. The question is, what's going to give?
As a vendor, Twonky was much more upbeat on the connected home's business prospects, pointing out that as the TV will remain the best screen and other devices will always be better for interaction, the secret to raising ARPU will be linking these two champions. QED.

Tim Wright, one of the two Sony speakers, told the conference that the UltraViolet common file format (CFF) contributes to interoperability within the connected home and is complementary to DLNA. He made the case that this new industry initiative will reduce costs for operators, not increase them. I have yet to witness anything from the security industry reducing costs, but I do live in hope.

We then moved on to standards for the connected home. Rami Amit from Jungo was refreshingly honest in stating that DLNA streamers almost work, but not quite yet. Guilhem Poussot of Vodafone said that the next decade is going to be about user experience. When prompted from the audience on whether "standards are for losers", he boldly retorted that standards are for winners, and that in any case we have to beat Apple.
Another issue raised from the audience was the openness of standards: "the more open a standard, the easier it is to hack, so do we want an open standard in the living room at all?" When the chairman pushed further by asking the panel whether he'd ever get a virus on his TV, the question was politely ignored.
Helen Anders, a lawyer on the panel, reminded us that there have ALWAYS been competing standards, like Betamax vs. VHS, and she saw no reason this will be any different for the connected home.
On the downside for standards, Roger Blakeway of the SCTE couldn't see Sky moving away from their box to any kind of open system.
This pessimism was countered by Sony's Renaud Di Francesco, who said that barriers put up by operators would soon be overcome by WiFi, LTE or some other pervasive technology.
Karl Tempest Mitchel from AirTies seemed less committal, stating that the jury is still out as to which one it'll be, but that one device will control content in the home.

Andrew Ladbrook, an Informa analyst, closed day one with some figures from their latest study on the connected home, which he urged us to buy.
Games consoles will not grow as fast as connected TVs or Blu-ray players, although they will become mainstream and be VERY connected (contrary to connectable Blu-ray players, which will most often not be connected in the home). Informa believes that media streamers à la Apple TV will remain niche.
Ladbrook made an interesting and controversial point about video not needing QoS, as demonstrated by OTT boxes like those from Telstra (Netgem) or Telecom Italia (Cubovision), which he says are doing fine. Less QoS means less CAPEX and less OPEX. I'm not sure this is linked to the connected home debate, but we'll see very soon whether this strategy pans out over more than a quarter or two.
Informa wrapped up with some strategic advice on defensive opportunities: multi-room offerings that enable the connected home and reduce churn towards OTT providers. He finished by suggesting that the next-best defensive opportunity is improving home networking, because that too enables the connected home, as Apple's AirPlay shows.


Five thoughts after chairing day one of the CDN World Summit in London this week.

1. Imminent bloodbath

There is a phoney war going on, with everyone patting everyone else on the back and smiling as if we could all stay friends. We all know that a bloodbath is coming. There is no way that the network operators (e.g. BT), transnational operators (e.g. Level 3), content operators (e.g. Sky), existing CDN players like Akamai and newcomers like Amazon are all going to live happily ever after, collaborating to deliver a great video experience. The cake is just too small.

We might just be able to cut it up for two stakeholders, but surely not four or five. So if 2011 still sees a lot of hype and optimism, I think 2012 will see consolidation, and then 2013 simplification.

2. Drowning in poor quality content

OK, so let's say all the CDN ambitions flaunted at the conference are realised – what would that mean?

Well, firstly, you need to reinvent the TV experience. Platforms like Sky or Comcast are already struggling to offer a decent experience with hundreds of channels and thousands of videos. Add another zero to both those numbers, then what? We will be completely lost and confused as to what to watch. Anthony Rose and the EPG 2.0 he is developing at T-Bone might be the answer we are looking for; I am eager for them to come out of stealth mode.

Secondly, there will be a quality issue. Once all the zillions of videos are at the edge, probably in some kind of rate-adaptive format, who apart from the end user is going to know whether the video actually played well, if at all? If it's all part of a paying offer, that could get very sticky. The only company addressing this issue head-on at the conference, with a solution ready to roll, was IneoQuest. I will be writing more on that in a forthcoming white paper on this subject (post a comment if you would like me to include your product too; so far I have talked to Edgeware and Packet Ship).

3. Content still matters

The conference had a broad representation from the industry, but the few people from the content side did not really have much to say. That may have been because we had the wrong people, but it did set off an alarm bell in my mind. Great network technology, including state-of-the-art CDNs, will indeed deliver a compelling user experience. But if we forget to get the content owners on board from the start, we may end up with a beautiful solution that has no problem to fix.

4. Standards
Standards are often late to the game. This time the IETF and ETSI are both scrambling the jets to get there in time for the intercept. I wish them well, but if I were a betting man, that is not where I would put my money.

5. Interconnectivity
There is a worldview in which Akamai continues to dominate the content delivery space, so that interconnectivity eventually just becomes a matter of talking to Akamai. I no longer believe that can happen; the Akamai empire has already lived its thousand years in Internet time. A more realistic worldview is one of interoperability, where the standards guys catch up in the end. Getting different CDNs to talk to each other certainly seemed to be a hot topic at the CDN World Summit. EdgeCast, for one, was already saying they have interconnectivity fixed. As the old man said, being there first is not as important as being there at the right time.


Early release windows; are we ready, or is it already too late?

This blog first appeared on Videonet.

Last week I spoke to Petr Peterka, CTO of Verimatrix, about the Hollywood studios' latest attempt to avoid following in the music industry's footsteps, and about Verimatrix' new watermarking product.

The general context we addressed appears bleak. The music CD is no longer relevant for mass consumption, and the DVD looks as if it is heading the same way. The MP3 format has become ubiquitous for music files; will formats like MP4 or H.264 do the same for video? And just as P2P seems at last to be slowing, is streaming now taking up the slack?

Book publishers are also alarmed at what the future might hold for them. Tim Bradshaw, the FT's digital media correspondent, noted on May 17th that, according to a recent survey, one in eight female tablet or e-reader owners over the age of 35 admits to downloading "unauthorised" copies of e-books. It's not so much the one-in-eight figure as the image of women over 35 that catches the imagination: illicit content consumption is not just for acne-ridden boys. So if we're all potential pirates, could digital watermarking be the deterrent that finally works, where previous threats have failed to stem the tide of piracy?

With session-based forensic digital watermarking, small changes are made to the video file. These cannot be seen or heard by the viewer, but they can be extracted from the file again with the tools provided by the watermarking solution vendor. If someone redistributes his or her copy of a film, a unique "fingerprint" remains embedded in the video file, so after investigation a finger can be pointed at the person who first redistributed their copy.
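
For intuition, here is a deliberately naive toy version of the idea – hiding a transaction ID in the least significant bits of a few samples. Real forensic watermarks use far more robust, secret algorithms; this only shows the embed/extract round trip.

```python
# Naive LSB watermark - for intuition only, nothing like a production algorithm.
def embed(samples, transaction_id, locations):
    """Hide one bit of the ID in the least significant bit of each chosen sample."""
    marked = list(samples)
    for i, loc in enumerate(locations):
        bit = (transaction_id >> i) & 1
        marked[loc] = (marked[loc] & ~1) | bit   # imperceptible 1-LSB change
    return marked

def extract(samples, locations):
    """Recover the ID from a redistributed copy."""
    return sum((samples[loc] & 1) << i for i, loc in enumerate(locations))

video = [200, 117, 98, 244, 31, 156, 77, 180]   # stand-in for pixel samples
locations = [1, 3, 4, 6]                        # pre-chosen mark positions
pirated_copy = embed(video, transaction_id=0b1011, locations=locations)
assert extract(pirated_copy, locations) == 0b1011  # the finger can be pointed
```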

So that's the stick. The movie industry is trying to put together a carrot too, by creating new early release windows for movies. As users are prepared to pay for convenience, it seems only natural to expect them to pay a premium for more convenient access to a movie. Organising a movie party at home to show friends a film that might still be in theatres would be worth spending a bit more than the normal $5 VoD fee (current US VoD rights start 90 days after theatrical debut; the new "Home Premiere" would be available 30-60 days after debut instead).

The difficult question is exactly how much more that is worth. Studios have been talking of up to US$25 more for a rental period. TDG recently analysed the potential demand and was not convinced that this kind of proposition can fly: according to the TDG analysis, users would only be prepared to spend an extra $5, and only if the movie were available within a week of theatrical release. The TDG study notes, however, that there is still some leeway in playing with both the price and the time frame, so there may be room for a new offering even if studios have to lower their expectations a bit.

However, more people are betting on ad-funded models, with free or cheaper content made available. Many small transactions are more in fashion than a few big ones. As digital watermarking requires session-based processing, it will be too expensive for micro-transactions. Time will tell whether offering something better for a higher price goes against the current of the entertainment industry or not. Sometimes bone-headed stubbornness has paid off, albeit rarely.

Verimatrix is involved in enabling this early release VoD window but Petr rightly used the term “experiment” when describing this initiative from the studios.

He told me that they had been waiting for three specific requirements to be fulfilled before embarking on this project:

  • Firstly, secure encryption was required. This has been achieved and consolidated over the last decade.
  • Disabling analogue outputs was the second requirement, so as to make content harder to record. This was only achieved recently in the US, when the Selectable Output Control issue was finally resolved last year.
  • Finally, studios required session-based forensic watermarking, which is what the new Verimatrix product is all about.

Verimatrix released a product called VideoMark five years ago, which was able to insert a digital watermark payload into the video outputs of a set-top box in real time. This addressed a range of re-broadcast and re-distribution threats, but required device-by-device software integration. The novelty of the newly released StreamMark product is that the watermarking can now happen in the headend or in the Content Distribution Network (CDN); indeed, the watermark can be inserted into a compressed and encrypted stream. By removing the requirement to decode and decrypt, this new solution carries only a very small processing overhead. But as the solution is technically a stateful one, scalability will still imply some cost, even if a small one. StreamMark is designed to insert a unique watermark for each user and is therefore primarily useful for unicast, not multicast – i.e. on-demand applications.

No information is required about the end device, so this kind of architecture is well adapted to the multiscreen deployments that are currently on everyone's mind.

The basic challenges of any watermarking technology are that the watermark must be:

  • invisible
  • easy to extract
  • difficult for pirates to detect and remove.

As these challenges can be somewhat contradictory, a finely tuned combination of different algorithms is necessary, and achieving this is probably one way we will see competing solutions differentiate.

Verimatrix' solution requires three separate steps:

  • Firstly, the video assets must be pre-processed, usually when the operator ingests new content. This stage identifies and stores the locations of potential marks; for each location, the information that would be inserted is also prepared and stored for future use.
  • The second stage is to embed the watermarks at a distribution point where a unique session is set up. This can be a central server, a regional VOD server or an edge CDN server. The original file has extra information added to it that embeds a unique transaction ID (the first two stages are sketched in the snippet after this list).
  • The final stage only happens when pirated content is discovered. For now, extracting the watermark is done only in Verimatrix' labs.
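
Here is how I picture the pre-process/embed split in code – an assumed structure for illustration, not Verimatrix's implementation. The point is that the expensive analysis happens once at ingest, while the per-session step is a cheap patch needing no decode or decrypt:

```python
# Assumed two-stage watermark flow - illustrative only, not StreamMark internals.
from hashlib import sha256

def preprocess(asset: bytes, n_marks: int = 8):
    """Stage 1, at ingest: choose markable positions and precompute, for each,
    the byte variants that would encode a 0 or a 1 there."""
    step = max(1, len(asset) // n_marks)
    locations = list(range(0, step * n_marks, step))
    variants = {loc: (asset[loc] & ~1, asset[loc] | 1) for loc in locations}
    return locations, variants          # stored alongside the asset

def embed_session(asset: bytes, locations, variants, session_id: int) -> bytes:
    """Stage 2, per session: patch the stored positions to encode a unique ID.
    No decode, no decrypt - just a lookup and a few byte writes."""
    marked = bytearray(asset)
    for i, loc in enumerate(locations):
        marked[loc] = variants[loc][(session_id >> i) & 1]
    return bytes(marked)

asset = sha256(b"movie").digest() * 4   # stand-in for a compressed, encrypted file
locations, variants = preprocess(asset)
unique_copy = embed_session(asset, locations, variants, session_id=0xB7)
```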

[Diagram: Copyright Verimatrix]

The technology was reviewed by the major studios during the soft-launch period, and a third-party audit was carried out by Jian Zhao, a distinguished researcher in watermarking theory and practice.

The Verimatrix pricing structure is based on unique payloads, i.e. the number of transactions, and is volume dependent. On June 7th, StreamMark, which had been in soft launch for a year, was released commercially.