Ultra HD ecosystem getting organized, alliance on the way

I attended the French HD Forum meeting on UHD last week in Paris, which was hosted by Eutelsat. France prides itself on being innovative, often with government or strong regulatory incentives. How this actually works out is a matter for the politicians, as in the case of the Minitel, which predated the Internet. There is still no consensus on whether it was a good thing for France, with French people becoming used to eCommerce before the term even existed, or whether on the contrary it made France miss the first Internet wave.

When it comes to TV standards, similar debates rage. Much ink was spilt over the terrestrial switch-over, which was completed here in 2011. The transition from SD to HD was always a political hot potato and is still underway, with spectrum scarcity in the current first-generation DVB-T setup restricting HD to 5 of the 23 FTA channels.

Unsurprisingly, when the French get talking about UHD, there’s palpable tension with all the differing agendas. Does it make more sense to finish upgrading the end-to-end environment to HD before playing around with UHD, or on the contrary would it be more economical to avoid two upgrades and go straight to the ultimate target of full UHD? Should TV stations wait for customer demand or try to stimulate it with UHD services as early as 2015?

Beyond these legitimate debates, there is also some confusion artificially created by a lack of information sharing across the Ultra HD video ecosystem.

The risk of confusion

As UHD TV gradually tips over its peak of inflated expectations, the TV industry at large, through the diversity of its reactions, will undoubtedly lead it down to the depths of disillusionment. Some TV stations publicly doubt whether 4K will ever be a sound business proposition, while satellite operators and many technology vendors have bet their future on UHD success. Sometimes, even within the same industry group, UHD is being pulled in several different directions at once, as for example between the different UHD specifications of ITU, EBU, Digital Europe and CEA. Some key differences and commonalities are:

| Feature | ITU | EBU (Phase 1) | CEA | Digital Europe |
|---|---|---|---|---|
| Resolution | 4320p | 2160p | 2160p | 2160p |
| Frame rate | 120/60 | 60 | 60/30/24 | 60/30/24 |
| Color space | BT.2020 | BT.709 | – | – |
| HDMI | N/A | – | 2.0 | 2.0 |
| Bit depth | 10/12 | 10 | 8 minimum | 8 minimum |
| HDR | Under standardization | Phase 2 | Not mentioned | Not mentioned |
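
To make those differences concrete, here is a minimal sketch of how a signal might be checked against the different profiles. The values are transcribed from the table above; the profile structure and names are my own illustration, not any body’s official schema.

```python
# Toy compatibility check against the UHD profiles tabled above.
# Values are transcribed from the table; names and structure are
# illustrative only, not an official schema from any of the bodies.

PROFILES = {
    "ITU":           {"max_lines": 4320, "max_fps": 120, "min_bit_depth": 10},
    "EBU_phase_1":   {"max_lines": 2160, "max_fps": 60,  "min_bit_depth": 10},
    "CEA":           {"max_lines": 2160, "max_fps": 60,  "min_bit_depth": 8},
    "DigitalEurope": {"max_lines": 2160, "max_fps": 60,  "min_bit_depth": 8},
}

def fits(signal: dict, profile: str) -> bool:
    """True if a signal (lines, fps, bit_depth) fits within a profile."""
    p = PROFILES[profile]
    return (signal["lines"] <= p["max_lines"]
            and signal["fps"] <= p["max_fps"]
            and signal["bit_depth"] >= p["min_bit_depth"])

# A 2160p60 10-bit signal fits all four profiles; the same signal
# at 8 bits would fail the ITU and EBU profiles on bit depth.
signal = {"lines": 2160, "fps": 60, "bit_depth": 10}
for name in PROFILES:
    print(name, fits(signal, name))
```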

The standardization of UHD has so far been much less chaotic than it was for, say, 3D technology at a similar stage.

Some clear standards are emerging from:

  • the telecoms sector, with the ITU-R recommendation from ITU’s Study Group 6 (more at: http://www.tvtechnology.com/news/0086/itu-issues-uhdtv-standards-recommendations-/213615),
  • the video technology space, which is also active, with MPEG-HEVC having published a specification in January 2013 that can be used for UHD and that is now looking actively at HDR (a sample encode sketch follows this list),
  • the consumer electronics industry that provided a vital part of the Ultra HD requirements with the standardization of HDMI 2.0,
  • the broadcasters, with the DVB/EBU ultra high definition broadcast format (UHD-1 Phase 1) specification, for example.
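
To make the HEVC bullet concrete, here is a minimal encode sketch driving ffmpeg’s libx265 encoder from Python. It assumes an ffmpeg build with libx265 on the PATH; the file names and quality settings are my own illustration, not a recommendation from any of the bodies above.

```python
# Minimal UHD HEVC encode sketch via ffmpeg + libx265.
# Assumes ffmpeg (built with libx265) is on the PATH; file names
# and rate-control settings are illustrative only.
import subprocess

cmd = [
    "ffmpeg",
    "-i", "master_2160p60.mov",   # hypothetical 2160p60 mezzanine file
    "-c:v", "libx265",            # HEVC video encoder
    "-preset", "slow",            # speed vs. efficiency trade-off
    "-crf", "22",                 # constant-quality rate control
    "-pix_fmt", "yuv420p10le",    # 10-bit, in line with the ITU/EBU profiles
    "-c:a", "copy",               # pass the audio through untouched
    "uhd_2160p60_hevc.mp4",
]
subprocess.run(cmd, check=True)
```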

But UHD’s success will rely on much more than just increased bandwidth and resolution, and many of the other elements are still under discussion, for example the required increases in both color sensitivity and contrast with HDR (High Dynamic Range), or in refresh rates with HFR (High Frame Rate). Norms for carrying higher definition audio with a greater number of channels have been standardized by ETSI with AC-4, which is actively promoted by Dolby. The MPEG standards body is currently in the process of creating an object-based audio encoding standard with MPEG-H. The IP encapsulation techniques defined by SMPTE (2022-6) have yet to be universally accepted by the industry.

To succeed faster, at a lower cost for early adopters, UHD doesn’t need yet another body defining standards, but one that explains them, helps ensure their interoperability and promotes successful business cases.

After the failure of 3D, the industry needs to regroup around UHD to ensure its success, in a similar way to how the DASH Industry Forum (dashif.org) has rallied all the DASH energies.

The Ultra HD ecosystem is quite complex, and we provide here (courtesy of Harmonic) an end-to-end diagram for Ultra HD:

[Diagram: end-to-end Ultra HD ecosystem, courtesy of Harmonic]

To speed up the process of getting through the trough of disillusionment (or maybe it is to cross the chasm), I learnt in Paris that a few market-leading companies are in talks to set up an alliance. Its intended scope is to cover all parts of the content lifecycle from production to display, encompassing contribution, distribution, post-production and play-out. The alliance’s stated goal will be to promote interoperable specifications, propagate effective business models, provide forecasting and share all successful application models.

The alliance would identify, describe and share specifications relevant to all parts of the distribution chain in close collaboration with standardization bodies.

Interoperability will be a key driver for all the alliance’s work: defining the system-level interop points, organizing interop plugfests, and publishing and promoting the results.

The alliance would also deliver business models for both live and on-demand content, sharing any industry success stories and ensuring any mistakes are only ever made once.

An Ultra HD Alliance would promote existing industry reports but also pool real market data from its members and use projections to obtain the most accurate forecasts for critical market dynamics. The number of deployed UHD-capable CPE, the readiness of live TV workflows and the extent of UHD VoD assets will be closely monitored and projected. The alliance also intends to show how UHD can be used in different application domains such as VoD, live TV, linear play-out, push VoD, etc., presenting the benefits of UHD over HD with operator feedback.

To successfully promote Ultra HD, the alliance would be represented at trade shows and conferences. The alliance’s website would encourage interaction with blogging and social media. Webinars and various publications including whitepapers will also shorten UHD’s time-to-market.

The alliance would be open to companies from all parts of the ecosystem. Content providers, broadcasters, production houses, operators, playout companies, encoder vendors, audio specialists, security providers, chipset makers and UHD device manufacturers would all be able to join. Other organisations such as the HD Forum, EBU, DVB, etc. would be welcome too.

The setup of the alliance is still at the stage of informal talks, but the first formal meeting will take place at CES in Las Vegas in January 2015.

Stay tuned for an update after the show (previous 4K blog on 7 reasons why UHD/4K makes sense here).

Live OTT streaming – Industry feedback from CDN World Summit London 2014

Last week I led a round table on the future of live OTT TV and its implications for CDNs during the last session of Informa’s CDN World Summit in London.

My first point to open the debate was on QoE. I pointed out that mobile telephones are a giant step backwards in terms of voice QoE and service availability compared to good old fixed lines. However, we’re all happy to redial dropped calls, lose coverage or ask the person at the other end to repeat themselves, because with mobility we gained so much more than we lost in service QoE. I then suggested users might accept a similar trade-off and embrace lower QoE for OTT TV than for broadcast, in exchange for lower costs, mobility, greater choice and personalisation. The reactions around the table made me think I’d just insulted the Queen. There was emphatic disagreement. TV is TV and will always be TV, said the TV operators, and nobody dared take issue. I guess that’s what happened in the boardrooms of railway companies when airplanes arrived. One of the non-TV-operator participants did agree that maybe – except for sports – QoE might be traded off for greater choice. At this point, the challenge of content navigation – search and recommendation – was brought up.

That got us talking about “long-tail live TV” and whether it might ever make sense, i.e. being able to watch a unique live stream that you really care about. That access might make you so grateful that even if the quality wasn’t always pristine you’d still be happy. This idea only really holds up in an OTT rather than a broadcast context. Indeed, all the TV markets I’ve worked in, even when they have many hundreds of channels available, invariably have 10 or fewer channels that any one community is prepared to pay for. One of the key promises of OTT is to abolish market boundaries, typically those drawn by a satellite footprint. All those start-ups targeting diasporas are going to find tough competition as the big guys come into their nascent markets more and more.

From a financial modelling point of view, the satellite broadcasters around the table were pretty excited about the fact that for live OTT, if you have a tail-end channel that nobody is watching, your Opex goes down to zero. This for them was the real opportunity in live OTT.

Consensus was easily reached on the fact that live OTT TV brings mobility; however, nobody was yet clear on a killer use case where this is really important. Watching videos on the tube or train is still very much a download experience, and rarely a legal one at that.

When I brought up the question of when, rather than if, Netflix starts live streaming, nobody felt ready to pick up the gauntlet. I’ll keep that for another day.

Our debate wound up with an interesting discussion on the blurring of boundaries between linear and on-demand content. Typically, a shopping channel can be played out from an automated server, with people being able to interact and turn a multicast stream into a unicast one. The final feedback from two operators round the table was that multicast is only really a panacea for large Telcos that own a network. For the rest of us, the cost-benefit analysis turns out much worse in the real world than on the drawing board of business planning.

This left me with the clear impression that there are still problems out there looking for solutions, not the other way round for a change. As many network and service operators want to build their own solutions rather than rely on the global CDN operators, we’ll probably see a major player emerge from the likes of Anevia with its edge caching, Broadpeak with its nano-CDN, MediaMelon with its QoE analysis or Octoshape.

Disruptors are everywhere – do you see them?

Senior consultant Hugh Massam escaped from the energy sector after serving a 15-year sentence. Here he offers some thoughts post-IBC.

‘There is nought so powerful as a good idea whose time has come’. The question is, do you see the great ideas and disruptors that your competitors see?

When we’re busy developing products, jumping into new markets and maybe sitting in meetings about dense tech or legal issues, it can be all too easy to miss a competitor that’s about to eat your lunch. It was clear to me from three days at IBC that some spaces – such as middleware – are ponds filled with hungry competitors. The ones that spot a seismic shift first – or even make it happen – will be the survivors.

Netflix has been creating such a shift, giving sleepless nights to many TV executives. Reliable live streaming over the Internet will probably be even more of a disruptor, and the content delivery suppliers at IBC are promising it very soon: certainly in less than five years, maybe as little as two.

At Wolfpack we are often asked to benchmark clients’ brands, and even this simple exercise can provide startling insights and opportunities, pointing out gaps in the market or areas where competition is already fierce. From there we partner with them as marketeers and industry insiders, to help position products and promote them, avoiding the many sand-traps that can befall a product along the way. But what we also do is maintain an outside perspective and look for the real seismic shifts or threats.

Asleep at the wheel? …a story from the energy sector

Looking wider than IBC, the supply of electricity is a former state function and, in terms of new product development, was asleep for decades. In some markets both are still true. Even applying basic marketing to it is fairly new, and the marketing sophistication of some former state-owned behemoths in this space is still decades behind a sector like broadcast. And let’s face it, not many marketers are hogtied by the need to keep supplying product to customers with an astronomical cost-to-serve (a wire to a farmhouse in the Outer Hebrides, for example) or to people who patently can’t pay (in practice most energy companies rarely cut people off). Both happen with electricity.

But in the wider energy sector there is innovation, and it bleeds right into people’s home networks. Besides the obvious shift to local and renewable generation, there’s seismic action in storing power and in supplying it to your laptop – from innovators like Moixa, for example.

A conversation in the energy-supply space is likely to take a bunch of things for granted: houses need to run off the mains; home electricity networks use standard 110 or 220 volt sockets; solar panels are great, but they are best used through an inverter (solar panels generate DC; an inverter converts this to AC), supplying AC in real time to the building they are on or to the grid; and once the sun goes down, solar power is not relevant to evenings – the peak time of use – because batteries are too expensive. These are all fundamental assumptions about the market which players like Moixa are challenging.

Small solar phone chargers are old news. So why can’t a home’s solar panels – already generating DC – directly power small DC devices like phones, laptops and lights through a local network? Why can’t you charge a battery from your solar panels in the day, or from the grid at 1am, and store that power to peak-shave by taking demand off the grid in the evening? The grid provider will pay you handsomely to reduce peak-load on demand – ‘demand management’ is a $1.8B market in the UK alone.
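
To illustrate the logic, here is a toy sketch of such a peak-shaving rule. The tariff windows and the decision rule are invented for illustration; this is not Moixa’s, or anyone’s, actual control algorithm.

```python
# Toy peak-shaving schedule: store solar (or cheap night-rate) energy
# in a home battery and discharge it during the evening peak.
# Hours and thresholds are invented for illustration only.

CHEAP_HOURS = range(0, 5)    # hypothetical off-peak grid tariff window
PEAK_HOURS = range(17, 21)   # hypothetical evening peak to shave

def battery_action(hour: int, solar_kw: float, load_kw: float) -> str:
    """Decide what the battery should do in a given hour."""
    if hour in PEAK_HOURS and load_kw > 0:
        return "discharge"          # take demand off the grid at peak
    if solar_kw > load_kw:
        return "charge_from_solar"  # store the surplus DC directly
    if hour in CHEAP_HOURS:
        return "charge_from_grid"   # top up on the cheap overnight rate
    return "idle"

print(battery_action(1, solar_kw=0.0, load_kw=0.3))   # charge_from_grid
print(battery_action(18, solar_kw=0.1, load_kw=1.5))  # discharge
```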

The Economist last year spotted that emerging standards, which will soon boost a USB cable’s power-carrying capability up to 100 watts (from around 10 today), will change the game too.

Players like Moixa saw the obvious – an explosion in low-power DC devices in the home, and a rise in the number of solar panels producing DC on roofs. They are working hard on dull details like DC voltages and plugs, which have never been standardised.

And at the big end – many people still assume electricity plants are large, should be available 24/7/365, and need to be built permanently near their fuel supply, but players like Karadeniz Energy and Aggreko are making massive inroads (and profits) supplying floating power plants and rented power generators. And the largest market for the latter may surprise you – the Middle East, not Africa.

So even in a slow-moving market like electricity generation, disruptors are there. The real question is – how often are you checking on the top three potential disruptors in your market?

 

Hugh Massam is a Senior Consultant with Wolfpack and also the Principal Consultant at E Equals Limited, an energy communications agency based in Cambridge, UK. @eequalsuk. 

First 4 trends spotted at IBC14

I didn’t write any predictions on what would be important this year, but while it’s still fresh, here are my first impressions.

Last year the Cloud was the key buzzword and Amazon the company to watch; this year it’s been replaced by virtualization: basically the same technology, but with the possibility of running all those virtual machines in a service provider’s own data center. It is supposed to eventually lower costs and make things like redundancy management easier, but I’ve yet to be convinced it’s really such a big deal. I’ll try and stop by some of the encoding booths like Envivio, Harmonic or Elemental to check out whether it’s really just a generalization of the concept of software-based vs. hardware-based encoding. I’ll also try to get back to the Amazon Web Services stand in hall 3, where they’re explaining how Netflix uses AWS with special tools developed to optimize service availability.

4K has of course been around for several years, yet still manages to buzz. I’ve been told to go see Samsung’s giant curved display in hall 1. The main difference from last year is that there’s hardly a booth without a 4K display or two, most now at 60fps, and more and more UIs, like the one on display at SoftAtHome’s booth, are now native 4K.

OTT is still very present, even if it too has lost its novelty now that so many commercial deployments are out there. OTT ecosystem vendors are repositioning frantically as value is eroded. Some, like Piksel, seem to be keeping their end-to-end positioning, while others, like Siemens with its Swipe service, are also bringing out specific components to sell as services. Enhanced ABR is also appearing to help reduce Opex, by finding tricks to use only as much bandwidth as is required (a toy sketch of one such trick follows below). Everyone in the CDN crowd, like for example Limelight, Anevia, Broadpeak or MediaMelon (who don’t have a booth), has things to show in this area.
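
As a sketch of what “only as much bandwidth as is required” can mean, here is a toy rendition-capping rule: instead of always climbing to the top of the ABR ladder, the player stops at the cheapest rendition that fills the viewer’s actual screen. The ladder values and the rule itself are my own illustration, not any vendor’s product.

```python
# Toy "enhanced ABR" rule: pick the cheapest rendition that fits
# both the available bandwidth and the viewer's screen, rather than
# always climbing to the top of the ladder. Values are illustrative.

LADDER = [  # (height_px, bitrate_kbps), lowest first
    (360, 800),
    (720, 2500),
    (1080, 5000),
    (2160, 16000),
]

def pick_rendition(bandwidth_kbps: float, screen_height_px: int):
    """Lowest-cost rendition that fits the pipe and the screen."""
    affordable = [r for r in LADDER if r[1] <= bandwidth_kbps]
    if not affordable:
        return LADDER[0]          # degrade gracefully to the lowest rung
    for rendition in affordable:
        if rendition[0] >= screen_height_px:
            return rendition      # good enough for this screen: stop here
    return affordable[-1]         # screen outresolves the ladder

# A phone on a fat pipe still gets 720p, saving the operator bandwidth.
print(pick_rendition(bandwidth_kbps=20000, screen_height_px=700))
```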

IoT and the connected and/or smart home have been around for years at other shows, but have only now made it to IBC. Managing the home network is becoming more challenging for many reasons. One that struck me most is that we are seeing a greater proportion of homes with 100M+ broadband connections but in-home effective throughputs down to just a few megabits, often not enough to stream over Wi-Fi. There were quite a few solutions at IBC, like AirTies’ home Wi-Fi meshing.

Some trends though are clearly on the way out. I noted for example that it’s already out of fashion to talk about embedded apps now that HTML5 is a no-brainer, and any mention of the Smart TV is positively 2013.

More soon, stay tuned…

Google still playing catch up with Android TV

In this first blog of our Android TV series, Philip Hunter looks at some of the reasons why Google doesn’t seem to have been third-time lucky with its new foray into the sitting room.

Android TV, Google’s heavily leaked third effort to crack the living room, arrived as expected at its annual developer conference at the end of June 2014, but once again failed to ignite the field. Google is still playing catch-up in its attempts to conquer TV, with the only consolation being that rivals such as Apple and Microsoft are also floundering in their attempts to stamp their authority on the big screen. Apple TV, after all, is seven years old now and can still at best be counted as only a moderate success, having failed to establish any sort of market dominance even close to that achieved by iOS for both tablets and smartphones. One reason is that Apple’s iron grip on its ecosystem has proved a handicap, keeping the device isolated and discouraging developers.

Google was determined with Android TV to avoid that fate and continue with its open approach to maximize third-party app development with a variety of incentives. But this immediately raises the question for developers of whether Android TV is worth the trouble, unlike the versions for smartphones and tablets, where it was always clear there would be a big market. In the case of TV it is still not obvious at all that Android will succeed, given that the market for streaming platforms is already very crowded, with Roku, Amazon Fire TV and Xbox One as well as Apple TV among the leading established contenders. There is also Google’s own Chromecast HDMI dongle, representing its second attempt at TV after the abject failure of Google TV. Chromecast has been a reasonable success because of its low price tag of $35 and its flexibility, encouraging users to try it alongside their existing pay TV package if they have one, rather than as their primary source of content. But it means Google is now sending mixed and confused messages to both consumers and developers. With Chromecast, Google had given the impression that developers could stop building apps and instead create webpages optimized for the TV screen that would receive commands from an Android smartphone. If Chromecast stays around, as it looks like it will, developers now face having to build and support two interfaces to cover the Google TV universe: one for Chromecast and one for Android TV.

The underlying problem, though, may be with Google’s strategy of shoehorning Android into the TV rather than creating a new operating system. Google is obviously trying to make Android into a ubiquitous operating system with variants for all device platforms, as was evident at its recent I/O developers’ conference. There Google unveiled Android Wear for wearables like wristwatch computers, Android Auto for car dashboards and Android One for a new brand of affordable smartphones priced at under $150, as well as Android TV. There is every reason to expect that Android Wear and Android One will be great successes, as they are still very much in the heartland of mobile handsets, but with Auto, and to an even greater extent TV, Google is stretching the envelope of the operating system a long way. History tells us that attempts to create an operating system of everything are doomed to failure, as Microsoft seems to be finding with Windows Phone. Apple had the sense to create a radically new operating system for mobile devices in iOS rather than attempting to adapt MacOS from its desktop perch. It reaped the rewards with the iPhone and then the iPad.

This leads to the other problem, which is that with Android TV Google is still cast in the role of follower rather than leader, which is not how it grew up to become the world’s most valuable brand. In search, Google originally rose to dominance over rivals like Yahoo, AltaVista and Microsoft because it had superior technology and was quickly able to assume a leadership position that came to be reinforced by its human and financial resources. Of course it has been able to reallocate those resources to TV, but without so far being able to conjure up any killer technology.

Android TV actually seems rather similar to Apple TV in its GUI and is modelled on Amazon’s Fire TV in its support for voice input for searching video content. Indeed, Google has instructed its developers to avoid the need for any text input at all if possible and to rely largely on voice. Google has also stripped out well-known features of Android on smartphones, such as support for VoIP, cameras, touchscreens and NFC (Near Field Communication), which are all deemed superfluous for a streaming set-top-box-like device. This is all well and good, since it avoids an Android TV box being a bloated version of a smartphone or tablet. But it may also expose the limitations of the platform, especially as Google is indicating that the operating system will really come of age with its next generation, called Android L, which was also previewed at that I/O conference and is scheduled for launch towards the end of the year. Android L is a radical rewrite, with improved animation and audio, as well as 3D and contextual awareness, which will all feed into the TV version. Developers may therefore decide to wait until this next generation has arrived before committing to Android TV. Android L will also include many features specific to mobile handsets, including a new battery-saving mode, which are mostly irrelevant for the TV version. In this sense Android L will compound rather than solve the problem of becoming too bloated. It may be just a matter of time before the TV version of Android becomes divorced altogether from the mainstream of the operating system, but meanwhile it will most likely fail again to put Google on the podium for OTT TV.

Wi-Fi offload can help mobile operators deliver network neutrality

Network neutrality has come back to the boil in 2014, following US carrier Verizon’s famous federal court victory in January over the regulator, the FCC (Federal Communications Commission), allowing it to differentiate between services delivered to its broadband customers. This was followed in April by the European Union approving strict network neutrality rules, with the message that it would take a much tougher stance than the FCC in upholding them. Naturally this was widely interpreted as setting Europe apart from the US, but the reality is that both are taking a more nuanced approach than in the past. Even the EU proposals allow for the provision of specialized services, provided they do not intrude into network capacity set aside for the general Internet. The tones may be different, but the broader implication, both in the US and Europe, is that network neutrality can never be fully attained through legislation, any more than true equality of wealth can be achieved via measures such as progressive taxation – both are aspirations or focal points.

For mobile operators the aspiration of network neutrality has assumed a logistical and economic dimension with the great proliferation of data hitting their infrastructures. Many have opposed strict net neutrality for the simple reason that their core and backhaul networks have limited capacity and would be unable to cope without traffic engineering and the ability to differentiate between different service or application types.

But now Wi-Fi offload has entered to change the game, giving operators an option for relieving their overstretched backhaul networks, and for that matter their radio access capacity as well, by taking advantage of broadband infrastructures. It was at the Mobile World Congress in 2013 that offload first seemed to have risen right up the agenda for mobile operators. Generally, particularly before the deployment of 4G/LTE, broadband networks had greater capacity and, crucially, lower costs than the fixed backhaul networks serving radio base stations. For this reason those major Telcos with their own network of hot spots have been leading the march towards Wi-Fi offload. In the US, AT&T has built large Wi-Fi hot zones in mostly urban areas with high levels of cellular traffic, specifically for offload to help relieve congestion on its core mobile network.

The implications of such offloading for network neutrality have not attracted much attention, but they are likely to be profound nonetheless. The fundamental point is that by freeing up capacity on the mobile network, offloading can help mobile operators meet their net neutrality obligations as laid down by regulators in the region concerned, while still having scope to offer specialized services. An operator could, say, offer an OTT video service such as Netflix with guaranteed QoS over the cellular network, resorting to Wi-Fi offload for third-party OTT services such as YouTube (a toy sketch of such a steering policy follows). Alternatively, Wi-Fi could be used for the specialized services, especially by operators like AT&T that have their own overlapping hot spots and cellular networks on a large scale.
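
Here is a toy sketch of what such a steering policy could look like. The service names and the policy are invented for illustration; real operators would steer on far richer state than this.

```python
# Toy traffic-steering policy: specialized (QoS-guaranteed) services
# ride the cellular network, while best-effort OTT is offloaded to
# Wi-Fi when a hot spot is in range. Names and rules are invented.

SPECIALIZED_SERVICES = {"operator_vod", "partner_svod"}  # hypothetical

def route(service: str, wifi_in_range: bool) -> str:
    """Choose a bearer for a given service."""
    if service in SPECIALIZED_SERVICES:
        return "cellular_qos"         # guaranteed QoS on the mobile network
    if wifi_in_range:
        return "wifi_offload"         # relieve backhaul for best-effort OTT
    return "cellular_best_effort"     # the general Internet lane

print(route("partner_svod", wifi_in_range=True))   # cellular_qos
print(route("youtube", wifi_in_range=True))        # wifi_offload
```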

We are already seeing this happen, with Sprint in the US now offering calling and messaging over Wi-Fi when within range of suitable hot spots. Sprint, incidentally, was one of the first major carriers in the world to make serious use of Wi-Fi offloading for data.

We are going to see plenty more such offerings over the coming years. It will be interesting to see the extent to which operators align Wi-Fi and cellular within heterogeneous service offerings, effectively escaping the shackles of net neutrality while obeying the basic rules as stipulated by regulators.