
User experience gain must trump resource consumption for UHD success

This opinion blog is about 3 things that could derail UHD if User Experience lets them.

Ok so you can already tell that I’m biased. I believe in UHD and its five components that will change user experience:

  1. Higher resolution (4K)
  2. Higher Dynamic Range (ability to see details in both brighter whites and darker blacks simultaneously)
  3. Better colour (more colours, closer to human perception)
  4. Better sound (more channels than speakers, object based surround sound)
  5. Higher refresh rate (especially for action which can otherwise look choppy at very high resolution without higher refresh rates)

My crystal ball hasn’t confirmed that this is the order in which these components will arrive, or to what extent it’ll be a big bang approach, or even if some components might get left by the wayside. I’ll delve into that in another blog. In the last 15 years or so, I’ve witnessed HD succeed and I have written several times about why I believe UHD’s time is now (recently here or here in early 2014 for example). I have nothing to sell and no vested interest in UHD, I’m simply driven by my geeky fascination with the promise of a great new experience brought to TV and IP technologies that I’ve been working with for so long.

But just in case I am wrong, here are 3 things that some of us fret about and that could still prevent UHD success.

| Thing | Description | Issue | Why it won't stop UHD |
|---|---|---|---|
| Fragmentation | Vendors pulling in different directions | Device & content incompatibility | Industry bodies like the Ultra HD Forum or the UHD Alliance |
| Energy | Extra brightness, more pixels and more images consume more power | Regulation, consumer reluctance, UHD perceived as not politically correct | Technology progress has often consumed more power (e.g. HD vs SD). Better efficiency means the extra power required is less than the extra user experience delivered. The CPE power issue is more about standby mode than peak consumption. Need not consume much more power with an HD-only signal. |
| Bandwidth | UHD can require over 4 times HD bandwidth / file size | Channel and content distribution issues. Monthly data caps will be an issue for OTT households. | Networks grow in quantum leaps, and UHD will help spur the next one. All-fibre connections and future 5G networks will provide more bandwidth than UHD can consume. A new generation of low-orbit satellites is also on its way. |

The driving force providing the impetus to overcome challenges such as those mentioned above is User Experience. This is the part of the equation I have to rely on gut feeling or faith for. My premise is that UHD ushers in a great new User Experience with a sensation of realism and immersion.

If it actually turned out that UHD didn’t bring that “wow” effect so many of us in the industry believe in, then any one of the above “things” could alone derail UHD from becoming a market success, and we’d have to find another game changer in the TV industry. My experience so far suggests that UHD will be that game changer, but also that there are still niggles that need ironing out.

As it happens, I’ve been watching quite a lot of 4K TV via Amazon and Netflix in the last few months. Landscapes and close-ups are all pretty amazing, but I do have a nagging worry over some indoor scenes, which despite being shot by top-of-the-range pros (e.g. Amazon’s Transparent, or Breaking Bad, …) leave a strange feeling that something isn’t quite right in 4K resolution. It occurs when there is some mild camera movement yet most of the scene is in focus. I get this counterintuitive sensation that there is something maybe amateurish in the composition. This could be due to the shooting not having been properly thought out by the director and cameraman for 4K TV playback, or maybe it’s just me not yet being used to processing so much data on screen. If either of these is true, which I suspect is the case, the issue will quickly disappear. But this highlights my only real concern over UHD’s success: will it be consistently “wow” enough to overcome resistances like the three issues stated above? If so, I have no doubt that vendors, content providers and operators, as personified in the Ultra HD Forum, will ensure that the whole UHD movement is not derailed by relatively minor teething troubles.


“HaLow” sets stage for multi-channel Wi-Fi

The Wi-Fi Alliance’s announcement of the low-power version of IEEE 802.11ah, dubbed “HaLow”, was dismissed by some analysts as being too late to make a significant impact in the fast-growing Internet of Things sector. That view is wrong and seriously discounts the power and momentum behind Wi-Fi, to the extent that HaLow has already received extensive coverage in the popular as well as the technical press. It is already far closer to being a household name than longstanding contenders among wireless protocols for IoT devices such as ZigBee and Z-Wave.

It is true that certification of HaLow compliant products will not begin until 2018, but with IoT surging forward on a number of fronts including the smart car, digital home and eHealth, SoC vendors such as Qualcomm are likely to bring out silicon before that. There are good reasons for expecting HaLow to succeed, some relating to its own specifications and others more to do with the overall evolution of Wi-Fi as a whole.

Another factor is the current fragmentation among existing contenders, with a number of other protocols vying alongside ZigBee and Z-Wave. This may seem to be a reason for not needing yet another protocol, but it actually means that none of the existing ones has gained enough traction to repel a higher-profile invader.

More to the point, though, HaLow has some key benefits over the others, one being its affinity with IP and the Internet through being part of Wi-Fi. ZigBee has responded by collaborating with another wireless protocol developer, Thread, to incorporate IP connectivity. But HaLow has other advantages, including greater range and the ability to operate in challenging RF environments. There is already a sense in which the others are having to play catch-up even though they have been around for much longer.

It is true that Bluetooth now has its low energy version to overcome the very limited range of the main protocol, but even this is struggling to demonstrate adequate performance over larger commercial sites. The Wi-Fi Alliance claims that HaLow is highly robust and can cope with most real sites from large homes having thick walls containing metal, to concrete warehouse complexes.

 

The big picture is that Wi-Fi is looking increasingly like a multi-channel protocol operating at a range of frequencies to suit differing use cases. To date we have two variants, 2.4 GHz and 5 GHz, which tend to get used almost interchangeably, with the latter doubling up to provide capacity when the former is congested. In future, though, there will be four channels, still interchangeable but tending to be dedicated to different applications, combining to yield a single coherent standard that covers all the bases and perhaps vies with LTE outdoors for connecting various embedded IoT and M2M devices.

HaLow comes in at around 900 MHz, which means it has less bandwidth but greater coverage than the higher frequency Wi-Fi bands and has been optimized to cope well with interference both from other radio sources and physical objects. Then we have the very high frequency 802.11ad or WiGig standard coming along at 60 GHz enabling theoretical bit rates of 5 Gbps or more, spearheaded by Qualcomm, Intel and Samsung. WiGig is a further trade-off between speed and coverage and it will most likely be confined to in-room distribution of decoded ultra HD video perhaps from a gateway or set top to a big screen TV or home cinema.
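The range-versus-frequency trade-off behind these four bands follows directly from physics: path loss rises with frequency. A free-space comparison makes the point (free-space loss only, at an arbitrary 30 m whole-home distance and nominal band centres I have assumed; real walls and interference widen the gap further):

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Nominal band centres for the four Wi-Fi "channels" discussed above.
bands = {"HaLow 900 MHz": 0.9e9, "Wi-Fi 2.4 GHz": 2.4e9,
         "Wi-Fi 5 GHz": 5.0e9, "WiGig 60 GHz": 60e9}

for name, f in bands.items():
    print(f"{name:>14}: {fspl_db(30, f):5.1f} dB at 30 m")
```

Every extra 6 dB of loss roughly halves the usable range, so the ~36 dB gap between 900 MHz and 60 GHz in this sketch is why WiGig stays in-room while HaLow can reach across a large house.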

Then the 5 GHz version might serve premium video to other devices around the home, while 2.4 GHz delivers general Internet access. That would leave HaLow to take care of some wearables, sensors and other low power devices that need coverage but only modest bit rates. As it happens, HaLow will outperform all the other contenders for capacity except Bluetooth, with which it will be roughly on a par.

 

HaLow will be embraced by key vendors in the smart home and IoT arena, such as Paris-based SoftAtHome, which already supports the other key wireless protocols in its software platform through its associations with relevant hardware and SoC vendors. SoftAtHome can insulate broadband operators from the underlying protocols so that they do not have to be dedicated followers of the wireless wars.

AirTies is another vendor with a keen interest, as one of the leading providers of Wi-Fi technology for the home, already aiming to deliver the levels of coverage and availability promised by HaLow in the higher 2.4 GHz and 5 GHz bands. It does this by creating a robust mesh from multiple Access Points (APs), to make Wi-Fi work more like a wired point-to-point network while retaining all the flexibility of wireless.

 

All these trends are pointing towards Wi-Fi becoming a complete quad-channel wireless offering enabling operators to be one stop shops for the digital home of the future, as well as being able to address many IoT requirements outside it.

At the same time it is worth bearing in mind that the IoT and its relative, M2M, is a very large canvas, extending to remote outdoor locations, some of which are far more challenging for RF signals than almost any home. In any case, while HaLow may well see off all-comers indoors, it will only be a contender outdoors in areas close to fixed broadband networks. That is why there is so much interest in Heterogeneous Networks (HetNets) combining Wi-Fi with LTE, and also why there are several other emerging wireless protocols for longer distance IoT communications.

One of these is Long Range Wide Area Network (LoRaWAN), a low power wireless networking protocol announced in March 2015, designed for secure two-way communication between low-cost battery-powered embedded devices. Like HaLow it runs at sub-GHz frequencies, but in bands reserved for scientific and industrial applications, optimized for penetrating large structures and subsurface infrastructures within a range of 2 km. LoRaWAN is backed by a group including Cisco and IBM, as well as some leading Telcos like Bouygues Telecom, KPN, SingTel and Swisscom. The focus is particularly on harsh RF environments previously too challenging or expensive to connect, such as mines, underwater and mountainous terrain.

Another well backed contender is Narrowband-LTE (NB-LTE) announced in September 2015 with Nokia, Ericsson and Intel behind it, where the focus is more on long range and power efficient communications to remote embedded sensors on the ground. So it still looks like being a case of horses for courses given the huge diversity of RF environments where IoT and M2M will be deployed, with HaLow a likely winner indoors, but coexisting with others outside.


@nebul2’s 14 reasons why 2015 will be yet another #UHD #IBCShow

Ultra HD or 4K has been a key topic of my pre- and post-IBC blogs for over 5 years. I’ve recently joined the Ultra HD Forum, serving on the communications working group. That’s a big commitment and investment, as I don’t have any large company paying my bills. I’m making it because I believe the next 18 months will see the transition from UHD as the subject of trials for big operators and precursor launches to something no operator can be without. Time to get off the fence. I once wrote that the 3D emperor didn’t have any clothes on; well, the UHD emperor is fully clothed.

Of course much still needs to be achieved before we see mass adoption. I don’t know if HDR and 4K resolution will reach market acceptance one at a time or both together, and yes, I don’t know which HDR specification will succeed. But I know it’s all coming.

Below is a list of 14 key topics ordered by my subjective (this is a blog remember) sense of comfort on each. I start with areas where the roadmap to industrial strength UHD delivery is clear to me and end with those where I’m the most confused.

Note on vocabulary: 4K refers to a screen resolution for next-gen TV, whereas UHD includes that spatial resolution (UHD phase 2 documents even refer to an 8K resolution) but also frame rate, HDR and next generation audio.

So as I wander round IBC this year – or imagine I’m doing that, as I probably won’t have time – I’ll look into the following 14 topics with growing interest.

1. Broadcast networks (DVB)

I doubt I’ll stop by the big satellite booths for example, except of course for free drinks and maybe to glimpse the latest live demos. The Eutelsat, Intelsat or Astras of this world have a pretty clear UHD story to tell. Just like the cableCos, they are the pipe and they are ready, as long as you have what it takes to pay.

2. Studio equipment (cameras etc.)

As a geek, I loved the Canon demos at NAB, both of affordable 4K cameras and their new ultra sensitive low-light capabilities. But I won’t be visiting any of the studio equipment vendors, simply because I don’t believe they are on the critical path for UHD success. The only exception to this is the HDR issues described below.

 3. IP network; CDN and Bandwidth

Bandwidth constrains UHD delivery; it would be stupid to claim otherwise. All I’m saying by putting this issue so high on the list is that the mid-term picture is clear. We know how fast high-speed broadband (over 30 Mbps) is arriving in most markets. In the meantime, early adopters without access can buy themselves a UHD Blu-ray by Christmas this year and use progressive download services. The Ultra HD Alliance has already identified 25 online services, several of which support PDL. Once UHD streams get to the doorstep or the living room, there is still the issue of distributing them around the home. But several vendors like AirTies are addressing that specific issue, so again, even if it isn’t fixed yet, I can see how it will be.
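A quick sanity check on those numbers, using my own assumed bitrates (roughly 15-25 Mbps for an HEVC UHD stream, around 5 Mbps for an AVC HD one; these are illustrative figures, not any operator's), shows that a 30 Mbps line clears the bar, but not by a huge margin:

```python
# Rough headroom check: assumed figures, not measured ones.
broadband_mbps = 30          # "high-speed broadband" threshold cited above
uhd_stream_mbps = 20         # assumed mid-range HEVC UHD bitrate (15-25 Mbps)
hd_stream_mbps = 5           # assumed typical AVC HD bitrate

headroom = broadband_mbps - uhd_stream_mbps
print(f"One UHD stream leaves {headroom} Mbps for everything else")

# Monthly data volume for an OTT household (assumed 2 h of UHD viewing per day):
hours_per_month = 2 * 30
gb_per_month = uhd_stream_mbps / 8 * 3600 * hours_per_month / 1000
print(f"~{gb_per_month:.0f} GB/month of UHD viewing")
```

The second figure is why data caps matter for OTT households: at these assumed rates, two hours a day of UHD viewing can already exceed typical monthly caps.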

 4. Codecs (HEVC)

The angst around NAB this year when V-nova came out with a bang has subsided. It seems now that even if such a disruptive technology does come through in the near-term, it will complement not replace HEVC for UHD delivery.

The codec space dropped from a safe 2 in my list down to 4 with the very recent scares on royalties from the HEVC Advance group, which wants 0.5% of content owners’ and distributors’ gross revenue. Industry old-timers have reassured me that this kind of posturing is normal and that the market will settle down naturally at acceptable rates.

 5. Head-ends (Encoders, Origins, etc.)

I always enjoy demos and discussion on the booths of the likes of Media Excel, Envivio, Harmonic, Elemental or startup BBright and although I’ll try to stop by, I won’t make a priority of them because here again, the mid-term roadmaps seem relatively clear.

I’ve been hearing contradictory feedback on the whole cloud-encoding story that has been sold to us for a couple of years already. My theory – to be checked at IBC – is that encoding in the cloud really does make sense for constantly changing needs and where there is budget. But for T2 operators running on a shoestring – and there are a lot of them – the vendors are still mainly shifting appliances. It’s kind of counterintuitive, because you’d expect the whole cloud concept of mutualizing resources to work better for the smaller guys. I must be missing something here; do ping me with info so I can update this section.

 6. 4K/UHD resolutions

While there is no longer any concern on what the screen resolutions will be, I am a little unclear as to the order in which they will arrive. With heavyweights like Ericsson openly pushing for HDR before 4K, I’m a little concerned that lack of industry agreement on this could confuse the market.

 7. Security for UHD

Content owners and security vendors like Verimatrix have all agreed that better security is required for UHD content. I see no technical issues here – just that if the user experience is adversely affected in any way (remember the early MP3 years), we could see the incentive for illegal file transfer grow, just when legal streaming seems to be taking off at last.

 8. TV sets & STBs

Well into second half of my list, we’re getting into less clear waters.

When it’s the TV set that is doing the UHD decoding, we’re back at the product-cycle issue that has plagued smart TVs. It’s all moving too fast for a TV set that people would still like to keep in the living room for over 5 years.

On the STB side, we’ve seen further consolidation since last year’s IBC. Pace for example is no longer; Cisco is exiting STBs etc. It seems that only players with huge scale will survive. Operators like Swisscom or Orange can make Hardware vendors’ lives harder by commoditizing their hardware using software-only vendors such as SoftAtHome to deliver advanced features.

 9. Frame rates

This is a really simple one but for which consensus is needed. At a 4K screen resolution the eye/brain is more sensitive to artifacts. Will refresh rates standardize at 50Hz or 60Hz? Will we really ever need 120Hz?

It’s clear that doubling a frame rate does not double the required bandwidth, as clever compression techniques come into play. But I haven’t seen a consensus on what the bandwidth implication of a greater frame rate will actually be.
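The raw arithmetic, at least, is uncontroversial: the uncompressed pixel rate scales linearly with frame rate, and it is only inter-frame compression that makes the delivered bitrate grow more slowly; by how much is exactly the open question. Assuming 10-bit 4:2:0 sampling (15 bits per pixel):

```python
# Raw (uncompressed) video data rates at 4K, 10-bit 4:2:0 (= 15 bits/pixel).
width, height = 3840, 2160
bits_per_pixel = 15          # 10 bits/sample with 4:2:0 chroma subsampling

for fps in (50, 60, 120):
    gbps = width * height * bits_per_pixel * fps / 1e9
    print(f"{fps:3d} fps: {gbps:5.1f} Gbit/s uncompressed")
```

So the encoder's input grows from roughly 6 to 15 Gbit/s between 50 and 120 fps; how much of that linear growth survives compression is the unanswered consensus question.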

10. Next Gen Audio

There are only a few contenders out there, and all have compelling solutions. I’m pretty keyed up on DTS’s HeadphoneX streamed with Unified Streaming packagers because I’m helping them write an eBook on the subject. Dolby is, of course, a key player here, but for me it’s not yet clear how multiple solutions will cohabit. Nor is it clear if and when we’ll move from simple channel-based audio to scene-based or object-based audio. Will open-source projects like Ambiophonics play a role, and what about binaural audio?

11. HDR

High Dynamic Range is about better contrast. Also, the brain perceives more detail when contrast is improved, so it’s almost like getting more pixels for free. But the difficulty with HDR, and why it’s near the bottom of my list, is that there are competing specifications. And even once a given specification is adopted, its implementation on a TV set can vary from one CE manufacturer to another. A final reservation I have is the extra power consumption it will entail, which goes against current CE trends.

12. Wide Color Gamut

As HDR brings more contrast to pixels, WCG brings richer and truer colors. Unlike with HDR, the issue isn’t about which spec to follow, as it is already catered for in HEVC, for example. No, it’s more about when to implement it and how the color mapping will be unified across display technologies and vendors.

 13. Work flows

Workflow from production through to display is a sensitive issue because it is heavily dependent on skills and people. So it’s not just a matter of choosing the right technology. To produce live UHD content including HDR, there is still no industry-standard way of setting up a workflow.

 14. UHD-only content

The pressure to recoup investments in HD infrastructure makes the idea of UHD content that is unsuitable for HD downscaling taboo. From a business perspective, most operators consider UHD an extension or add-on rather than something completely new. There is room for a visionary to come and change that.

Compelling UHD content, where the whole screen is in focus (video rather than cinema lenses), gives filmmakers a new artistic dimension to work with. There is enough real estate on screen to offer multiple user experiences.

In the world of sports, a UHD screen could offer a fixed view of a whole football pitch, for example. But if that video were seen on an HD screen, the ball probably wouldn’t be visible. Ads that we have to watch dozens of times could be made more fun in UHD, as there could be different stories going on in different parts of the screen; it would almost be an interactive experience…
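The ball-visibility claim is easy to sanity-check with pixel arithmetic. Using back-of-envelope figures I am assuming here (a ~105 m pitch filling the screen width, a ~22 cm ball), the ball covers only a handful of pixels even in 4K:

```python
# How many pixels does the ball cover if the full pitch fills the screen width?
pitch_width_m = 105      # assumed pitch dimension framed by the fixed camera
ball_diameter_m = 0.22   # assumed football diameter

for name, h_pixels in (("HD", 1920), ("UHD 4K", 3840)):
    ball_px = ball_diameter_m / pitch_width_m * h_pixels
    print(f"{name:>6}: ball is ~{ball_px:.1f} pixels across")
```

Roughly four pixels on HD versus eight on 4K: marginal in UHD, and effectively invisible once downscaled to HD, which is the point made above.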


Operators unhappy over Wi-Fi and unlicensed cellular coexistence plans

Controversy has raged for well over a year now over plans by some mobile network operators (MNOs) to extend their spectrum into unlicensed 5GHz bands currently occupied by Wi-Fi. The arguments have been both commercial and technical, centering on the rights of MNOs to compete with established Wi-Fi networks and at the same time the efficiency or fairness of mechanisms for coexistence between the two.

LTE-U enables 4G/LTE cellular services to be extended into the 5GHz unlicensed bands, which is obviously attractive for MNOs because it gives extra precious spectrum without having to pay for it while making it easier to support high bandwidth applications like premium live video streaming. But the initiative, initially proposed by Qualcomm and Ericsson, has gained some traction within the 3rd Generation Partnership Project (3GPP) primarily because many MNOs want to gain full control of heterogeneous networks combining licensed and unlicensed spectrum, so there is a major commercial force here.

MNOs have expressed frustration over Wi-Fi offload, which is necessary to avoid overload on their networks and give their subscribers the best quality of experience, but means they have less control over end-to-end traffic. Not surprisingly, though, those Telcos with extensive Wi-Fi hotspot networks take a different line. So we find that operators like AT&T and BT, with huge investments in Wi-Fi hotspots but a smaller presence in cellular, are opposed to LTE-U, while Telcos that have not bet so much on Wi-Fi but have major cellular operations support it, including big hitters like Verizon, China Mobile, NTT DoCoMo, Deutsche Telekom and TeliaSonera.

Notably though some of the world’s biggest providers of mobile services are ambivalent about LTE-U, which some of them see as complicating rather than simplifying the drive towards heterogeneous services combining licensed and unlicensed spectrum. The view there is that Wi-Fi is best placed to occupy the unlicensed spectrum with a lot of momentum and investment behind it. The LTE-U camp counter that the technology can carry twice as much data as Wi-Fi in a given amount of 5 GHz spectrum through use of carrier aggregation via LTE-LAA. This was already defined in the LTE standards and enables multiple individual RF carrier frequencies, either in the same or different frequency bands, to be combined to provide a higher overall bit rate.

This may be true as far as it goes, but it is largely irrelevant for users wanting to access broadband services in their homes or public hot spots, according to the Wi-Fi community, a view shared by some MNOs as well. Birdstep, a leading Sweden-based provider of smart mobile data products enabling heterogeneous services combining cellular and Wi-Fi, argues that the story is not just about the wireless domain itself but also the backhaul infrastructure behind it. Any spectral efficiency advantage offered by LTE-U would be more than cancelled out by inherent inefficiencies in the backhaul. By offering access to the world’s broadband infrastructures, Wi-Fi offers greater overall scale and redundancy.

Another Wi-Fi specialist, Turkey-based AirTies, contends that LTE-U is just a spectrum-grabbing bid by MNOs and should be resisted. AirTies has developed mesh and routing technologies designed to overcome the problems encountered by Wi-Fi in the real world, and these are only going to get worse as unlicensed spectrum reaches even higher frequencies. The next generation of Wi-Fi based on the emerging IEEE 802.11ad standard will run in the much higher frequency band of 60 GHz, which will potentially yield a lot more capacity and performance but increase susceptibility to physical obstacles and interference. It will only work with further developments in the sort of intelligent beam-forming, meshing and steering technologies that AirTies has invested in.

It is true that LTE-U proponents have worked hard to mitigate any impact of coexistence on Wi-Fi. In Europe and also Japan they were forced to do so anyway by regulations that required LTE-U to adhere to similar rules over fair access to spectrum as Wi-Fi. These rules insist on incorporation of LBT (Listen Before Talk) into LTE-U, a mechanism originally developed for fixed-line Ethernet networks where there was a shared collision domain (it was called Carrier Sense Multiple Access, or CSMA). Stakeholders not in favor of rapid LTE-U deployment point out that in the old Ethernet days before 10BaseT and switching, CSMA proved inefficient when there were too many devices trying to get onto the same collision domain. Total capacity could drop drastically, and this issue could be reborn in the wireless world.

The European Union specified two options for LBT, one being the DCF/EDCA scheme already adopted in Wi-Fi standards and the other a newer scheme known as Load Based Equipment (LBE), the two differing in the procedure for backing off when traffic is detected in a given channel.

Naturally enough there has been an assumption in the LTE-U camp that any deployments will be safe if they do adhere to the EU’s LBE LBT standard. But this assumption has recently been challenged by CableLabs in a simulation modeling a million transmission attempts on sets of nodes following the EU LBE LBT rules. The EU LBE turned out to scale badly with increased numbers of devices, with growing numbers of collisions. This will only amplify concerns expressed by broadcasters such as Sky, as well as by some major vendors like Cisco with feet in both the Wi-Fi and LTE camps, that LTE-U poses a threat to quality of service for premium video especially.
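A toy Monte-Carlo of slotted listen-before-talk gives a feel for why collisions grow with device count. To be clear, this is a drastic simplification I am sketching, not a reproduction of the CableLabs model: every node draws a random backoff slot from a fixed contention window, and an attempt is wasted whenever two or more nodes pick the same lowest slot.

```python
import random

def collision_rate(n_nodes: int, window: int = 16, trials: int = 20000) -> float:
    """Fraction of trials where >= 2 nodes pick the same minimum backoff slot."""
    collisions = 0
    for _ in range(trials):
        slots = [random.randrange(window) for _ in range(n_nodes)]
        if slots.count(min(slots)) > 1:  # tie for first transmission = collision
            collisions += 1
    return collisions / trials

random.seed(42)
for n in (2, 5, 10, 20):
    print(f"{n:2d} nodes: {collision_rate(n):.0%} of attempts collide")
```

With the window fixed, the collision rate climbs from about 6% with two nodes to around half with twenty. Real Wi-Fi and LTE-LAA backoff windows adapt to load, so treat this only as intuition for the scaling concern, not as a verdict on the EU LBE rules.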

There are no signs yet of the LTE-U camp giving up on their efforts to infiltrate the 5 GHz domain, arguing correctly that by definition unlicensed spectrum is free for all and cannot be owned by any one wireless technology. But there is a strong case for holding off from LTE-U deployments until further extensive tests and simulations have been carried out to assess the impact on capacity and QoS in real life situations.


Google getting it right at last with Android TV

It may still be too early to be sure, but there are signs that with Android TV, Google’s connected video strategy is at last starting to look joined up. There is not yet any killer blow in sight, which could only really come from the content angle, but evidence is mounting that Google now has a clearer and more focused strategy that is winning over TV and device makers as well as app developers. The key lies partly in a much more mature Android-based ecosystem, better geared towards TV than was the case when the abortive first attempt, called Google TV, was launched with much fanfare in 2010.

Then Android was little more than a mobile OS optimized for smartphones and subsequently tablets, with TV supported as a cumbersome add-on that was hard to develop apps for. But the latest version 5.0 codenamed Android Lollipop, first unveiled during the Google I/O developer conference in June 2014, has been revamped for TV with a completely redesigned user interface. This is a significant enhancement based on Google’s own language called Material Design incorporating tools for easy layout of screens with responsive animations, transitions, padding, and depth effects such as lighting and shadow. It comes with new guidelines for developers that make it easier to create a consistent look and feel across the whole Android device constellation, including big smart TVs down through tablets and smartphones to diminutive smartwatch screens. To encourage app creation further, Google sent out developer units, dubbed “ADT-1”, to those that signed up for a test unit at Google I/O 2014.

These nuances were initially missed by many commentators, myself included, at the time of that conference, perhaps partly because the new Lollipop version was still shrouded in a little mystery. My initial reaction to Android TV was therefore quite negative, suggesting it sent confused messages given that Google was also promoting Chromecast and that it offered little more than already existing competitive offerings such as Apple TV, Roku and Amazon Fire TV.

The reason for being more sanguine about Android TV now is not so much that Google has raised its game. If anything it is the opposite, in that the horizons have been narrowed to the confines of an operating platform for TV, but one crucially now aligned with Chromecast as well as with its developer community. What Google has succeeded in doing is striking a balance between encouraging innovation and exercising some control over the environment, with an emphasis on a consistent UI across all devices, which is something its competitors have not quite matched yet. We have already seen the fruits of this approach through a few OEMs such as Razer, which has announced Forge TV, a set top for Android TV that throws in some of its game streaming.

Casting is now at the center of Android TV and pivotal to delivery of content, with the various new boxes, including Google’s own Nexus Player, being the first dedicated hardware units to support it. This does though pinpoint the challenge of persuading consumers to pay extra for the full Android TV experience when they can get Netflix and all the basic content they want from Chromecast. Effectively, then, Chromecast is the entry-level version of Android TV, with the full monty running on set tops as well as smart TVs, including models announced by Sony, Philips and Sharp at CES 2015.

The killer feature though would be premium live content and all that can be said at this stage is that Google has prepared the ground with its dummy app called ‘Live Channels for Android TV’.  It remains to be seen what will be on it and how far this goes beyond the content currently available either via Chromecast or YouTube. But at least Google is much better placed to strike a major blow in the intensifying connected TV wars.


My Ultra High Definition #NAB15

There is always plenty to see at NAB and if I liked Las Vegas, I’d surely come more often. But the last time I was here was in 2001 when the show was dubbed the convergence NAB.

I’m going again this year for one main reason: Ultra HD. Sure 4K has been a key topic for at least the last five NABs, but now is different and here’s why.

Gut feelings aren’t very useful in making business decisions, but sometimes, once all evidence has been considered, that’s all you have to go by. Intuitions can even be dangerous when you don’t have enough information, like in 1993 when I advised my cousin not to join an Internet start-up because I “felt” that the Web was going to stay the realm of geeks and early adopters. Thank god he didn’t heed my stupid advice. I also missed the SMS boat with a strong gut feeling that such a barbaric user experience would never make it mainstream. But those were spot decisions made without enough background knowledge, let alone understanding of what was really going on under the surface, driving user behaviour. Daniel Kahneman, in his excellent book Thinking, Fast and Slow, explains how even experts’ intuitions are mainly bunk anyway.

So I’ll attempt to explain why UHD will be real and will start now, with as much argument and fact as I can muster, and I will own up to what is no more than opinion. To prove that this isn’t just consulting BS (yes, I admit I’m also a consultant), I’m spending a few grand of my own money and taking 5 days off to go to NAB, which represents a major investment for a one-man band. Now is the time when the industry will really launch UHD, and I want to be there. The table below lists the main reasons given by those who advocate waiting for the time being.

| Issue | Description | Facts | Opinion for waiting | Opinion against waiting |
|---|---|---|---|---|
| Content creation, post-production & workflows | 4K content requires powerful hardware and four times the storage space. | Studios already shoot in 4K, as do high-end smartphones. 4K cameras are already used to produce several feeds in many studios. | Cost and lack of standards create a risk of needing to re-invest. | To produce the best possible HD it’s already better to work in UHD then downscale. Be first out of the stalls for UHD deployment. |
| Content availability | Most libraries are not UHD. | Anything shot on 70mm film can be re-mastered. Up-scaling any content improves the HD experience. | It will take time to reach critical mass of native UHD content. | This is also a chicken-and-egg situation: offer and demand will prod each other forwards. Shorter content shelf life means more new content. |
| Colour depth and refresh rates | UHD shows everything better, including flaws. 30 fps looks jittery in UHD. | These issues will be addressed irrespective of UHD; HD needs them too. | UHD hardware may not support future HDR and HFR specs, which can cause legitimate delays. | Improved HD creates awareness of picture quality and fuels desire for UHD. |
| Device readiness | Requires more power to decode. | With Moore’s law still in effect, this problem will disappear shortly. | Impossible to leverage existing hardware in the field. | Smartphones shoot 4K that users will want to watch (Apple’s 5K iMac sets the scene). UHD is a premium feature consumers will pay for. |
| Distribution and bandwidth | Requires 3 to 4 times the HD bandwidth. | Fibre and 4G deployments are in full swing, and HEVC is here. | Volumes required for ABR file storage will explode. | Beyond HEVC, a new wave will come (I’ll keep tabs on the buzzing V-Nova). |
| Screen size and viewing distance | To be viewed from at least 6 feet away, the screen must be at least 55”. | In urban homes, shorter viewing distances make sense. | Huge screen sizes are only popular in some markets. | As long as consumers perceive benefits they will adapt; they always do. |

I won’t dwell in this blog on the benefits of UHD, but unlike with other technical (r)evolutions such as 3D, all content will benefit from UHD. I also see an opportunity for a new kind of video storytelling. High-resolution content with shorter viewing distances lets different parts of the screen tell a different story depending on how and where you watch. Video can become more immersive. Choosing what part of the screen to watch is almost an interactive experience. Whether 3D technology is present or not will become a technical detail.

All the issues discussed above will benefit from branding, standardization and end-to-end interoperability testing, which is why I will be reporting from the launch of the Ultra HD Forum in Vegas. I’ll also look into the UHD Alliance, which has already launched a consumer-facing Web site.

I’ll write a post-NAB blog, so far I intend to meet up with:

  • the Ultra HD Forum gang,
  • The Ultra HD Alliance (if I find them),
  • V-Nova, which boasts UHD streams at HD bitrates,
  • BBright that offers an entry-level UHD play-out system that simplifies trials,
  • Verimatrix, that is launching a new UHD focussed security suite,
  • Sony to get a feel of the latest 4K cameras and TVs,
  • Please comment if you have other suggestions.

You only need to see UHD twice for it to make sense UHDUHD ;o)

[Update: just got back. Rather than a new blog, here is the word cloud of my impressions from walking around the halls and listening to conferences (see my Twitter for more details): wordle 7]