@nebul2's NAB 2016 Journal (UHD, HDR, VR, All-IP)

Las Vegas was again focused on UHD in 2016, at least through my eyes. The four keywords I came away with were: 1) UHD (again), 2) HDR, but also 3) VR and 4) All-IP production. Of course other things like drones were important too, but I'm not a real journalist; I don't know how to write about things I don't know.

NAB parking, day 1

We got in from Europe on the Saturday evening, and this year I was on a budget, so we stayed in an Airbnb apartment with my colleague Marta. It turned out to be just behind the main LVCC parking lot. On the right you can see what the lot looked like on Sunday morning, when you arrive before the show is really underway.

Size and growth of the industry

On the Sunday I sat for a moment through the "Media Technology Business Summit" run by Devoncroft and learned a bit about industry trends:

  • Having started with radio shows, this year's NAB is the 94th annual show, so I suppose in six years we'll have a big bonanza; I wonder if we'll have something like Augmented Reality in 8K by then.
  • Devoncroft sees the global media market as worth $49bn in 2015, with the US media industry having pushed revenue per user to the limit. 3,000 vendors make up their industry panel; the 2009-2015 CAGR was 1.9%, with 2014-2015 OpEx spend at -4.2% and CapEx spend at -4.4%.
  • Despite the OTT craze and the loss of traditional subs, ESPN still gets $7/month from linear subscriptions but only $0.42/month from OTT viewers, so hold onto your hats: linear pay-TV ain't dead quite yet. Beyond sports, Devoncroft argues that even though there is growth, digital revenues are insufficient to replace linear ones. The big issue is how the ad market can transition.
  • 4K and UHD were the third most important topic for respondents to Devoncroft's 2016 Big Broadcast Survey, the results of which will soon be released. But demand for UHD is less about "more pixels" than about "better pixels". So according to Devoncroft, as to Ericsson, the HDR vs. 4K debate is all but over.

Virtual and Augmented Reality

I then popped into an Augmented Reality (AR) conference where Gary Acock and Juan Salvo were discussing how to add live content to the Unreal game engine. AR is seen as bringing the real world into Virtual Reality (VR). Stitching 360° video is still apparently a "pretty unpleasant experience", and French startup VideoStitch was mentioned as one of the key players working on fixing this. Currently, 360° production design is limited by how effectively you can stitch video. But with AR there are also inherent UX limitations, like parallax issues with head movement, or camera movement when there's no head movement. With AR one always needs to know where the head is and how it is positioned, as head movements affect the content being created.

The amount of data to process for VR can be well over 1 TB/hour, so the coming (?) VR/AR revolution needs powerful GPUs and CPUs.
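A quick back-of-the-envelope, taking that 1 TB/hour figure at face value, shows what such a workflow has to sustain:

```python
# What 1 TB/hour of VR footage means as a sustained data rate.
# Illustrative only; the 1 TB/hour figure is the ballpark quoted
# above, not a measured vendor spec.
TB = 10**12  # decimal terabyte

bytes_per_hour = 1 * TB
bits_per_second = bytes_per_hour * 8 / 3600

print(f"{bits_per_second / 1e9:.2f} Gbps sustained")          # ~2.22 Gbps
print(f"{bytes_per_hour / 3600 / 1e6:.0f} MB/s to storage")   # ~278 MB/s
```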

AR, VR and immersive experiences in general are still moving targets in 2016. But neither AR nor VR is isolated from the broadcast experience anymore. Indeed, VR is becoming less of an isolating, lonely experience and more a new way of engaging, a bit like coming to a conference and interacting with social media on a smartphone at the same time. Content is still king, and creating compelling content remains the goal; AR and VR are just more tools. As we still don't have toolsets like an "Adobe for AR/VR", we need to jerry-rig existing tools.

A VR demo that was not at NAB intrigued me. Fraunhofer's Stephan Steglich told me about FAME. It's the simple idea of navigating 360° video with a remote control. Its two key advantages are removing the isolation of having to wear something over your eyes, and moving all the processing to the cloud, allowing for future-proof deployments. It sounded convincing, but I'll wait for a compelling demo before forming an opinion.

Showstoppers

Sennheiser Mic

I had been told great things about the CES Showstoppers being a big event; at my first NAB edition of it, it was a very focused affair where great food and wine seemed to be as attractive to the media as the companies on show.

German manufacturer Sennheiser was showing off its latest MKE440 DSLR microphone, which it says is the first mini-shotgun to capture an HQ stereo sound image in one take. I was more taken by the beautiful design of the prototype VR microphone that goes under a VR camera.

I met up with V-Nova's Fabio Murra, who was showing their two OTT deployments based on their Perseus codec. FastFilmz launched on March 26 in India, offering SVoD to a mobile-only Tamil customer base with a potential of 120m subs. There were 350 titles at launch, and according to V-Nova, Perseus made the business case possible in southern India, where some areas only have 2G, offering 64-128 kbps of bandwidth. The demo I saw was watchable at 120 kbps using 14 fps (though I had to point out the frame rate myself). The Perseus codec is described as "hybrid on top of H.264", with a metadata stream riding on top of the H.264 stream. I'll be looking to dig into this a bit more, as I no longer understand exactly what this means after a heated discussion with several analysts. Content is protected with DRM; I couldn't find out whose.

I only glimpsed the other demo, of a 4K STB using OTT delivery. It was showing Tears of Steel at 4 Mbps and looked fine, but without any wow effect, at least for what was on screen then; or maybe I was just too far away from the small screen.

V-Nova had already announced a contribution deal with Eutelsat and promised another one for the next day (which turned out to be Sky Italia).

Brother

The Japanese company Brother, which I wrongly thought of as just a printer maker (does any Japanese company do only one thing?), was displaying the "Airscouter", a surprising head-mounted monitor designed for camera operators working in difficult positions. You see a 720p image in the corner of one eye. It was a bit disconcerting, and I guess it is limited to some very specific use cases. I felt a bit nauseous with it on my head, but it really does work, and wearing it felt maybe like what Iron Man might feel.

Ultra HD Forum

Monday was taken up with Ultra HD Forum activities for me. We had our own press conference in the morning, and in the afternoon I made a short presentation during the Pilot press conference in Futures Park. I discussed the forum's reason for being, its history, our Plugfest #1, the 2016 Guidelines and the general "work in progress" nature of live UHD.

"Pilot" is the new name for "NAB Labs", which was started in 2012. We were among 30 exhibitors in Futures Park, which aims to promote "edge of the art" concepts that are not yet commercialized. ATSC 3.0 was the star, with 15 companies focusing on that alone. The other exhibits were very diverse, ranging from commercial R&D and government work to academic research. NHK's 8K Super Hi-Vision was prominent as usual, and the Japanese public broadcaster is still scheduled to launch commercially in 2018, "so people can enjoy the 2020 Japanese Olympics" in glorious 8K HDR with HFR.

Security and analytics

Monday night was overbooked, and I chose the Verimatrix media dinner. I had some animated discussions on UHD and the extent to which HDR might be the only big game-changer (I still believe in 4K but am feeling more and more lonely on that front). Tom Munro, the CEO, gave me a great update on the company's strategy and its move towards analytics, which I now understand can be a logical progression for a security vendor: if the financial transactions are precious enough to secure, then private usage data is worthy of the same efforts. More on that in a dedicated blog soon.

The satellite industry is on the edge of a cliff; might UHD save it?

On Tuesday I got myself to the satellite industry day. I have this vision of the industry (at least the broadcast and telecoms parts of it) sitting on the edge of a cliff, wondering when fiber, 5G and delinearization will push it off.

Despite a great lineup, with Caleb Henry of Via Satellite magazine, SES VP of business development Steve Corda, Eutelsat's Markus Fritz, AT&T's Dan Miner and Intelsat's Peter Ostapiuk, the opening panel didn't really give me any new ideas for tackling that problem.

AT&T in particular sees similarities between the move from SD to HD and that from HD to UHD, but Intelsat sobered the audience by asking how the content industry will make money from the upgrade to UHD. SES's Steve Corda made it scarier still, reminding the audience that during the upgrade from SD to HD there was no competition from OTT, as there is now, with most early UHD coming from OTT suppliers.

The satellite industry panel agreed that demand for UHD channels is growing, especially from their cable operator clients, and that the bottleneck is still available content. AT&T's Dan Miner noted that a key change in OTT delivery over the coming 18 months is that US data plans will enable TV Everywhere on cellular networks.

The consensus was that to have a monetizable UHD offering you need a bouquet of at least 2 channels, and ideally at least 5, including sports.

When the panel went round enumerating their live 4K services, I counted about a dozen UHD linear channels, as many demo channels, and a few event-based channels.

One of Viasat's founders, Mark Dankberg, gave an inspirational talk reassuring the audience that the satellite industry's future is safe, at least if they copy Viasat. To him, the merger of AT&T and DirecTV is an indicator that satellite without broadband is no longer viable in the long term. Viasat started in 1986 in defense, and during the 90s it got into VSAT (data networking), just on the B2B side. Dankberg believes high-orbit geostationary is still the way to go (instead of mid- or low-orbit (LEO)) because it's the best way to optimize resources, with thousands of beams. He points out that 95% of demand sits in 15% of the geography, which LEO constellations orbiting the earth can't target. I was enthused by his talk and hoped to get home and write a blog about it, but when I looked through my notes I realized that in the end there wasn't any new information, just the charisma and communicative beliefs of an industry veteran.

TV Middleware on Android

Beenius, the middleware guys from Slovenia that I've written about a few times, caught me in the South Hall, so I went to have a look.

While demonstrating their new version 4.2 core product, Beenius told me that the EPG is dead, but still went ahead and showed me theirs. Navigation is via genres, with favorite channels on top of a carousel that mixes live and VoD, and "Trending" content on the second line. Recommendation currently uses their own algorithms but can be based on ThinkAnalytics.


The company is very Google-centric, although it still has a Linux offering with a hybrid DVB solution. They explained to me how Google Play apps can be controlled by the TV operator, with three different approaches:

  1. Preinstalled apps alongside an open Google Play.
  2. A "walled garden", where the user chooses apps from the operator's list, typically a dozen or so including YouTube, Netflix, etc.
  3. Apps embedded directly into the UI, which is also a closed model.

VoD also benefits from the integrated recommendation engine, and is open to extra info from the Web such as IMDb content.

Beenius hasn't had much interaction with 4K yet, although they say they are ready. As with any competitive TV middleware, you can fling content from screen to screen.

The operator-controlled UI can be updated from a central server, so a new version of the app gets automatically pushed to the STB via Google Play as soon as the app is closed and reopened. Playing in the Google arena has enabled a full-featured app for Android-powered smart TVs; Beenius just needs Google to finally get it right in the living room.

Automatically generated HDR

Ludovic Noblet of French research institute b<>com showed me a tool to up-convert SDR content to HDR. He sees it as a gap-filler for legacy setups; it is already available for offline use, with a real-time version planned for IBC 2016. The current version introduces a latency of just 3 frames and was convincing, even if it didn't carry the amazing wow effect of some native HDR content. He was very secretive about the first customers but seemed very confident.

The pull of social media

On the last day I made a quick stop at Texas Instruments' tiny booth, simply because they had engaged with me on Twitter ;o)

The LMH1219, the 12G-SDI chip shown above, enables SDI cable runs of up to 110m without signal attenuation, instead of the usual 20-30m. Its UltraScale processing equalizes and improves the signal. The TI chip is agnostic to metadata, so it should work fine with HDR, for example.

Another hardware innovation they showed me was a single chip that can work in receive (cable EQ) or drive (TX) mode, making BNC connectors more versatile: a connector needn't be just IN or OUT but can be either. The device isn't available yet, nor does it have a product name; launch is expected in Q1 2017.

Note that I didn't interact with any of the All-IP production vendors; I just noted it as a buzzing theme in conferences and on booth signage.

NAB Day 3

Oh and the Convention Centre car park looked like this from our apartment window by 9:30 am Monday through Wednesday:

That’s all for now folks.

Virtualization approaches final frontiers in the home

Virtualization has been around almost as long as business computing, having been invented by IBM in the 1970s so that "big iron" mainframes could mimic smaller machines for economies of scale. Later, after personal computers arrived, it reached the desktop with products like SoftPC, which allowed Apple Mac computers to run Microsoft's Windows operating system and associated applications.

The scope of virtualization expanded again during the noughties to allow separation of hardware and software outside the data center, in networking equipment such as routers and firewalls, and then finally the TV industry joined the party. Even that was not the end of the story, since virtualization is now beating a path into the home: not just the gateway or set-top box, but right down to the ultimate client, whether a user's PC or even an Internet of Things (IoT) device like a thermostat.

Over time the motivations have evolved subtly: virtualization became less about getting the best value out of a few large computers, with their superior capabilities in areas such as resilience and security, and more about being able to exploit lower-cost, more flexible commodity hardware. Now that virtualization is coming together with the cloud there is another dimension, which is to enable much greater flexibility over where both hardware and software are deployed.

This shift to virtualization around the cloud has been aided by major standardization efforts, especially the open source initiative OpenFlow, which defines the interface between the control and forwarding layers of an SDN (Software Defined Network) architecture. SDN enables traditional networking functions, notably routing from node to node across IP networks, to be split between packet forwarding, which can be done locally on commodity hardware, and the higher level control logic, which can run remotely somewhere in the cloud if desired. OpenFlow then enables a physical device in the home, such as a gateway, to be “bridged” to its virtual counterpart within the network.
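To make that split concrete, here is a toy sketch of the idea; the class names are hypothetical, and this models the concept rather than the actual OpenFlow wire protocol:

```python
# Toy model of the SDN split described above: a dumb forwarding element
# (e.g. a home gateway) holds a flow table, while all decision logic
# lives in a controller that could run remotely in the cloud.

class Controller:
    """Control plane: higher-level routing logic, location-independent."""
    def decide_route(self, dst: str) -> int:
        # Placeholder policy: keep LAN traffic local, send the rest upstream.
        return 1 if dst.startswith("192.168.") else 0

class ForwardingElement:
    """Data plane: cheap commodity hardware doing per-packet forwarding."""
    def __init__(self, controller: Controller):
        self.flow_table: dict[str, int] = {}   # destination -> output port
        self.controller = controller

    def handle_packet(self, dst: str, payload: bytes) -> None:
        if dst not in self.flow_table:
            # Unknown flow: punt the decision to the control plane once.
            self.flow_table[dst] = self.controller.decide_route(dst)
        print(f"forwarding packet for {dst} via port {self.flow_table[dst]}")

gw = ForwardingElement(Controller())
gw.handle_packet("192.168.1.20", b"...")  # decided by the controller
gw.handle_packet("192.168.1.20", b"...")  # then served from the flow table
```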

The key point here is that not all home gateway functions should be hived off to the cloud, since for example sensitive personal data may be best stored at home perhaps on a NAS (Network Attached Storage) device. It may also be that some processes will run more effectively locally for performance or security reasons, including some associated with the IoT. Virtualization combined with the cloud via OpenFlow allows this flexibility such that functions as well as underlying hardware can be located optimally for given services without incurring a cost penalty.

Just as IBM broke the ground for virtualization in the data center, we are now seeing virtualization reach into the home. Orange founded the French software company SoftAtHome in 2007 so that it could deploy hardware-independent home gateways. Other vendors have since joined the fray, with Alcatel-Lucent (now Nokia) among the leaders with its vRGW (virtualized Residential Gateway) portfolio. Nokia, like SoftAtHome, argues that with its products operators can turn up new and innovative services faster while reducing CAPEX and OPEX for existing and new services. Updates can be applied centrally, without replacing hardware or visiting homes, as has been common practice in the data center for some years.

Not surprisingly, then, some technology vendors have come into the virtualized home gateway area from the enterprise arena. One of these is Japanese IT giant NEC with its networking software subsidiary NetCracker, which took Austrian incumbent Telekom Austria through an in-depth trial of virtualized customer premises equipment (vCPE). The trial integrated SDN technology with virtual network functions (VNFs) through a common service and network orchestration platform that also involved technology from other vendors. The telco cited as a key benefit the ability to have a single point of delivery for home media and entertainment content.

Now virtualization is approaching its next frontier, the IoT arena, where the motivation shifts yet again. One challenge for IoT is to be able to configure generic devices for a range of applications, rather than having to make dedicated hardware for each one. This is again about using off-the-shelf hardware for a range of services, but this time the commoditization must occur down at the chip level. This calls for embedded virtualization, so that small single-chip devices such as sensors can be remotely programmed and repurposed in the field. Apart from flexibility and cost reduction, embedded virtualization confers greater security and real-time performance, since operations are executed within a single SoC (System on Chip). Even this is not entirely new, since embedded virtualization has already emerged in sectors such as the automotive industry, where again there is a need for field upgradeability, given that vehicles as a whole now have a longer life cycle than many of their underlying software-based components.

The real challenge for broadband operators will be to capitalize on end-to-end virtualization extending across the home network, which presents an opportunity for key vendors like Nokia and SoftAtHome to smooth the path.

Measurement key to monetizing mobile video

Measuring mobile video audiences and associated ad engagement is one of the greatest challenges facing the pay-TV industry, with big rewards for getting it right. Mobile video has surged over the last year, with phones and tablets accounting for 46 per cent of all online viewing globally during Q4 2016, up from 34 per cent a year earlier, according to video technology vendor Ooyala. Ad spending is moving with the eyeballs, and in the UK, for example, more of it will go to mobile than to mainstream TV for the first time this year: £4.58 billion ($7 billion) against £4.18 billion ($6.39 billion), according to eMarketer.

While some pay-TV operators may have reasonable visibility over viewing on desktops, mobile devices raise complexity to another dimension. On desktops, access to web sites and services is almost all via browsers, but on mobiles browsers account for only a minority of viewing. It is true that most web sites are accessed from mobiles via the browser too, since individual users obviously only have room for a certain number of apps on their devices. But apps account for the great majority of time spent on mobiles, and for most traffic, because users tend to hang out in just a few places, and those places are accessed via apps rather than the browser, including the likes of Facebook, Google Maps and WeChat. However, an interesting and relevant trend for operators during 2016, highlighted by analyst group Forrester, is that users are increasingly turning to aggregation apps to access the content they want.

When access is predominantly via a browser, as on the desktop PC, cookies can be used to track viewing activity and measure ad engagement. But cookies do not work well in the mobile world, because activity is partitioned between the mobile browser and the various apps, which are isolated from each other via sandboxing, a fundamental property of both dominant mobile OSs, Android and Apple iOS. Web sites accessed within apps open via dedicated custom browsers, which means they cannot interact with persistent cookies on the device, precluding the use of proven desktop measurement tools. On iOS devices the situation is just as bad even for sites accessed via the mobile browser, because Apple prohibits the use of third-party cookies.

There are also higher level challenges for mobile TV advertising such as defining how long people should watch an ad for it to count as having been viewed, given that attention spans are shorter on small screens. The situation is similar for the actual TV content, where the value of mobile viewing can depend on context, being particularly high when there is synergy with the big screen for example to resume watching something started earlier.

The overall challenge then is to integrate audience measurement and analytics across all screens including mobile to deliver consistent information that takes account of differences in context and engagement across the different platforms. There are now plenty of tools available for tracking activity on the mobile side, but integrating them within a coherent end to end measurement and analytics system is highly complex. Some big operators are attempting to do this in-house but increasingly even they are turning to specialist TV audience companies to enable the integration.

One example is UK-based TV analytics firm Genius Digital, which offers two services that can be combined or stand alone. The first is Real Time Data Collection, for reporting viewing data across all devices; it is based on multiscreen libraries that can be embedded into mobile or web applications to enable monitoring of video consumption, profile management, and performance and quality management on JavaScript, iOS and Android devices. The second is the Multiscreen Data Service (MDS), designed to extract viewing data from apps, even those from third parties. A key benefit of this approach lies in marrying viewing information from these different apps, each of which will normally use different metrics, to provide consistent information about engagement with channels or specific programs for integration with traditional set-top box return path data.
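To illustrate the kind of plumbing this "marrying" involves, here is a hypothetical sketch; the sources, field names and schema are invented for illustration and are not Genius Digital's actual API:

```python
# Sketch of the normalization problem described above: different apps
# report viewing with different metrics, and an analytics backend maps
# them onto one common schema before joining with set-top box
# return-path data. All names here are hypothetical.

def normalize(event: dict, source: str) -> dict:
    """Map a source-specific playback event to a common schema."""
    if source == "app_a":          # reports absolute position in seconds
        watched = event["position_s"]
    elif source == "app_b":        # reports percentage of asset watched
        watched = event["pct"] / 100 * event["duration_s"]
    else:
        raise ValueError(f"unknown source {source}")
    return {
        "device_id": event["device"],
        "content_id": event["asset"],
        "seconds_watched": round(watched),
    }

print(normalize({"device": "d1", "asset": "news-2100", "position_s": 732},
                "app_a"))
print(normalize({"device": "d2", "asset": "news-2100", "pct": 40,
                 "duration_s": 1800}, "app_b"))
```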

Another TV analytics company, TVbeat, also UK-based, has moved in a similar direction, in this case through a partnership with the dedicated TV app company Metrological. This has enabled TVbeat to meld set-top data with mobile device return path and app consumption information from Metrological's Application Platform.

Such developments ease the pain of mobile audience measurement for pay-TV operators, and we expect more of those that have previously relied solely on in-house development to at least consider working with one of the specialist analytics companies, which are in a better position to aggregate data from many sources. With mobiles accounting for a rapidly increasing proportion of both viewing and ad budgets, operators need to fold mobile into their existing actionable data analytics.

The State of #HDR in Broadcast and OTT – CES 2016 update

By Yoeri Geutskens

This article was first published in December 2015 but has been updated post-CES 2016 (with corrections on Dolby Vision, the UHD Alliance's "Ultra HD Premium" specification and the merging of the Technicolor and Philips HDR technologies).

A lot has been written about HDR video lately, and from all of this perhaps only one thing becomes truly clear – that there appear to be various standards to choose from. What’s going on in this area in terms of technologies and standards? Before looking into that, let’s take a step back and look at what HDR video is and what’s the benefit of it.

Since 2013, Ultra HD or UHD has emerged as a major new consumer TV development. UHD, often also referred to as '4K', has a resolution of 3,840 x 2,160 – twice the horizontal and twice the vertical resolution of 1080p HDTV, so four times the pixels. UHD has been pushed above all by TV manufacturers looking for new ways to entice consumers to buy new TV sets. To appreciate the increased resolution of UHD one needs a larger screen or a smaller viewing distance, but it serves the trend towards ever larger TV sizes.

While sales of UHD TV sets are taking off quite nicely, the rest of the value chain isn't following as fast. Many involved feel the increased spatial resolution alone is not enough to justify the required investments in production equipment. Several other technologies promising further enhanced video are around the corner, however. They are:

  • High Dynamic Range or HDR
  • Deep Color Resolution: 10 or 12 bits per subpixel
  • Wide Color Gamut or WCG
  • High Frame Rate or HFR: 100 or 120 frames per second (fps)

As for audio, a transition from conventional (matrixed or discrete) surround sound to object-based audio is envisaged for the next generation of TV.

Of these technologies, the first three are best attainable in the short term. They are also interrelated.

So what does HDR do? Although it’s using rather different techniques, HDR video is often likened to HDR photography as their aims are similar: to capture and reproduce scenes with a greater dynamic range than traditional technology can, in order to offer a more true-to-life experience. With HDR, more detail is visible in images that would otherwise look either overexposed, showing too little detail in bright areas, or underexposed, showing too little detail in dark areas.

HDR video is typically combined with a feature called Wide Color Gamut or WCG. Traditional HDTVs use a color space referred to as Rec.709, which was defined for the first generations of HDTVs, which used CRT displays. Current flat panel display technologies like LCD and OLED can produce a far wider range of colors and greater luminance, measured in 'nits'. A nit is a unit of brightness equal to one candela per square meter (cd/m²). To accommodate this greater color gamut, the Rec.2020 color space was defined. No commercial display can fully cover this new color space, but it provides room for growth. The current state of the art for display color gamut in the market is a color space called DCI-P3, which is smaller than Rec.2020 but substantially larger than Rec.709.
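As a rough quantitative illustration, the three gamuts can be compared by the area of their RGB primary triangles in CIE 1931 xy space (primary coordinates from the respective standards; xy area is a crude proxy that understates perceptual differences):

```python
# Compare Rec.709, DCI-P3 and Rec.2020 by the area of the triangle
# their red/green/blue primaries span in the CIE 1931 xy plane.

def triangle_area(p):
    (x1, y1), (x2, y2), (x3, y3) = p
    return abs(x1*(y2 - y3) + x2*(y3 - y1) + x3*(y1 - y2)) / 2

primaries = {                # (red, green, blue) as CIE xy coordinates
    "Rec.709":  [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)],
    "DCI-P3":   [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)],
    "Rec.2020": [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)],
}

ref = triangle_area(primaries["Rec.709"])
for name, p in primaries.items():
    print(f"{name:9s} {triangle_area(p)/ref:.2f}x the Rec.709 area")
# Rec.2020 covers roughly 1.9x the xy area of Rec.709; P3 (~1.4x) sits
# in between.
```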

To avoid color banding issues that could otherwise occur with this greater color gamut, HDR/WCG video typically uses a greater sampling resolution of 10 or 12 bits per subpixel (R, G and B) instead of the conventional 8 bits, so 30 or 36 bits per pixel rather than 24.
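The arithmetic behind those figures:

```python
# Levels per subpixel and total bits per pixel (three RGB channels)
# for the bit depths discussed above.
for bits in (8, 10, 12):
    print(f"{bits}-bit: {2**bits:5d} levels per subpixel, "
          f"{3 * bits} bits per pixel")
# 8-bit: 256 levels (24 bpp); 10-bit: 1024 (30 bpp); 12-bit: 4096 (36 bpp)
```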


Color/luminance volume: BT.2020 (10,000 nits) versus BT.709 (100 nits); Yxy
Image credit: Sony

The problem with HDR isn’t so much on the capture side nor on the rendering side – current professional digital cameras can handle a greater dynamic range and current displays can produce a greater contrast than the content chain in between can handle. It’s the standards for encoding, storage, transmission and everything else that needs to happen in between that are too constrained to support HDR.

So what is being done about this? A lot, in fact. Let's look at the technologies first. A handful of organizations have proposed technologies for describing HDR signals for capture, storage, transmission and reproduction: Dolby, SMPTE, Technicolor, Philips, and the BBC together with NHK. Around the time of CES 2016, Technicolor and Philips announced that they are merging their HDR technologies.

Dolby's HDR technology is branded Dolby Vision. One of its key elements is the Perceptual Quantizer (PQ) EOTF, which has been standardized by SMPTE as ST 2084 (see box: SMPTE HDR Standards) and mandated by the Blu-ray Disc Association for the new Ultra HD Blu-ray format. ST 2084 can actually carry more picture information than today's TVs can display, but because the information is there, content has the potential to look better as new, improved display technologies come to market. Dolby Vision and HDR10 use the same SMPTE ST 2084 standard, making it easy for studios and content producers to master once and deliver to either HDR10 or, with the addition of dynamic metadata, Dolby Vision. The dynamic metadata is not an absolute necessity, but using it guarantees the best results when played back on a Dolby Vision-enabled TV. HDR10 uses static metadata, which ensures the content will still look good – far better than Standard Dynamic Range (SDR). Even with no metadata at all, SMPTE ST 2084 can work at an acceptable level, just as other proposed EOTFs without metadata do.
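For reference, the ST 2084 PQ curve is compact enough to sketch; this minimal version uses the constants from the published standard:

```python
# Minimal implementation of the SMPTE ST 2084 (PQ) EOTF: maps a
# normalized code value E' in [0,1] to absolute display luminance in
# cd/m^2 (nits), up to the 10,000-nit ceiling.
m1 = 2610 / 16384            # 0.1593017578125
m2 = 2523 / 4096 * 128       # 78.84375
c1 = 3424 / 4096             # 0.8359375
c2 = 2413 / 4096 * 32        # 18.8515625
c3 = 2392 / 4096 * 32        # 18.6875

def pq_eotf(e):
    """ST 2084 EOTF: normalized signal -> luminance in nits."""
    p = e ** (1 / m2)
    return 10000 * (max(p - c1, 0) / (c2 - c3 * p)) ** (1 / m1)

for code in (0.0, 0.5, 0.75, 1.0):
    print(f"E'={code:4.2f} -> {pq_eotf(code):8.1f} nits")
# Half the code range maps to only ~92 nits: PQ spends most of its
# codes on darker tones, where the eye is most sensitive.
```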

For live broadcast, Dolby supports both single- and dual-layer 10-bit distribution methods and has come up with a single workflow that can simultaneously deliver an HDR signal to the latest generation of TVs and future sets, and a derived SDR signal to support all legacy TVs. The signal can be encoded in HEVC or AVC. Not requiring dual workflows will be very appealing to everyone involved in content production, and the system is flexible enough to let the broadcaster choose where to derive the SDR signal. If it's done at the head-end, they can choose to simply simulcast it as another channel, or convert the signal to a dual-layer single-stream signal at the distribution encoder for transmission. Additionally, the HDR-to-SDR conversion can be built into set-top boxes for maximum flexibility without compromising the SDR or HDR signals. Moreover, the SDR distribution signal that's derived from the HDR original using Dolby's content mapping unit (CMU) is significantly better in terms of detail and color than one captured natively in SDR, as Dolby demonstrated side by side at IBC 2015. The metadata is only produced and multiplexed into the stream at the point of transmission, just before or in the final encoder – not in the baseband workflow. Dolby uses 12-bit color depth for cinematic Dolby Vision content to avoid any noticeable banding, but the format is actually agnostic to color depth and works with 10-bit video as well. In fact, Dolby recommends 10-bit color depth for broadcast.

High-level overview of Dolby Vision dual-layer transmission for OTT VOD; other schematics apply for OTT live, broadcast, etc.
Image credit: Dolby Labs Dolby Vision white paper

Technicolor has developed two HDR technologies. The first takes a 10-bit HDR video signal from a camera and delivers a video signal that is compatible with SDR as well as HDR displays. The extra information needed for HDR rendering is encoded in such a way that it builds on top of the 8-bit SDR signal, and SDR devices simply ignore the extra data.


Image credit: Technicolor

The second technology is called Intelligent Tone Management and offers a method to ‘upscale’ SDR material to HDR, using the extra dynamic range that current-day capture devices can provide but traditional encoding cannot handle, and providing enhanced color grading tools to colorists. While it remains to be seen how effective and acceptable the results are going to be, this technique has the potential to greatly expand the amount of available HDR content.

Having a single signal that delivers SDR to legacy TV sets (HD or UHD) and HDR to the new crop of TVs is also the objective of what the BBC's R&D department and Japan's public broadcaster NHK are working on together. It's called Hybrid Log-Gamma or HLG. HLG's premise is an attractive one: a single video signal that renders SDR on legacy displays but HDR on displays that can handle it. HLG, the BBC and NHK say, is compatible with existing 10-bit production workflows and can be distributed using a single HEVC Main 10 Profile bitstream.

Depending on whom you ask, HLG is either the best thing since sliced bread or a clever compromise that accommodates SDR as well as HDR displays but gives suboptimal results and looks great on neither. The Hybrid Log-Gamma name refers to the fact that the OETF is a hybrid, applying a conventional gamma curve to low-light signals and a logarithmic curve to the high tones.
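A minimal sketch of the HLG OETF (using the constants published by the BBC/NHK and later carried into ITU-R BT.2100) shows the hybrid explicitly:

```python
# HLG OETF: scene-linear signal E in [0,1] -> non-linear signal E' in
# [0,1]. Below the breakpoint it is a square-root (gamma) segment;
# above it, logarithmic -- hence "hybrid log-gamma".
import math

a = 0.17883277
b = 1 - 4 * a                    # 0.28466892
c = 0.5 - a * math.log(4 * a)    # 0.55991073

def hlg_oetf(e):
    """Scene-linear light -> HLG signal value."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return a * math.log(12 * e - b) + c

for e in (0.0, 1/12, 0.25, 1.0):
    print(f"E={e:6.4f} -> E'={hlg_oetf(e):.4f}")
# The two segments meet at the breakpoint E=1/12, where E'=0.5.
```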


Hybrid Log Gamma and SDR OETFs; image credit: T. Borer and A. Cotton, BBC R&D

Transfer functions:

  • OETF: function that maps scene luminance to digital code value; used in HDR camera;
  • EOTF: function that maps digital code value to displayed luminance; used in HDR display;
  • OOTF: function that maps scene luminance to displayed luminance; a function of the OETF and EOTF in a chain. Because of the non-linear nature of both OETF and EOTF, the chain’s OOTF also has a non-linear character.
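Spelled out as a composition (a notational aside, using the definitions above):

    OOTF = EOTF ∘ OETF,  i.e.  L_display = EOTF(OETF(L_scene))

so the design question discussed next is which side of the chain absorbs the OOTF.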


Image credit: T. Borer and A. Cotton, BBC R&D

The EOTF for Mastering Reference Displays, conceived by Dolby and standardized by SMPTE as ST 2084, is 'display-referred'. With this approach, the OOTF is part of the OETF, requiring implicit or explicit metadata.

Hybrid Log Gamma (HLG), proposed by BBC and NHK, is a 'scene-referred' system which means the OOTF is part of the EOTF. HLG does not require mastering metadata so the signal is display-independent and can be displayed unprocessed on an SDR screen.

The reasoning is simple: bandwidth is scarce, especially for terrestrial broadcasting but also for satellite and even cable, so transmitting the signal twice in parallel, in SDR and HDR, is not an attractive option. In fact, most broadcasters are far more interested in adding HDR to 1080p HD channels than in launching UHD channels, for exactly the same reason. Adding HDR is estimated to consume at most 20% extra bandwidth, whereas a UHD channel gobbles up the bandwidth of four HD channels. It's probably no coincidence that HLG has been developed by two broadcast companies that have historically invested a lot in R&D. Note, however, that the claimed backwards compatibility of HLG with SDR displays only applies to displays working with the Rec.2020 color space, i.e. Wide Color Gamut. This more or less makes its main benefit worthless.

ARIB, the Japanese organization that is the equivalent of the DVB in Europe and ATSC in North America, has standardized on HLG for UHD HDR broadcasts.

The DVB Project meanwhile has recently announced that UHD-1 Phase 2 will actually include a profile that adds HDR to 1080p HD video – a move advocated by Ericsson and supported by many broadcasters. Don't expect CE manufacturers to start producing HDTVs with HDR, however. Such innovations are likely to end up only in the UHD TV category, where the growth is and where any innovation other than cost reduction takes place.

This means consumers will need an HDR UHD TV to watch HD broadcasts with HDR. Owners of such TV sets will be confronted with a mixture of qualities – plain HD, HD with HDR, plain UHD and UHD with HDR (and WCG) – much as HDTV owners may watch a mix of SD and HD television, only with more variations.

The SMPTE is one of the foremost standardization bodies active in developing official standards for the proposed HDR technologies. See box ‘SMPTE HDR standards’.

SMPTE HDR Standards

ST 2084:2014 - High Dynamic Range EOTF of Mastering Reference Displays

  • defines 'display referred' EOTF curve with absolute luminance values based on human visual model
  • called Perceptual Quantizer (PQ)

ST 2086:2014 - Mastering Display Color Volume Metadata supporting High Luminance and Wide Color Gamut images

  • specifies mastering display primaries, white point and min/max luminance

Draft ST 2094:201x - Content-dependent Metadata for Color Volume Transformation of High-Luminance and Wide Color Gamut images

  • specifies dynamic metadata used in the color volume transformation of source content mastered with HDR and/or WCG imagery, when such content is rendered for presentation on a display having a smaller color volume

One other such body is the Blu-ray Disc Association (BDA). Although physical media have been losing popularity with consumers lately, few people are blessed with a broadband connection fast enough to handle proper Ultra HD video streaming, with or without HDR. Netflix requires a sustained average bitrate of at least 15 Mbps for UHD watching but recommends at least 25 Mbps. The new Ultra HD Blu-ray standard meanwhile offers up to 128 Mbps peak bit rate. Of course one can compress Ultra HD signals further, but the resulting quality loss would defy the entire purpose of Ultra High Definition.
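Translating those bitrates into data volumes makes the gap plain:

```python
# Data volume per hour of viewing at the bitrates quoted above.
def gb_per_hour(mbps):
    return mbps * 1e6 / 8 * 3600 / 1e9   # bits/s -> bytes/s -> GB per hour

for label, mbps in [("Netflix UHD minimum", 15),
                    ("Netflix UHD recommended", 25),
                    ("UHD Blu-ray peak", 128)]:
    print(f"{label:24s} {mbps:3d} Mbps = {gb_per_hour(mbps):5.1f} GB/hour")
# 15 Mbps ~ 6.8 GB/hour; 25 Mbps ~ 11.3 GB/hour; 128 Mbps ~ 57.6 GB/hour
```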

Ultra HD Blu-ray may be somewhat late to the market, with some SVOD streaming services having beaten it there, but the BDA deserves praise for not rushing the new standard to launch without HDR support. Had they done that, the format might very well have been declared dead on arrival. The complication, of course, was that there was no single agreed-upon standard for HDR yet. The BDA has settled on the HDR10 Media Profile (see box) as mandatory for players and discs, with Dolby Vision and Philips' HDR format optional for both players and discs.

HDR10 Media Profile

  • EOTF: SMPTE ST 2084
  • Color sub-sampling: 4:2:0 (for compressed video sources)
  • Bit depth: 10 bit
  • Color primaries: ITU-R BT.2020
  • Metadata: SMPTE ST 2086, MaxFall (Maximum Frame Average Light Level), MaxCLL (Maximum Content Light Level)

Referenced by:

  1. Ultra HD Blu-ray spec (Blu-Ray Disc Association)
  2. HDR-compatible display spec (CTA; former CEA)

UHD Alliance 'Ultra HD Premium' definition

Display:
  • Image resolution: 3840×2160
  • Color bit depth: 10-bit signal
  • Color palette: signal input in BT.2020 color representation; display reproduction of more than 90% of the P3 color space
  • High Dynamic Range: SMPTE ST 2084 EOTF, plus a combination of peak brightness and black level that is either more than 1,000 nits peak with less than 0.05 nits black level, or more than 540 nits peak with less than 0.0005 nits black level

Content:
  • Image resolution: 3840×2160
  • Color bit depth: minimum 10-bit signal depth
  • Color palette: BT.2020 color representation
  • High Dynamic Range: SMPTE ST 2084 EOTF; mastering displays recommended to exceed 1,000 nits peak brightness, with black level below 0.03 nits and at least the DCI-P3 color space

Distribution:
  • Image resolution: 3840×2160
  • Color bit depth: minimum 10-bit signal depth
  • Color palette: BT.2020 color representation
  • High Dynamic Range: SMPTE ST 2084 EOTF

The UHD Alliance mostly revolves around Hollywood movie studios and is focused on content creation and playback: guidelines for CE devices, branding and consumer experience. At CES 2016, the UHDA announced a set of norms for displays, content and 'distribution' to deliver UHD with HDR, together with an associated logo program. The norm is called 'Ultra HD Premium' (see box). Is it a standard? Arguably, yes. Does it put an end to any potential confusion over different HDR technologies? Not quite – while the new norm guarantees a certain level of dynamic range, it does not specify any particular HDR technology, so all options are still open.

The Ultra HD Forum meanwhile focuses on the end-to-end content delivery chain including production workflow and distribution infrastructure.

In broadcasting we've got ATSC in North America defining how UHD and HDR should be broadcast over the air with the upcoming ATSC 3.0 standard (also used in South Korea) and transmitted via cable, where the SCTE comes into play as well. Japan has ARIB (see above), and for most of the rest of the world, including Europe, there's the DVB Project, whose office is hosted by the EBU, specifying how UHD and HDR should fit into the DVB standards that govern terrestrial, satellite and cable distribution.

In recent news, the European Telecommunications Standards Institute (ETSI) has launched a new Industry Specification Group (ISG) “to work on a standardized solution to define a scalable and flexible decoding system for consumer electronics devices from UltraHD TVs to smartphones” which will look at UHD, HDR and WCG. Founding members include telcos BT and Telefónica. The former already operates a UHD IPTV service; the latter is about to launch one.

Then there are the CTA (Consumer Technology Association, formerly known as CEA) in the US and DigitalEurope, dealing with guidelines and certification programs for consumer products. What specifications does a product have to support to qualify for 'Ultra HD' branding? Both have formulated answers to that question. It has not been a coordinated effort, but fortunately they turn out to almost agree on the specs. Unity on a logo was not as feasible, sadly. The UHD Alliance, meanwhile, settled on its own definition of Ultra HD just ahead of CES on January 4th, 2016, as described above. One can only hope this will not lead to yet more confusion (and more logos), but I'm not optimistic.

By now the CTA has also issued guidelines for HDR; DigitalEurope hasn't yet. It would be great for consumers, retailers and manufacturers alike if the two organizations could agree on a definition as well as a logo this time.

Ultra HD display definitions compared

CTA definition:
  • Resolution: at least 3840×2160
  • Aspect ratio: 16:9 or wider
  • Frame rate: supports 24p, 30p and 60p
  • Chroma subsampling: not specified
  • Color bit depth: minimum 8-bit
  • Colorimetry: BT.709 color space; may support wider colorimetry standards
  • Upconversion: capable of upscaling HD to UHD
  • Digital input: one or more HDMI inputs supporting HDCP 2.2 or equivalent content protection
  • Audio: not specified

DigitalEurope definition:
  • Resolution: at least 3840×2160
  • Aspect ratio: 16:9
  • Frame rate: 24p, 25p, 30p, 50p, 60p
  • Chroma subsampling: 4:2:0 for 50p and 60p; 4:2:2 for 24p, 25p and 30p
  • Color bit depth: minimum 8-bit
  • Colorimetry: minimum BT.709
  • Upconversion: not specified
  • Digital input: HDMI with HDCP 2.2
  • Audio: PCM 2.0 stereo

Each organization also has its own Ultra HD logo.

CTA definition of HDR-compatible:

A TV, monitor or projector may be referred to as an HDR Compatible Display if it meets the following minimum attributes:

  1. Includes at least one interface that supports HDR signaling as defined in CEA-861-F, as extended by CEA-861.3.
  2. Receives and processes static HDR metadata compliant with CEA-861.3 for uncompressed video.
  3. Receives and processes HDR10 Media Profile* from IP, HDMI or other video delivery sources. Additionally, other media profiles may be supported.
  4. Applies an appropriate Electro-Optical Transfer Function (EOTF), before rendering the image.

CEA-861.3 references SMPTE ST 2084 and ST 2086.

What are consumers, broadcasters, TV manufacturers, technology developers and standardization bodies to do right now?

I wouldn't want to hold any consumer back, but I couldn't blame them if they decided to postpone purchasing a new TV a little longer, until the standards for HDR have been nailed down. Similarly, for broadcasters and production companies it seems only prudent to postpone big investments in HDR production equipment and workflows.

For all parties involved in technology development and standardization, my advice would be as follows. It's inevitable that we're going to see a mixture of TV sets with varying capabilities in the market – SDR HDTVs, SDR UHD TVs and HDR UHD TVs – and that's not even taking into consideration near-future extensions like HFR.

Simply ignoring some of these segments would be a very unwise choice: cutting off SDR UHD TVs from a steady flow of UHD content, for instance, would alienate the early adopters who have already bought into UHD TV. The CE industry needs to cherish these consumers. It's bad enough that those Brits who bought a UHD TV in 2014 cannot enjoy BT Sport's Ultra HD service today, because the associated set-top box requires HDCP 2.2, which their TVs don't support.

It is not realistic to cater to each of these segments with separate channels either. Even if the workflows can be combined, no broadcaster wants to spend the bandwidth to transmit the same channel in SDR HD and HDR HD, plus potentially SDR UHD and HDR UHD.

Having separate channels for HD and UHD is inevitable but for HDR to succeed it’s essential for everyone in the production and delivery chain that the HDR signal be an extension to the broadcast SDR signal and the SDR signal be compatible with legacy Rec.709 TV sets.

Innovations like Ultra HD resolution, High Dynamic Range, Wide Color Gamut and High Frame Rate will not come all at once with a big bang but (apart from HDR and WCG which go together) one at a time, leading to a fragmented installed base. This is why compatibility and ‘graceful degradation’ are so important: it’s impossible to cater to all segments individually.

What is needed now is alignment and clarity in this apparent chaos of SDOs (Standards Defining Organizations). Let’s group them along the value chain:

Domain → SDOs along the value chain:
  • Production: SMPTE, ITU-R
  • Compression: MPEG, VCEG
  • Broadcast: ATSC, SCTE, EBU/DVB, ARIB, SARFT
  • Telecom: ETSI
  • Media/Streaming: BDA, DECE (UV), MovieLabs
  • CE: CTA, DigitalEurope, JEITA

Within each segment, the SDOs need to align because having different standards for the same thing is counterproductive. It may be fine to have different standards applied, for instance if broadcasting uses a different HDR format than packaged media; after all, they have differing requirements. Along the chain, HDR standards do not need to be identical but they have to be compatible. Hopefully organizations like the Ultra HD Forum can facilitate and coordinate this between the segments of the chain.

If the various standardization organizations can figure out what HDR flavor to use in which case and agree on this, the future is looking very bright indeed.


Yoeri Geutskens has worked in consumer electronics for more than 15 years. He writes about high-resolution audio and video. You can follow his Ultra HD and 4K coverage on Twitter at @UHD4k.

User experience gain must trump resource consumption for UHD success

This opinion blog is about three things that could derail UHD if the user experience gain doesn't trump them.

OK, so you can already tell that I'm biased. I believe in UHD and its five components that will change the user experience:

  1. Higher resolution (4K)
  2. High Dynamic Range (the ability to see details in both brighter whites and darker blacks simultaneously)
  3. Better colour (more colours, closer to human perception)
  4. Better sound (more channels than speakers, object based surround sound)
  5. Higher refresh rate (especially for action, which can otherwise look choppy at very high resolution)

My crystal ball hasn't confirmed the order in which these components will arrive, to what extent it'll be a big-bang approach, or even whether some components might get left by the wayside. I'll delve into that in another blog. Over the last 15 years or so I've witnessed HD succeed, and I have written several times about why I believe UHD's time is now (recently here, or here in early 2014, for example). I have nothing to sell and no vested interest in UHD; I'm simply driven by my geeky fascination with the promise of a great new experience brought to the TV and IP technologies I've been working with for so long.

But just in case I am wrong, here are the 3 things that some of us fret about and that could still prevent UHD's success.

Fragmentation
  • Description: vendors pulling in different directions
  • Issue: device and content incompatibility
  • Why it won't stop UHD: industry bodies like the Ultra HD Forum or the UHD Alliance

Energy
  • Description: extra brightness, more pixels and more images consume more power
  • Issue: regulation, consumer reluctance, UHD perceived as not politically correct
  • Why it won't stop UHD: technology progress has often consumed more power (e.g. HD vs SD); better efficiency means the extra power required is less than the extra user experience delivered; the CPE power issue is more about standby mode than peak consumption; sets need not consume much more power with an HD-only signal

Bandwidth
  • Description: UHD can require over 4 times the HD bandwidth / file size (see the quick calculation below)
  • Issue: channel and content distribution; monthly data caps will be an issue for OTT households
  • Why it won't stop UHD: networks grow in quantum leaps, and UHD will help spur the next one; all-fibre connections and future 5G networks will provide more bandwidth than UHD can consume; a new generation of low-orbit satellites is also on its way
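The quick calculation referenced in the bandwidth row above: raw pixel count alone makes UHD 4x HD, and moving from 8-bit to 10-bit adds another 25% before compression (which claws much of this back in practice):

```python
# Raw, pre-compression ratio of 10-bit UHD to 8-bit HD,
# counting bits per frame per color channel.
hd  = 1920 * 1080 * 8    # 8-bit HD
uhd = 3840 * 2160 * 10   # 10-bit UHD

print(f"raw UHD/HD ratio: {uhd / hd:.1f}x")   # 5.0x
```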

The driving force providing the impetus to overcome challenges such as those mentioned above is User Experience. This is the part of the equation I have to rely on gut feeling or faith for. My premise is that UHD ushers in a great new User Experience with a sensation of realism and immersion.

If it actually turned out that UHD didn't bring the "wow" effect so many of us in the industry believe in, then any one of the above "things" could on its own derail UHD from becoming a market success, and we'd have to find another game changer for the TV industry. My experience so far suggests that UHD will be that game changer, but also that there are still niggles that need ironing out.

As it happens, I've been watching quite a lot of 4K TV via Amazon and Netflix in the last few months. Landscapes and close-ups are all pretty amazing, but I do have a nagging worry over some indoor scenes, which despite being shot by top-of-the-range pros (e.g. Amazon's Transparent, or Breaking Bad, …) leave a strange feeling that something isn't quite right in 4K resolution. It occurs when there is some mild camera movement yet most of the scene is in focus. I get this counterintuitive sensation that there is something almost amateurish in the composition. This could be because the shooting wasn't properly thought out by the director and camera operator for 4K TV playback, or maybe it's just me not yet being used to processing so much data on screen. If either of these is true, which I suspect is the case, the issue will quickly disappear. But this highlights my only real concern over UHD's success: will it be consistently "wow" enough to overcome resistances like the three issues stated above? If so, I have no doubt that vendors, content providers and operators, as personified in the Ultra HD Forum, will ensure that the whole UHD movement is not derailed by relatively minor teething troubles.