“HaLow” sets stage for multi-channel Wi-Fi

The Wi-Fi Alliance’s announcement of the low-power version of IEEE 802.11ah, dubbed “HaLow”, was dismissed by some analysts as too late to make a significant impact in the fast-growing Internet of Things sector. That view is wrong and seriously discounts the power and momentum behind Wi-Fi, to the extent that HaLow has already received extensive coverage in the popular as well as the technical press. It is already far closer to being a household name than longstanding wireless protocol contenders for IoT devices such as Zigbee and Z-Wave.

It is true that certification of HaLow-compliant products will not begin until 2018, but with IoT surging forward on a number of fronts, including the smart car, the digital home and eHealth, SoC vendors such as Qualcomm are likely to bring out silicon before then. There are good reasons for expecting HaLow to succeed, some relating to its own specifications and others more to do with the overall evolution of Wi-Fi as a whole.

Another factor is the current fragmentation among existing contenders, with a number of other protocols vying alongside Zigbee and Z-Wave. This may seem an argument against yet another protocol, but it actually means none of the existing ones has gained enough traction to repel a higher-profile invader.

More to the point, though, HaLow has some key benefits over the others, one being its affinity with IP and the Internet through being part of Wi-Fi. Zigbee has responded by collaborating with another wireless protocol developer, Thread, to incorporate IP connectivity. But HaLow has other advantages, including greater range and the ability to operate in challenging RF environments. There is already a sense in which the others are having to play catch-up even though they have been around for much longer.

It is true that Bluetooth now has its low-energy version to overcome the very limited range of the main protocol, but even this is struggling to demonstrate adequate performance over larger commercial sites. The Wi-Fi Alliance claims that HaLow is highly robust and can cope with most real sites, from large homes with thick walls containing metal to concrete warehouse complexes.

 

The big picture is that Wi-Fi is looking increasingly like a multi-channel protocol operating at a range of frequencies to suit differing use cases. To date we have two variants, 2.4 GHz and 5 GHz, which tend to be used almost interchangeably, with the latter doubling up to provide capacity when the former is congested. In the future, though, there will be four channels, still interchangeable but tending to be dedicated to different applications, combining to yield a single coherent standard that will cover all the bases and perhaps vie with LTE outdoors for connecting various embedded IoT and M2M devices.

HaLow comes in at around 900 MHz, which means it has less bandwidth but greater coverage than the higher-frequency Wi-Fi bands, and it has been optimized to cope well with interference from other radio sources and physical objects alike. Then we have the very-high-frequency 802.11ad or WiGig standard coming along at 60 GHz, enabling theoretical bit rates of 5 Gbps or more, spearheaded by Qualcomm, Intel and Samsung. WiGig is a further trade-off between speed and coverage, and it will most likely be confined to in-room distribution of decoded ultra-HD video, perhaps from a gateway or set-top to a big-screen TV or home cinema.
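
To put rough numbers on that trade-off, here is a back-of-the-envelope sketch (my own, not the Wi-Fi Alliance’s) using the standard free-space path loss formula. It ignores walls and interference, which penalize the higher bands even further:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20log10(d) + 20log10(f) + 20log10(4*pi/c)."""
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            - 147.55)  # 20*log10(4*pi / 3e8)

bands = {"HaLow (900 MHz)": 900e6,
         "Wi-Fi (2.4 GHz)": 2.4e9,
         "Wi-Fi (5 GHz)": 5e9,
         "WiGig (60 GHz)": 60e9}

for name, freq in bands.items():
    print(f"{name:17s} loss over 30 m: {fspl_db(30, freq):5.1f} dB")
```

Over the same 30 m, the 60 GHz WiGig band loses roughly 36 dB more than HaLow’s 900 MHz band, which is why each channel gravitates towards the use case its physics suits.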

Then the 5 GHz version might serve premium video to other devices around the home, while 2.4 GHz delivers general Internet access. That would leave HaLow to take care of wearables, sensors and other low-power devices that need coverage but only modest bit rates. As it happens, HaLow will outperform all the other contenders for capacity except Bluetooth, with which it will be roughly on a par.

 

HaLow will be embraced by key vendors in the smart home and IoT arena, such as Paris-based SoftAtHome, which already supports the other key wireless protocols in its software platform through its association with relevant hardware and SoC vendors. SoftAtHome can insulate broadband operators from the underlying protocols so that they do not have to be dedicated followers of the wireless wars.

AirTies is another vendor with a keen interest, as one of the leading providers of Wi-Fi technology for the home, already aiming to deliver the levels of coverage and availability promised by HaLow in the higher 2.4 GHz and 5 GHz bands. It does this by creating a robust mesh from multiple Access Points (APs), making Wi-Fi work more like a wired point-to-point network while retaining all the flexibility of wireless.
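
As an illustration of the principle (and only that; AirTies does not publish its algorithms), routing in such a mesh amounts to picking the lowest-cost chain of APs between gateway and client, for instance with a shortest-path search over link-quality weights:

```python
import heapq

# Toy AP mesh: edge weight = link cost (say, inverse link quality).
# Node names and costs are invented for illustration.
mesh = {
    "gateway":    {"ap_hall": 1.0, "ap_kitchen": 2.5},
    "ap_hall":    {"gateway": 1.0, "ap_bedroom": 1.2},
    "ap_kitchen": {"gateway": 2.5, "ap_bedroom": 3.0},
    "ap_bedroom": {"ap_hall": 1.2, "ap_kitchen": 3.0},
}

def best_path(graph, src, dst):
    """Dijkstra's algorithm: route traffic over the cheapest chain of APs."""
    queue, seen = [(0.0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph[node].items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

print(best_path(mesh, "gateway", "ap_bedroom"))
# -> (2.2, ['gateway', 'ap_hall', 'ap_bedroom'])
```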

 

All these trends point towards Wi-Fi becoming a complete quad-channel wireless offering, enabling operators to be one-stop shops for the digital home of the future, as well as addressing many IoT requirements outside it.

At the same time it is worth bearing in mind that the IoT and its relative, M2M, form a very large canvas, extending to remote outdoor locations, some of which are far more challenging for RF signals than almost any home. In any case, while HaLow may well see off all comers indoors, it will only be a contender outdoors in areas close to fixed broadband networks. That is why there is so much interest in Heterogeneous Networks (HetNets) combining Wi-Fi with LTE, and also why there are several other emerging wireless protocols for longer-distance IoT communications.

One of these is Long Range Wide Area Network (LoRaWAN), a low-power wireless networking protocol announced in March 2015 and designed for secure two-way communication between low-cost battery-powered embedded devices. Like HaLow it runs at sub-GHz frequencies, but in bands reserved for scientific and industrial applications, and it is optimized for penetrating large structures and subsurface infrastructure within a range of 2 km. LoRaWAN is backed by a group including Cisco and IBM, as well as some leading telcos such as Bouygues Telecom, KPN, SingTel and Swisscom. The focus is particularly on harsh RF environments previously too challenging or expensive to connect, such as mines, underwater sites and mountainous terrain.

Another well-backed contender is Narrowband-LTE (NB-LTE), announced in September 2015 with Nokia, Ericsson and Intel behind it, where the focus is more on long-range, power-efficient communications to remote embedded sensors on the ground. So it still looks like being a case of horses for courses given the huge diversity of RF environments where IoT and M2M will be deployed, with HaLow a likely winner indoors but coexisting with others outside.

@nebul2’s 14 reasons why 2015 will be yet another #UHD #IBCShow

Ultra HD or 4K has been a key topic of my pre- and post-IBC blogs for over 5 years. I’ve recently joined the Ultra HD Forum, serving on the communications working group. That’s a big commitment and investment, as I don’t have any large company paying my bills. I’m making it because I believe the next 18 months will see the transition from UHD as the subject of trials and precursor launches for big operators to something no operator can be without. Time to get off the fence: I once wrote that the 3D emperor didn’t have any clothes on; well, the UHD emperor is fully clothed.

Of course much still needs to be achieved before we see mass adoption. I don’t know if HDR and 4K resolution will reach market acceptance one at a time or both together, and yes, I don’t know which HDR specification will succeed. But I know it’s all coming.

Below is a list of 14 key topics ordered by my subjective (this is a blog remember) sense of comfort on each. I start with areas where the roadmap to industrial strength UHD delivery is clear to me and end with those where I’m the most confused.

Note on vocabulary: 4K refers to a screen resolution for next-gen TV, whereas UHD includes that spatial resolution (one even sees UHD phase 2 documents refer to an 8K resolution) but also frame rate, HDR and next-generation audio.

So as I wander round IBC this year, or imagine I’m doing that, as I probably won’t have time, I’ll look into the following 14 topics with growing interest.

1. Broadcast networks (DVB)

I doubt I’ll stop by the big satellite booths, except of course for free drinks and maybe to glimpse the latest live demos. The Eutelsats, Intelsats or Astras of this world have a pretty clear UHD story to tell. Just like the cablecos, they are the pipe and they are ready, as long as you have what it takes to pay.

2. Studio equipment (cameras etc.)

As a geek, I loved the Canon demos at NAB, both of affordable 4K cameras and of their new ultra-sensitive low-light capabilities. But I won’t be visiting any of the studio equipment vendors, simply because I don’t believe they are on the critical path for UHD success. The only exception is the HDR issues described below.

3. IP networks, CDNs and bandwidth

Bandwidth constrains UHD delivery; it would be stupid to claim otherwise. All I’m saying is that, despite putting this issue so high on the list, everything is clear in the mid-term. We know how fast high-speed broadband (over 30 Mbps) is arriving in most markets. In the meantime, early adopters without access can buy themselves a UHD Blu-ray player by Christmas this year and use progressive download services. The Ultra HD Alliance has already identified 25 online services, several of which support PDL. Once UHD streams get to the doorstep or the living room, there is still the issue of distributing them around the home. But several vendors like AirTies are addressing that specific issue, so again, even if it isn’t fixed yet, I can see how it will be.
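
Some quick arithmetic shows why that 30 Mbps threshold matters and why progressive download tides early adopters over. The bitrate and running time below are my own illustrative assumptions, not Ultra HD Alliance figures:

```python
# Back-of-the-envelope: a two-hour movie, assumed encoded in HEVC
# at 20 Mbps (a plausible UHD rate, not an official figure),
# fetched over the 30 Mbps high-speed broadband threshold.
stream_mbps = 20
movie_hours = 2
link_mbps = 30

size_gb = stream_mbps * movie_hours * 3600 / 8 / 1000
download_min = size_gb * 8000 / link_mbps / 60
print(f"Movie size: {size_gb:.0f} GB")                          # ~18 GB
print(f"Download at {link_mbps} Mbps: {download_min:.0f} min")  # ~80 min
```

An 80-minute transfer of a two-hour film means playback can start long before the download completes, which is exactly what makes PDL viable today.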

4. Codecs (HEVC)

The angst around NAB this year, when V-Nova came out with a bang, has subsided. It now seems that even if such a disruptive technology does come through in the near term, it will complement rather than replace HEVC for UHD delivery.

The codec space dropped from a safe 2 on my list down to 4 with the very recent scares over royalties from the HEVC Advance group, which wants 0.5% of content owners’ and distributors’ gross revenue. Industry old-timers have reassured me that this kind of posturing is normal and that the market will settle down naturally at acceptable rates.

5. Head-ends (Encoders, Origins, etc.)

I always enjoy demos and discussions on the booths of the likes of Media Excel, Envivio, Harmonic, Elemental or startup BBright, and although I’ll try to stop by, I won’t make a priority of them because, here again, the mid-term roadmaps seem relatively clear.

I’ve been hearing contradictory feedback on the whole cloud-encoding story that has been sold to us for a couple of years already. My theory – to be checked at IBC – is that encoding in the cloud really does make sense for constantly changing needs and where there is budget. But for T2 operators running on a shoestring – and there are a lot of them – the vendors are still mainly shifting appliances. It’s kind of counterintuitive, because you’d expect the whole cloud concept of mutualizing resources to work better for the smaller guys. I must be missing something here; do ping me with info so I can update this section.

6. 4K/UHD resolutions

While there is no longer any concern about what the screen resolutions will be, I am a little unclear as to the order in which they will arrive. With heavyweights like Ericsson openly pushing for HDR before 4K, I’m a little concerned that a lack of industry agreement here could confuse the market.

7. Security for UHD

Content owners and security vendors like Verimatrix all agree that better security is required for UHD content. I see no technical issues here, just that if the user experience is adversely affected in any way (remember the early MP3 years), we could see the incentive for illegal file transfer grow, just when legal streaming seems to be taking off at last.

8. TV sets & STBs

Well into the second half of my list, we’re getting into less clear waters.

When it’s the TV set doing the UHD decoding, we’re back at the product-cycle issue that has plagued smart TVs. It’s all moving too fast for a TV set that people would still like to keep in the living room for over 5 years.

On the STB side, we’ve seen further consolidation since last year’s IBC. Pace, for example, is no more; Cisco is exiting STBs, etc. It seems that only players with huge scale will survive. Operators like Swisscom or Orange can make hardware vendors’ lives harder by commoditizing their hardware, using software-only vendors such as SoftAtHome to deliver advanced features.

9. Frame rates

This is a really simple one, but one on which consensus is needed. At 4K screen resolution the eye/brain is more sensitive to artifacts. Will refresh rates standardize at 50 Hz or 60 Hz? Will we really ever need 120 Hz?

It’s clear that doubling a frame rate does not double the required bandwidth, as clever compression techniques come into play. But I haven’t seen a consensus on what the bandwidth implication of a greater frame rate will actually be.
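
For what it’s worth, here is a toy model of the kind of sub-linear scaling people hint at. The exponent is purely an assumption on my part, which is precisely the problem: nobody has published a consensus number.

```python
# Toy model: encoded bitrate grows as (frame-rate ratio) ** exponent.
# The 0.6 exponent and the 20 Mbps 4Kp50 base rate are assumptions
# for illustration only.
def encoded_mbps(base_mbps, base_fps, target_fps, exponent=0.6):
    return base_mbps * (target_fps / base_fps) ** exponent

for fps in (50, 60, 100, 120):
    print(f"4Kp{fps}: ~{encoded_mbps(20.0, 50, fps):.1f} Mbps")
```

Under that assumption, going from 50 Hz to 100 Hz costs roughly 50% more bandwidth rather than 100%, but until the industry agrees on the real exponent such numbers remain guesswork.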

10. Next Gen Audio

There are only a few contenders out there, and all have compelling solutions. I’m pretty keyed up on DTS’s HeadphoneX streamed with Unified Streaming packagers, because I’m helping them write an eBook on the subject. Dolby is, of course, a key player here, but for me it’s not yet clear how multiple solutions will cohabit. Nor is it clear if and when we’ll move from simple channel-based audio to scene-based or object-based audio. Will open-source projects like Ambiophonics play a role, and what about binaural audio?

11. HDR

High Dynamic Range is about better contrast. The brain also perceives more detail when contrast is improved, so it’s almost like getting more pixels for free. But the difficulty with HDR, and why it’s near the bottom of my list, is that there are competing specifications. And even once a given specification is adopted, its implementation on a TV set can vary from one CE manufacturer to another. A final reservation I have is the extra power consumption it entails, which goes against current CE trends.

12. Wide Color Gamut

As HDR brings more contrast to pixels, WCG brings richer and truer colors. Unlike with HDR, the issue isn’t about which spec to follow, as WCG is already catered for in HEVC, for example. No, it’s more about when to implement it and how color mapping will be unified across display technologies and vendors.

13. Workflows

The workflow from production through to display is a sensitive issue because it is heavily dependent on skills and people, so it’s not just a matter of choosing the right technology. To produce live UHD content including HDR, there is still no industry-standard way of setting up a workflow.

14. UHD-only content

The pressure to recoup investments in HD infrastructure makes the idea of UHD content that is unsuitable for HD downscaling taboo. From a business perspective, most operators consider UHD as an extension or add-on rather than something completely new. There is room for a visionary to come and change that.

Compelling UHD content, where the whole screen is in focus (video rather than cinema lenses), gives filmmakers a new artistic dimension to work with. There is enough real estate on screen to offer multiple user experiences.

In the world of sports, a UHD screen could offer a fixed view of a whole football pitch, for example. But if that video were seen on an HD screen, the ball probably wouldn’t be visible. Ads that we have to watch dozens of times could be made more fun in UHD, as there could be different stories going on in different parts of the screen; it would almost be an interactive experience…
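
A quick sanity check of that claim, using rough figures of my own (a 105 m pitch filling the frame width, a 22 cm ball):

```python
# How many pixels wide is the ball in a fixed whole-pitch shot?
pitch_m, ball_m = 105.0, 0.22  # assumed: full pitch length in frame

for label, width_px in (("UHD (3840 px)", 3840), ("HD (1920 px)", 1920)):
    ball_px = width_px * ball_m / pitch_m
    print(f"{label}: ball is ~{ball_px:.0f} px wide")
# UHD: ~8 px, just about visible; HD: ~4 px, easily lost in coding noise.
```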

Operators unhappy over Wi-Fi and unlicensed cellular coexistence plans

Controversy has raged for well over a year now over plans by some mobile network operators (MNOs) to extend their spectrum into the unlicensed 5 GHz bands currently occupied by Wi-Fi. The arguments have been both commercial and technical, centering on the rights of MNOs to compete with established Wi-Fi networks and, at the same time, on the efficiency and fairness of the mechanisms for coexistence between the two.

LTE-U enables 4G/LTE cellular services to be extended into the 5 GHz unlicensed bands, which is obviously attractive for MNOs because it gives them precious extra spectrum without their having to pay for it, while making it easier to support high-bandwidth applications like premium live video streaming. The initiative, initially proposed by Qualcomm and Ericsson, has gained traction within the 3rd Generation Partnership Project (3GPP) primarily because many MNOs want to gain full control of heterogeneous networks combining licensed and unlicensed spectrum, so there is a major commercial force at work here.

MNOs have expressed frustration over Wi-Fi offload, which is necessary to avoid overload on their networks and to give their subscribers the best quality of experience, but which means they have less control over end-to-end traffic. Not surprisingly, though, telcos with extensive Wi-Fi hotspot networks take a different line: operators like AT&T and BT, with huge investments in Wi-Fi hotspots but a smaller presence in cellular, are opposed to LTE-U. On the other hand, telcos that have not bet so much on Wi-Fi but have major cellular operations support LTE-U, including big hitters like Verizon, China Mobile, NTT DoCoMo, Deutsche Telekom and TeliaSonera.

Notably, though, some of the world’s biggest providers of mobile services are ambivalent about LTE-U, which some of them see as complicating rather than simplifying the drive towards heterogeneous services combining licensed and unlicensed spectrum. The view there is that Wi-Fi, with a lot of momentum and investment behind it, is best placed to occupy the unlicensed spectrum. The LTE-U camp counters that the technology can carry twice as much data as Wi-Fi in a given amount of 5 GHz spectrum through carrier aggregation via LTE-LAA. Carrier aggregation was already defined in the LTE standards and enables multiple individual RF carriers, in the same or different frequency bands, to be combined to provide a higher overall bit rate.

This may be true as far as it goes, but it is largely irrelevant for users wanting to access broadband services in their homes or at public hotspots, according to the Wi-Fi community, a view shared by some MNOs as well. Birdstep, a leading Sweden-based provider of smart mobile data products enabling heterogeneous services combining cellular and Wi-Fi, argues that the story is not just about the wireless domain itself but also about the backhaul infrastructure behind it. Any spectral efficiency advantage offered by LTE-U would be more than cancelled out by inherent inefficiencies in the backhaul. By offering access to the world’s broadband infrastructures, Wi-Fi offers greater overall scale and redundancy.

Another Wi-Fi specialist, Turkey-based AirTies, contends that LTE-U is just a spectrum-grabbing bid by MNOs and should be resisted. AirTies has developed mesh and routing technologies designed to overcome the problems Wi-Fi encounters in the real world, problems that are only going to get worse as unlicensed spectrum reaches even higher frequencies. The next generation of Wi-Fi based on the emerging IEEE 802.11ad standard will run in the much higher frequency band of 60 GHz, which will potentially yield a lot more capacity and performance but increase susceptibility to physical obstacles and interference. It will only work with further developments in the sort of intelligent beam-forming, meshing and steering technologies that AirTies has invested in.

It is true that LTE-U proponents have worked hard to mitigate any impact of LTE-U coexistence on Wi-Fi. In Europe and Japan they were forced to do so anyway by regulations that require LTE-U to adhere to rules over fair access to spectrum similar to those governing Wi-Fi. These rules insist on the incorporation of LBT (Listen Before Talk) into LTE-U, a mechanism originally developed for fixed-line Ethernet networks with a shared collision domain (there it was called Carrier Sense Multiple Access, or CSMA). Stakeholders not in favor of rapid LTE-U deployment point out that in the old Ethernet days, before 10BaseT and switching, CSMA proved inefficient when too many devices tried to get onto the same collision domain. Total capacity could drop drastically, and this issue could be reborn in the wireless world.

The European Union specified two options for LBT: one is the DCF/EDCA scheme already adopted in the Wi-Fi standards, and the other is a newer scheme known as Load Based Equipment (LBE); they differ in the procedure for backing off when traffic is detected in a given channel.

Naturally enough, there has been an assumption in the LTE-U camp that any deployments will be safe as long as they adhere to the EU’s LBE LBT standard. But this assumption has recently been challenged by CableLabs in a simulation modeling a million transmission attempts on sets of nodes following the EU LBE LBT rules. The EU LBE scheme turned out to scale badly with increased numbers of devices, with growing numbers of collisions. This will only amplify concerns, expressed by broadcasters such as Sky as well as by major vendors like Cisco with feet in both the Wi-Fi and LTE camps, that LTE-U poses a threat to quality of service, especially for premium video.
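
To see why contention-based access can degrade like this, here is a toy slotted listen-before-talk simulation. It is a classroom model of my own, far simpler than the CableLabs study, but it reproduces the qualitative collapse:

```python
import random

# Each of n nodes transmits in a slot with probability p; a slot
# carries data only if exactly one node transmits. This is a
# textbook contention model, not the CableLabs LBE simulation.
def useful_slot_share(n_nodes, p=0.1, slots=100_000):
    wins = sum(
        sum(random.random() < p for _ in range(n_nodes)) == 1
        for _ in range(slots)
    )
    return wins / slots

for n in (5, 10, 25, 50, 100):
    print(f"{n:3d} nodes: useful slots = {useful_slot_share(n):.1%}")
```

With a fixed transmit probability, the share of useful slots peaks at around ten nodes and then collapses towards zero. Real LBT schemes back off adaptively, but the CableLabs result suggests the EU’s LBE variant still degrades badly as node counts climb.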

There are no signs yet of the LTE-U camp giving up on its efforts to infiltrate the 5 GHz domain, arguing correctly that by definition unlicensed spectrum is free for all and cannot be owned by any one wireless technology. But there is a strong case for holding off on LTE-U deployments until further extensive tests and simulations have been carried out to assess the impact on capacity and QoS in real-life situations.