Internet of Things At A Strategic Inflection Point

This post focuses on a particularly important technology market, the Internet of Things. IoT is at a strategic inflection point, due to explosive projected market growth and unresolved problems of wireless data throughput and energy efficiency. The IoT market is projected to grow to 75 billion devices by 2025, but that growth is predicated on very high-throughput, highly energy-efficient wireless networks that are not yet available; existing wireless technologies, including 5G, will not meet this need. Moreover, the extreme diversity of IoT applications spans everything from small sensors that must operate on minimal energy and bandwidth to virtual reality applications with multi-Gigabit-per-second data rates and substantial power requirements.


IoT Technology And Market Requirements Convergence

Current Long-Term Market Projections Are Based On The Emergence Of Technology Solutions

This Mayo615 YouTube Channel video covers the same strategic inflection point in more depth. To give a sense of the scale of the throughput problem, Intel estimates that a single autonomous vehicle will generate 4 Terabytes of data daily.
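
A quick back-of-the-envelope calculation (mine, not Intel’s) converts that 4 Terabytes per day into a sustained data rate, which makes the throughput gap concrete:

```python
# Back-of-the-envelope: the sustained throughput implied by 4 TB/day.
DATA_BYTES = 4 * 10**12          # 4 Terabytes, decimal convention
SECONDS_PER_DAY = 24 * 60 * 60

bytes_per_second = DATA_BYTES / SECONDS_PER_DAY
mbit_per_second = bytes_per_second * 8 / 10**6

print(f"{bytes_per_second / 10**6:.1f} MB/s sustained")   # ~46.3 MB/s
print(f"{mbit_per_second:.0f} Mbit/s sustained")          # ~370 Mbit/s
```

That is roughly 370 Mbit/s around the clock, from a single vehicle, before any burstiness, protocol overhead, or aggregation across many vehicles is considered.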

The good news is that through my work evaluating advanced research proposals in IoT, I can report that a solution may already be at the laboratory “proof of concept” stage.

The emerging solution is an innovative software-hardware architecture in which all network layers are jointly designed, combining a millimeter wave high-throughput wireless network and a battery-free wireless network into a single integrated wireless solution.

This is no small feat of engineering, but it does appear to be feasible. There are many challenges to successfully demonstrating a millimeter wave wireless network integrated with the Tesla-like concept of radio-wave backscatter energy harvesting. However, collaboration among universities and large Internet companies’ research units is nearing the demonstration of such a network. The likely horizon for an industry standard is three to five years, with prototype products appearing sooner.
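
For a rough sense of why the energy-harvesting side is so demanding, the standard Friis free-space equation gives the RF power available to a harvesting node. All of the numbers below are my own illustrative assumptions, not figures from the research proposals mentioned above:

```python
import math

# Friis free-space link budget for RF energy harvesting.
# All parameter values are illustrative assumptions.
P_TX_W  = 1.0      # transmitter power: 1 W (30 dBm), assumed
G_TX    = 6.0      # transmit antenna gain (linear), assumed
G_RX    = 1.5      # sensor antenna gain (linear), assumed
FREQ_HZ = 915e6    # 915 MHz ISM band, assumed
DIST_M  = 10.0     # transmitter-to-sensor distance, assumed

wavelength = 3e8 / FREQ_HZ
# Friis: P_rx = P_tx * G_tx * G_rx * (lambda / (4 * pi * d))**2
p_rx_w = P_TX_W * G_TX * G_RX * (wavelength / (4 * math.pi * DIST_M)) ** 2

print(f"Harvestable power at {DIST_M:.0f} m: {p_rx_w * 1e6:.1f} microwatts")
```

Tens of microwatts at best, before rectifier and conversion losses; at millimeter-wave frequencies the available power drops further still, since received power scales with the square of the wavelength. This is why harvesting nodes send only tiny telegrams, and why backscatter designs reflect existing radio signals rather than powering a conventional transmitter.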

You can also read my earlier posts on the Internet of Things here on mayo615.com; links to related IoT posts appear below.

Integration of AI, IoT and Big Data: The Intelligent Assistant

Five years ago, I wrote a post on this blog disparaging the state of the Internet of Things/home automation market as a “Tower of Proprietary Babble.” Vendors of many different home and industrial product offerings were literally speaking different languages, making their products incompatible with complementary products from other vendors. The market was being constrained by its immaturity and a failure to grasp the importance of open standards. A 2017 Verizon report concluded that “an absence of industry-wide standards…represented greater than 50% of executives’ concerns about IoT.” Today I can report that, finally, the solutions and technologies are beginning to come together, albeit still slowly.


The Evolution of These Technologies Is Clearer

The IoT Tower of Proprietary Babble Is Slowly Crumbling

The Rise of the Intelligent Assistant



One of the most important factors influencing these positive developments has been the recognition of this technology area’s importance by major corporate players and by a large number of entrepreneurial companies funded by venture investment. Amazon, for example, announced in October 2018 that it had shipped over 100 million Echo devices, which effectively combine an intelligent assistant, a smart hub, and a large-scale database of information. That does not count the dozens of other companies that have launched their own entries. I like to point to Philips Hue as an example of corporate strategic focus perhaps changing a company’s future prospects: Philips, based in Eindhoven in the Netherlands (I have visited its HQ), is trying to evolve beyond the incandescent lighting market. Two years ago my wife bought me a Philips Hue WiFi-controlled smart lighting starter kit. My initial reaction was disbelief that it would succeed. I am eating crow on that point, as I now control my lighting using Amazon’s Alexa and the Philips Hue smart hub. The rise of the “intelligent assistant” seems to have been a catalyst for growth and convergence.

The situation with proprietary silos of offerings that do not work well, or at all, with other offerings is still frustrating, but slowly evolving. The Amazon Firestick’s browser is Amazon’s own awkward Silk, or alternatively Firefox, but Google’s Chrome is excluded for alleged competitive advantage. When I set up my Firestick, I had to ditch Chromecast because I only have so many HDMI ports. Alexa works with Spotify, but only in one room, as dictated by Spotify; Alexa can play music from Amazon Music or Sirius/XM on all Echo devices without that limitation.

Which brings me to another point of aggravation: alleged smart TVs. Not only are they not truly “smart,” they are proprietary silos of their own, so “intelligent assistant” smart hubs do not work with them. Samsung, for example, has its own competing intelligent assistant, Bixby, so of course only Bixby can control a Samsung TV. I watched one of those YouTube DIY videos on how to make your TV work with Alexa using third-party software and remotes. Trust me, you do not want to go there. But cracks are beginning to appear that may lead to a flood of openness. Samsung just announced at CES that beginning in 2019 its smart TVs will work with Amazon Echo and Google Home, and that a later software update will likely enable older Samsung TVs to do the same, although Bixby will still control the remote. Other TVs from manufacturers like Sony and LG have worked with intelligent assistants for some time.

The rise of an Internet of Everything Everywhere, the recognition of the need for greater data communication bandwidth, and battery-free wireless IoT sensors are heating up R&D labs everywhere. Keep in mind that I am focusing on the consumer side, and have not even mentioned the rising demands from industrial applications. Intel has estimated that autonomous vehicles will transmit up to 4 Terabytes of data daily, and AR and VR applications will require similar throughput. Existing wireless data communication technologies, including 5G and LTE, cannot address this need. In addition, an exploding need for IoT sensors not connected to an electrical power source will require more work in the area of “energy harvesting.” Energy harvesting began with passive RFID and has progressed to converting kinetic, piezo, and thermoelectric energy into battery-free electrical power for sensors. EnOcean, an entrepreneurial spinoff of Siemens in Munich, has pioneered this technology, but it is not sufficient for future market requirements.

Fortunately, work has already begun on both higher-throughput wireless data communication using mmWave spectrum and energy harvesting using radio backscatter, reminiscent of Nikola Tesla’s dream of wireless electrical power distribution. The successful demonstration of these technologies could open the door to new IEEE data communication standards, which could in turn help end the Tower of Babble and accelerate the integration of AI, IoT, and Big Data. The bottom line is that the market and the technology landscape are improving.

READ MORE: IEEE Talk: Integrated Big Data, The Cloud, & Smart Mobile: One Big Deal or Not? from David Mayes

My IEEE talk from 2013 foreshadows current emerging trends in advanced technology as they appeared at the time. I proposed that they in fact represent one huge integrated convergence trend, one that has since morphed into something even bigger and is already having a major impact on the way we live, work, and think. The 2012 Obama campaign’s sophisticated “Dashboard” application, which integrated Big Data, the Cloud, and Smart Mobile, is referenced as perhaps the most significant example at that time of the combined power of these trends blending into one big thing.

READ MORE: Blog Post on IoT from July 20, 2013

The term “Internet of Things” (IoT) is being loosely tossed around in the media. But what does it mean? Simply that data communication, akin to Internet communication but not necessarily using Internet Protocol packets, is emerging for all manner of “things” in the home, in your car, everywhere: light switches, lighting devices, thermostats, door locks, window shades, kitchen appliances, washers and dryers, home audio and video equipment, even pet food dispensers. You get the idea. It has also been called home automation. All of this communication occurs autonomously, without human intervention. It can take place between and among devices, so-called machine-to-machine or M2M communication. The data can also terminate in a compute server, where the information can be acted on automatically or made available to the user, who can intervene remotely from a smart mobile phone or any other Internet-connected device.
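
As a concrete sketch of the M2M pattern just described, here is a sensor autonomously publishing readings to a server over MQTT, a lightweight messaging protocol widely used in IoT (my choice for illustration; it is not named above). It assumes the paho-mqtt library’s classic 1.x API, and the broker address and device names are invented:

```python
# A thermostat publishing readings with no human intervention.
# Assumes paho-mqtt (pip install paho-mqtt, 1.x API) and a broker
# at the hypothetical host below.
import json
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.local"   # hypothetical broker address

client = mqtt.Client()
client.connect(BROKER_HOST, 1883)

while True:
    # A real device would read its sensor here; 21.5 is a stand-in.
    reading = {"device": "thermostat-01", "temp_c": 21.5, "ts": time.time()}
    # Any subscriber (another machine, or a phone app) can act on this.
    client.publish("home/livingroom/temperature", json.dumps(reading))
    time.sleep(60)
```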

Another key concept is the promise of automated energy efficiency, introduced via “smart meters” with data communication capability and, in large commercial structures, via the Leadership in Energy & Environmental Design (LEED) program. Some may recall that when Bill Gates built his multi-million-dollar mansion on Lake Washington in Seattle, he had “remote control” of the home built into it. Years later, Gates’ original home automation is obsolete. The dream of home automation has been around for years, through numerous Silicon Valley conferences and failed startups, and needless to say, it went nowhere. But this concept of effortless home automation has remained the Holy Grail.

But this is also where the glowing promise of the Internet of Things begins to morph into a giant “hairball,” former Sun Microsystems CEO Scott McNealy‘s favorite term for a complicated mess. In hindsight, the early euphoric days of home automation were plagued by a lack of “convergence,” the term I use for the inability of available technology to meet the market opportunity. Without convergence, there can be no market opportunity beyond early-adopter techno geeks. Today, Moore’s Law and advances in data communication have finally swept the convergence problem away, but for many years the home automation market was stalled.

Also, as more Internet-connected devices emerged, it became apparent that these devices and apps were a hacker’s paradise. The concept of IoT was being implemented in very naive and immature ways, lacking common industry standards on basic issues: the kinds of things the IETF and IEEE are famous for. These vulnerabilities are only now, very slowly, being resolved, and still in a fragmented, ad hoc manner. The central problem has not been addressed, due to classic proprietary “not invented here” mindsets.

The problem at the center of this hairball, and from all indications not likely to be resolved anytime soon, is that of multiple data communication protocols, many of them effectively proprietary, creating a huge incompatible Tower of Babbling Things. There is no meaningful industry- and market-wide consensus on how the Internet of Things should communicate with the rest of the Internet, and until there is, the promise of the Internet of Things cannot be fulfilled. I recently posted “Co-opetition: Open Standards Always Win,” which discusses the need for open standards for a market to scale up.

Read more: Co-opetition: Open Standards Always Win

A recent ZDNet post explains that home automation currently requires devices that can connect with “multiple local- and wide-area connectivity options (ZigBee, Wi-Fi, Bluetooth, GSM/GPRS, RFID/NFC, GPS, Ethernet). Along with the ability to connect many different kinds of sensors, this allows devices to be configured for a range of vertical markets.” Huh? This is the problem in a nutshell; you do not need to be a data communication engineer to get the point, and this is not even close to a full discussion of the problem. There are also IoT vendors who believe consumers should pay them for the ability to connect to their proprietary cloud. So imagine paying a fee for every protocol or sensor we employ in our homes. That’s a non-starter.

The above laundry list of data communication protocols does not even include the Zigbee “smart meter” communications standards war. The Zigbee protocol has been around for years and claims to be an open industry standard, but many do not agree; Zigbee still does not really work, and a new competing smart meter protocol has just entered the picture. The Bluetooth IEEE 802.15 standard may now be overtaken by the much more powerful 802.15.3a, and some are asking whether 4G LTE, NFC, or WiFi may eliminate Bluetooth altogether.

Meanwhile, a very cool new technology, energy harvesting, has begun to take off in the home automation market. Energy harvesting sensors (no batteries) can capture just enough kinetic, piezo, or thermoelectric energy to transmit short data communication “telegrams” to an energy harvesting router or server. The EnOcean Alliance has been formed around a small German company spun off from Siemens and has attracted many leading companies in building automation. But EnOcean itself recently published an article in Electronic Design News announcing that it has created middleware “to incorporate battery-less devices into networks based on several different communication standards such as Wi-Fi, GSM, Ethernet/IP, BACnet, LON, KNX or DALI.” It is apparent that this space remains very confused, crowded, and uncertain. A new Cambridge UK startup, Neul, is proposing yet another IoT approach using the radio spectrum known as “white space,” becoming available with the transition from analog to digital television. With this much contention over protocols, there will be nothing but market paralysis.

Is everyone following all of these acronyms and data comm protocols?  There will be a short quiz at the end of this post. (smile)

The advent of IP version 6, strongly supported by Intel and Cisco Systems, has created another area of confusion. The problem with IPv6 in the world of IoT is “too much information,” as we say. Cisco and Intel want IPv6 to be the one global protocol for every Internet-connected device, but this is utterly incompatible with energy harvesting: the tiny amount of harvested energy cannot transmit the very long IPv6 packets. Hence EnOcean’s middleware, without which its market would be severely constrained.
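
A rough comparison makes the IPv6 objection concrete. The 40-byte fixed IPv6 header is standard; the telegram size and the energy cost per bit below are illustrative assumptions:

```python
# Energy cost of transmitting an IPv6 header vs. a short sensor telegram.
ENERGY_PER_BIT_J = 1e-9      # assumed radio cost: 1 nanojoule per bit
IPV6_HEADER_BYTES = 40       # fixed IPv6 header, before any payload
TELEGRAM_BYTES = 14          # short harvested-energy telegram, assumed

def tx_energy_nj(nbytes):
    """Transmit energy in nanojoules for a given byte count."""
    return nbytes * 8 * ENERGY_PER_BIT_J * 1e9

print(f"Whole telegram:    {tx_energy_nj(TELEGRAM_BYTES):.0f} nJ")
print(f"IPv6 header alone: {tx_energy_nj(IPV6_HEADER_BYTES):.0f} nJ")
```

Under these assumptions, the IPv6 header alone costs nearly three times the energy of an entire short telegram, before a single byte of sensor data is sent.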

Then there is ongoing standards and upgrade activity at the International Organization for Standardization (ISO), the Institute of Electrical and Electronics Engineers (IEEE), and various Special Interest Groups (SIGs), none of which seems to be moving toward any ultimate solution to the Tower of Babbling Things problem in the Internet of Things.

The Brave New World of Internet privacy issues raised by this tidal wave of Big Data is not even considered here and deserves a separate post. A recent NBC Technology post has explored many of these issues, while some have suggested we simply need to get over it: we have no privacy.

Read more: Internet of Things pits George Jetson against George Orwell

Stakeholders in the Internet of Things seem not to have learned the repeated lesson of open standards and co-opetition, and are concentrating on proprietary advantage, which ensures that this market will not effectively scale anytime in the foreseeable future. Intertwined with the Tower of Babbling Things are the problems of Internet privacy and consumer concerns about wireless communication health & safety issues. Taken together, this market is not ready for prime time.


Google Buys a D-Wave Quantum Computer

Earlier this week, I was advised by a VC friend in Vancouver to expect another blockbuster announcement from D-Wave. And so it has happened. As if to stem any further skepticism and debate about D-Wave’s quantum computing technology, Google today announced that it has bought a D-Wave quantum computing system, in a partnership with NASA and Lockheed Martin Aerospace. This is the second major sale of a D-Wave system, and further evidence that this is not simple tire kicking by a group of ivory tower scientists.



D-Wave 512-Qubit Bonded Processor – Recent Generation


Of particular note to me personally was the growing significance of Vancouver as a site for an exceedingly advanced startup like D-Wave. In my previous post on this subject, I questioned whether a company in such a rarefied area could attract the necessary personnel here. Twenty years ago, when Mobile Data International started, I was one of four Americans to cast our fates to the wind and move to Canada to join MDI. At that time, we were seen as completely out of our minds; Vancouver had little to attract high-tech talent and no other high-tech companies worthy of note. Today, Vancouver is seen as a world-class city and one of the most livable. This may be one of the most important factors in favour of a growing high-tech industry in Vancouver.

By way of example, it was also announced today that two key people from Silicon Graphics, in some respects a precursor to D-Wave, have joined the company: Bo Ewald, former SGI CEO, and Steve Cakebread, a former SGI financial officer. Ewald will apparently lead D-Wave’s U.S. subsidiary, and Cakebread has relocated to Vancouver. If you have ever seen a bottle of Cakebread Cellars Chardonnay in a BC Liquor store, that is the same Steve Cakebread. More importantly, Vancouver may now be able to attract the kind of talent needed for companies like D-Wave.

Google and NASA are forming a laboratory to study artificial intelligence by means of computers that use the unusual properties of quantum physics. Their quantum computer, which performs complex calculations thousands of times faster than existing supercomputers, is expected to be in active use in the third quarter of this year.

The Quantum Artificial Intelligence Lab, as the entity is called, will focus on machine learning, which is the way computers take note of patterns of information to improve their outputs. Personalized Internet search and predictions of traffic congestion based on GPS data are examples of machine learning. The field is particularly important for things like facial or voice recognition, biological behavior, or the management of very large and complex systems.

“If we want to create effective environmental policies, we need better models of what’s happening to our climate,” Google said in a blog post announcing the partnership. “Classical computers aren’t well suited to these types of creative problems.”

Google said it had already devised machine-learning algorithms that work inside the quantum computer, which is made by D-Wave Systems of Burnaby, British Columbia. One could quickly recognize information, saving power on mobile devices, while another was successful at sorting out bad or mislabeled data. The most effective methods for using quantum computation, Google said, involved combining the advanced machines with its clouds of traditional computers.

Google and NASA bought the machine in cooperation with the Universities Space Research Association, a nonprofit research corporation that works with NASA and others to advance space science and technology. Outside researchers will be invited to the lab as well.

This year D-Wave sold its first commercial quantum computer to Lockheed Martin. Lockheed officials said the computer would be used for the test and measurement of things like jet aircraft designs, or the reliability of satellite systems.

The D-Wave computer works by framing complex problems in terms of optimal outcomes. The classic example of this type of problem is figuring out the most efficient way a traveling salesman can visit 10 customers, but real-world problems now include hundreds of such variables and contingencies. D-Wave’s machine frames the problem in terms of energy states, and uses quantum physics to rapidly determine an outcome that satisfies the variables with the least use of energy.
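
For intuition about “framing the problem in terms of energy states,” here is a toy classical stand-in: a tiny QUBO (quadratic unconstrained binary optimization) energy function minimized by brute force. The coefficients are invented for illustration; D-Wave’s hardware anneals toward the same kind of low-energy state physically, rather than by enumeration:

```python
import itertools

# Q[(i, j)] is the energy contribution when binary variables i and j
# are both 1. A real problem compiles its constraints into these
# coefficients; the values here are toy examples.
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,   # reward turning each on
    (0, 1):  2.0, (1, 2):  2.0,                 # penalize neighbors both on
}

def energy(bits):
    return sum(c * bits[i] * bits[j] for (i, j), c in Q.items())

best = min(itertools.product([0, 1], repeat=3), key=energy)
print("lowest-energy state:", best, "energy:", energy(best))  # (1, 0, 1)
```

Brute-force search doubles in cost with every added variable, which is exactly why hardware that settles into low-energy states directly is attractive for problems with hundreds of variables.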

In tests last September, an independent researcher found that for some types of problems the quantum computer was 3,600 times faster than traditional supercomputers. According to a D-Wave official, the machine performed even better in Google’s tests, which involved 500 variables with different constraints.

“The tougher, more complex ones had better performance,” said Colin Williams, D-Wave’s director of business development. “For most problems, it was 11,000 times faster, but in the more difficult 50 percent, it was 33,000 times faster. In the top 25 percent, it was 50,000 times faster.” Google declined to comment, aside from the blog post.

The machine Google and NASA will use makes use of the interactions of 512 quantum bits, or qubits, to determine optimization. They plan to upgrade the machine to 2,048 qubits when this becomes available, probably within the next year or two. That machine could be exponentially more powerful.
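
“Exponentially more powerful” can be made concrete: the space of basis states doubles with every added qubit, although raw state count does not translate directly into speedup:

```python
# Order of magnitude of the basis-state space for each machine size.
for n_qubits in (512, 2048):
    digits = len(str(2 ** n_qubits)) - 1
    print(f"{n_qubits} qubits -> about 10^{digits} basis states")
```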

Google did not say how it might deploy a quantum computer into its existing global network of computer-intensive data centers, which are among the world’s largest. D-Wave, however, intends eventually for its quantum machine to hook into cloud computing systems, doing the exceptionally hard problems that can then be finished off by regular servers.

Potential applications include finance, health care, and national security, said Vern Brownell, D-Wave’s chief executive. “The long-term vision is the quantum cloud, with a few high-end systems in the back end,” he said. “You could use it to train an algorithm that goes into a phone, or do lots of simulations for a financial institution.”

Mr. Brownell, who founded a computer server company, was also the chief technical officer at Goldman Sachs. Goldman is an investor in D-Wave, with Jeff Bezos, the founder of Amazon.com. Amazon Web Services is another global cloud, which rents data storage, computing, and applications to thousands of companies.

This month D-Wave established an American company, considered necessary for certain types of sales of national security technology to the United States government.

5 Ways Big Data Is Going To Blow Your Mind



“Big Data, Big Deal or Not?” Debate Continues

The Gigaom Structure Data conference has just concluded in New York City, and it has added significantly to the discussion and debate on the significance of this phenomenon. The author, Derrick Harris, summarizes my own view on the issue in his first paragraph: the notion that Big Data is little more than business intelligence on steroids is just wrong, and those who fail to understand its importance and exploit it may well be the losers.

5 ways Big Data is going to blow your mind and change your world

SUMMARY:
Call it whatever you want — big data, data science, data intelligence — but be prepared to have your mind blown. Imagination and technology are on a collision course that will change the world in profound ways.

Some people say big data is wallowing in the trough of disillusionment, but that’s a limited worldview. If you only look at it like an IT issue it might be easy to see big data as little more than business intelligence on steroids. If you only see data science as a means to serving better ads, it might be easy to ask yourself what all the fuss is about.

If you’re like me, though, all you see are the bright lights ahead. They might be some sort of data nirvana, or they might be a privacy-destroying 18-wheeler bearing down on us. They might be both. But we’re going to find out, and we’re going to find out sooner rather than later.

This is because there are small pockets of technologists who are letting their imaginations lead the way. In a suddenly cliché way of saying it, they’re aiming for 10x improvement rather than 10 percent improvement. They can do that because they now have a base set of analytic technologies and techniques that are well positioned to solve, with relatively little effort, whatever data problems are thrown their way.

Here are some themes from our just-concluded Structure: Data conference that I think highlight the promise of data, but also the challenges that lie ahead.

Man and machine unite

Machine learning is already infiltrating nearly every aspect of our digital lives, but its ultimate promise will only be realized when it becomes more human. That doesn’t necessarily mean making machines think like human brains (although, granted, that’s a vision currently driving billions of research dollars), but just letting people better interact with the systems and models trying to discover the hidden patterns in everything around us.

Whatever shape it takes, the results will be revolutionary. We’ll treat diseases once thought untreatable, tackle difficult socio-economic and cultural issues, and learn to experience the world around us in entirely new ways. Maybe that consumer-experience scourge known as advertising might actually become helpful rather than annoying.

That would really be something.


Data science, or data intelligence?

I’m not sure there needs to be a distinction between data science and data intelligence, but the latter does connote a grander goal. It’s about trying to solve meaningful problems rather than just serving ads; about trying to understand why things happen just as well as when they’ll happen. This means learning to work with smaller, messier data than we might like — certainly smaller and messier than the data sets underneath most of the massive web-company data science undertakings.

But just think about being able to go beyond predictive models and into a world of preventative — or even professorial — models. If you know what I like, where I go and who my friends are, it might be fairly easy to predict what I want to buy. Figuring out how my decision to buy something might affect my overall well-being and then telling me why? That’s a little more difficult and a lot more beneficial.
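
To make the predictive half of that distinction concrete, here is a minimal sketch with made-up toy data, assuming scikit-learn is available. The features stand in for “what I like, where I go and who my friends are,” and the label is whether a purchase happened:

```python
# Toy purchase prediction. Features per user (all invented):
# [likes_the_category, store_visits_per_week, friends_who_bought]
from sklearn.linear_model import LogisticRegression

X = [
    [1, 3, 2], [1, 1, 0], [0, 0, 0], [0, 2, 1],
    [1, 4, 3], [0, 1, 0], [1, 0, 1], [0, 3, 2],
]
y = [1, 0, 0, 0, 1, 0, 1, 1]   # 1 = bought, 0 = did not

model = LogisticRegression(max_iter=1000).fit(X, y)

# Probability that a new user with these traits buys:
print(model.predict_proba([[1, 2, 2]])[0][1])
```

The preventative model imagined above would have to go further and reason about the consequences of that purchase, which no amount of curve fitting over past behavior provides by itself.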

Telling stories with data

Have you ever looked at a chart and wondered what the heck it was supposed to be telling you? Or downloaded a report of your Facebook activity only to ask yourself if all the disparate data points come together to paint a bigger picture? Or tried — and failed — to stop a terrorist before his movement to recruit an army of followers gained critical mass?

A big problem with a lot of data analysis right now is that it still treats data points as entities unto themselves, largely disconnected from those around them. However, data needs context in order to be really useful; it’s context that turns disparate data points into a story. Don’t just tell me how many steps I took today or the time of day I’m most active on Facebook; tell me how that relates to the rest of my life.

And don’t just tell me that someone said he wants to kill Americans. Rather, tell me a story about how much more frequently he’s saying it and how much more inciteful his words are becoming.

The internet of things knows all

The mobile phone in your pocket is tracking your every movement and can also monitor the sounds that are surrounding you. That fitness tracker you’re wearing is identifying you by how you walk. Your smart meter data shows when you’re home, when you’re away and when you’re in the shower. Sensors in everything from toothbrushes to cars are quantifying every aspect of our lives.
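
A trivial sketch shows how little sophistication is needed to infer occupancy from smart-meter data. The hourly readings and the baseline load below are invented:

```python
# Naive occupancy inference: flag hours where consumption exceeds an
# assumed always-on baseline (fridge, standby devices). Data is made up.
hourly_kwh = [0.2, 0.2, 0.2, 0.2, 0.2, 0.3, 1.1, 1.8,   # 00:00-07:00
              0.4, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.4,   # 08:00-15:00
              0.5, 1.5, 2.2, 2.0, 1.6, 0.9, 0.4, 0.2]   # 16:00-23:00

BASELINE_KWH = 0.5   # assumed always-on load

occupied = [hour for hour, kwh in enumerate(hourly_kwh) if kwh > BASELINE_KWH]
print("Likely home during hours:", occupied)   # morning and evening peaks
```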

This data can still be a lot to deal with in terms of its volume, velocity and variety, and we’re still not quite sure what to do with it even if the right tools were in place. But all sorts of entrepreneurs, powerful institutions and intelligence agents have ideas. The technological pieces are coming along nicely, too. Just sayin’ …

This semantic life

The semantic web lives on; only now it’s spreading well beyond our search engines and even our web browsers. Soon enough, we’ll be able to surface relevant content and people simply by highlighting a passage of text in whatever we’re reading — web page or not — on any type of device. When we speak to our devices, they’ll not only know what we’re saying, but also what we really want, even without the help of specific commands or keywords.

That’s a powerful proposition in a world where we increasingly expect our interactions to be hands-free and our answers to come as fast as our questions. Of course, what’s powerful in the hands of consumers driving in their cars or sitting on their couches is even more powerful in the hands of doctors trying to diagnose difficult diseases or aid workers trying to lend a helping hand in places where they don’t know the customs or even speak the language.