Year: 2018

25 Apr 2018

German insurance ‘robo-advisor’ Clark scores $29 million Series B

Clark, one of a plethora of so-called ‘insurtech’ startups offering something akin to a digital insurance brokerage all delivered through a convenient mobile app, has closed a hefty $29 million in Series B funding.

The round was led by fintech investor Portag3 Ventures, and VC fund White Star Capital, with participation from a number of existing investors including Coparion, Kulczyk Investments, and Yabeo Capital. It brings Clark’s total funding to $45 million.

Founded in July 2015 — and originally out of fintech company builder Finleap — Frankfurt and Berlin-based Clark has built what it describes as an “insurance robo-advisor”. Once you’ve given the startup a mandate to act as your insurance broker, the Clark iOS, Android and web apps let you manage and purchase various insurance products, spanning the full gamut of life, health, and property insurance.

Specifically, its algorithms analyze your current insurance situation and automatically propose ways to improve your coverage or get a better deal than the one you are currently on. It makes the majority of its revenue from management and admin fees paid by insurance companies on its platform, but also via commission on any new policy taken out.

To date, Clark says it has acquired close to 100,000 customers for its digital insurance services, making it one of the largest digital insurance players in Europe. This, we’re told, translates to $310 million in contract volume, which the insurtech startup says is a ten-fold increase from the contract volume it managed in 2016 at the time of its Series A.

Some of that growth appears to have come from partnerships with a number of banks in Germany, including challenger N26, and incumbents ING-DiBa, and DKB. I’m also told Clark has started working on a B2B line, offering Clark technology to banks and other insurance companies as a white-label product. Four deals with leading companies have been signed and are “in development”.

“Over the next few years, we will continue to focus on growth to cement our digital insurance management as the standard in Europe,” says Dr. Christopher Oster, CEO and co-founder of Clark, in a statement. “To drive Clark’s development, we will invest in our team in both Frankfurt and Berlin, especially in technology and marketing”.

25 Apr 2018

Looks like Google is changing Android’s gun emoji into a water gun

Back in 2016, Apple swapped out the graphic used for its gun emoji, replacing the realistically drawn handgun with a bright green water gun.

Just a few days ago, Twitter followed suit.

And now, it seems, so will Google. The gun emoji on Android will likely soon appear as a bright orange and yellow super soaker lookalike.

As first noted by Emojipedia, Google has just swapped the graphics in its open Noto Emoji library on GitHub. These are the Emoji that Android uses by default, so the same change will presumably start to roll out there before too long.

At this point, Google making this change seemed inevitable. It seemed likely to happen as soon as Apple made the jump; once others started following suit (Twitter earlier this week, and Samsung with the release of the Galaxy S9), it became a certainty.

It’s a matter of clarity in communication. If a massive chunk of people (iOS users) can send a cartoony water toy in a message that another massive chunk of people (Android users) receive as a realistically drawn handgun, there’s room for all sorts of trouble and confusion. Apple wasn’t going to reverse course on this one — and now that others have made the change, Google would’ve been the odd one out.

25 Apr 2018

Facebook shuffle brings a new head of US policy and chief privacy officer

Trying times in Menlo Park, it seems: amid assaults from all quarters largely focused on privacy, Facebook is shifting some upper management around to better defend itself. Its head of policy in the U.S., Erin Egan, is returning to her chief privacy officer role, and a VP (and former FCC chairman) is taking her spot.

Kevin Martin, until very recently VP of mobile and global access policy, will be Facebook’s new head of policy. He was hired in 2015 for that job; he was at the FCC from 2001 to 2009, Chairman for the last four of those years. So whether you liked his policies or not, he clearly knows his way around a roll of red tape.

Erin Egan was chief privacy officer when Martin was hired, and at that time also took on the role of U.S. head of policy. “For the last couple years, Erin wore both hats at the company,” said Facebook spokesperson Andy Stone in a statement to TechCrunch.

“Kevin will become interim head of US Public Policy while Erin Egan focuses on her expanded duties as Chief Privacy Officer,” Stone said.

No doubt both roles have grown in importance and complexity over the last few years; one person performing both jobs doesn’t sound sustainable, and apparently it wasn’t.

Notably, Martin will now report to Joel Kaplan, with whom he worked previously during the Bush-Cheney campaign in 2000 and for years under the subsequent administration. Deep ties to Republican administrations and networks in Washington are probably more than a little valuable these days, especially to a company under fire from would-be regulators.

24 Apr 2018

Particle brings an LTE cellular module to market for networked devices working off of 2G and 3G

Particle, a developer of networking hardware and software for connected devices, has released an LTE-enabled module for product developers.

The new device specifically targets folks whose devices were reliant on retiring 2G and 3G networks, according to the company, and includes built-in cloud and SIM support.

Even as big telecom companies and vendors move ahead with 4G and now 5G networking equipment, those technologies aren’t necessarily the best for most networked devices, according to Particle.

LTE hardware is cheaper, offers better battery life, and has range characteristics that are more appropriate for industrial devices that may need to communicate across distances or through obstacles (like walls, other machines, doors, or floors).

In particular, Particle sees demand for its devices in hard-to-reach or widely dispersed sensor networks — like industrial factory floors or in an agricultural monitoring setting for a farm or field.

“As US carriers are quickly moving to end 2G and 3G support, and global carriers plan for LTE network rollouts, the timing for an LTE strategy is more critical than ever,” said Bill Kramer, EVP of IoT Solutions at KORE, which provides managed IoT networks, application enablement and location-based services, in a statement.

The new LTE product is part of a suite of offerings from Particle — including a device cloud, operating system, and developer toolkit, the company said.
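The device cloud is the piece product developers actually program against. As a rough sketch of what that looks like in practice — reading a cloud-exposed sensor variable over Particle’s public REST API, where the device ID, variable name, and token below are placeholders and the response shape is an assumption:

```python
# Hypothetical sketch of polling a sensor reading from Particle's
# Device Cloud REST API. The device ID, variable name, and token
# are placeholders; a real call needs live credentials.
import json
import urllib.request

API_BASE = "https://api.particle.io/v1"

def variable_url(device_id: str, var_name: str) -> str:
    """Build the Device Cloud URL for a cloud-exposed variable."""
    return f"{API_BASE}/devices/{device_id}/{var_name}"

def read_variable(device_id: str, var_name: str, token: str) -> dict:
    """Fetch one reading from a live device (requires network access)."""
    req = urllib.request.Request(
        variable_url(device_id, var_name),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The point of the pre-integrated stack is that the firmware, SIM, and this cloud endpoint are already wired together, so the developer’s job reduces to calls like the one above.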

By providing a pre-integrated solution, Particle said that its hardware represents a faster, far less complicated path to market.

“We launched our cellular development kit, the Electron, to give our developer community access to the power of cellular,” said Zach Supalla, Co-Founder and CEO of Particle, in a statement. “The following industrial E Series line made go-to-market with 2G/3G scalable for enterprises. Now with our LTE module, businesses will evolve alongside the quickly-changing cellular landscape without missing a beat.”

The new module comes in two LTE CAT-M1 variants (LTE B13 and LTE B2/4/5/12); it is fully certified, low-profile, surface-mountable for industrial environments, and powered by Qualcomm’s MDM9206 IoT Modem and u-blox’s Sara-R410-02B module.

The new LTE hardware evaluation kit ships for $89 with an evaluation board, a sample temperature sensor, and accessories to build out a proof of concept, the company said. Individual modules are priced at $69.

Particle counts 8,500 customers and more than 140,000 developers building networking technologies for consumer and industrial devices. The company says its customers range from global energy provider Engie and design studio Ideo to indoor crops provider Grow Labs and coffee pioneer Keurig.

 

24 Apr 2018

Hidden Amazon page drops hints about a ‘Fire TV Cube’

Rumors have been floating around for a few months now of a new device from Amazon that would mash up the media streaming capabilities of its Fire TV line with the voice assistant abilities of the Echo. After leaked images turned up showing a cube-shaped device that seemed to fit the bill, people started referring to this still-unannounced device as the “Fire TV Cube.”

Sure enough: a seemingly official page has been found tucked away on Amazon.com that mentions a Fire TV Cube, and promises “details coming soon.”

As found by AFTVNews, the placeholder splash page offers up little beyond the promise of eventual details. It’s got a big ol’ header that says “What is Fire TV Cube?”, a button to let you sign up for more details and… well, that’s about it.

There’s also a mention of a “Fire TV Cube” on this page, tucked away in Amazon’s account management backend to let folks toggle their subscriptions to any one of the dozens of newsletters/email campaigns that Amazon sends out.

According to the original leaks, the Fire TV Cube would have the speaker, far-field microphones and LED light bar of an Echo and the 4K video-capable guts of a Fire TV, allowing you to hook it up to your TV and have one device doing double the duties.

In other words: While there’s still no official word on when (or if!) this thing will actually ship, it definitely looks like they’re prepping for something behind the scenes.

24 Apr 2018

Facebook and the perils of a personalized choice architecture

The recent Facebook-Cambridge Analytica chaos has ignited a fire of awareness, bringing the risks of today’s data surveillance culture to the forefront of mainstream conversations.

This episode, and the many disturbing prospects it has emphasized, has forcefully awakened a sleeping giant: people are seeking information about their privacy settings and updating their app permissions, a “Delete Facebook” movement has taken off, and the FTC has launched an investigation into Facebook, sending the company’s stock down. A perfect storm.

The Facebook-Cambridge Analytica debacle is composed of pretty simple facts: Users allowed Facebook to collect personal information, and Facebook facilitated third-party access to the information. Facebook was authorized to do that pursuant to its terms of service, which users formally agreed to but rarely truly understood. The Cambridge Analytica access was clearly outside the scope of what Facebook, and most of its users, authorized. Still, this story has turned into an iconic illustration of the harms generated by massive data collection.

While it is important to discuss safeguards for minimizing the prospects of unauthorized access, the lack of consent is the wrong target. Consent is essential, but its artificial quality has been long-established. We already know that our consent is, more often than not, meaningless beyond its formal purpose. Are people really raging over Facebook failing to detect the uninvited guest who crashed our personal information feast when we’ve never paid attention to the guest list? Yes, it is annoying. Yes, it is wrong. But it is not why we feel that this time things went too far.

In their 2008 book, “Nudge,” Cass Sunstein and Richard Thaler coined the term “choice architecture.”  The idea is simple and pretty straightforward: the design of the environments in which people make decisions influences their choices. Kids’ happy encounters with candies in the supermarket are not serendipitous: candies are commonly located where children can see and reach them.

Tipping options in restaurants usually come in threes because individuals tend to go with the middle choice, and you must exit through the gift shop because you might be tempted to buy something on your way out. But you probably knew that already, because choice architecture has been here since the dawn of humanity and is present in any human interaction, design and structure. The term choice architecture is 10 years old, but choice architecture itself is way older.

The Facebook-Cambridge Analytica mess, together with many preceding indications before it, heralds a new type of choice architecture: personalized, uniquely tailored to your own individual preferences and optimized to influence your decision.

We are no longer in the familiar zone of choice architecture that equally applies to all. It is no longer about general weaknesses in human cognition. It is also not about biases that are endemic to human inferences. It is not about what makes humans human. It is about what makes you yourself.

When the information from various sources coalesces, the different segments of our personality come together to present a comprehensive picture of who we are. Personalized choice architecture is then applied to our datafied curated self to subconsciously nudge us to choose one course of action over another.

The soft spot at which personalized choice architecture hits is that of our most intimate self. It plays on the dwindling line between legitimate persuasion and coercion disguised as voluntary decision. This is where the Facebook-Cambridge Analytica story catches us — in the realization that the right to make autonomous choices, the basic prerogative of any human being, might soon be gone, and we won’t even notice.

Some people are quick to note that Cambridge Analytica did not use the Facebook data in the Trump campaign and many others question the effectiveness of the psychological profiling strategy. However, none of this matters. Personalized choice architecture through microtargeting is on the rise, and Cambridge Analytica is not the first nor the last to make successful use of it.

Jigsaw, for example, a Google-owned think tank, is using similar methods to identify potential ISIS recruits and redirect them to YouTube videos that present a counter-narrative to ISIS propaganda. Facebook itself was accused of targeting at-risk youth in Australia based on their emotional state. The Facebook-Cambridge Analytica story may have been the first high-profile incident to survive numerous news cycles, but many more are sure to come.

We must start thinking about the limits of choice architecture in the age of microtargeting. Like any technology, personalized choice architecture can be used for good and evil: It may identify individuals at risk and lead them to get help. It could motivate us into reading more, exercising more and developing healthy habits. It could increase voter turnout. But when misused or abused, personalized choice architecture can turn into a destructive manipulative force.

Personalized choice architecture can frustrate the entire premise behind democratic elections — that it is we, the people, and not a choice architect, who elect our own representatives. But even outside the democratic process, unconstrained personalized choice architecture can turn our personal autonomy into a myth.

Systematic risks such as those induced by personalized choice architecture would not be solved by people quitting Facebook or dismissing Cambridge Analytica’s strategies.

Personalized choice architecture calls for systematic solutions that involve a variety of social, economic, technical, legal and ethical considerations. We cannot let individual choice die out in the hands of microtargeting. Personalized choice architecture must not turn into nullification of choice.

 

24 Apr 2018

‘Avengers: Infinity War’ is an overstuffed adventure with a terrific villain

When I saw the first trailer for Avengers: Infinity War, I was really excited and really worried.

Excited because holy crap, there were so many characters. Iron Man! Captain America! Thor! Black Panther! Black Widow! The Vision! The Guardians of the Galaxy! And they were all going to be in a movie together!

Worried because, holy crap, there were so many characters. How could you squeeze all of them into a single film?

The answer is, with great difficulty. To be fair, Infinity War isn’t the giant mess that it could have been — in fact, it’s a lot of fun. But there’s simply not enough movie to do justice to the enormous cast.

[Image: Marvel Studios’ Avengers: Infinity War. L to R: Spider-Man/Peter Parker (Tom Holland), Iron Man/Tony Stark (Robert Downey Jr.), Drax (Dave Bautista), Star-Lord/Peter Quill (Chris Pratt) and Mantis (Pom Klementieff). Photo: Film Frame. © Marvel Studios 2018]

Some of those characters fare better than others. For most of Infinity War, the “cosmic” side of the Marvel Cinematic Universe is well-represented by Thor (Chris Hemsworth) and the Guardians of the Galaxy (Chris Pratt, Zoe Saldana and team), who end up working together. Screenwriters Christopher Markus and Stephen McFeely seem more willing to spend time with them, even when they’re not involved in a giant battle, and that pays off with the movie’s funniest moments — as well as scenes with real weight and melancholy.

Meanwhile, Iron Man (Robert Downey Jr.) and Spider-Man (Tom Holland) also get some good jokes in, recapturing the fun of their relationship in Spider-Man: Homecoming.

Everyone else? Well, they usually get introduced with a nice quip or a badass moment, designed to remind you of how much you liked them in their own movies. But afterwards, they tend to fade into the background, becoming just another moving part in the big action set pieces (and yes, this includes Marvel’s new MVP Black Panther). That’s probably about as good as any filmmaker could do when trying to stuff the entire Marvel Universe into a single movie, but it’s still a little disappointing after the first Avengers film managed to give us five distinct and memorable heroes (sorry, Hawkeye), and it got so much mileage out of throwing those heroes together.

[Image: Marvel Studios’ Avengers: Infinity War. Thanos (Josh Brolin). Photo: Film Frame. © Marvel Studios 2018]

Luckily, the film’s real strength isn’t on the heroic side. Instead, as in Black Panther (and virtually no other Marvel movie), Infinity War‘s most memorable character is actually the villain, Thanos. Previous films have reduced Thanos to a purple guy who utters a few threatening lines while sitting in his silly-looking space throne.

In Infinity War, Thanos is at the center of the action. His quest to acquire the superpowered Infinity Stones drives the story, as all of our heroes scramble to stop him, leading to big battles on Earth and in space. He even gets to kill off a surprisingly large number of those heroes (though I don’t expect all of those deaths to stick).

Over the course of the film, Thanos emerges as a dangerous and powerful alien who’s absolutely devoted to his mission of destroying half of the life in the universe. As portrayed by Josh Brolin (via voice acting and motion capture), he doesn’t come off as a cackling villain, but rather a weary soldier at the end of a long quest.

I shouldn’t say too much about where that quest leads, but I will note that Infinity War feels very much like the first half of a two-part film, with an ending that sets up the still-untitled Avengers 4 (due on May 3, 2019).

I do think Infinity War falls short of Marvel’s best movies, including Black Panther and Captain America: The Winter Soldier (which, like Infinity War, was directed by Anthony and Joe Russo). But here’s one simple measure of the film’s success: Despite my reservations, that cliffhanger worked, and I cannot wait to find out what happens next. It feels like it’s going to be a long wait till 2019.

24 Apr 2018

Vacation rental management service Guesty raises $19.75M

As the vacation rental sector heats up — with Airbnb making even more moves to expand its portfolio of services to include multiple tiers of rentals — there’s going to be more and more of a need for people who manage a large number of properties.

Guesty is one service that aims to do that, and today a filing with the Securities and Exchange Commission shows that it has raised $19.75 million in a new Series B round of financing. While Airbnb may be the dominant home vacation rental service, there are others like VRBO, and managing properties across multiple platforms can otherwise mean juggling all of that information in something analog like an Excel sheet. Guesty is a kind of CRM tool for property management, tracking everything from guest check-ins to the revenue each property generates, and it also gives property owners tools to manage operations beyond just the tracking.

Airbnb earlier this year started rolling out more tiers of home categories that are geared toward different kinds of travelers. That included high-end tiers called Airbnb Plus and Beyond by Airbnb. While these new categories potentially offer a more granular set of choices for consumers, it might make managing those properties a little more difficult — especially if it’s across multiple different services like Airbnb and VRBO, or even more analog channels. Tools like Guesty can help owners of multiple different properties (that might span multiple tiers) turn those homes into an actual business.

There are also plenty of startups looking to offer additional services to people managing multiple properties on vacation rental sites. Beyond Pricing, for example, looks to help property managers figure out how to best price their homes. Airbnb has its own pricing algorithms, but there’s clear demand for tools that work across multiple platforms. Guesty was part of Y Combinator’s winter 2014 class, and raised $3 million in May last year.

While Airbnb continues to try to expand into new categories and offer home owners a way to rent out their homes — or for owners of multiple properties to run a side business — it’s not the only approach to vacation rentals. One startup, Selina, is looking to convert existing properties into kinds of campuses that cater to different tiers of travelers, ranging from those looking to stay in a hostel to those willing to pay for their own rooms. Selina earlier this month said it raised $95 million. Selina is more of a hotel-ish model as it expands from geography to geography, but it also shows that there’s demand for an experience that can cater to a wide variety of guests.

24 Apr 2018

IoT ‘conversation’ and ambient contextuality

A few years back, I wrote about the way we communicate with our technology. It was obvious even then that a big game-changer would be enabling a reliable conversational interaction with technology in order to overcome the friction humans experience when we use our modern tools, be they apps, phones, cars or semi-autonomous coffee makers. Too much typing and swiping and app management crowds our experiences with our connected “things.”

To some degree, this game-changer has come to pass.

Voice interaction is now a big part of technology interface in everything from smartphones to virtual assistant/smart speaker products to connected home and vehicle solutions — and so it will be going forward. While this is marked progress, it is not really “conversation.”

For the most part, the state of voice interaction is more akin to commanding a four-year-old to do your bidding than having a useful, rich conversation with a friend or assistant. As we continue to minimize friction and advance usability of technology via voice, it is clear that more is needed. I’ll predict right here that the next big game-changer in technology interface is ambient contextuality.

Ambient contextuality hinges on the idea that there is information hidden all around us that helps clarify our intent in any given conversation. Answering the simple questions of who, what, where and when is now easier than ever as IoT continues to mine and mind the data of our lives. I once sketched out a derivative needs pyramid for IoT devices using the example of Maslow’s hierarchy of needs pyramid to chart a course for “thing-actualization,” whereby our technology could use analytics, learned logic and predictive behavior to establish groups and networks of things and enable other more “complex” things. The voice interfaces and natural-language processing technology on display in interactive speakers such as Amazon’s Alexa or Apple’s HomePod are examples of this actualization in action — predictive analytics and machine learning imbued into objects and interfaces to technology that collect data and collectively power progressively complex functions, often in real time.

But it is still not conversation. There is a new, nascent communications triangle between people, processes and things that fuels usability, and it still has a bit of its own growing up to do.

Deeper questions like how and why are also key to conversation for humans. To achieve truly conversational interactions, one or many of the answers to these questions not only need to be captured, but also learned and retained. Recently, Google has made some good strides into this for targeted types of online search. But we have to do much more before something akin to natural conversation emerges.

Establishing ambient contextuality to enable the kinds of conversations we do want to have is the actual end goal of all this connected stuff.

Most human conversation is abridged. Known quantities may not even be discussed, but they are deeply factored into interaction. A simple example is shifting from nouns and proper names to pronouns. “I asked about Dave’s vacation and Jen said she’d take him to the airport to kick it off right.” This may seem like a small thing, but think about how unnatural a conversation is when you cannot use human “shorthand.” Referring to every subject in every sentence by its proper name quickly becomes as uncomfortable as it is unnatural.
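As a toy illustration of what resolving that shorthand involves, here is a sketch that maps pronouns back to the most recently mentioned referent. Everything here — the names, the two-category pronoun table, the last-mention rule — is invented for the example; real coreference resolution is vastly richer than this:

```python
# Toy pronoun resolver: track named referents as they are mentioned,
# then map later pronouns to the most recent match. Purely illustrative.

CONTEXT = {}  # pronoun -> most recently mentioned matching referent

def observe(name: str, gender: str) -> None:
    """Record a named referent so later pronouns can resolve to it."""
    pronouns = {"m": ["he", "him", "his"], "f": ["she", "her", "hers"]}
    for p in pronouns[gender]:
        CONTEXT[p] = name

def resolve(token: str) -> str:
    """Replace a pronoun with its referent if known, else pass through."""
    return CONTEXT.get(token.lower(), token)

# "I asked about Dave's vacation and Jen said she'd take him..."
observe("Dave", "m")
observe("Jen", "f")
print([resolve(w) for w in ["she", "him"]])  # ['Jen', 'Dave']
```

Even this trivial version shows why the problem is hard: the moment a second male referent is observed, “him” silently rebinds, which is exactly the kind of ambiguity ambient context has to disambiguate.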

A simple definition of a conversation is an informal exchange of sentiment and ideas, and it’s the way people naturally communicate with each other. Informal conversation is contextual, cohesive and comprehensive. It involves a lot of storytelling. It ebbs and flows, jumps around in time and tense, references shared experience or knowledge to exchange new experiences and knowledge. It is inference infused and doesn’t require adherence to strict conventions. But this is pretty much the exact opposite of the way “things” are designed to communicate. Machine communication is specific to whatever technology drives it and is based on code. It is binary, resource-constrained, inflexible, standalone, purely informational and lacks context. It is rigid and formal. It is very much not storytelling.

This elemental difference in communication creates a usability gap, which we have traditionally bridged by forcing people to learn to “speak” machine — download a new app to control every new device, use this set of wake words or language constructs for one device and an entirely different set for another, update, update, update, and if-this-then-that for everything. It’s why so many “things” end up thrown in a drawer after two weeks, never to be used again. This is not the kind of conversation humans want to have.

Putting aside the creepiness factor and important privacy issues surrounding devices that constantly collect information about us, establishing ambient contextuality to enable the kinds of conversations we do want to have is the actual end goal of all this connected stuff. The aim is to smooth our experiences with our technology throughout the day and blur the seams enough to feel natural to us.

The challenge now is to make our machines “speak” human — to imbue them with context and inference and informality so that conversation flows naturally. DARPA has been working on it. So, too, Amazon and Google. In fact, most technology efforts are concerned with reducing interface friction. Improving the quality of our conversation is key to achieving that goal.

Development on IoT, augmented and mixed reality, Assistive Intelligence (my term for AI, but that’s an entirely different conversation) and even the miniaturization and extension properties on display in mobility and power advancements are all examples of the quest for that quality. Responsibly developed ambient contextuality, and ultimately natural conversation, will be better enabled by these technologies, and our lives will become much more conversational soon. Once we experience reliable and useful conversations with our technological world, I think we will all be hooked.
