Year: 2018

08 May 2018

iOS will soon disable USB connection if left locked for a week

In a move seemingly designed specifically to frustrate law enforcement, Apple is adding a security feature to iOS that totally disables data being sent over USB if the device isn’t unlocked for a period of 7 days. This spoils many methods for exploiting that connection to coax information out of the device without the user’s consent.

The feature, called USB Restricted Mode, was first noticed by Elcomsoft researchers looking through the iOS 11.4 code. It disables USB data (it will still charge) if the phone is left locked for a week, re-enabling it if it’s unlocked normally.
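
As described, the mechanics amount to a simple timer policy: a rolling seven-day window that any normal unlock resets. A minimal sketch of that logic in Python (illustrative only; the class and method names are ours, not Apple's):

```python
from datetime import datetime, timedelta

# The article describes a seven-day window; the constant name is ours.
USB_DATA_WINDOW = timedelta(days=7)

class LockPolicy:
    def __init__(self):
        self.last_unlock = datetime.now()

    def on_unlock(self):
        # A normal unlock (PIN, Touch ID, Face ID) resets the clock
        # and restores USB data as if nothing happened.
        self.last_unlock = datetime.now()

    def usb_data_allowed(self):
        # Charging is unaffected; only the data lines are gated.
        return datetime.now() - self.last_unlock < USB_DATA_WINDOW
```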

Normally, when an iPhone is plugged into another device, whether it's the owner's computer or someone else's, there is an interchange of data in which the phone and computer figure out whether they recognize each other, whether they're authorized to send or back up data, and so on. This handshake can be exploited if the connected computer is attempting to break into the phone.

USB Restricted Mode is likely a response to the fact that iPhones seized by law enforcement or by malicious actors like thieves essentially will sit and wait patiently for this kind of software exploit to be applied to them. If an officer collects a phone during a case, but there are no known ways to force open the version of iOS it’s running, no problem: just stick it in evidence and wait until some security contractor sells the department a 0-day.

But what if, a week after that phone was taken, it shut down its Lightning port's ability to send or receive data, or even to recognize that it's connected to a computer? That would deny law enforcement the opportunity to ever attempt to break into the device unless they move quickly.

On the other hand, if the phone's owner had simply left it at home while on vacation, they could pick it up, enter their PIN, and it would be as if nothing had happened. Like the best security measures, it will have adversaries cursing its name while users may not even know it exists. Really, this is one of those security features that seems obvious in retrospect, and I would not be surprised if other phone makers copy it in short order.

Had this feature been in place a couple of years ago, it would have prevented the entire drama with the FBI, which milked its ongoing inability to access a target phone for months, reportedly concealing its own capabilities all the while, likely to make the case a political issue and pressure lawmakers into compelling Apple to help. That kind of grandstanding doesn't work so well on a seven-day deadline.

It’s not a perfect solution, of course, but there are no perfect solutions in security. This may simply force all iPhone-related investigations to get high priority in courts, so that existing exploits can be applied legally within the seven-day limit (and, presumably, every few days thereafter). All the same, it should be a powerful barrier against the kind of eventual, potential access through undocumented exploits from third parties that seems to threaten even the latest models and OS versions.

08 May 2018

Google brings its visual assistant to Android devices with Google Assistant

Google said it is rolling out its visual assistant to Android phones this summer: a full-screen experience that brings up information, as well as ways to interact with apps, in response to a Google Assistant voice request.

When an Android user makes a query through Google Assistant, Google will provide a more interactive visual experience on the phone. That includes ways to interact with smart home products, like thermostats, or directly with apps like the Starbucks app. You can make a voice query such as "what is the temperature right now," and a display appears with a way to change the temperature. The visual assistant is coming to iOS devices this year as well.

Users will also be able to swipe up for a visual snapshot of what's happening that day, including navigation to work, reminders and other such services. All of this aims to give users quick access to their daily activities without having to string together a series of sentences or taps to get there.

Google's visual assistant on phones is more of an evolution of how users interact with Google's services and integrations in a seamless way. Voice interfaces have become increasingly robust with the emergence of Google Home and Alexa, allowing users to interact with devices and other services just by speaking to their phones or home devices. But some interactions are more complex, such as tweaking the temperature slightly, and for those a visual display makes more sense.

Each new touch point developers get, such as a full-screen display after a voice query, gives companies more ways to hold the attention of potential customers and users. While Alexa, as well as SoundHound with its Houndify platform, offers developers a way to embed a voice assistant in their tools, Google is trying to figure out what the next step is for a user after asking Google a question. That makes more sense on the phone, where the user can quickly get a new interface with some kind of visual element.

08 May 2018

A Google Assistant update will teach kids to say ‘please’

No more rudely yelling at your Google Home smart speaker, kids. Google today announced at its I/O developer conference a new Google Assistant setting for families called "Pretty Please." The feature will teach children to use polite language when interacting with the Google Assistant; in return, they'll receive thanks from the virtual assistant.

For example, when children say “please,” the Assistant will respond with some sort of positive reinforcement while performing the requested task.

During a brief demo, the Assistant was shown interacting with kids, saying things like "thanks for saying please," "thanks for asking so nicely," or "you're very polite."

The feature arrives at a time when parents are growing concerned that kids are learning to treat the virtual assistants in smart speakers rudely, and that this rudeness will carry over into their interactions with people.

Amazon recently addressed this problem with an Alexa update called Magic Word, which is just now rolling out.

Google says its Pretty Please feature will launch later this year.

08 May 2018

Here’s how much Uber’s flying taxi service will cost

Uber’s holding a big conference today in Los Angeles, Uber Elevate, to share more about its ambitions to launch a flying taxi service. For those unfamiliar, Uber’s goal is to start testing these vertical takeoff and landing vehicles in 2020 and have its first official ride in 2023. At the summit today, Uber Head of Elevate Eric Allison shed some light on the cost of uberAIR, Uber’s aerial ridesharing service, for consumers.

The passenger cost per mile, Allison said, needs to be competitive with the variable cost of car ownership, which runs between $0.464 and $0.608 per mile, according to AAA.

However, uberAIR will not be cheaper on a per-passenger-mile basis at launch. Initially, uberAIR will cost $5.73 per passenger mile. In the near term, Uber says it will get the cost down to $1.86 per passenger mile, before ideally reaching $0.44 per passenger mile. At that point, it would actually be cheaper to fly uberAIR than to drive your own car.
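
Put next to AAA's range, Uber's own numbers make the break-even arithmetic explicit. A quick sketch (the prices are from the article; the comparison logic is ours):

```python
# Per-passenger-mile prices quoted at Elevate, versus AAA's
# $0.464-$0.608 per-mile range for car ownership.
CAR_LOW, CAR_HIGH = 0.464, 0.608

uberair_prices = {"launch": 5.73, "near term": 1.86, "long term": 0.44}

for stage, price in uberair_prices.items():
    if price < CAR_LOW:
        verdict = "cheaper than driving"
    elif price <= CAR_HIGH:
        verdict = "competitive with driving"
    else:
        verdict = "pricier than driving"
    print(f"{stage}: ${price:.2f}/mi -> {verdict}")

# launch: $5.73/mi -> pricier than driving
# near term: $1.86/mi -> pricier than driving
# long term: $0.44/mi -> cheaper than driving
```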

Allison said to envision a world where your commute will be faster and cheaper through the air, to the point where it wouldn’t economically make sense to own your own car. Meanwhile, Allison said there’s “surprisingly large demand for aerial ridesharing.” That’s based on a study Uber conducted that found if uberAIR were available today, 700 million people would choose to use it.

I’m here at Elevate, so be on the lookout for more news over the next couple of days.

08 May 2018

Google’s first Smart Displays will launch in July

Smart Displays were the talk of Google’s big push at CES this year — but there’s been nary a peep in the intervening months. As expected, we got a little more insight into the company’s Echo Show competitor at today’s big I/O kickoff — though the actual devices are still a few months out, officially launching in July.

The company walked through a demo of Lenovo's device, easily the best looking of the bunch. It's clear that Google has invested some resources into building a visual-first version of Assistant, justifying the addition of a screen to the experience.

The key to the offering, naturally, is YouTube, which was at the center of a tug of war between Google and Amazon around the Echo Show's launch. Google notably pulled its video offering from Amazon's device, and Amazon is now rumored to be working on its own video offering specifically for that product.

Along with Lenovo, JBL, LG and Sony have all announced plans to get in on the category, offering a much broader selection than Amazon's first-party Show and Spot. The new devices will no doubt be on hand at today's event. We'll be taking a deeper dive shortly after the keynote.

08 May 2018

Gemini’s Tyler Winklevoss informs Bill Gates that actually he can short bitcoin

While plenty of the scathing criticism aimed at the cryptocurrency world is well deserved, sometimes a jab fails to land. Of the slew of grossly worded insults that the institutionally rich lobbed at the bitcoin crowd over the weekend, Bill Gates' may have landed the furthest from the mark.

“Bitcoin and other cryptocurrencies are kind of a pure ‘greater fool theory’ type of investment,” Gates told CNBC on Monday. “I would short it if there was an easy way to do it.”

It wasn't clear whether Gates is aware that there is indeed a way to short bitcoin but finds it cumbersome, or whether he just doesn't really follow this whole crazy cryptocurrency thing at all. Still, that didn't keep singular Winklevii Tyler Winklevoss from weighing in with what is, dare we say, almost an own.

As Winklevoss noted, last year, the Chicago-based CBOE and CME launched their respective bitcoin futures markets, enabling short selling for bitcoin. Technically, there are other ways too, though the process might be a bit less straightforward than it would be with a more traditional asset. Maybe that’s what Gates meant all along.
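
For the curious, the payoff of a cash-settled futures short is simple arithmetic: you profit by however much the price falls between opening and closing the position. A toy sketch (the prices are hypothetical; the contract sizes reflect the CBOE and CME listings):

```python
# Toy P&L for shorting bitcoin via cash-settled futures. Illustrative only.
entry_price = 9_200.0   # hypothetical price when the short is opened
exit_price  = 8_500.0   # hypothetical price when the position is closed
btc_per_contract = 1    # CBOE contracts covered 1 BTC; CME contracts, 5 BTC
contracts = 1

# A short profits when the price falls and loses when it rises.
pnl = (entry_price - exit_price) * btc_per_contract * contracts
print(f"Short P&L: ${pnl:,.2f}")  # Short P&L: $700.00
```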

Winklevoss, a noted bitcoin bull, runs the Gemini exchange together with his brother Cameron. The pair have been working on getting their plans for cryptocurrency ETFs off the ground after regulatory setbacks last year.

08 May 2018

Google makes talking to the Assistant feel more natural

At its I/O developer conference today, Google announced a major update to the Google Assistant. The main idea is to let you have more natural conversations with the Assistant. Instead of having to say "Hey Google" or "OK Google" every time you want to issue a command, you'll only have to do so the first time; after that, you can carry on a conversation with the Assistant.

Google calls this feature “continued conversation” and it’ll roll out in the coming week.

The company is also adding a new feature that allows you to ask multiple questions within the same request. Google's Scott Huffman noted that it may seem like a simple feature (just listen for the "and"), but it is actually quite difficult, as the sketch below illustrates. Thanks to this new feature, you can now ask about the recent scores from a game and then how well a specific player did, all within one query. No second "OK Google" needed.
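
To see why "just listen for the 'and'" is harder than it sounds, consider a naive splitter (a toy illustration, not how the Assistant actually parses queries):

```python
def naive_split(utterance: str) -> list[str]:
    # Treat every " and " as a boundary between two separate requests.
    return [part.strip() for part in utterance.split(" and ")]

print(naive_split("what was the Warriors score and how did Curry do"))
# ['what was the Warriors score', 'how did Curry do']  -- works

print(naive_split("play rock and roll classics"))
# ['play rock', 'roll classics']  -- wrong: the "and" is inside an entity
```

The hard part is telling a conjunction between two requests apart from an "and" that belongs to a single entity or phrase, which requires real language understanding rather than string matching.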

All of this will work everywhere the Google Assistant works, including the car, where Google is introducing the Google Assistant to Google Maps.

Together, these changes make for a far more natural way to interact with the Google Assistant. Huffman admitted how annoying the constant "Hey Google" requests are, and if you have a Google Home, that will definitely sound familiar.

08 May 2018

Google adds Morse code input to Gboard

Google is adding Morse code input to its mobile keyboard. It'll be available as a beta on Android later today. The company announced the new feature at Google I/O after showing a video of Tania Finlayson.

Finlayson has long had a hard time communicating with other people due to her condition, and she found Morse code a great way to write sentences and talk with people.

Her husband developed a custom device that analyzes her head movements and translates them into Morse code. Triggering the left button adds a short signal (a dot), while the right button adds a long signal (a dash). Her device then converts the resulting text into speech.

Google's implementation replaces the keyboard with two areas, one for short signals and one for long signals. Word suggestions appear above the keyboard, just as on the normal keyboard. The company has also created a Morse code poster to make learning it easier.
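
Under the hood, turning dots and dashes back into text is little more than a table lookup over the International Morse alphabet. A toy decoder (partial table for brevity; we don't know Gboard's actual implementation):

```python
# Partial International Morse table; a real keyboard covers the full set.
MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "....": "H", "..": "I", "-.": "N", "---": "O", "...": "S", "-": "T",
}

def decode(signals: str) -> str:
    """Decode letters separated by spaces, words by ' / '."""
    return "".join(
        " " if token == "/" else MORSE.get(token, "?")
        for token in signals.split()
    )

print(decode("... --- ..."))    # SOS
print(decode("- .... .. ..."))  # THIS
```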

As with all accessibility features, the more input methods the better. Everything that makes technology more accessible is a good thing.

Of course, Google used its gigantic I/O conference to introduce this feature to make the company look good too. But it’s a fine trade-off, a win-win for both Google and users who can’t use a traditional keyboard.

Correction: A previous version of this article said that Morse code is available on iOS and Android. The beta is only available on Android.

08 May 2018

Google adds six new voices and John Legend to Assistant

Not to be outdone by recent upgrades to Alexa and Siri, Google today announced that it’s adding not one, but six new voices to its Assistant offering.

The company announced today at I/O that it has been working on a newer, more lifelike version of its spoken AI, featuring a more natural voice that is "closer to how humans speak." Improvements include more natural pauses and other subtleties that help create a "more natural dialogue" with Assistant.

The new version of the Assistant, which features both male and female voices, is built on a machine learning technology called WaveNet, which the company started building out some 18 months ago. Google is working on rolling out more customized versions of the AI, with custom dialects, across the globe.

In the meantime, it's been bringing some big names into the studio to add personality to the offering, starting with musician John Legend, whose voice will come to Assistant later this year, turning that Alexa Super Bowl ad into reality. Your move, Alexa.

08 May 2018

Google announces a new generation for its TPU machine learning hardware

As the war over customized AI hardware heats up, Google is now rolling out the third generation of its silicon, the Tensor Processing Unit (TPU) 3.0.

Google joins pretty much every other major company in looking to create custom silicon to handle its machine learning operations. And while multiple frameworks for developing machine learning tools have emerged, including PyTorch and Caffe2, Google's chips are optimized for its own TensorFlow. Google is looking to make Google Cloud an omnipresent platform at the scale of Amazon, and offering better machine learning tools is quickly becoming table stakes.
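
For developers, that optimization surfaces as a few lines of TensorFlow before any model code runs. A sketch using TensorFlow 2's public TPU API (a later API than what shipped in 2018; the TPU address is a placeholder):

```python
import tensorflow as tf

# Placeholder address; on Cloud TPU the resolver can usually discover
# the accelerator from the environment without an explicit argument.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
    tpu="grpc://10.0.0.2:8470")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Anything built inside the scope is replicated across the TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```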

Amazon and Facebook are both working on their own kinds of custom silicon. Facebook's hardware is optimized for its Caffe2 framework, which is designed to handle the massive information graphs it has on its users. You can think of it as taking everything Facebook knows about you, your birthday, your friend graph and everything that goes into the News Feed algorithm, and feeding it into a complex machine learning framework that works best for its own operations. That, in the end, may have required a customized approach to hardware. We know less about Amazon's goals here, but it also wants to own the cloud infrastructure ecosystem with AWS.

All of this has also spun up an increasingly large and well-funded startup ecosystem looking to create customized hardware targeted at machine learning. There are startups like Cerebras Systems, SambaNova Systems and Mythic, with a half dozen or so beyond that as well (not even counting the activity in China). Each is looking to exploit a similar niche: finding a way to outmaneuver Nvidia on price or performance for machine learning tasks. Most of those startups have raised more than $30 million.

Google unveiled its second-generation TPU at I/O last year, so it wasn't a huge surprise to see another one this year. We'd heard from sources for weeks that it was coming, and that the company is already hard at work figuring out what comes next. Google at the time touted performance, though the point of all these tools is to make machine learning development a little easier and more palatable in the first place.

https://techcrunch.com/2017/05/17/google-announces-second-generation-of-tensor-processing-unit-chips/

There are a lot of questions around building custom silicon, however. It may be that developers don't need a super-efficient piece of silicon when an Nvidia card that's a few years old can do the trick. But data sets are getting larger and larger, and having the biggest and best data set is what creates defensibility for any company these days. Just the prospect of making things easier and cheaper as companies scale may be enough to get them to adopt something like GCP.

Intel, too, is looking to get in here with its own products. Intel has been beating the drum on FPGAs as well, which are designed to be more modular and flexible as the needs of machine learning change over time. But again, the knock there is price and difficulty, as programming for FPGAs is a hard problem in which not many engineers have expertise. Microsoft is also betting on FPGAs, and just yesterday unveiled what it's calling Brainwave at its BUILD conference for its Azure cloud platform, which is increasingly a significant portion of its future potential.

Google more or less seems to want to own the entire stack of how we operate on the internet. It starts at the TPU, with TensorFlow layered on top of that. If it manages to succeed there, it gets more data, makes its tools and services faster and faster, and eventually reaches a point where its AI tools are too far ahead and locks developers and users into its ecosystem. Google is at its heart an advertising business, but it’s gradually expanding into new business segments that all require robust data sets and operations to learn human behavior. 

Now the challenge will be having the best pitch for developers to not only get them into GCP and other services, but also keep them locked into TensorFlow. But as Facebook increasingly looks to challenge that with alternate frameworks like PyTorch, there may be more difficulty than originally thought. Facebook unveiled a new version of PyTorch at its main annual conference, F8, just last month. We’ll have to see if Google is able to respond adequately to stay ahead, and that starts with a new generation of hardware.