Year: 2018

08 May 2018

Android blatantly copies the iPhone X navigation gestures

Google unveiled some of the new features in the next version of Android at its developer conference. One feature looked particularly familiar. Android P will get new navigation gestures to switch between apps. And it works just like the iPhone X.

“As part of Android P, we’re introducing a new system navigation that we’ve been working on for more than a year now,” VP of Android Engineering Dave Burke said. “And the new design makes Android multitasking more approachable and easier to understand.”

While Google has probably been working on a new multitasking screen for a year, it’s hard to believe that the company didn’t copy Apple. The iPhone X was unveiled in September 2017.

On Android P, the traditional home, back and multitasking buttons are gone. There’s a single pill-shaped button at the center of the screen. If you swipe up from this button, you get a new multitasking view with your most recent apps. You can swipe left and right and select the app you’re looking for.

If you swipe up one more time, you get the app drawer with suggested apps at the very top. At any time, you can tap on the button to go back to the home screen. These gestures also work when you’re using an app. Android P adds a back button in the bottom left corner if you’re in an app.

But the most shameless inspiration is the left and right gestures. If you swipe left and right on the pill-shaped button, you can switch to the next app, exactly like on the iPhone X. You can scrub through multiple apps. As soon as you release your finger, you’ll jump to the selected app.

You can get the Android P beta on a handful of devices starting today. End users will get the new version in the coming months.

It’s hard to blame Google for this one, as the iPhone X gestures are incredibly elegant and efficient. Using a phone that runs the current version of Android feels much slower after the iPhone X, as it requires multiple taps to switch to the most recent app. But Google still deserves to be called out.

08 May 2018

Android P Beta is available today

There was plenty of talk about Android P at today’s big I/O kickoff — but when can you actually download the thing? Right now, as a matter of fact, if you’ve got one of a handful of compatible handsets. You can find the upgrade here, if you dare. 

The list includes the Pixel, naturally, but the company’s also partnering with a whole bunch of third-party manufacturers to let a lot more users try an early build of the mobile operating system. Those partners include Nokia, Vivo, OnePlus, Xiaomi, Sony, Essential and Oppo.

It’s a pretty broad spectrum of handsets, though there are some key absences here, including, notably, Samsung, HTC, LG and Huawei.

08 May 2018

Google News gets an AI-powered redesign

As had been previously rumored, Google introduced a revamped version of Google News at its I/O developer conference today. The A.I.-powered, redesigned news destination combines elements found in Google’s digital magazine app, Newsstand, as well as YouTube, and introduces new features like “newscasts” and “full coverage” to help people get a summary or a more holistic view of a news story.

Google CEO Sundar Pichai spoke of the company’s responsibility to present accurate information to users who seek out the news on its platform, and how it could leverage A.I. technology to help with that.

The updated Google News will do three things, the company says: allow users to keep up with the news they care about, understand the full story, and enjoy and support the publishers they trust.

On the first area of focus – keeping up with the news – the updated version of Google News will present a briefing at the top of the page with the five stories you need to know about right now, as well as more stories selected just for you.

The feature uses A.I. technology to read news across the web and assemble the key things you need to know about, including local news and events in your area. And the more you use this personalized version of Google News, the better it will get, thanks to the “reinforcement learning” technology under the hood.

However, you can also tell Google News what you want to see more or less of, in terms of both topics and publications.

In addition to the personalization and A.I.-driven news selection, the revamped Google News looks different, too. The site has been updated to use Google’s Material Design language, which makes it fit in better with Google’s other products, and it puts a heavier emphasis on photos and videos – including those from YouTube.

Another new feature called “Newscasts” will help users get a feel for a story through short-form summaries presented in a card-style design you can flip through.

If you want to learn more, you can dive in more deeply to stories through the “Full Coverage” feature, which is also launching along with the redesign.

Full Coverage largely aims to help users get a better perspective on news – that is, pop their filter bubbles by presenting news from multiple sources. It also aggregates coverage into “opinion,” “analysis” and “fact checks.” These are labels Google News was already using in the older version of the site, but they’re now much more prominent, appearing as section titles.

Full Coverage will also include a timeline of events, so you can get a sense of the history of what’s being reported.

As Google News PM Trystan Upstill explained, “having a productive conversation or debate requires everyone to have access to the same information.” That seems to be a bit of a swipe at Facebook, and the way it allowed fake news to propagate across its social network.

In another competitive move against Facebook, Google announced an easier way for users to subscribe to publisher content through a new “Subscribe with Google” option rolling out in the coming weeks.

The process of subscribing will leverage users’ Google account, and the payment information they already have on file. Then, the paid content becomes available across Google platforms, including Google News, Google Search and publishers’ own websites.

And Google News will integrate Newsstand, offering over 1,000 magazine titles that you can follow by tapping a star icon, or subscribe to.

The changes come at a time when Apple is reportedly prepping a premium news subscription service, based on the technology from Texture, the digital newsstand business it bought in March. Notably, it also arrives amid serious concerns among publishers about Facebook’s role in the media business, not only because of fake news, but also its methods of ranking content, among other things.

“Google’s new News app is rolling out to Android, iOS and Web in 127 countries starting today,” said Upstill. “We know getting accurate and timely information into people’s hands and supporting high quality journalism is more important than it has ever been right now.”


08 May 2018

Google rolls out a suite of time management controls to promote more healthy app usage

Google today announced at its I/O developer conference a new suite of tools for its new Android P operating system that will help users better manage their screen time, including a more robust do not disturb mode and ways to track your app usage.

The biggest change is introducing a dashboard to Android P that tracks all of your Android usage, labeled under the “digital wellbeing” banner. Users can see how many times they’ve unlocked their phones, how many notifications they get, and how long they’ve spent on apps, for example. Developers can also add in ways to get more information on that app usage. YouTube, for example, will show total watch time across all devices in addition to just Android devices.
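The kind of summary that dashboard surfaces can be sketched as a simple aggregation over raw device events. This is a conceptual illustration only, not Android’s actual API; the event format is invented for the example:

```python
from collections import Counter, defaultdict

def summarize_usage(events):
    """Aggregate raw device events into a wellbeing-style summary.

    Each event is a (kind, app, seconds) tuple, where kind is one of
    'unlock', 'notification', or 'session'.
    """
    unlocks = 0
    notifications = Counter()          # notification count per app
    screen_time = defaultdict(float)   # foreground seconds per app
    for kind, app, seconds in events:
        if kind == "unlock":
            unlocks += 1
        elif kind == "notification":
            notifications[app] += 1
        elif kind == "session":
            screen_time[app] += seconds
    return {"unlocks": unlocks,
            "notifications": dict(notifications),
            "screen_time": dict(screen_time)}

events = [
    ("unlock", None, 0),
    ("session", "youtube", 600),
    ("notification", "youtube", 1),
    ("session", "youtube", 300),
    ("unlock", None, 0),
    ("session", "maps", 120),
]
summary = summarize_usage(events)
print(summary["unlocks"])                  # 2
print(summary["screen_time"]["youtube"])   # 900.0
```

The per-app rollup mirrors what the dashboard shows: how often you unlocked the phone, how many notifications each app sent, and how long each app held the screen.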

Google says it has designed all of this to promote what developers call “meaningful engagement,” trying to reduce the kind of idle screen time that might not necessarily be healthy — like sitting on your phone before you go to bed. Here’s a quick rundown of some of the other big changes:

  • Google’s do not disturb mode is getting additional ways to silence notifications. Users can turn their phones face down to automatically engage do not disturb, a gesture Google is calling “shush.” When do not disturb is activated, it will now suppress visual notifications in addition to silencing texts and calls.
  • Google is also introducing a “wind down” mode that activates before users go to bed. Wind down mode changes the screen color to a grayscale, and lowers the brightness over time. This one is geared toward helping people put their phones down when they’re going to bed.
  • Users can set time limits on their apps. Android P will nudge users when they are approaching that time limit, and once they hit it, the app icon will turn gray on the launcher to indicate that they’ve exceeded the screen time they wanted for that app.
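The app-timer behavior in that last bullet boils down to a small state check. This is a sketch under assumptions, not Google’s implementation; in particular, the “nudge” threshold of 90% of the limit is invented for illustration:

```python
def app_timer_state(used_minutes, limit_minutes, nudge_fraction=0.9):
    """Return the launcher state for an app under a daily time limit.

    nudge_fraction is a hypothetical threshold for when Android starts
    warning the user that they are approaching their limit.
    """
    if used_minutes >= limit_minutes:
        return "grayed_out"   # limit exceeded: icon turns gray
    if used_minutes >= nudge_fraction * limit_minutes:
        return "nudge"        # approaching the limit: show a reminder
    return "normal"

print(app_timer_state(30, 60))   # normal
print(app_timer_state(55, 60))   # nudge
print(app_timer_state(60, 60))   # grayed_out
```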

The launch had been previously reported by The Washington Post, and arrives at a time when there are increasing concerns about the negative side of technology and, specifically, its addictive nature. The company already offers tools for parents who want to manage children’s devices, via Family Link – software for controlling access to apps, setting screen time limits, and configuring device bedtimes, among other things. Amazon also offers a robust set of parental controls for its Fire tablets, and Apple is expected to launch an expanded set of parental controls for iOS later this year.

08 May 2018

Google’s ML Kit makes it easy to add AI smarts to iOS and Android apps

At its I/O developer conference, Google today introduced ML Kit, a new software development kit (SDK) for app developers on iOS and Android that allows them to integrate a number of pre-built Google-provided machine learning models into their apps. One nifty twist here is that these models, which support text recognition, face detection, barcode scanning, image labeling and landmark recognition, are available both on- and offline, depending on network availability and the developer’s preference.

In the coming months, Google plans to extend the current base set of available APIs with two more: one for integrating the same kind of smart replies that you’re probably familiar with from apps like Inbox and Gmail, and a high-density face contour feature for the face detection API.

The real game-changer here is the offline models that developers can integrate into their apps and use for free. Unsurprisingly, there is a tradeoff. The models that run on the device are smaller and hence offer a lower level of accuracy. In the cloud, neither model size nor available compute power is an issue, so those models are larger and hence more accurate, too.

That’s pretty much standard for the industry. Earlier this year, Microsoft launched its offline neural translations, for example, which can also either run online or on the device. The tradeoff there is the same.

Brahim Elbouchikhi, Google’s group product manager for machine intelligence and the camera lead for Android, told me that a lot of developers will likely do some of the preliminary machine learning inference on the device, maybe to see if there is actually an animal in a picture, and then move to the cloud to detect what breed of dog it actually is. And that makes sense because the on-device image labeling service features about 400 labels while the cloud-based one features more than 10,000. To power the on-device models, ML Kit uses the standard Neural Networks API on Android and its equivalent on Apple’s iOS.
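The on-device-first pattern Elbouchikhi describes can be sketched as a two-stage classifier that only pays for a network round trip when the cheap local pass isn’t good enough. This is a conceptual sketch, not ML Kit’s API; the confidence threshold and the stub models are invented for illustration:

```python
def classify(image, on_device_model, cloud_model, confidence_threshold=0.7):
    """Two-stage inference: cheap on-device pass first, cloud only if needed.

    Both models are callables returning (label, confidence). The threshold
    for escalating to the cloud is a made-up value for this example.
    """
    label, confidence = on_device_model(image)
    if confidence >= confidence_threshold:
        return label, "on-device"
    # The coarse on-device answer wasn't confident enough, so spend a
    # network round trip on the larger, more accurate cloud model.
    label, _ = cloud_model(image)
    return label, "cloud"

# Stub models standing in for the ~400-label on-device labeler and the
# ~10,000-label cloud labeler mentioned above.
on_device = lambda img: ("dog", 0.95) if img == "dog.jpg" else ("animal", 0.4)
cloud = lambda img: ("border collie", 0.88)

print(classify("dog.jpg", on_device, cloud))      # ('dog', 'on-device')
print(classify("blurry.jpg", on_device, cloud))   # ('border collie', 'cloud')
```

The design choice mirrors the accuracy/latency tradeoff in the article: the device answers coarse questions instantly and offline, while fine-grained labels come from the cloud.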

He also stressed that this is very much a cross-platform product. Developers don’t think of machine learning models as Android- or iOS-specific, after all.

For developers who want to go beyond the pre-trained models, ML Kit also supports TensorFlow Lite models.

As Google rightly notes, getting up to speed with using machine learning isn’t for the faint of heart.  This new SDK, which falls under Google’s Firebase brand, is clearly meant to make using machine learning easier for mobile developers. While Google Cloud already offers a number of similar pre-trained and customizable machine learning APIs, those don’t work offline and the experience isn’t integrated tightly with Firebase and the Firebase Console either, which is quickly becoming Google’s go-to hub for all things mobile development.

Even for custom TensorFlow Lite models, Google is working on compressing the models to a more workable size. For now, this is only an experiment, but developers who want to give it a try can sign up here.

Overall, Elbouchikhi argues, the work here is about democratizing machine learning. “Our goal is to make machine learning just another tool,” he said.

08 May 2018

Android P leverages DeepMind for new Adaptive Battery feature

No surprise here: Android P was the highlight of today’s Google I/O keynote. The new version of the company’s mobile operating system still doesn’t have a name (at least not as of this writing), but the company’s already highlighted a number of key new features, including, notably, Adaptive Battery.

Aimed at taking on basically everyone’s biggest complaint about their handset, the new feature is designed to make more efficient use of on-board hardware. Google’s own DeepMind is doing much of the heavy lifting here, relying on user habits to determine what apps they use, and when, and allocating power accordingly.

According to the company, the new feature is capable of “anticipating actions,” resulting in 30-percent fewer CPU wakeups. Google has promised more information on the feature in the upcoming developer keynote. Combined with larger on-board batteries and faster charging in recent handsets, the new tech could go a long way toward changing the way users interact with their devices, shifting the all-night charge model to quick charging bursts — meaning, for better or worse, you can sleep with your handset nearby without having to worry about keeping it plugged in.
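One way to picture how usage habits translate into fewer CPU wakeups is a bucketing scheme: apps the prediction model expects you to use soon keep their background privileges, while rarely used apps get their wakeups deferred. The sketch below uses simple launch counts rather than a learned model, and the bucket names and cutoffs are illustrative, not Android’s real values:

```python
from collections import Counter

def assign_standby_buckets(launch_log, active_cutoff=10, working_cutoff=3):
    """Bucket apps by how often the user launched them recently.

    launch_log is a list of app names, one entry per launch. The cutoffs
    are invented for this example.
    """
    counts = Counter(launch_log)
    buckets = {}
    for app, n in counts.items():
        if n >= active_cutoff:
            buckets[app] = "active"       # run background work freely
        elif n >= working_cutoff:
            buckets[app] = "working_set"  # moderate background limits
        else:
            buckets[app] = "restricted"   # defer CPU wakeups aggressively
    return buckets

log = ["maps"] * 12 + ["mail"] * 5 + ["game"]
print(assign_standby_buckets(log))
# {'maps': 'active', 'mail': 'working_set', 'game': 'restricted'}
```

The power savings come from the “restricted” tier: an app you open once a month doesn’t need to wake the CPU every few minutes to check for updates.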

08 May 2018

With App Actions and Slices, Google introduces more ways for users to interact with the apps on their phones

With Instant Apps, Google offers a feature for Android users that allows them to load a small part of an app right from the search results and get a native app experience. With Slices, Google is launching a new feature today that may look somewhat similar at first glance, but which solves a very different problem. While Instant Apps focus on providing a full app experience and are a great way to get users to install the full app, Slices are about solving a small, well-defined problem — and they work with apps that are already installed on your device.

In addition to Slices, Google also today announced App Actions, a new feature in Android P. Actions allow developers to bring their content directly to Android surfaces like Search, the Google Assistant and the Google Launcher when and where the user needs it. The idea here is to surface not just the right content, which is something Google has long done, but also the right action. Google says these Actions will appear based on usage and relevance. Some of the details here remain a bit unclear, but Google says this feature is modeled after the Conversational Actions for the Google Assistant and that developers will soon be able to give them a spin themselves by signing up for early access here.

Slices is also meant to get users to interact more with the apps they have already installed, but the overall premise is a bit different from App Actions. Slices essentially provide users with a mini snippet of an app and they can appear in Google Search and the Google Assistant. From the developer’s perspective, they are all about driving the usage of their apps, but from the user’s perspective, they look like an easy way to get something done quickly.

“A slice is designed to solve a problem: I’m a user and want to get something quickly done on my device,” Google’s PM director for Android Stephanie Saad Cuthbertson told me ahead of today’s announcement. Maybe that’s calling a Lyft or booking a hotel room, for example. To surface those slices, all you have to do is type “I want to book a ride” in the search box on Android and you’ll see that mini version of the app right there without having to go into the main app.

“This radically changes how users interact with the app,” Cuthbertson said. She also noted that developers obviously want people to use their app, so every additional spot where users can interact with it is a win for them.

Slices will launch with Android P, but will be broadly available across Android versions.

To make things easier for developers, Google modeled the development pattern after Android Notifications. All Android developers are pretty familiar with that, so getting started with Slices should be pretty straightforward. Google is also providing developers with templates that make it easier to build the user interface for these and ensure that the interface will be consistent across Slices. Developers who want to branch out, though, have the flexibility to build their experience from the ground up, too.

08 May 2018

Slack hits 8 million daily active users with 3 million paid users

As Slack looks to woo larger and larger companies with the prospect of a simpler workplace collaboration tool, the company said it has now hit 8 million daily active users.

The company said it also has 3 million paid users. A darling in Silicon Valley, Slack was initially able to capitalize on pent-up demand for workplace communications tools that were much simpler and easier to use. Companies like Yammer, Microsoft, and others looked to remake internal communications in ways that looked more like consumer tools in the Web 2.0 era, but Slack came out with an approach that was initially just a slick chat and team communications tool. That helped it rocket to a $5.1 billion valuation and drove its initial adoption among smaller companies and startups.

Slack in September said it had around 6 million daily active users, 50,000 teams, 2 million paid users, and around $200 million in annual recurring revenue. So it’s a pretty significant jump over the past nine months or so, though the company still has to break from the perception that it’s a tool that’s just good for startups and smaller companies. The larger enterprise deals are the ones that tend to drive larger contracts — and additional revenue — as it looks to build a robust business. More than half of Slack’s users are outside the U.S., a signal that it intends to keep expanding into new regions beyond its domestic market.

Slack has been trying to roll out additional tools to support those larger companies, rather than just operate as a chat tool that can get out of control when companies have thousands of employees. The company has invested heavily in machine learning tools to make it easier to search for answers that may already exist in some Slack channel or direct message. Slack also rolled out threads, a long-awaited feature that users often demanded, though it wasn’t clear how threads would fit into Slack’s simpler interface.

There are already startups looking to pick away at niches that the company might not necessarily fill, too. Slite, a startup looking to build a simpler notes tool that would create a smarter internal wiki of sorts, raised $4.4 million last month. There’s also Atlassian’s Stride, which opened up to developers in February this year. And Microsoft has its own Slack competitor, Teams, that continues to get pretty big updates. Slack clearly exposed a lot of pent-up demand for similar tools, and now faces a lot of competition going forward.

Slack started the Slack Fund as a way to woo developers to build tools for Slack, and early last year invested in 11 new companies. The company has been trying to create a robust ecosystem where developers can fill the niches it might be missing while it focuses on its core products. The company says there are now more than 1,500 apps in the Slack directory.

08 May 2018

Google Assistant is coming to Google Maps

Google wants to bundle its voice assistant into every device and app, and Google Maps is a natural fit for Google Assistant. The integration will be available on iOS and Android this summer.

At Google I/O, director of Google Assistant Lilian Rincon showed a demo of Google Maps with Google Assistant. Let’s say you’re driving and you’re using Google Maps for directions. You can ask Google Assistant to share your ETA without touching your phone.

You can also control the music with your voice for instance. Rincon even played music on YouTube, but without the video element of course. It lets you access YouTube’s extensive music library while driving.

If you’re using a newer car with Android Auto or Apple CarPlay, you’ve already been using voice assistants in your car. But many users rely exclusively on their phone. That’s why it makes sense to integrate Google Assistant in Google Maps directly.

It’s also a great way to promote Google Assistant to users who are not familiar with it yet. That could be an issue as Google Assistant asks for a ton of data when you first set it up. It forces you to share your location history, web history and app activity. Basically you let Google access everything you do with your phone.

08 May 2018

The Google Assistant will soon be able to call restaurants and make a reservation for you

Google just showed a crazy (and terrifying) new feature for the Google Assistant at its I/O developer conference. The Assistant will soon be able to make calls for you to make a reservation — maybe for a salon appointment or to reserve a table at a restaurant that doesn’t take online bookings. For now, this was only a demo, but the company plans to start testing this feature with the Assistant in the summer.

In the demo, Google showed how you can tell the Assistant that you want to make a haircut appointment. The Assistant can then make that call, talk to whoever answers and make the request. In the demo, the Assistant even handled complicated conversations, adding little hints (“ummm”) that make it sound natural. Even for calls that don’t quite go as expected, the Assistant can handle these interactions quite gracefully — though Google obviously only demoed two examples that worked out quite well.

Google calls this feature “Duplex” and it’ll roll out at some point in the future.

The crazy thing here is that the Assistant in the demos was able to sound quite human, adding little pauses to the voice queries and responses, for example. I’m sure that restaurant workers will soon figure out which voice signifies a call from the Assistant and have some fun with it.

There’s always a chance that Google fudged this demo a bit, so we’ll have to wait to see what it’ll actually sound like when it goes live.