
07 May 2019

Facebook talked privacy, Google actually built it

Mark Zuckerberg: “The future is private.” Sundar Pichai, in effect: the present is private. While both CEOs made protecting user data a central theme of their conference keynotes this month, Facebook’s product updates were mostly vague vaporware while Google’s were either ready to ship or ready to demo. The contrast highlights the divergence in strategy between the two tech giants.

For Facebook, privacy is a talking point meant to boost confidence in sharing, deter regulators, and repair its battered image. For Google, privacy is functional, going hand-in-hand with on-device data processing to make features faster and more widely accessible.

Everyone wants tech to be more private, but we must discern between promises and delivery. Like “mobile”, “on-demand”, “AI”, and “blockchain” before it, “privacy” can’t be taken at face value. We deserve improvements to the core of how our software and hardware work, not cosmetic add-ons no one is asking for.


At Facebook’s F8 last week, we heard from Zuckerberg about how “Privacy gives us the freedom to be ourselves,” and he reiterated how that would happen through ephemerality and secure data storage. He said Messenger and Instagram Direct will become encrypted…eventually…which Zuckerberg had already announced in January and detailed in March. We didn’t get the Clear History feature that Zuckerberg made the privacy centerpiece of his 2018 conference, or anything about the Data Transfer Project, which has been silent for the 10 months since its reveal.

What users did get was a clumsy joke from Zuckerberg about how “I get that a lot of people aren’t sure that we’re serious about this. I know that we don’t exactly have the strongest reputation on privacy right now to put it lightly. But I’m committed to doing this well.” No one laughed. At least he admitted that “It’s not going to happen overnight.”

But it shouldn’t have to. Facebook made its first massive privacy mistake in 2007 with Beacon, which quietly relayed your off-site ecommerce and web activity to your friends. It’s had 12 years, a deal with the FTC promising to improve, countless screwups and apologies, the democracy-shaking Cambridge Analytica scandal, and hours of being grilled by Congress to get serious about the problem. That makes it clear that if “the future is private,” then the past wasn’t. Facebook is too late here to receive the benefit of the doubt.

At Google’s I/O, we saw demos from Pichai showing how “our work on privacy and security is never done. And we want to do more to stay ahead of constantly evolving user expectations.” Instead of waiting to fall so far behind that users demand more privacy, Google has been steadily working on it for the past decade since it introduced Chrome incognito mode. It’s changed directions away from using Gmail content to target ads and allowing any developer to request access to your email. And now when the company is hit with scandals, it’s typically over its frightening efficiency as with its cancelled Project Maven AI military tech, not its creepiness.

Google made more progress on privacy in low-key updates in the runup to I/O than Facebook did on stage. In the past month it launched the ability to use your Android device as a physical security key, and a new auto-delete feature rolling out in the coming weeks that erases your web and app activity after 3 or 18 months. Then in its keynote today, it published “privacy commitments” for Made By Google products like Nest, detailing exactly how they use your data and the control you have over it. For example, the new Nest Hub Max does all its Face Match processing on device, so facial recognition data isn’t sent to Google. Failing to note there’s a microphone in its Nest security alarm did cause an uproar in February, but the company has already course-corrected.

That concept of on-device processing is a hallmark of the new Android Q operating system. Opening in beta to developers today, it comes with almost 50 new security and privacy features like TLS 1.3 support and MAC address randomization. Google Assistant will now be better protected, Pichai told a cheering crowd. “Further advances in deep learning have allowed us to combine and shrink the 100 gigabyte models down to half a gigabyte — small enough to bring it onto mobile devices.” This makes Assistant not only more private, but fast enough that it’s quicker to navigate your phone by voice than touch. Here, privacy and utility intertwine.

The result is that Google can listen to video chats and caption them for you in real-time, transcribe in-person conversations, or relay aloud your typed responses to a phone call without transmitting audio data to the cloud. That could be a huge help if you’re hearing or vision impaired, or just have your hands full. A lot of the new Assistant features coming to Google Pixel phones this year will even work in Airplane mode. Pichai says that “Gboard is already using federated learning to improve next word prediction, as well as emoji prediction across tens of millions of devices” by using on-phone processing so only improvements to Google’s AI are sent to the company, not what you typed.

Google’s senior director of Android Stephanie Cuthbertson hammered the idea home, noting that “On device machine learning powers everything from these incredible breakthroughs like Live Captions to helpful everyday features like Smart Reply. And it does this with no user input ever leaving the phone, all of which protects user privacy.” Apple pioneered much of the on-device processing, and many Google features still rely on cloud computing, but it’s swiftly progressing.

When Google does make privacy announcements about things that aren’t about to ship, they’re significant and will be worth the wait. Chrome will implement anti-fingerprinting tech and change cookies to be more private so only the site that created them can use them. And Incognito Mode will soon come to the Google Maps and Search apps.

Pichai didn’t have to rely on grand proclamations, cringey jokes, or imaginary product changes to get his message across. Privacy isn’t just a means to an end for Google. It’s not a PR strategy. And it’s not some theoretical part of tomorrow like it is for Zuckerberg and Facebook. It’s now a natural part of building user-first technology…after 20 years of more cavalier attitudes towards data. That new approach is why the company dedicated to organizing the world’s information is getting so little backlash.

With privacy, it’s all about show, don’t tell.

07 May 2019

Waymo and Lyft partner to scale self-driving robotaxi service in Phoenix

Waymo is partnering with Lyft to bring self-driving vehicles onto the ride-hailing network in Phoenix as the company ramps up its commercial robotaxi service.

Waymo will add 10 of its self-driving vehicles to the Lyft platform over the next few months, according to Waymo CEO John Krafcik. Once Waymo vehicles are on the platform, Lyft users in the area will have the option to select a Waymo directly from the Lyft app for eligible rides.

“This first step in our partnership will allow us to introduce the Waymo Driver to Lyft users, enabling them to take what for many will be their first ride in a self-driving vehicle,” Krafcik said in a blog posted Tuesday.

The companies didn’t provide further details about the partnership, but it appears to be similar to Lyft’s relationship with Aptiv. Under that partnership, Aptiv’s self-driving vehicles operate on Lyft’s ride-hailing platform in Las Vegas. As of last month, the vehicles had provided more than 40,000 paid autonomous rides in Las Vegas via the Lyft app.

07 May 2019

Google’s big VR news is that there is no VR news

Google seems to be waking up from its VR daydream. The company’s ambitious plans to leverage the smartphones that people already own to power VR experiences went unmentioned at its I/O developer conference.

After spending 2016 and 2017 making big promises about how the company would shape the mobile VR market and ensure that its Daydream platform would dominate, Google has largely abandoned those plans, letting the headsets gather dust and its VR content languish in the Play Store.

The only virtual reality news to emerge is that the company’s newest Pixel smartphones won’t support the company’s own VR platform, The Verge reports.

The company released two generations of Daydream View headsets for its mobile platform in 2016 and 2017 but there were no updates last year and not a single onstage mention of the platform or headset this year.

Google detailed plans during a dedicated VR I/O 2017 keynote to bring standalone devices to market via partnerships with HTC and Lenovo. HTC ended up bailing on the program, and after Lenovo released its Mirage Solo device long after its estimated ship date, Google failed to deliver updates or prioritize content that leveraged the new WorldSense tracking tech. The company is now largely pitching the device as a developer kit, though one wonders what exactly they’re developing for.

Facebook’s VR arm Oculus has announced and released two standalone VR headsets since Google last announced new VR hardware onstage at an event.

“On the VR front, our focus right now is much more on services and the bright spots where we see VR being really useful,” Google VR/AR head Clay Bavor told CNET in an interview, noting that they are still experimenting with hardware.

07 May 2019

Google tests faster image loading in Chrome Canary

Google today announced a change coming to Chrome that will help image-heavy websites load more quickly — but the addition for now is only available in the experimental version of the web browser, Chrome Canary. Speaking at the Google I/O conference on Tuesday, Chrome product manager Tal Oppenheimer explained that the company is rolling out a new way to create a better image loading experience on websites by leveraging “lazy loading” — a technique that only loads images on a website when they’re actually needed.

“Modern websites are more visual than ever, using lots of beautiful high resolution imagery,” said Oppenheimer. “But loading all those images at once can slow down the browser, and can waste the user’s data by loading unnecessary images that the user never actually sees,” she continued. “So it’s often better to load images only as they’re actually needed — a technique known as ‘lazy loading.’ We know it can be a lot of work for developers to use their own JavaScript solutions. And it can be hard to get the quality experience you want for your business. So we wanted to make it incredibly simple to have a great image loading experience on your site,” Oppenheimer added.

Starting today behind a flag in Chrome Canary, you can try out the new image loading experience after adding the new loading attributes to your image tags. Chrome then takes care of the rest by taking into account factors like the user’s connection speed to determine when to load images. It will also check the first two kilobytes of the different images on the site in order to add a placeholder in the right size.
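Based on the description above, the opt-in is a single markup attribute on image tags. A minimal sketch of what that might look like (the specific attribute values here reflect the flag-gated experiment and should be treated as assumptions):

```html
<!-- Deferred: the browser decides when to fetch, weighing factors
     like scroll position and the user's connection speed. -->
<img src="gallery-photo.jpg" loading="lazy" alt="Gallery photo">

<!-- Critical above-the-fold images can ask to be fetched immediately. -->
<img src="hero.jpg" loading="eager" alt="Hero image">
```

The appeal over existing JavaScript lazy-loading libraries is that there is nothing else to wire up: the browser owns the heuristics.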

The end result is a smoother experience for image heavy websites, all without the need to write any extra code to enable this image loading experience.

The feature makes sense for those using the web browser with limited connectivity, where trying to browse today’s media-rich web can really slow things down. This would allow those users to reach all the same websites as those with high-speed connections, with fewer issues.

The company didn’t say when such a feature would make its way out of the experimental version of Chrome to the flagship product.

07 May 2019

Google’s Flutter framework spreads its wings and goes multi-platform

Google’s Flutter UI toolkit for cross-platform development may only be two years old, but it has quickly become the framework of choice for many developers. Until now, though, ‘cross-platform’ only referred to Android and iOS. Late last year, Google announced that it would also take Flutter beyond mobile and to the web. Today, at its I/O developer conference, it’s doing exactly that with the launch of the first technical preview of Flutter for the web.

Google also today announced that Flutter developers will soon be able to target macOS, Windows and Linux and that the company itself is already using the framework to power some experiences on the Google Home Hub as it looks to bring Flutter to more embedded devices, too.

“We built Flutter from the ground up to be this beautiful, fast, productive, open-source toolkit for building tailored experiences, originally for mobile,” Google’s group product manager for Flutter, Tim Sneath, told me. “The big news for this week is that we are finally opening Flutter up beyond just mobile to really lean into our broader vision for Flutter as our general-purpose, portable UI toolkit for mobile, web, embedded and desktop.”

By default, Flutter apps are written in Google’s Dart language, which can be compiled to JavaScript. In that respect, bringing Flutter to the browser seems straightforward, but getting the Flutter engine up to production quality in the browser took some engineering work. The team, Sneath noted, was especially keen on making sure that Flutter would work just as well in the browser as it does on mobile and to ensure that both the developer and user experience remain the same.

“The challenge is really how to bring it down to the client and create these rich Flutter-based experiences that can take advantage of the standards-based web,” he said. Going to the web also means addressing basic things like resizable windows, but also support for interacting with keyboards and mice.

Those same requirements also apply to the desktop, of course, where the code isn’t quite production-ready yet. Developers, however, can now start experimenting with these features. The team says that the macOS version is currently the most mature, though if you are brave enough, you can try building for Windows and Linux, too.

The team also wanted to build this in a way that there would be one Flutter code base, with no need to fork the framework or the applications that developers build on top of it to support these different platforms. “Our expectation is that we will be able to deliver one framework for all of these places,” said Sneath, stressing that we’re talking about native code, even on the desktop, not a web app that pretends to be a desktop app.

Sneath showed me a demo of the New York Times puzzle app on mobile and the web and the experience was identical. That’s the ideal scenario for Flutter developers, of course.

With today’s update, Google is also introducing a few new features to the core Flutter experience. These include new widgets for iOS and Google’s Material Design, support for Dart 2.3’s UI-as-code features and more. The Flutter team also announced an ML Kit Custom Image Classifier for Flutter to help developers build image classification workflows into their apps. “You can collect training data using the phone’s camera, invite others to contribute to your datasets, trigger model training, and use trained models, all from the same app,” the team writes in today’s announcement.

Looking ahead, the team plans to introduce improved support for text selection and copy/paste, support for plugins and out-of-the box support for new technologies like Progressive Web Apps.

07 May 2019

Red Hat and Microsoft are cozying up some more with Azure Red Hat OpenShift

It won’t be long before Red Hat becomes part of IBM, the result of the $34 billion acquisition last year that is still making its way to completion. For now, Red Hat continues as a stand-alone company, and as if to flex its independence muscles, it announced its second agreement in two days with Microsoft Azure, Redmond’s public cloud infrastructure offering. This one involves running Red Hat OpenShift on Azure.

OpenShift is Red Hat’s Kubernetes offering. The thinking is that you can start with OpenShift in your data center, then as you begin to shift to the cloud, you can move to Azure Red Hat OpenShift — such a catchy name — without any fuss, because you keep the same management tools you’re used to.

As Red Hat becomes part of IBM, it sees that it’s more important than ever to maintain its sense of autonomy in the eyes of developers and operations customers, as it holds its final customer conference as an independent company. Red Hat’s executive vice president and president of products and technologies certainly sees it that way. “I think [the partnership] is a testament to, even with moving to IBM at some point soon, that we are going to be separate and really keep our Switzerland status and give the same experience for developers and operators across anyone’s cloud,” he told TechCrunch.

It’s essential to see this announcement in the context of both IBM’s and Microsoft’s increasing focus on the hybrid cloud, and also in the continuing requirement for cloud companies to find ways to work together, even when it doesn’t always seem to make sense, because as Microsoft CEO Satya Nadella has said, customers will demand it. Red Hat has a big enterprise customer presence and so does Microsoft. If you put them together, it could be the beginning of a beautiful friendship.

Scott Guthrie, executive vice president for the cloud and AI group at Microsoft, understands that. “Microsoft and Red Hat share a common goal of empowering enterprises to create a hybrid cloud environment that meets their current and future business needs. Azure Red Hat OpenShift combines the enterprise leadership of Azure with the power of Red Hat OpenShift to simplify container management on Kubernetes and help customers innovate on their cloud journeys,” he said in a statement.

This news comes on the heels of yesterday’s announcement, also involving Kubernetes. TechCrunch’s own Frederic Lardinois described it this way:

What’s most interesting here, however, is KEDA, a new open-source collaboration between Red Hat and Microsoft that helps developers deploy serverless, event-driven containers. Kubernetes-based event-driven autoscaling, or KEDA, as the tool is called, allows users to build their own event-driven applications on top of Kubernetes. KEDA handles the triggers to respond to events that happen in other services and scales workloads as needed.

Azure Red Hat OpenShift is available now on Azure. The companies are working on some other integrations, too, including Red Hat Enterprise Linux (RHEL) running on Azure and Red Hat Enterprise Linux 8 support in Microsoft SQL Server 2019.

07 May 2019

Lyft lost $1.14B in Q1 2019 on $776M in revenue

In its first-ever earnings report as a public company, Lyft (NASDAQ: LYFT) failed to display progress toward profitability.

The ride-hailing business, which raised $2 billion in a March initial public offering, posted first-quarter revenues of $776 million on losses of $1.14 billion, including $894 million of stock-based compensation and related payroll tax expenses. The company’s earnings surpassed Wall Street estimates of $740 million in revenue on $274.1 million, or $3.77 a share, in losses.

Lyft began rising in after-hours trading as a result.

“The first quarter was a strong start to an important year, our first as a public company,” Lyft co-founder and chief executive officer Logan Green said in a statement.  “Our performance was driven by the increased demand for our network and multi-modal platform, as Active Riders grew 46 percent and revenue grew 95 percent year-over-year. Transportation is one of the largest segments of our economy and we are still in the very early stages of an enormous secular shift from personal car ownership to Transportation-as-a-Service.”

The company said adjusted net losses came in at $211.5 million compared to $228.4 million in the first quarter of 2018. Next quarter, Lyft expects revenue of more than $800 million on adjusted EBITDA losses of between $270 million and $280 million. For the entire year, Lyft projects roughly $3.3 billion in total revenue on EBITDA losses of about $1.2 billion.

Lyft was the first of a cohort of venture-backed ‘unicorns,’ including Pinterest, Zoom and soon, Uber — which will make its long-overdue debut on the New York Stock Exchange later this week — to complete a public offering in 2019. Despite a sizeable IPO pop, Lyft shares have only sunk since the company’s first appearance on the Nasdaq. Lyft hit a share price of $87 on its first day of trading, up from a $74 IPO price. However, in the weeks post-IPO it has floated closer to the $60 mark, closing Tuesday down 2 percent at $59.41.

Lyft has never posted a profit and its founders John Zimmer and Green have made it clear they expect to invest in the company’s growth for the next several years as it expands its multimodal offerings and ultimately launches operations overseas.

“The road ahead represents a massive opportunity to serve our communities and drive value for our stockholders,” Lyft’s co-founders wrote in the company’s IPO prospectus. “We take this responsibility to serve our communities and stockholders seriously, and we look forward to proving that with actions and results. If we told you we were building the world’s best canal, railroad or highway infrastructure, you’d understand that this would take time. In that same light, the opportunity ahead requires continued long-term thinking, focus and execution.”


07 May 2019

Kotlin is now Google’s preferred language for Android app development

Google today announced that the Kotlin programming language is now its preferred language for Android app developers.

“Android development will become increasingly Kotlin-first,” Google writes in today’s announcement. “Many new Jetpack APIs and features will be offered first in Kotlin. If you’re starting a new project, you should write it in Kotlin; code written in Kotlin often means much less code for you–less code to type, test, and maintain.”
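To make the “much less code” claim concrete, here is a small illustrative sketch (my example, not from Google’s announcement): a single Kotlin data class replaces the constructor, getters, equals, hashCode, and toString methods the same value type would require in Java.

```kotlin
// A data class generates equals(), hashCode(), toString(), and copy() at
// compile time; the equivalent Java class needs each written by hand,
// typically 40+ lines for this two-field type.
data class User(val name: String, val email: String)

fun main() {
    val user = User("Ada", "ada@example.com")
    // copy() produces a modified immutable copy, another boilerplate-saver.
    val renamed = user.copy(name = "Grace")
    println(renamed) // prints: User(name=Grace, email=ada@example.com)
}
```

Combined with null-safety in the type system, this kind of terseness is a large part of why the language caught on so quickly with Android developers.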

It was only two years ago at I/O 2017 that Google announced support for Kotlin in its Android Studio IDE. That came as a bit of a surprise, given that Java had long been the preferred language for Android app development, but few announcements at that year’s I/O got more applause. Over the course of the last two years, Kotlin’s popularity has only increased. More than 50% of professional Android developers now use the language to develop their apps, Google says, and in the latest Stack Overflow developer survey, it ranks as the fourth-most loved programming language.

With that, it makes sense for Google to increase its Kotlin support. “We’re announcing that the next big step that we’re taking is that we’re going Kotlin-first,” Chet Haase, Chief Advocate for Android, said.

“We understand that not everybody is on Kotlin right now, but we believe that you should get there,” Haase said. “There may be valid reasons for you to still be using the C++ and Java programming languages and that’s totally fine. These are not going away.”

07 May 2019

Google now lets developers build games for its smart displays

At its I/O developer conference, Google today announced that it is opening up its Smart Display platform to developers. Until now, there was no real way for developers to target devices like the newly renamed Nest Hub. Only Google’s own first-party services got full access to the display. Now, however, developers will be able to start developing Google Assistant actions for these displays, starting with games.

I wouldn’t expect that we’ll see very complex and highly graphical games on smart displays, but this is a good surface for word games or similar casual games. We are talking about relatively low-end hardware, after all. The fact that the games are based on HTML, CSS and JavaScript also imposes some limits on what developers can do with this platform. Given that Google itself is now using its Flutter multi-platform framework to build some of its own smart display experiences, chances are that developers will be able to bring their games to these devices in the same way, too.

To enable this, Google is launching Interactive Canvas, a new API that allows developers to create full-screen experiences. This will actually work across Android and smart displays.

Over time, the company plans to open up the smart display platform to other third-party experiences as well. When exactly that will happen remains to be seen, though. The only timeline Google is committing to is ‘soon.’

07 May 2019

Google launches new Assistant developer tools

At its I/O conference, Google today announced a slew of new tools for developers who want to build experiences for the company’s Assistant platform. These range from the ability to build games for smart displays like the Google Home Hub and the launch of App Actions for taking users from an Assistant answer to their native apps, to a new Local Home SDK that allows developers to run their smart home code locally on Google Home speakers and Nest displays.

This Local Home SDK may actually be the most important announcement on this list, given that it turns these devices into a real hardware hub for smart home devices and provides local compute capacity without the round-trip to the cloud. The first set of partners includes Philips, Wemo, TP-Link and LIFX, but the SDK will become available to all developers next month.

In addition, this SDK will make it easier for new users to set up their smart devices in the Google Home app. Google tested this feature with GE last October and is now ready to roll it out to additional partners.

For developers who want to take people from the Assistant to the right spot inside their native apps, Google announced a preview of App Actions last year. Health and fitness, finance, banking, ridesharing and food ordering apps can now make use of these built-in intents. “If I wanted to track my run with Nike Run Club, I could just say ‘Hey Google, start my run in Nike Run Club’ and the app will automatically start tracking my run,” Google explains in today’s announcement.

For how-to sites, Google also announced extended markup support that allows them to prepare their content for inclusion in Google Assistant answers on smart displays and in Google Search using standard schema.org markup.

You can read more about the new ability to write games for smart displays here, but this is clearly just a first step and Google plans to open up the platform to more third-party experiences over time.