Year: 2018

25 Jul 2018

Google is baking machine learning into its BigQuery data warehouse

There are still a lot of obstacles to building machine learning models, and one of them is that developers often have to move a lot of data back and forth between their data warehouses and wherever they are building their models. Google is now making this part of the process a bit easier for the developers and data scientists in its ecosystem with BigQuery ML, a new feature that builds some machine learning functionality right into its BigQuery data warehouse.

Using BigQuery ML, developers can build models using linear and logistic regression right inside their data warehouse, without having to transfer data back and forth as they build and fine-tune their models. And all they have to do to build these models and get predictions is write a bit of SQL.

Moving data doesn’t sound like it should be a big issue, but developers often spend a lot of their time on this kind of grunt work — time that would be better spent actually working on their models.

BigQuery ML also promises to make it easier to build these models, even for developers who don’t have a lot of experience with machine learning. To get started, developers can use what’s basically a variant of standard SQL to say what kind of model they are trying to build and what the input data is supposed to be. From there, BigQuery ML then builds the model and allows developers to almost immediately generate predictions based on it. And they won’t even have to write any code in R or Python.
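As a rough sketch of what that bit of SQL looks like in practice (the dataset, table, and column names here are purely illustrative, not anything from Google’s announcement):

```sql
-- Train a linear regression model without leaving the warehouse.
CREATE MODEL `mydataset.fare_model`
OPTIONS(model_type = 'linear_reg', input_label_cols = ['fare']) AS
SELECT trip_distance, passenger_count, fare
FROM `mydataset.taxi_trips`;

-- Generate predictions from the trained model, again in plain SQL.
SELECT *
FROM ML.PREDICT(MODEL `mydataset.fare_model`,
                (SELECT trip_distance, passenger_count
                 FROM `mydataset.new_trips`));
```

The CREATE MODEL statement handles the training step inside BigQuery, and ML.PREDICT returns predictions as an ordinary result set, so the data never has to leave the warehouse.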

These new features are now available in beta.

25 Jul 2018

Google launches a stand-alone version of Drive for businesses that don’t want the full G Suite

If you are a business and want to use Google Drive, your only option until now was to buy a full G Suite subscription, even if you didn’t want or need access to the rest of the company’s productivity tools. Starting today, though, these businesses will be able to buy a subscription to a stand-alone version of Google Drive, too.

Google says that a stand-alone version of Drive has been at the top of the list of requests from prospective customers, so it’s now giving them this option in the form of this new service (though to be honest, I’m not sure how much demand there really is for this product). Stand-alone Google Drive will come with all the same online storage and sharing features as the G Suite version.

Pricing will be based on usage. Google will charge $8 per month per active user and $0.04 per GB stored in a company’s Drive.

Google’s idea here is surely to convert those stand-alone Drive users to full G Suite users over time, but it’s also an acknowledgement on Google’s part that not every business is ready to move away from legacy email tools and desktop-based productivity applications like Word and Excel just yet (and that its online productivity suite may not be right for all of those businesses, too).

Drive, by the way, is going to hit a billion users this week, Google keeps saying. I guess I appreciate that they don’t want to jump the gun and are actually waiting for that to happen instead of just announcing it now when it’s convenient. Once it does, though, it’ll become the company’s eighth product with more than a billion users.

25 Jul 2018

Google is making a fast specialized TPU chip for edge devices and a suite of services to support it

In a pretty substantial move into trying to own the entire AI stack, Google today announced that it will be rolling out a version of its Tensor Processing Unit — a custom chip optimized for its machine learning framework TensorFlow — optimized for inference in edge devices.

That’s a bit of a word salad to unpack, but here’s the end result: Google is looking to have a complete suite of customized hardware for developers looking to build products around machine learning, such as image or speech recognition, that it owns from the device all the way through to the server. Google will have the cloud TPU (the third version of which will soon roll out) to handle training models for various machine learning-driven tasks, and then run the inference from that model on a specialized chip that runs a lighter version of TensorFlow that doesn’t consume as much power. Google is exploiting an opportunity to split the process of inference and machine training into two different sets of hardware and dramatically reduce the footprint required in a device that’s actually capturing the data. That would result in faster processing, less power consumption, and potentially more importantly, a dramatically smaller surface area for the actual chip.

Google is also rolling out a new set of services to compile TensorFlow (Google’s machine learning development framework) into a lighter-weight version that can run on edge devices without having to call the server for those operations. That, again, reduces the latency and could have any number of results, from safety (in autonomous vehicles) to just a better user experience (voice recognition). As competition heats up in the chip space, both from the larger companies and from the emerging class of startups, nailing these use cases is going to be really important for larger companies. That’s especially true for Google as well, which also wants to own the actual development framework in a world where there are multiple options like Caffe2 and PyTorch.

Google will be releasing the chip on a kind of modular board not so dissimilar to the Raspberry Pi, which will get it into the hands of developers that can tinker and build unique use cases. But more importantly, it’ll help entice developers who are already working with TensorFlow as their primary machine learning framework with the idea of a chip that’ll run those models even faster and more efficiently. That could open the door to new use cases and ideas, and should it be successful, will lock those developers further into Google’s cloud ecosystem on both the hardware (the TPU) and framework (TensorFlow) level. While Amazon owns most of the stack for cloud computing (with Azure being the other largest player), it looks like Google is looking to own the whole AI stack – and not just offer on-demand GPUs as a stopgap to keep developers operating within that ecosystem.

Thanks to the proliferation of GPUs, machine learning has become increasingly common across a variety of use cases, and those use cases don’t just require the horsepower to train a model to identify what a cat looks like. They also need the ability to take in an image and quickly identify that said four-legged animal is a cat, based on a model trained with tens of thousands (or more) images of what a cat is. GPUs were great for both jobs, but it’s clear that better hardware is necessary with the emergence of use cases like autonomous driving or photo recognition on cameras — for which even millisecond-level lag is too much and power consumption, or surface area, is a dramatic limiting factor.

The edge-specialized TPU is an ASIC, a breed of chip architecture that’s increasingly popular for narrow workloads like cryptocurrency mining (from larger companies like Bitmain, for example). These chips excel at doing one specific thing really well, which has opened up an opportunity to tap various niches with chips optimized for those particular calculations. Edge-focused chips of this kind tend to do a lot of low-precision calculations very fast, making the whole process of juggling runs between memory and the actual core significantly less complicated and consuming less power as a result.
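To make the low-precision point concrete, here is a toy sketch (plain Python, not anything resembling Google’s actual tooling) of the 8-bit quantization trick such chips rely on: weights are stored as small integers plus a shared scale, trading a little accuracy for much cheaper storage and arithmetic.

```python
# Toy illustration of 8-bit quantization, the kind of low-precision
# representation edge inference chips favor. Values are invented.

def quantize(weights, num_bits=8):
    """Map float weights onto signed integers sharing one scale factor."""
    qmax = 2 ** (num_bits - 1) - 1          # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(qweights, scale):
    """Recover approximate float weights from the integer form."""
    return [q * scale for q in qweights]

weights = [0.82, -0.41, 0.05, -1.27]
qweights, scale = quantize(weights)
approx = dequantize(qweights, scale)
# Each weight now fits in a single byte; the rounding error per
# weight is at most half the scale factor.
```

The integer form is what the silicon actually computes with, which is why these chips can be so much smaller and less power-hungry than a general-purpose GPU.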

While Google’s entry into this arena has long been a whisper in the Valley, this is a stake in the ground for the company that it wants to own everything from the hardware all the way up to the end user experience, passing through the development layer and others on the way there. It might not necessarily alter the calculus of the ecosystem, as even though it’s on a development board to create a playground for developers, Google still has to make an effort to get the hardware designed into other pieces of hardware and not just its own if it wants to rule the ecosystem. That’s easier said than done, even for a juggernaut like Google, but it is a big salvo from the company that could have rather significant ramifications down the line as every big company races to create its own custom hardware stack that’s specialized for its own needs.

25 Jul 2018

Google takes on Yubico and builds its own hardware security keys

Google today announced it is launching its own hardware security keys for two-factor authentication. These so-called Titan Security Keys will go up against similar keys from companies like Yubico, which Google has long championed as the de facto standard for hardware-based two-factor authentication for Gmail and other services.

The FIDO-compatible Titan keys will come in two versions: one with Bluetooth support for mobile devices and one that plugs directly into your computer’s USB port. In terms of looks and functionality, these keys are quite a lot like the existing keys from Yubico, though our understanding is that they are Google’s own designs.

Unsurprisingly, the folks over at Yubico got wind of today’s announcement ahead of time and have already posted a reaction to today’s news (and the company is exhibiting at Google Cloud Next, too, which may be a bit awkward after today’s announcement).

“Yubico strongly believes there are security and privacy benefits for our customers, by manufacturing and programming our products in USA and Sweden,” Yubico founder and CEO Stina Ehrensvard writes, and goes on to throw a bit of shade on Google’s decision to support Bluetooth. “Google’s offering includes a Bluetooth (BLE) capable key. While Yubico previously initiated development of a BLE security key, and contributed to the BLE U2F standards work, we decided not to launch the product as it does not meet our standards for security, usability and durability. BLE does not provide the security assurance levels of NFC and USB, and requires batteries and pairing that offer a poor user experience.”

It’s unclear who is manufacturing the Titan keys for Google (the company spokesperson didn’t know when asked during the press conference), but the company says that it developed its own firmware for the keys. And while Google is obviously using the same Titan brand it uses for the custom chips that protect the servers that make up its cloud, it’s also unclear if there is any relation between those.

No word on pricing yet, but the keys are now available to Google Cloud customers and will be available for purchase for anyone in the Google Store, soon.

25 Jul 2018

Google brings its search technology to the enterprise

One of Google’s first hardware products was its search appliance, a custom-built server that allowed businesses to bring Google’s search tools to the data behind their firewalls. That appliance is no more, but Google today announced the spiritual successor to it with an update to Cloud Search. Until today, Cloud Search only indexed G Suite data. Now, it can pull in data from a wide variety of third-party services that can run on-premise or in the cloud, too, making the tool far more useful for large businesses that want to make all of their data searchable by their employees.

“We are essentially taking all of Google’s expertise in search and applying it to your enterprise content,” Google said.

One of the launch customers for this new service is Whirlpool, which built its own search portal and indexed over 12 million documents from more than a dozen services using this new service.

“This is about giving employees access to all the information from across the enterprise, even if it’s traditionally siloed data whether that’s in a database or a legacy productivity tool and make all of that available in a single index,” Google explained.

To enable this functionality, Google is making a number of software adapters available that will bridge the gap between these third-party services and Cloud Search. Over time, Google wants to add support for more services and bring this cloud-based technology on par with what its search appliance was once capable of.

25 Jul 2018

Snark AI looks to help companies get on-demand access to idle GPUs

Riding the wave of machine learning’s explosion into, well, just about everything is the emergence of GPUs as one of the go-to ways to handle all the processing behind those operations.

But getting access to those GPUs — whether using the cards themselves or possibly through something like AWS — might still be too difficult or too expensive for some companies or research teams. So Davit Buniatyan and his co-founders decided to start Snark AI, which helps companies rent GPUs that aren’t in use across a distributed network of companies that just have them sitting there, rather than through a service like Amazon. While the larger cloud providers offer similar access to GPUs, Buniatyan’s hope is that it’ll be attractive enough to companies and developers to tap a different network if they can lower that barrier to entry. The company is launching out of Y Combinator’s Summer 2018 class.

“We bet on that there will always be a gap between mining and AWS or Google Cloud prices,” Buniatyan said. “If the mining will be [more profitable than the cost of running a GPU], anyone can get into AWS and do mining and be profitable. We’re building a distributed cloud computing platform for clients that can easily access the resources there but are not used.”

The startup works with companies with a lot of spare GPUs that aren’t in use, such as gaming cloud companies or crypto mining companies. Teams that need GPUs for training their machine learning models get access to the raw hardware, while teams that just need those GPUs to handle inference get access to them through a set of APIs. There’s a distinction between the two because they are two sides to machine learning — the former building the model that the latter uses to execute some task, like image or speech recognition. When the GPUs are idle, they run mining to pay the hardware providers, and Snark AI also offers the capability to both mine and run deep learning inference on a piece of hardware simultaneously, Buniatyan said.
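The training/inference split is easy to illustrate with a toy example (plain Python here, though the real workloads run on GPUs): training fits a model’s parameters from data, and inference just applies the finished parameters to new inputs.

```python
# Toy illustration of the two sides of machine learning that Snark AI
# serves separately. The data and model are invented for illustration.

def train(xs, ys):
    """Fit y = a*x + b by ordinary least squares (the training side)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

def infer(model, x):
    """Apply the trained model to a new input (the inference side)."""
    a, b = model
    return a * x + b

model = train([1, 2, 3, 4], [2, 4, 6, 8])   # learns y = 2x
prediction = infer(model, 10)                # → 20.0
```

Training is the expensive, batch-heavy step that wants raw hardware; inference is the lightweight, per-request step that fits naturally behind an API, which is why the startup offers them through different channels.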

Snark AI matches the proper amount of GPU power to whatever a team needs, then deploys it across a network of distributed idle cards that companies have in various data centers. It’s one way to potentially reduce the net cost of a GPU over time: a substantial initial investment can earn a return while the card isn’t otherwise in use. If that’s the case, it may also encourage more companies to sign up with a network like this — Snark AI or otherwise — and deploy similar cards.

There’s also an emerging trend of specialized chips that focus on machine learning or inference, which look to reduce the cost, power consumption, or space requirements of machine learning tasks. That ecosystem of startups, like Cerebras Systems, Mythic, Graphcore, or any of the other well-funded startups, all potentially have a shot at unseating GPUs for machine learning tasks. There’s also the emergence of ASICs, customized chips that are better suited to tasks like crypto mining, which could fracture an ecosystem like this — especially if the larger cloud providers decide to build or deploy something similar (such as Google’s TPU). But this also means that there’s room to potentially create some new interface layer that can snap up all the leftovers for tasks that companies might need, but don’t necessarily need bleeding-edge technology like that from those startups.

There’s always going to be the same argument that was made for Dropbox prior to its significant focus on enterprises and collaboration: the price falls dramatically as it becomes more commoditized. That might be especially true for companies like Amazon and Google, which have already run that playbook before, and could leverage their dominance in cloud computing to put a significant amount of pressure on a third-party network like Snark AI. Google also has the ability to build proprietary hardware like the TPU for specialized operations. But Buniatyan said that the company’s focus on being able to juggle inference and mining, in addition to keeping that cost low for idle GPUs of companies that are just looking to deploy, should keep it viable even amid a changing ecosystem that’s focusing on machine learning.

25 Jul 2018

Google Cloud introduces shielded VMs for additional security

While we might like to think all of our applications are equal in our eyes, in reality some are more important than others and require an additional level of security. To meet those requirements, Google introduced shielded virtual machines at Google Next today.

As Google describes it, “Shielded VMs leverage advanced platform security capabilities to help ensure your VMs have not been tampered with. With Shielded VMs, you can monitor and react to any changes in the VM baseline as well as its current runtime state.”

These specialized VMs run on GCP and come with a set of partner security controls to defend against things like rootkits and bootkits, according to Google. There are a whole bunch of things that happen even before an application launches inside a VM, and each step in that process is vulnerable to attack.

That’s because as the machine starts up, before you even get to your security application, it launches the firmware, the boot sequence, the kernel, then the operating system — and then and only then, does your security application launch.

That time between startup and the security application launching could leave you vulnerable to certain exploits that take advantage of those openings. The shielded VMs strip out as much of that process as possible to reduce the risk.

“What we’re doing here is we are stripping out any of the binary that doesn’t absolutely have to be there. We’re ensuring that every binary that is there is signed, that it’s signed by the right party, and that they load in the proper sequence,” a Google spokesperson explained. All of these steps should reduce overall risk.
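A heavily simplified sketch of that verify-before-handoff idea (the stage names and “binaries” below are invented for illustration; the real checks happen in firmware, not Python):

```python
# Sketch of a measured/verified boot chain: each stage's digest is
# checked against a signed baseline before control is handed off.
import hashlib

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# The baseline: measurements recorded when the VM image was built.
firmware = b"firmware-image-v1"
kernel = b"kernel-image-v1"
baseline = {"firmware": digest(firmware), "kernel": digest(kernel)}

def verified_boot(stages, baseline):
    """Refuse to proceed if any stage deviates from the baseline."""
    for name, blob in stages:
        if digest(blob) != baseline[name]:
            return False, name      # tampering detected, halt boot
    return True, None

ok, _ = verified_boot([("firmware", firmware), ("kernel", kernel)], baseline)
tampered, stage = verified_boot(
    [("firmware", firmware), ("kernel", b"rootkit")], baseline)
# The clean chain boots; the tampered chain halts at the kernel stage.
```

The point is that nothing later in the chain (including your security application) ever runs on top of a stage that failed its check.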

Shielded VMs are available in beta now.

25 Jul 2018

Lyft brings on ex-Tesla VP to lead driver experience operations

Lyft has hired Karim Bousta, Tesla’s now-former VP of Worldwide Service and Customer Experience, to serve as the transportation company’s VP of driver experience operations. At Lyft, Bousta will be responsible for leading rental program Express Drive, and launching and scaling the next generation of Lyft Hubs, which are Lyft’s driver support centers. He will also be tasked with beefing up driver rewards and incentives.

“I am all about the drivers. I want driving with Lyft to be the best and most rewarding experience,” Bousta said in a statement. “We know that the biggest burden of driving is the cost of vehicle maintenance, servicing, and insurance. My goal is to severely reduce or even eliminate that burden and put more money back in our drivers’ pockets.”

In May, Lyft committed $100 million to better support its drivers, specifically putting the money toward cheaper oil changes, basic car maintenance, serviced car washes and more. Lyft will also almost double its operating hours at its driver hubs in 15 cities throughout the nation.

The idea with that commitment is to help drivers make more money and maximize their earnings by offsetting the costs of driving. Other benefits will include car and SUV rentals, tax education and more.

Lyft also says it expects to more than double its driver base in the next five years. Currently, Lyft has 1.4 million drivers, according to its latest economic impact report. The hiring of Bousta also comes shortly after Lyft hit a $15.1 billion valuation following a $600 million funding round and outlined its bike and scooter plans.

Karim Bousta, Lyft VP of Driver Experience Operations

It’s worth noting that Bousta’s boss, Lyft COO Jon McNeill, joined the company earlier this year — also from Tesla, where he served as president of sales and service.

“Karim is a creative and motivating leader with the talent, track record, and passion to revolutionize the Lyft driver experience,” McNeill said in a statement. “Drivers will see meaningful improvements top to bottom with Karim joining the team, and I’m excited to dig in with him.”

25 Jul 2018

Google’s big redesign for Gmail is now generally available for enterprise G Suite customers

Google is once again running its playbook of releasing big new products (or redesigns) to its average users first and then moving what works over to its enterprise service, G Suite: today it’s making the Gmail redesign generally available to G Suite customers.

Gmail’s redesign launched for consumers in April of this year, bringing self-destructing messages, email snoozing and a handful of other features, in addition to a little bit of a new look for the service that has more than 1 billion users. All of those features are useful for consumers, but they might actually have more palatable use cases within larger companies that need constant communication among anywhere from a few to thousands of employees. Email hell is a common complaint for, well, basically every single user on Facebook, Twitter, LinkedIn or anywhere else people can speak publicly to any kind of network, and any attempts to tackle it — ones that work, at least — could have pretty substantial ramifications.

Google is directly competing with other enterprise mail services, especially as it looks to make G Suite a go-to set of enterprise tools for larger companies. It’s a nice, consistent business that can grow methodically, the kind of revenue stream that Wall Street loves and that can cover potential trip-ups in other divisions. Google has also made a big push in its cloud efforts, especially on the server front against competitors like Microsoft Azure and Amazon Web Services, which makes it not that surprising that Google is announcing this at what is effectively its cloud conference, Google Cloud Next 2018 in San Francisco.

The new Gmail uses machine learning to find threat indicators across a huge bucket of messages to tackle some of the lowest-hanging fruit, like potential phishing attacks, that could compromise a company’s security and potentially cost millions of dollars. Google says those tools protect users from almost 10 million spam and malicious emails every minute, and the new update also gives G Suite users access to those security features, as well as offline access and the redesigned security warnings that Google included in its consumer-focused redesign.

Whether companies will adopt this redesign — or at least what rate they will — remains to be seen, as even small tweaks to any kind of software that has a massive amount of engagement can potentially interrupt the workflow of users. We’ve seen that happen before with Facebook users losing it over small changes to News Feed, and while enterprise Gmail is definitely a different category, Google has to take care to ensure that those small changes don’t interrupt the everyday use cases for enterprise users. If companies are going to pay Google for something like this, they have to get it right.

25 Jul 2018

Google introduces ‘Context-aware’ access to supplement traditional logons

We know by now that usernames and passwords are a poor way of securing applications and online services, but they remain for the most part a key tool in the security arsenal. The trouble is that with all of the security breaches in recent years from Equifax to Anthem to Target (and many others), people’s credentials have been widely shared on the internet black market.

Google wants to help fix that problem and today at Google Next, it announced Context-aware access, a new program that looks beyond your credentials to other factors to help determine if it’s really you or someone pretending to be you.

Context-aware access lets administrators define a set of information that could help them more accurately ascertain the identity of the person trying to access your service. “Context-aware access allows organizations to define and enforce granular access to GCP APIs, resources, G Suite, and third-party SaaS apps based on a user’s identity, location, and the context of their request,” Google explained.

One way to better understand the person accessing your services is to look at some contextual clues such as where they are logging on, the IP address of the machine they are logging on from, the time of day and other factors. Does all of this make sense based on what you know about the person?
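As a minimal sketch of what such a check could look like (the policy fields, networks, and hours below are invented for illustration, not Google’s actual implementation):

```python
# Sketch of a context-aware access check: valid credentials alone
# aren't enough; the request's context must also match the policy.
from ipaddress import ip_address, ip_network

POLICY = {
    "allowed_networks": [ip_network("203.0.113.0/24")],  # corporate range
    "allowed_hours": range(7, 20),                       # 07:00-19:59
    "allowed_countries": {"US", "CA"},
}

def context_allows(request, policy=POLICY):
    """Grant access only when every contextual signal checks out."""
    ip_ok = any(ip_address(request["ip"]) in net
                for net in policy["allowed_networks"])
    time_ok = request["hour"] in policy["allowed_hours"]
    geo_ok = request["country"] in policy["allowed_countries"]
    return ip_ok and time_ok and geo_ok

# Same valid credentials, different contexts, different outcomes:
office = {"ip": "203.0.113.7", "hour": 10, "country": "US"}
suspicious = {"ip": "198.51.100.9", "hour": 3, "country": "US"}
# The office request passes; the 3 a.m. request from an unknown
# network is denied even though the password was correct.
```

The administrator, not the end user, decides which signals matter, which is exactly the shift in responsibility the program is going for.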

The idea flips the notion of security responsibility on its head. Instead of requiring the user to be completely responsible for proving who they are, it puts the burden (and control) in the hands of the administrator where it makes more sense.

Google created this security tool because it recognizes as well as any company that users aren’t tied to the office anymore. They are working on mobile devices and accessing apps and cloud services, which makes it harder to trust identity, especially with so many stolen credentials out there.

The new program builds on Google’s BeyondCorp vision, an idea they began developing in 2011 to address the fact that computing no longer took place inside a clearly defined perimeter. In the days before mobile and cloud, people generally accessed computer systems from a specific place. If someone tried to log on from outside of that, you could catch them and turn them away.

Mobile and the cloud changed all that and Google began defining an idea called Zero Trust, the notion that you don’t trust anyone on your services and build an appropriate security position based on that idea. Identity is really central to this, but at some point, even in a Zero Trust model, you have to let people in and do business on your services. This tool gives administrators in a Zero Trust Model much more information to work with beyond a username and password to determine if the user is authentic or not.

Context-aware access management is available today for customers using VPC Service Controls. It will be available soon for customers using Cloud Identity and Access Management (IAM), Cloud Identity-Aware Proxy (IAP), and Cloud Identity, according to the company.