Month: October 2018

11 Oct 2018

Review: The tiny $129 Echo Sub is a huge audio upgrade

Want to make your music more interesting? Add a subwoofer. That’s what Amazon did and, suddenly, the entire Echo smart speaker lineup is more interesting. If you were not impressed with the sound of an Echo, consider trying again when the Echo is paired with an Echo Sub. The subwoofer changes the game.

The Echo Sub is a small, round sub covered in the same fabric as the Echo speakers. Currently it’s only available in dark gray. It’s designed to sit on the floor or a sturdy desk and serve up the low notes the Echo speakers are unable to reproduce. The Echo Sub does its job. When paired with an Echo speaker, the audio is fuller, well-balanced and more enjoyable. The Echo Sub is a must-have for Echo owners.

Review

Amazon provided TechCrunch with a pair of $99 Echo speakers and the $129 Echo Sub. This kit is available for $300, but Amazon also sells the Echo Sub bundle with two Echo Plus devices for $329 — that’s the bundle to get since the Plus models have larger speaker drivers. I suspect the difference will be worth the additional $30.

Setting up the system takes about 25 minutes. Each speaker is individually added to the Alexa smartphone app. Once all three speakers are installed, they have to be bundled in a virtual group. The app’s prompts make it easy, but I found the process buggy. When trying to combine the speakers into a group, the app would sometimes fail to locate one of the speakers. Other times, the two speakers were found, but the sub was not. Eventually, I got it configured and ended up with two Echo speakers running in stereo and a subwoofer handling the low-end sounds.

The difference an additional speaker and a subwoofer make is lovely. But it shouldn’t be surprising. Stereo is how music is meant to be enjoyed.

Years ago, the Jambox and its countless Bluetooth speaker clones convinced a generation that one speaker is all that’s needed for music. That’s a lie. One speaker gets the job done, but two, running in stereo, will always be better. And in this case, with the addition of a subwoofer, it’s much, much better.

Des Rocs’ Let Me Live takes full advantage of the newfound soundstage. The left and right speakers explode with activity, creating an immersive listening experience that’s not possible with any single speaker, from an Amazon Echo to an Apple HomePod. The stereo arrangement lets the music breathe.

AKA George’s Stone Cold Classic comes alive with this setup. The Echo Sub provides dramatically more depth to the track while the stereo Echos offer a full experience. Need more proof? Turn to Van Halen’s Panama. A single speaker cannot deliver the same experience; the channels get muddled and mixed. But when played in true stereo with the backing of a woofer, David Lee Roth comes alive.

I’m impressed with the sound quality of this $300 bundle. A lot of the heavy lifting is offloaded to the Echo Sub, allowing the Echo speakers to handle the mids and highs, which are clear and precise for the price point. At $300, it’s hard to find a better audio system than two Echo speakers and the Echo Sub. And the Echo’s smart features sweeten the deal.

Amazon provided two $99 Echo speakers, and they do the job. The Echo Sub can also be paired with two $149 Echo Plus speakers, which feature larger drivers; I suspect using two of those would result in even better sound, and when purchased as part of a bundle, they’re only a few dollars more.

The Echo Sub works well in most situations. Compared to other subwoofers, it’s on the smaller side. It provides much-needed bass, but it cannot shake walls. It does not pound, per se. It’s a great match for hard rock or pounding pop; it’s not for trunk-rattling rap. Think Arctic Monkeys instead of Post Malone.

The Alexa app allows users to adjust the amount of bass, mid and treble the subwoofer produces. I found the adjustments minor; they are unable to drastically change the woofer’s sound profile. Overall, the Echo Sub is an elegant little sub that works well in conjunction with a pair of Echo speakers.

The Echo Sub can work with just one Echo speaker, too. Own just one Echo smart speaker? Add an Echo Sub for an astounding upgrade in sound quality.

Amazon is not the only company pairing smart speakers for a new age of stereo sound. Sonos has long allowed owners to wirelessly connect speakers to create stereo and surround sound setups. Two Google Home Max speakers can be paired to create a lovely stereo set. The same goes for Apple HomePods: two $350 HomePods can be wirelessly tied together for a stereo kit. Each of the setups mentioned above provides great audio quality, but they’re more expensive than Amazon’s solution. Only Sonos sells a dedicated subwoofer, though.

Amazon, with the addition of the Echo Sub, now offers a great audio experience for much less than its closest competitors. The $129 Echo Sub is compact, capable and the best way to instantly upgrade an Echo smart speaker setup. If possible, add a second Echo speaker to create a virtual set of stereo speakers.

The Echo Sub is an easy recommendation for homes where an Echo speaker is dedicated to music. If forced to pick between adding a second Echo or adding an Echo Sub, go for the subwoofer first.

11 Oct 2018

The Internet Bill of Rights is just one piece of our moral obligations

Congressman Ro Khanna’s proposed Internet Bill of Rights pushes individual rights on the Internet forward in a positive manner. It provides guidelines for critical elements where the United States’ and the world’s current legislation is lacking, and it packages it in a way that speaks to all parties. The devil, as always, is in the details—and Congressman Khanna’s Internet Bill of Rights still leaves quite a bit to subjective interpretation.

But what should not be neglected is that we as individuals have not just rights but also moral obligations to this public good—the Internet. The web positively impacts our lives in a meaningful fashion, and we have a collective responsibility to nurture and keep it that way.

Speaking to the specific rights listed in the Bill, we can likely all agree that citizens should have control over information collected about them, and that we should not be discriminated against based on that personal data. We probably all concur that Internet Service Providers should not be permitted to block, throttle, or engage in paid prioritization that would negatively impact our ability to access the world’s information. And I’m sure we all want access to numerous affordable internet providers with clear and transparent pricing.

These are all elements included in Congressman Khanna’s proposal; all things that I wholeheartedly support.

As we’ve seen of late with Facebook, Google, and other large corporations, there is an absolute need to bring proper legislation into the digital age. Technological advancements have progressed far faster than regulatory changes, and drastic improvements are needed to protect users.

What we must understand, however, is that corporations, governments, and individuals all rely on the same Internet to prosper. Each group should have its own set of rights as well as responsibilities. And it’s those responsibilities that need more focus.

Take, for example, littering. There may be regulations in place that prevent people from discarding their trash by the side of the road. But regardless of these laws, there’s also a moral obligation we have to protect our environment and the world in which we live. For the most part, people abide by these obligations because it’s the right thing to do and because of social pressure to keep the place they live beautiful—not because they have a fear of being fined for littering.

We should approach the protection of the Internet in the same way.

We should hold individuals, corporations, and governments to a higher standard and delineate their responsibilities to the Internet. All three groups should accept and fulfill those responsibilities, not because we create laws and fines, but because it is in their best interests.

For individuals, the Internet has given them powers beyond their wildest dreams, and it continues to connect us in amazing ways. For corporations, it has granted access to massively lucrative markets far and wide that would never have been accessible before. For governments, it has allowed them to provide better services to their citizens and has created never-before-seen levels of tax revenue from businesses operating both within and outside their physical borders.

Everyone — and I mean everyone — has gained (and will continue to gain) from protecting an open Internet, and we as a society need to recognize that and start imposing strong pressure against those who do not live up to their responsibilities.

We as people of the world should feel tremendously grateful to all the parties that contributed to the Internet we have today. If a short-sighted government decides it wants to restrict the Internet within its physical borders, this should not be permitted. It will not only hurt us, but it will hurt that very government by decreasing international trade and thus tax revenue, as well as decreasing the trust that the citizens of that country place in their government. Governments often act against their long-term interests in pursuit of short-term thinking; that is why 2 billion people live in places with heavy restrictions on access to online information.

When an Internet Service Provider seeks full control over what content it provides over its part of the Internet, this, again, should not be allowed. It will, in the end, hurt that very Internet Service Provider’s revenue; a weaker, less diverse Internet will inevitably create less demand for the very service it provides, along with a loss of trust and loyalty from its customers.

Without the Internet, our world would come grinding to a halt. Any limitations on the open Internet will simply slow our progress and prosperity as a human race. And, poignantly, the perpetrators of those limitations stand to lose just as much as any of us.

We have a moral responsibility, then, to ensure the Internet remains aligned with its original purpose. Sure, none of us could have predicted the vast impact the World Wide Web would have back in 1989—probably not even Sir Tim Berners-Lee himself—but in a nutshell, it exists to connect people, wherever they may be, to a wealth of online information and to one another, and to empower individuals to make their lives better.

This is only possible with an open and free Internet.

Over the next five years, billions of devices—such as our garage door openers, refrigerators, thermostats, and mattresses—will be connected to the web via the Internet of Things. Further, five billion users living in developing markets will join the Internet for the first time, moving from feature phones to smartphones. These two major shifts will create incredible opportunities for good, but also for exploiting our data—making us increasingly vulnerable as Internet users.

Now is the time to adequately provide Americans and people around the world with basic online protections, and it is encouraging to see people like Congressman Khanna advancing the conversation. We can only hope this Internet Bill of Rights remains bipartisan and real change occurs.

Regardless of the outcome, we must not neglect our moral obligations—whether individual Internet users, large corporations, or governments. We all shoulder a responsibility to maintain an open Internet. After all, it is perhaps the most significant and impactful creation in modern society.

11 Oct 2018

Snowflake scoops up another blizzard of cash with $450 million round

When Snowflake, the cloud data warehouse, landed a $263 million investment earlier this year, CEO Bob Muglia speculated that it would be the last money his company would need before an eventual IPO. But just nine months after that statement, the company has announced a second, even larger round. This time it’s getting $450 million, as an unexpected level of growth led it to seek additional cash.

Sequoia Capital led the round, joined by new investor Meritech Capital and existing investors Altimeter Capital, Capital One Growth Ventures, Madrona Venture Group, Redpoint Ventures, Sutter Hill Ventures and Wing Ventures. Today’s round brings the total raised to over $928 million with $713 million coming just this year. That’s a lot of dough.

Oh, and the valuation has skyrocketed too, from $1.5 billion in January to $3.5 billion with today’s investment. “We are increasing the valuation from the prior round substantially, and it’s driven by the growth numbers of almost quadrupling the revenue, and tripling the customer base,” company CFO Thomas Tuchscherer told TechCrunch.

At the time of the $263 million round, Muglia was convinced the company had enough funds and that the next fundraise would be an IPO. “We have put ourselves on the path to IPO. That’s our mid- to long-term plan. This funding allows us to go directly to IPO and gives us sufficient capital, that if we choose, IPO would be our next funding step,” he said in January.

Tuchscherer said that was, in fact, the plan at the time of the first batch of funding. He joined the company partly because of his experience bringing Talend public in 2016, but he said the growth has been so phenomenal that they felt it was necessary to change course.

“When we raised $263 million earlier in the year, we raised based on a plan that was ambitious in terms of growth and investment. We are exceeding and beating that, and it prompted us to explore how do we accelerate investment to continue driving the company’s growth,” he said.

Running on both Amazon Web Services and Microsoft Azure, the latter added as a supported platform earlier this year, certainly contributed to the increased sales, and it forced the company to rethink the amount of money it would take to fuel its growth spurt.

“I think it’s very important as a distinction that we view the funding as being customer driven in the sense that in order to meet the demand that we’re seeing in the market for Snowflake, we have to invest in our infrastructure, as well as in our R&D capacity. So the funding that we’re raising now is meant to finance those two core investments,” he stressed.

The number of employees is skyrocketing as the company adds customers. Just eight months ago the company had around 350 employees. Today it has close to 650. Tuchscherer expects that to grow to between 900 and 1,000 by the end of January, which is not that far off.

As for that IPO, it surely remains a goal, but the growth simply got in the way. “We are building the company to be autonomous and to be a large independent company. It’s definitely on the horizon,” he said.

While Tuchscherer wouldn’t definitively say that the company is looking to support at least one more cloud platform in addition to Amazon and Microsoft, he strongly hinted that such a prospect could happen.

The company also plans to plunge a lot of money into the sales team, building out new sales offices in the US and doubling its presence around the world, while also expanding the engineering and R&D teams to broaden the product offerings.

Just this year alone the company has added Netflix, Office Depot, DoorDash, Netgear, Ebates and Yamaha as customers. Other customers include Capital One, Lions Gate and Hubspot.

11 Oct 2018

Chargify launches a new payment management tool for subscription services

Chargify, the payment management service owned by Scaleworks, has launched a new tool for billing management. 

The new product is designed to remove limitations on who can pay for what, allowing payers to assign or reassign payment responsibility for individual subscriptions or groups of subscriptions, according to the company.

Called WhoPays, the new service is pitched to businesses as a way to consolidate and manage their payments with different subscribers in an organization.

According to the company, the launch of the product required the re-engineering of its underlying invoice architecture — centering it around the hierarchies of employees that can be involved in making purchasing decisions.

“One of our customers sells API services primarily to developers. They noticed that there’d be multiple developers in different parts of their company… each with their own subscription. The customer didn’t know that the multiple subscriptions were connected and sometimes being paid by the same credit card,” said Chargify chief executive Tom Rotem. “They want to make sense of their own customer organizations and subscriptions and are having a hard time doing it. What we’re launching is exactly what they need to fix the pain around that chaos.”

Billers can model their relationships with customers to reflect those decision-making hierarchies, grouping subscriptions so that the responsible payer receives one consolidated invoice that can be paid in a single transaction.

“WhoPays is the capstone feature on the new invoice-first architecture we’ve been building to make subscriptions more relational,” said chief technology officer and co-founder Michael Klett in a statement.
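
To make the idea concrete, here is a minimal, hypothetical sketch of the data model WhoPays describes: subscriptions grouped under a responsible payer who receives a single consolidated invoice. Every name and amount below is illustrative; this is not Chargify’s actual API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Subscription:
    subscriber: str      # the person using the service
    product: str
    monthly_cents: int

@dataclass
class SubscriptionGroup:
    responsible_payer: str                       # who receives and pays the invoice
    subscriptions: List[Subscription] = field(default_factory=list)

    def consolidated_invoice_cents(self) -> int:
        # One invoice for the whole group, payable in a single transaction.
        return sum(s.monthly_cents for s in self.subscriptions)

group = SubscriptionGroup("acme-finance@example.com")
group.subscriptions.append(Subscription("dev-1@example.com", "API Basic", 4900))
group.subscriptions.append(Subscription("dev-2@example.com", "API Pro", 9900))
print(group.consolidated_invoice_cents())  # 14800
```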

11 Oct 2018

Apple has acquired Spektral, a Danish computer vision startup, for augmented reality technology

On the heels of Apple this morning inking a $600 million deal to acquire IP, talent and licenses from Dialog Semiconductor in Europe, it has also confirmed another acquisition of a smaller startup in the region.

Apple has purchased Spektral, a computer vision company based out of Denmark that has worked on segmentation technology, a more efficient way to “cut out” figures from their backgrounds in digital images and videos, reportedly for about $30 million.

This type of technology can be used, for example, to make quicker and more accurate/realistic cut-out images in augmented reality environments, but also for more standard applications like school photos (which was actually the first market the startup targeted, in 2015, although it appeared to shift strategy after that to build up IP and make deeper inroads into video).

Rumors of the deal started to surface yesterday, first in Danish financial newspaper Børsen, without confirmation from Apple. We reached out, and Apple has today finally confirmed the deal with its standard statement: “Apple buys smaller technology companies from time to time, and we generally do not discuss our purpose or plans.”

From what we understand, the acquisition happened a while back — which lines up with a LinkedIn profile for Toke Jansen, who had been a co-founder of Spektral but now notes that as of December 2017 he has been a manager of computational imaging at Apple.

Others associated with the company — including the other co-founder, Henrik Paltoft — have not updated their profiles, so it’s unclear how many others have joined. Børsen reports that the deal includes the company’s engineers and was in the region of 200 million Danish kroner, which is equivalent to around $31 million.

Spektral started life as CloudCutout, built on algorithms from Jansen’s PhD. The startup initially pitched its product as a cheaper and more efficient “green screen” technology, to remove primary images from their plain (typically green) or standard-pattern backgrounds, with the early iteration of the product built by training the system on over 100,000 professional cutouts.

Spektral’s first application may have been the fairly retro world of school pictures, but what’s most notable here is what Spektral might contribute to Apple’s imaging business. That goes not just for applications that Apple has yet to launch, but also for improving the quality of those already on the market, from legacy products like Photo Booth through to ARKit, the company’s platform for mobile augmented reality development.

Segmentation could help add live filters to human figures in a photo, but it can also be effective in occluding AR environments behind figures, making digital AR content appear to interact with the position of humans.

Spektral’s segmentation technology is also able to run on mobile phones, making it potentially a quicker and more efficient way of processing AR images directly on devices.

“To provide high quality cutouts, the core of our engine exploits recent advances in spectral graph theory and neural networks. The computation of pixel transparencies (the alpha channel) for a single image involves solving multiple large-scale equation systems, as well as carrying out multiple feed-forward passes in our neural networks,” we reported the founders saying when the startup raised its seed round. (It raised more funding in 2017, $2.8 million from Litecap and Amp Ventures, to dive deeper into video.)

“We pose the problem of determining an alpha channel of an image as a machine learning task. Compared to usual chroma keying, this allows us to consider a much broader range of backgrounds since the model will learn, i.e., texture representations from existing training data.”
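
The quote above frames matting as a learning problem: given an image, predict per-pixel transparencies. Below is a deliberately tiny sketch of that framing in PyTorch. It is not Spektral’s actual architecture (which, per the founders, also leans on spectral graph theory); it just illustrates posing the alpha channel as a machine learning target.

```python
import torch
import torch.nn as nn

class TinyMattingNet(nn.Module):
    """Maps an RGB image to a per-pixel alpha matte in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),  # transparencies must lie in [0, 1]
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        return self.net(rgb)  # shape (N, 1, H, W)

model = TinyMattingNet()
image = torch.rand(1, 3, 64, 64)         # stand-in for a training photo
target_alpha = torch.rand(1, 1, 64, 64)  # stand-in for a professional cutout
loss = nn.functional.mse_loss(model(image), target_alpha)
loss.backward()  # one step of the learning task described in the quote
```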

Computer vision has been a key focus (sorry!) for Apple for a while now. The iPhone and Mac giant has made more than 40 acquisitions in Europe in the last 10 years — I guess we may still have some hunting to do — and a number of its acquisitions both in Europe and elsewhere have been in the area of computer vision. They have included Akonia Holographics, InVisage Technologies, Regaind, Vrvana, SensoMotoric Instruments, Indoor.io, Flyby Media, Emotient, Faceshift, Metaio, Polar Rose and more.

Additional reporting by Natasha Lomas.

11 Oct 2018

CrunchMatch connects attendees at Disrupt Berlin 2018

On 29-30 November, thousands of early-stage startups across Europe and beyond will attend Disrupt Berlin 2018 and spend two program-packed days exhibiting and exploring the very latest in tech innovations. In a crowd that size, it helps to have a tool to find and connect with the right people.

That’s why we’re making our CrunchMatch platform available to all Disrupt Berlin attendees. Last year, our free business match-making service connected investors and founders to discuss potential funding opportunities based on similar goals and interests. Now CrunchMatch can help everyone network more efficiently.

We’re talking founders and investors looking to connect, developers in search of employment, founders hunting for collaborators or startups recruiting tech talent — the list goes on. CrunchMatch can save you valuable time and help you make valuable connections.

Luke Heron, CEO of TestCard, has first-hand experience with the power of CrunchMatch, which he used to secure meetings with multiple VCs at Disrupt Berlin 2017. Those connections, and the relationships he built, paid off.

In a recent email, Heron told us that TestCard “just closed $1.7 million in funding (which is thanks to you and your team, bless you!) You guys are fantastic — the lifeblood of the startup scene.”

And several founders who attended Disrupt San Francisco this past September used CrunchMatch and walked away from their meetings with term sheets.

Representing the investment point of view, here’s what Michael Kocan, managing partner at Trend Discovery, said about his CrunchMatch experience.

“It makes vetting deals extremely efficient. I scheduled more than 35 meetings with startups using CrunchMatch, and we made a significant investment in one, who came to our attention through Startup Battlefield.”

Ready to simplify your networking at Disrupt Berlin? Here’s what you need to know. When we open CrunchMatch, all registered attendees will receive an email explaining how to access the platform and fill out their profiles. Your profile spells out your role and the type of connections you want to make. CrunchMatch kicks into gear and makes suggested connections and then — subject to your approval — the platform handles all the scheduling details.

Disrupt Berlin 2018 takes place 29-30 November. Still need a ticket? Buy your pass right here. We can’t wait to see you in Berlin! And be sure to use the CrunchMatch advantage — it’s the most efficient way to find your people and fuel your dream.

11 Oct 2018

Google+ for G Suite lives on and gets new features

You thought Google+ was dead, didn’t you? And it is — if you’re a consumer. But the business version of Google’s social network will live on for the foreseeable future — and it’s getting a bunch of new features today.

Google+ for G Suite isn’t all that different from the consumer version of Google+, but its focus is very much on allowing users inside a company to easily share information. Current users include the likes of Nielsen and French retailer Auchan.

The new features that Google is announcing today give admins more tools for managing and reviewing posts, allow employees to tag content and provide better engagement metrics to posters.

Google recently introduced the ability for admins to bulk-add groups of users to a Google+ community, for example. Soon, those admins will be able to better review and moderate posts made by their employees, and they will also be able to define custom streams so that employees can get access to, say, a stream with all of the posts from a company’s leadership team.

But what’s maybe more important in this context is that tags now make it easy for employees to route content to everybody in the company, no matter which group they work in. “Even if you don’t know all employees across an organization, tags makes it easier to route content to the right folks,” the company explains in today’s blog post. “Soon you’ll be able to draft posts and see suggested tags, like #research or #customer-insights when posting customer survey results.”

As far as the new metrics go, there’s nothing all that exciting going on here, but G Suite customers who keep their reporting structure in the service will be able to provide analytics to employees so they can see how their posts are being viewed across the company and which teams engage most with them.

At the end of the day, none of these are revolutionary features. But the timing of today’s announcement surely isn’t a coincidence, given that Google announced the death of the consumer version of Google+ — and the data breach that went along with that — only a few days ago. Today’s announcement is clearly meant to be a reminder that Google+ for the enterprise isn’t going away and remains in active development. I don’t think all that many businesses currently use Google+, though, and with Hangouts Chat and other tools, they now have plenty of options for sharing content across groups.

11 Oct 2018

Google’s Apigee officially launches its API monitoring service

It’s been about two years since Google acquired API management service Apigee. Today, the company is announcing new extensions that make it easier to integrate the service with a number of Google Cloud services, as well as the general availability of the company’s API monitoring solution.

Apigee API monitoring allows operations teams to get more insight into how their APIs are performing. The idea here is to make it easy for these teams to figure out when there’s an issue and what its root cause is by giving them very granular data. “APIs are now part of how a lot of companies are doing business,” Ed Anuff, Apigee’s former SVP of product strategy and now Google’s product and strategy lead for the service, told me. “So that tees up the need for API monitoring.”

Anuff also told me that he believes that it’s still early days for enterprise API adoption — but that also means that Apigee is currently growing fast as enterprise developers now start adopting modern development techniques. “I think we’re actually still pretty early in enterprise adoption of APIs,” he said. “So what we’re seeing is a lot more customers going into full production usage of their APIs. A lot of what we had seen before was people using it for maybe an experiment or something that they were doing with a couple of partners.” He also attributed part of the recent growth to customers launching more mobile applications where APIs obviously form the backbone of much of the logic that drives those apps.

API Monitoring was already available as a beta, but it’s now generally available to all Apigee customers.

Given that it’s now owned by Google, it’s no surprise that Apigee is also launching deeper integrations with Google’s cloud services now — specifically services like BigQuery, Cloud Firestore, Pub/Sub, Cloud Storage and Spanner. Some Apigee customers are already using this to store every message passed through their APIs to create extensive logs, often for compliance reasons. Others use Cloud Firestore to personalize content delivery for their web users or to collect data from their APIs and then send that to BigQuery for analysis.
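
For a sense of what the BigQuery side of that logging pattern looks like, here is a hedged sketch using the google-cloud-bigquery Python client. The project, dataset, table and row fields are all made up, and Apigee’s actual extensions handle this wiring inside the API proxy itself; this only shows the destination end.

```python
from google.cloud import bigquery

# Placeholder project, dataset and table names.
client = bigquery.Client(project="my-project")
table_id = "my-project.api_logs.messages"

# Illustrative rows: one record per API message, as described above.
rows = [
    {"api": "orders", "status": 200, "latency_ms": 42, "ts": "2018-10-11T12:00:00Z"},
    {"api": "orders", "status": 500, "latency_ms": 910, "ts": "2018-10-11T12:00:01Z"},
]

errors = client.insert_rows_json(table_id, rows)  # streaming insert
if errors:
    raise RuntimeError(f"BigQuery insert failed: {errors}")
```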

Anuff stressed that Apigee remains just as open to third-party integrations as it always was. That is part of the core promise of APIs, after all.

11 Oct 2018

Google introduces dual-region storage buckets to simplify data redundancy

Google is playing catch-up in the cloud, and as such it wants to provide flexibility to differentiate itself from AWS and Microsoft. Today, the company announced a couple of new options to help separate it from the cloud storage pack.

Storage may seem stodgy, but it’s a primary building block for many cloud applications. Before you can build an application you need the data that will drive it, and that’s where the storage component comes into play.

One of the issues companies have as they move data to the cloud is making sure it stays close to the application when it’s needed to reduce latency. Customers also require redundancy in the event of a catastrophic failure, but still need access with low latency. The latter has been a hard problem to solve until today when Google introduced a new dual-regional storage option.

As Google described it in the blog post announcing the new feature, “With this new option, you write to a single dual-regional bucket without having to manually copy data between primary and secondary locations. No replication tool is needed to do this and there are no network charges associated with replicating the data, which means less overhead for you storage administrators out there. In the event of a region failure, we transparently handle the failover and ensure continuity for your users and applications accessing data in Cloud Storage.”

This allows companies to have redundancy with low latency while controlling where the data lives, without having to move it manually should the need arise.
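
As a rough illustration, creating such a bucket with the google-cloud-storage Python client looks something like the sketch below. The location code “NAM4” is used here as an assumed dual-region identifier and the bucket name is a placeholder; check Google’s current location list before relying on either.

```python
from google.cloud import storage

client = storage.Client(project="my-project")

# One bucket, one write path; Google replicates across the paired regions
# and handles failover transparently, per the announcement above.
bucket = client.create_bucket("my-dual-region-bucket", location="NAM4")
print(bucket.location)
```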

Knowing what you’re paying

Companies don’t always require instant access to data, and Google (and other cloud vendors) offers a variety of storage options, making it cheaper to store and retrieve archived data. As of today, Google is offering a clear way to determine costs based on the storage types customers choose. While it might not seem revolutionary to let customers know what they are paying, Dominic Preuss, Google’s director of product management, says it hasn’t always been a simple matter to calculate these kinds of costs in the cloud. Google decided to simplify this by clearly outlining the costs for medium-term (Nearline) and long-term (Coldline) storage across multiple regions.

As Google describes it, “With multi-regional Nearline and Coldline storage, you can access your data with millisecond latency, it’s distributed redundantly across a multi-region (U.S., EU or Asia), and you pay archival prices. This is helpful when you have data that won’t be accessed very often, but still needs to be protected with geographically dispersed copies, like media archives or regulated content. It also simplifies management.”

Under the new plan, you can select the type of storage you need and the kind of regional coverage you want, and you can see exactly what you are paying.
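
Here is a similar hedged sketch for the archival tiers: a Coldline bucket in the EU multi-region. The storage class and location strings follow Google’s documented constants as we understand them, and the bucket name is again a placeholder.

```python
from google.cloud import storage

client = storage.Client(project="my-project")

bucket = storage.Bucket(client, name="my-archive-bucket")
bucket.storage_class = "COLDLINE"  # long-term archival pricing
bucket.create(location="EU")       # geographically dispersed EU multi-region
```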

Google Cloud storage pricing options. Chart: Google

Each of these new storage services has been designed to provide additional options for Google Cloud customers, giving them more transparency around pricing and flexibility and control over storage types, regions and the way they deal with redundancy across data stores.

11 Oct 2018

Google expands its identity management portfolio for businesses and developers

Over the course of the last year, Google has launched a number of services that bring to other companies the same BeyondCorp model it uses internally for managing access to apps and data without a VPN. Google’s flagship product for this is Cloud Identity, which is essentially Google’s BeyondCorp, but packaged for other businesses.

Today, at its Cloud Next event in London, it’s expanding this portfolio of Cloud Identity services with three new products and features. These enable developers to adopt this way of thinking about identity and access for their own apps, and they make it easier for enterprises to adopt Cloud Identity and make it work with their existing solutions.

The highlight of today’s announcements, though, is Cloud Identity for Customers and Partners, which is now in beta. While Cloud Identity is very much meant for employees at a larger company, this new product allows developers to build into their own applications the same kind of identity and access management services.

“Cloud Identity is how we protect our employees and you protect your workforce,” Karthik Lakshminarayanan, Google’s product management director for Cloud Identity, said in a press briefing ahead of the announcement. “But what we’re increasingly finding is that developers are building applications and are also having to deal with identity and access management. So if you’re building an application, you might be thinking about accepting usernames and passwords, or you might be thinking about accepting social media as an authentication mechanism.”

This new service allows developers to build in multiple ways of authenticating the user, including through email and password, Twitter, Facebook, their phones, SAML, OIDC and others. Google then handles all of that authentication work. Google will offer both client-side (web, iOS and Android) and server-side SDKs (with support for Node.js, Java, Python and other languages).
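
As a rough sketch of the server-side half of that flow: assuming the service exposes Firebase-style Admin SDKs (Cloud Identity for Customers and Partners builds on Firebase Authentication, as we understand it), verifying a user’s ID token in Python would look something like this. The credential path and token are placeholders.

```python
import firebase_admin
from firebase_admin import auth, credentials

# Placeholder service-account key file.
cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(cred)

def current_user_uid(id_token: str) -> str:
    # Google verifies the token's signature, audience and expiry;
    # the app just reads the decoded claims.
    decoded = auth.verify_id_token(id_token)
    return decoded["uid"]
```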

“They no longer have to worry about getting hacked and their passwords and their user credentials getting compromised,” added Lakshminarayanan. “They can now leave that to Google, and the exact same scale that we have, the security that we have, the reliability that we have — that we are using to protect employees in the cloud — can now be used to protect that developer’s applications.”

In addition to Cloud Identity for Customers and Partners, Google is also launching a new feature for the existing Cloud Identity service, which brings support for traditional LDAP-based applications and IT services like VPNs to Cloud Identity. This feature is, in many ways, an acknowledgment that most enterprises can’t simply turn on a new security paradigm like BeyondCorp/Cloud Identity. With support for secure LDAP, these companies can still make it easy for their employees to connect to these legacy applications while still using Cloud Identity.

“As much as Google loves the cloud, a mantra that Google has is ‘let’s meet customers where they are.’ We know that customers are embracing the cloud, but we also know that they have a massive, massive footprint of traditional applications,” Lakshminarayanan explained. He noted that most enterprises today run two solutions: one that provides access to their on-premise applications and another that provides the same services for their cloud applications. Cloud Identity now natively supports access to many of these legacy applications, including Aruba Networks (HPE), Itopia, JAMF, Jenkins (Cloudbees), OpenVPN, Papercut, pfSense (Netgate), Puppet, Sophos and Splunk. Indeed, as Google notes, virtually any application that supports LDAP over SSL can work with this new service.
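
To picture what a legacy application’s side of that looks like, here is a hedged sketch of an LDAP-over-SSL query against Cloud Identity using the Python ldap3 library. The endpoint, the client-certificate requirement and the directory base are assumptions about how the secure LDAP service is configured; treat every value as a placeholder.

```python
import ssl
from ldap3 import Connection, Server, Tls

# Client certificate and key issued when configuring secure LDAP (assumed).
tls = Tls(
    local_private_key_file="ldap-client.key",
    local_certificate_file="ldap-client.crt",
    validate=ssl.CERT_REQUIRED,
)
server = Server("ldap.google.com", port=636, use_ssl=True, tls=tls)
conn = Connection(server, auto_bind=True)

# A legacy app's lookup, unchanged: the directory now lives in Cloud Identity.
conn.search("dc=example,dc=com", "(uid=jane)", attributes=["cn", "mail"])
print(conn.entries)
```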

Finally, the third new feature Google is launching today is context-aware access for those enterprises that already use its Cloud Identity-Aware Proxy (yes, those names are all a mouthful). The idea here is to help enterprises provide access to cloud resources based on the identity of the user and the context of the request — all without using a VPN. That’s pretty much the promise of BeyondCorp in a nutshell, and this implementation, which is now in beta, allows businesses to manage access based on the user’s identity and a device’s location and its security status, for example. Using this new service, IT managers could restrict access to one of their apps to users in a specific country, for example.