Year: 2020

03 Dec 2020

Space startup Aevum debuts world’s first fully autonomous orbital rocket launching drone

Launching things to space doesn’t have to mean firing a large rocket vertically using massive amounts of rocket-fuel powered thrust. Startup Aevum breaks the mould in multiple ways with an innovative launch vehicle design: an uncrewed aircraft with horizontal take-off and landing capabilities, paired with a second stage that deploys at high altitude and takes small payloads the rest of the way to space.

Aevum’s model actually isn’t breaking much new ground in terms of its foundational technology, according to founder and CEO Jay Skylus, who I spoke to prior to today’s official unveiling of the startup’s Ravn X launch vehicle. Skylus, who previously worked for a range of space industry household names and startups including NASA, Boeing, Moon Express and Firefly, told me that the startup has focused primarily on making the most of existing available technologies to create a mostly reusable, fully automated small payload orbital delivery system.

To his point, Ravn X doesn’t look too dissimilar from existing jet aircraft, and bears obvious resemblance to the Predator line of UAVs already in use for terrestrial uncrewed flight. The vehicle is 80 feet long, and has a 60-foot wingspan, with a total max weight of 55,000 lbs including payload. 70% of the system is fully reusable today, and Skylus says that the goal is to iterate on that to the point where 95% of the launch system will be reusable in the relatively near future.

Image Credits: Aevum

Ravn X’s delivery system is designed for rapid-response delivery, and is able to get small satellites to orbit in as little as 180 minutes – with the capability of being readied to fly and deliver again fairly shortly after that. It uses traditional jet fuel, the same kind used on commercial airliners, and it can take off and land in “virtually any weather,” according to Skylus. It also takes off and lands on any 1-mile stretch of traditional aircraft runway, meaning it can theoretically use just about any active airport in the world as a launch and landing site.

One of the key defining differences of Aevum relative to other space launch startups is that what it’s presenting isn’t theoretical, or in development – the Ravn X already has paying customers, including over $1 billion in U.S. government contracts. Its first mission is with the U.S. Space Force, the ASLON-45 small satellite launch mission, and it also has a contract for 20 missions spanning 9 years with the U.S. Air Force Space and Missile Systems Center. Deliveries of Aevum’s production launch vehicles to its customers have already begun, in fact, Skylus says.

The U.S. Department of Defense has been actively pursuing space launch options that provide it with responsive, short-turnaround launch capabilities for quite some time now. That’s the same goal being chased by companies like Astra, which was originally looking to win the since-expired DARPA challenge for such systems with its small Rocket launcher. Aevum’s system has the added advantage of being essentially fully compatible with existing airfield infrastructure – and also of not requiring that human pilots be involved or at risk at all, as they are with the superficially similar launch model espoused by Virgin Orbit.

Aevum isn’t just providing the Ravn X launcher, either; its goal is to handle end-to-end logistics for launch services, including payload transportation and integration. These are parts of the process that Skylus says are often overlooked or underserved by existing launch providers, and that many companies creating payloads don’t realize are costly, complicated and time-consuming parts of actually delivering a working small satellite to orbit. The startup also isn’t “re-inventing the wheel” when it comes to its integration services – Skylus says they’re working with a range of existing partners who all already have proven experience doing this work, but who haven’t previously had the motivation or the need to provide these kinds of services to the customers that Skylus sees coming online, both in the public and private sector.

The need isn’t for another SpaceX, Skylus says; rather, thanks to SpaceX, there’s a wealth of aerospace companies who previously worked almost exclusively with large government contracts and the one or two massive legacy rocket companies to put missions together. They’re now open to working with the greatly expanded market for orbital payloads, including small satellites that aim to provide cost-effective solutions in communications, environmental monitoring, shipping and defense.

Aevum’s solution definitely sounds like it addresses a clear and present need, in a way that offers benefits in terms of risk profile, reusability, cost and flexibility. The company’s first active missions will obviously be watched closely, by potential customers and competitors alike.

03 Dec 2020

Android’s winter update adds new features to Gboard, Maps, Books, Nearby Share and more

Google announced this morning that Android phones will receive an update this winter bringing some half-dozen new features to devices, including improvements to apps like Gboard, Google Play Books, Voice Access, Google Maps, Android Auto and Nearby Share. The release is the latest in a series of update bundles that now allow Android devices to receive new features outside of the usual annual update cycle.

The bundles may not deliver Android’s latest flagship features, but they offer steady improvements on a more frequent basis.

One of the more fun bits in the winter update will include a change to “Emoji Kitchen,” the feature in the Gboard keyboard app that lets users combine their favorite emoji to create new ones that can be shared as customized stickers. To date, users have remixed emoji over 3 billion times since the feature launched earlier this year, Google says. Now, the option is being expanded. Instead of offering hundreds of design combinations, it will offer over 14,000. You’ll also be able to tap two emoji to see suggested combinations or double tap on one emoji to see other suggestions.

Image Credits: Google

This updated feature had been live in the Gboard beta app, but will now roll out to Android 6.0 and above devices in the weeks ahead.

Another update will expand audiobook availability on Google Play Books. Now, Google will auto-generate narrations for books that don’t offer an audio version. The company says it worked with publishers in the U.S. and U.K. to add these auto-narrated books to Google Play Books. The feature is in beta but will roll out to all publishers in early 2021.

An accessibility feature that lets people use and navigate their phone with voice commands, Voice Access, will also be improved. The feature will soon leverage machine learning to understand interface labels on devices. This will allow users to refer to things like the “back” and “more” buttons, and many others by name when they are speaking.

The new version of Voice Access, now in beta, will be available to all devices worldwide running Android 6.0 or higher.

An update for Google Maps will add a new feature to one of people’s most-used apps.

In a new (perhaps Waze-inspired) “Go Tab,” users will be able to more quickly navigate to frequently visited places — like a school or grocery store, for example — with a tap. The app will allow users to see directions, live traffic trends and disruptions on the route, and get an accurate ETA, without having to type in the actual address. Favorite places — or in the case of public transit users, specific routes — can be pinned in the Go Tab for easy access. Transit users will be able to see things like accurate departure and arrival times, alerts from the local transit agency, and an up-to-date ETA.

Image Credits: Google

One potentially helpful use case for this new feature would be to pin both a transit route and driving route to the same destination, then compare their respective ETAs to pick the faster option.

This feature is coming to both Google Maps on Android as well as iOS in the weeks ahead.

Android Auto will expand to more countries over the next few months. Google initially said it would reach 36 countries, but then updated the announcement language as the timing of the rollout was pushed back. The company now isn’t saying how many countries will gain access in the months to follow or which ones, so you’ll need to stay tuned for news on that front.

Image Credits: Google

The final change is to Nearby Share, the proximity-based sharing feature that lets users share things like links, files, photos and more even when they don’t have a cellular or Wi-Fi connection available. The feature, which is largely designed with emerging markets in mind, will now allow users to share apps from Google Play with people around them, too.

To do so, you’ll access a new “Share Apps” menu in “Manage Apps & Games” in the Google Play app. This feature will roll out in the weeks ahead.

Some of these features will begin rolling out today, so you may receive them sooner than the several-weeks timeframe implies, but the progress of each update will vary.

03 Dec 2020

iPhones can now automatically recognize and label buttons and UI features for blind users

Apple has always gone out of its way to build features for users with disabilities, and Voiceover on iOS is an invaluable tool for anyone with a vision impairment — assuming every element of the interface has been manually labeled. But the company just unveiled a brand new feature that uses machine learning to identify and label every button, slider, and tab automatically.

Screen Recognition, available now in iOS 14, is a computer vision system that has been trained on thousands of images of apps in use, learning what a button looks like, what icons mean, and so on. Such systems are very flexible — depending on the data you give them, they can become expert at spotting cats, facial expressions, or as in this case the different parts of a user interface.

The result is that in any app now, users can invoke the feature and a fraction of a second later every item on screen will be labeled. And by “every,” they mean every — after all, screen readers need to be aware of every thing that a sighted user would see and be able to interact with, from images (which iOS has been able to create one-sentence summaries of for some time) to common icons (home, back) and context-specific ones like “…” menus that appear just about everywhere.

The idea is not to make manual labeling obsolete; developers know best how to label their own apps. But updates, changing standards and challenging situations (in-game interfaces, for instance) can lead to things not being as accessible as they could be.

I chatted with Chris Fleizach from Apple’s iOS accessibility engineering team, and Jeff Bigham from the AI/ML accessibility team, about the origin of this extremely helpful new feature. (It’s described in a paper due to be presented next year.)

“We looked for areas where we can make inroads on accessibility, like image descriptions,” said Fleizach. “In iOS 13 we labeled icons automatically – Screen Recognition takes it another step forward. We can look at the pixels on screen and identify the hierarchy of objects you can interact with, and all of this happens on device within tenths of a second.”

The idea is not a new one, exactly; Bigham mentioned a screen reader, Outspoken, which years ago attempted to use pixel-level data to identify UI elements. But while that system needed precise matches, the fuzzy logic of machine learning systems and the speed of iPhones’ built-in AI accelerators means that Screen Recognition is much more flexible and powerful.

It wouldn’t have been possible just a couple of years ago — the state of machine learning and the lack of a dedicated unit for executing it meant that something like this would have been extremely taxing on the system, taking much longer and probably draining the battery all the while.

But once this kind of system seemed possible, the team got to work prototyping it with the help of their dedicated accessibility staff and testing community.

“VoiceOver has been the standard bearer for vision accessibility for so long. If you look at the steps in development for Screen Recognition, it was grounded in collaboration across teams — Accessibility throughout, our partners in data collection and annotation, AI/ML, and, of course, design. We did this to make sure that our machine learning development continued to push toward an excellent user experience,” said Bigham.

It was done by taking thousands of screenshots of popular apps and games, then manually labeling them as one of several standard UI elements. This labeled data was fed to the machine learning system, which soon became proficient at picking out those same elements on its own.
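To make that pipeline concrete, here is a minimal sketch of what training such a detector could look like, assuming an off-the-shelf PyTorch object detector. This is purely illustrative — not Apple’s actual implementation — and the class list and helper names are hypothetical:

```python
# Illustrative sketch only: fine-tuning a pretrained detector on
# screenshots labeled with standard UI element classes. Not Apple's
# pipeline; the class set here is hypothetical.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Standard UI element classes a screen reader cares about (hypothetical set).
UI_CLASSES = ["background", "button", "slider", "tab", "checkbox",
              "text_field", "icon", "image", "menu"]

def build_ui_detector(num_classes: int = len(UI_CLASSES)):
    # Start from a detector pretrained on natural images, then swap the
    # classification head so it predicts UI element classes instead.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

def train_step(model, optimizer, screenshots, targets):
    # `screenshots` is a list of image tensors; `targets` holds the
    # manually labeled bounding boxes and class indices for each one.
    model.train()
    loss_dict = model(screenshots, targets)
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```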

It’s not as simple as it sounds — as humans, we’ve gotten quite good at understanding the intention of a particular graphic or bit of text, and so often we can navigate even abstract or creatively designed interfaces. It’s not nearly as clear to a machine learning model, and the team had to work with it to create a complex set of rules and hierarchies that ensure the resulting screen reader interpretation makes sense.

The new capability should help make millions of apps more accessible, or just accessible at all, to users with vision impairments. You can turn it on by going to Accessibility settings, then VoiceOver, then VoiceOver Recognition, where you can turn on and off image, screen, and text recognition.

It would not be trivial to bring Screen Recognition over to other platforms, like the Mac, so don’t get your hopes up for that just yet. But the principle is sound, though the model itself is not generalizable to desktop apps, which are very different from mobile ones. Perhaps others will take on that task; the prospect of AI-driven accessibility features is only just beginning to be realized.

03 Dec 2020

Microsoft launches Azure Purview, its new data governance service

As businesses gather, store and analyze an ever-increasing amount of data, tools for helping them discover, catalog, track and manage how that data is shared are also becoming increasingly important. With Azure Purview, Microsoft is launching a new data governance service into public preview today that brings together all of these capabilities in a new data catalog with discovery and data governance features.

As Rohan Kumar, Microsoft’s corporate VP for Azure Data, told me, this has become a major pain point for enterprises. While they may be very excited about getting started with data-heavy technologies like predictive analytics, those companies’ data- and privacy-focused executives are very concerned with making sure that the way the data is used is compliant and that the company has received the right permissions to use its customers’ data, for example.

In addition, companies also want to make sure that they can trust their data and know who has access to it and who made changes to it.

“[Purview] is a unified data governance platform which automates the discovery of data, cataloging of data, mapping of data, lineage tracking — with the intention of giving our customers a very good understanding of the breadth of the data estate that exists to begin with, and also to ensure that all these regulations that are there for compliance, like GDPR, CCPA, etc, are managed across an entire data estate in ways which enable you to make sure that they don’t violate any regulation,” Kumar explained.

At the core of Purview is its catalog that can pull in data from the usual suspects like Azure’s various data and storage services but also third-party data stores including Amazon’s S3 storage service and on-premises SQL Server. Over time, the company will add support for more data sources.

Kumar described this process as a ‘multi-semester investment,’ so the capabilities the company is rolling out today are only a small part of what’s on the overall roadmap already. With this first release today, the focus is on mapping a company’s data estate.

Image Credits: Microsoft

“Next [on the roadmap] is more of the governance policies,” Kumar said. “Imagine if you want to set things like ‘if there’s any PII data across any of my data stores, only this group of users has access to it.’ Today, setting up something like that is extremely complex and most likely you’ll get it wrong. That’ll be as simple as setting a policy inside of Purview.”
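For illustration only, a policy like the one Kumar describes boils down to a rule evaluated against the catalog’s classification metadata. The sketch below is a hypothetical Python rendering of that idea — not Purview’s actual policy API, and all names are invented:

```python
# Hypothetical sketch of a PII access policy evaluated against catalog
# metadata. Not Purview's real API, just an illustration of the idea.
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    name: str
    store: str                                          # e.g. "Azure SQL", "Amazon S3"
    classifications: set = field(default_factory=set)   # e.g. {"PII"}

@dataclass
class Policy:
    classification: str   # the classification the rule governs
    allowed_group: str    # the only group allowed to read governed assets

def can_read(user_groups: set, asset: DataAsset, policy: Policy) -> bool:
    # If the asset carries the governed classification, only members of
    # the allowed group may read it; everything else stays open.
    if policy.classification in asset.classifications:
        return policy.allowed_group in user_groups
    return True

pii_policy = Policy(classification="PII", allowed_group="privacy-office")
customers = DataAsset("customers", "Azure SQL", {"PII"})
print(can_read({"analysts"}, customers, pii_policy))         # False
print(can_read({"privacy-office"}, customers, pii_policy))   # True
```

The point of Kumar’s pitch is that this rule would be declared once and enforced across the entire data estate, rather than re-implemented per data store.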

In addition to launching Purview, the Azure team also today launched Azure Synapse, Microsoft’s next-generation data warehousing and analytics service, into general availability. The idea behind Synapse is to give enterprises — and their engineers and data scientists — a single platform that brings together data integration, warehousing and big data analytics.

“With Synapse, we have this one product that gives a completely no-code experience for data engineers, as an example, to build out these [data] pipelines and collaborate very seamlessly with the data scientists who are building out machine learning models, or the business analysts who build out reports for things like Power BI,” Kumar said.

Among Microsoft’s marquee customers for the service, which Kumar described as one of the fastest-growing Azure services right now, are FedEx, Walgreens, Myntra and P&G.

“The insights we gain from continuous analysis help us optimize our network,” said Sriram Krishnasamy, senior vice president, strategic programs at FedEx Services. “So as FedEx moves critical high value shipments across the globe, we can often predict whether that delivery will be disrupted by weather or traffic and remediate that disruption by routing the delivery from another location.”

Image Credits: Microsoft

03 Dec 2020

As Metromile looks to go public, insurtech funding is on the rise

Earlier this week, TechCrunch covered the latest venture round for AgentSync, a startup that helps insurance agents comply with rules and regulations. But while the product area might not keep you up at night, the company’s growth has been incredibly impressive, scaling its annual recurring revenue (ARR) 10x in the last year and 4x since the start of the pandemic.

Little surprise, then, that the company’s latest venture deal was raised just months after its last; investors wanted to get more money into AgentSync rapidly, boosting a larger venture-wide wager on insurtech startups more broadly that we’ve seen throughout 2020.


The Exchange explores startups, markets and money. Read it every morning on Extra Crunch, or get The Exchange newsletter every Saturday.


But private investors aren’t the only ones getting in on the action. Public investors welcomed the Lemonade IPO earlier this year, giving the rental insurance unicorn a strong debut. Root also went public, but comparing its recent highs with its current price, it has lost around half of its value after a strong pricing run.

But with one success and one struggle for the sector on the scoreboard this year, Metromile is also looking to get in on the action. And, per a TechCrunch data analysis this morning and some external data work on the insurtech venture capital market, it appears that private insurtech investment is matching the attention public investors are also giving the sector.

This morning let’s do a quick exploration of the Metromile deal and take a look at the insurtech venture capital market to better understand how much capital is going into the next generation of companies that will want to replicate the public exits of our three insurtech pioneers.

Finally, we’ll link public results and recent private deal activity to see if both sides of the market are currently aligned.

Metromile

Let’s start with Metromile’s debut. It’s going public via a SPAC, namely INSU Acquisition Corp. II. Here’s the equivalent of an S-1 from both parties, going over the economics of the blank-check company and Metromile itself.

On the economics front for the insurtech startup, we have to start with some extra work. During nearly every 2020 IPO we’ve spent lots of time examining how quickly the company in question is growing. We’re not doing that today because Metromile is not growing in GAAP terms and we need to understand why that’s the case.

In simple terms, a change to Metromile’s reinsurance setup last May led to the company ceding “a larger percentage of [its] premium than in prior periods,” which resulted “in a significant decrease in our revenues as reported under GAAP,” the company said.

Ceded premiums don’t count as revenue. Lemonade, in its recent earnings results, explained the concept well from the perspective of its own, related change to its business:

While our July 1, 2020 reinsurance contracts deliver a significant improvement in the fundamentals of our business, they also result in a significant change in GAAP revenue, as GAAP excludes all ceded premiums (and proportional reinsurance is fundamentally about ceding premium). This led to a spike in GAAP gross margin and a dip in GAAP revenue on July 1 – even though no corresponding change in the scope or profitability of our business took place at midnight on June 30.

So Lemonade has shaken up its business, cutting its revenues and tidying its economics. The impact has been sharp, with the company’s GAAP revenues falling from $17.8 million in the year-ago quarter, to $10.5 million in Q3 2020.

Root has undertaken similar steps. Starting July 1, it has “transfer[ed] 70% of our premiums and related losses to reinsurers, while also gaining a 25% commission on written premium to offset some of our up-front and ongoing costs.” The result has been falling GAAP revenue and improving economics once again.
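To see why ceding premium makes growth look worse while improving the underlying economics, here is a toy calculation with made-up numbers — not Root’s or Metromile’s actual figures, and it assumes the ceding commission applies to ceded written premium:

```python
# Toy illustration of proportional reinsurance, using invented numbers.
gross_premium = 100.0    # premium written in the period
loss_ratio = 0.80        # claims as a share of premium
ceded_share = 0.70       # share of premium (and losses) passed to reinsurers
commission_rate = 0.25   # assumed to apply to ceded written premium

gaap_revenue = gross_premium * (1 - ceded_share)                   # 30.0
retained_losses = gross_premium * loss_ratio * (1 - ceded_share)   # 24.0
commission = gross_premium * ceded_share * commission_rate         # 17.5

underwriting_margin = gaap_revenue - retained_losses + commission  # 23.5
print(gaap_revenue, retained_losses, commission, underwriting_margin)

# Without reinsurance: revenue 100.0, losses 80.0, margin 20.0 — more
# reported revenue, but a thinner margin and far more claims risk retained.
```

Reported revenue falls by 70%, but the insurer sheds most of the claims risk and picks up commission income, which is why the same change can make GAAP growth look worse while fundamentals improve.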

All neo-insurance companies that have provided financial results while going public have changed their reinsurance approach, making their results look a bit wonky in the short term, leaving investors to decipher what they are really worth.

03 Dec 2020

Sight Tech Global day 2 is live! Hear from Apple, Waymo, Microsoft, Sara Hendren and Haben Girma

Day 2 of the virtual event Sight Tech Global is streaming on TechCrunch from 8 a.m. to 12:30 p.m. PST. The event looks at how AI-based technologies are rapidly changing the field of accessibility, especially for blind people and those with low vision. Today’s programming includes top accessibility product and technology leaders from Apple, Waymo, Microsoft and Google, plus sessions featuring disability rights lawyer Haben Girma and author and designer Sara Hendren. Check out the event’s full agenda.

The Sight Tech Global project aims to showcase the remarkable community of technologists working on accessibility-related products and platforms. It is a project of the nonprofit Vista Center for the Blind and Visually Impaired, which is based in Silicon Valley.

This year’s event sponsors include: Waymo, Verizon Media, TechCrunch, Ford, Vispero, Salesforce, Mojo Vision, iSenpai, Facebook, Ability Central, Google, Microsoft, Wells Fargo, Amazon, Eyedaptic, Verizon 5G, Humanware, APH, and accessiBe. Our production partners: Cohere Studio (design), Sunol Media Group (video production), Fable (accessibility crowd testing), Clarity Media (speaker prep), Be My Eyes (customer service), and 3Play and Vitac (captioning).

03 Dec 2020

AI’s next act: Genius chips, programmable silicon and the future of computing

If only 10% of the world had enough power to run a cell phone, would mobile have changed the world in the way that it did?

It’s often said the future is already here — just not evenly distributed. That’s especially true in the world of artificial intelligence (AI) and machine learning (ML). Many powerful AI/ML applications already exist in the wild, but many also require enormous computational power — often at scales only available to the largest companies in existence or entire nation-states. Compute-heavy technologies are also hitting another roadblock: Moore’s law is plateauing, and the processing capacity of legacy chip architectures is running up against the limits of physics.

If major breakthroughs in silicon architecture efficiency don’t happen, AI will suffer an unevenly distributed future, and huge swaths of the population will miss out on the improvements AI could make to their lives.

The next evolutionary stage of technology depends on completing the transformation that will make silicon architecture as flexible, efficient and ultimately programmable as the software we know today. If we cannot take major steps to provide easy access to ML, we’ll lose immeasurable innovation by having only a few companies in control of all the technology that matters. So what needs to change, how fast is it changing and what will that mean for the future of technology?

An inevitable democratization of AI: A boon for startups and smaller businesses

If you work at one of the industrial giants (including those “outside” of tech), congratulations — but many of the problems with current AI/ML computing capabilities I present here may not seem relevant.

For those of you working with lesser caches of resources, whether financially or talent-wise, view the following predictions as the herald of a new era in which organizations of all sizes and balance sheets have access to the same tiers of powerful AI and ML-powered software. Just like cell phones democratized internet access, we see a movement in the industry today to put AI in the hands of more and more people.

Of course, this democratization must be fueled by significant technological advancement that actually makes AI more accessible — good intentions are not enough, regardless of the good work done by companies like Intel and Google. Here are a few technological changes we’ll see that will make that possible.

From dumb chip to smart chip to “genius” chip

For a long time, raw performance was the metric of importance for processors. Their design reflected this. As software rose in ubiquity, processors needed to be smarter: more efficient and more commoditized, so specialized processors like GPUs arose — “smart” chips, if you will.

Those purpose-built graphics processors, by happy coincidence, proved to be more useful than CPUs for deep learning functions, and thus the GPU became one of the key players in modern AI and ML. Knowing this history, the next evolutionary step becomes obvious: If we can purpose-build hardware for graphics applications, why not for specific deep learning, AI and ML?

There’s also a unique confluence of factors that makes the next few years pivotal for chipmaking and tech in general. First and second, we’re seeing a plateauing of Moore’s law (which predicts a doubling of transistors on integrated circuits every two years) and the end of Dennard scaling (which says performance-per-watt doubles at about the same rate). Together, that used to mean that for any new generation of technology, chips doubled in density and increased in processing power while drawing the same amount of power. But we’ve now reached the scale of nanometers, meaning we’re up against the limitations of physics.
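For a sense of the compounding involved, here is a back-of-the-envelope sketch of what those two laws together promised:

```python
# Back-of-the-envelope: the compounding Moore's law described.
def transistor_multiple(years: float, doubling_period: float = 2) -> float:
    # Transistor density doubles every `doubling_period` years.
    return 2 ** (years / doubling_period)

for years in (2, 10, 20):
    print(f"{years} years -> {transistor_multiple(years):.0f}x transistors")
# 2 years -> 2x, 10 years -> 32x, 20 years -> 1024x. Under Dennard
# scaling, performance-per-watt compounded at roughly the same rate;
# both curves have now flattened as feature sizes hit physical limits.
```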

Thirdly, compounding the physical challenge, the computing demands of next-gen AI and ML apps are beyond what we could have imagined. Training neural networks to within even a fraction of human image-recognition ability, for example, is surprisingly hard and takes huge amounts of processing power. The most intense applications of machine learning are things like natural language processing (NLP), recommender systems that deal with billions or trillions of possibilities, or super-high-resolution computer vision, as is used in the medical and astronomical fields.

Even if we could have predicted we’d have to create and train algorithmic brains to learn how to speak human language or identify objects in deep space, we still could not have guessed just how much training — and therefore processing power — they might need to become truly useful and “intelligent” models.

Of course, many organizations are performing these sorts of complex ML applications. But these sorts of companies are usually business or scientific leaders with access to huge amounts of raw computing power and the talent to understand and deploy them. All but the largest enterprises are locked out of the upper tiers of ML and AI.

That’s why the next generation of smart chips — call them “genius” chips — will be about efficiency and specialization. Chip architecture will be made to optimize for the software running on it and run altogether more efficiently. When using high-powered AI doesn’t take a whole server farm and becomes accessible to a much larger percentage of the industry, the ideal conditions for widespread disruption and innovation become real. Democratizing expensive, resource-intensive AI goes hand-in-hand with these soon-to-be-seen advances in chip architecture and software-centered hardware design.

A renewed focus on future-proofing innovation

The nature of AI creates a special challenge for the creators and users of AI hardware. The amount of change itself is huge: We’re living through the leap from humans writing code to software 2.0 — where engineers can train machine learning programs to eventually “run themselves.” The rate of change is also unprecedented: ML models can be obsolete in months or even weeks, and the very methods through which training happens are in constant evolution.

But creating new AI hardware products still requires designing, prototyping, calibrating, troubleshooting, production and distribution. It can take two years from concept to product-in-hand. Software has, of course, always outpaced hardware development, but now the differential in velocity is irreconcilable. We need to be more clever about the hardware we create for a future we increasingly cannot predict.

In fact, the generational way we think about technology is beginning to break down. When it comes to ML and AI, hardware must be built with the expectation that much of what we know today will be obsolete by the time we have the finished product. Flexibility and customization will be the key attributes of successful hardware in the age of AI, and I believe this will be a further win for the entire market.

Instead of sinking resources into the model du jour or a specific algorithm, companies looking to take advantage of these technologies will have more options for processing stacks that can evolve and change as the demands of ML and AI models evolve and change.

This will allow companies of all sizes and levels of AI savvy to stay creative and competitive for longer and prevent the stagnation that can occur when software is limited by hardware — all leading to more interesting and unexpected AI applications for more organizations.

Widespread adoption of real AI and ML technologies

I’ll be the first to admit to tech’s fascination with shiny objects. There was a day when big data was the solution to everything and IoT was to be the world’s savior. AI has been through the hype cycle in the same way (arguably multiple times). Today, you’d be hard pressed to find a tech company that doesn’t purport to use AI in some way, but chances are they are doing something very rudimentary that’s more akin to advanced analytics.

It’s my firm belief that the AI revolution we’ve all been so excited about simply has not happened yet. In the next two to three years however, as the hardware that enables “real” AI power makes its way into more and more hands, it will happen. As far as predicting the change and disruption that will come from widespread access to the upper echelons of powerful ML and AI — there are few ways to make confident predictions, but that is exactly the point!

Much like cellphones put so much power in the hands of regular people everywhere, with no barriers to entry either technical or financial (for the most part), so will the coming wave of software-defined hardware that is flexible, customizable and future-proof. The possibilities are truly endless, and it will mark an important turning point in technology. The ripple effects of AI democratization and commoditization will not stop with just technology companies, and so even more fields stand to be blown open as advanced, high-powered AI becomes accessible and affordable.

Much of the hype around AI — all the disruption it was supposed to bring and the leaps it was supposed to fuel — will begin in earnest in the next few years. The technology that will power it is being built as we speak or soon to be in the hands of the many people in the many industries who will use their newfound access as a springboard to some truly amazing advances. We’re especially excited to be a part of this future, and look forward to all the progress it will bring.

03 Dec 2020

Ben Ling’s Bling Capital just rounded up $113 million more from investors

Ben Ling is as done with 2020 as the rest of us, but certainly for him, the year could be worse.

Ling, who founded his own venture outfit in 2018 — naming it Bling Capital (a nickname from way back) — just closed on $113 million in capital commitments across two new funds: a seed-focused $77 million fund, and an opportunity fund focused on breakout companies from his portfolio that closed with $36 million in capital commitments.

It’s a decent amount of money for a so-called solo GP fund, especially coming as it does just two years after Bling closed on two very similar-size funds: a $61 million seed-stage fund and a $35 million opportunities-type fund. Yet Ling says it could have been twice as much committed capital, given demand. “I had to basically kick people out,” he says of those willing to write him a check.

It’s not so hard to believe, considering Ling’s track record: he was an exec at Google, then Facebook, then YouTube, then Google again before turning to venture capital in 2013, joining Khosla Ventures.

Between the more than five years that Ling spent with the Sand Hill Road firm and the “nearly 80” investments he made as an angel investor before that, he says he has invested in 10 “unicorns” altogether so far, including Rippling, Airtable, Udemy, Quora, Instacart, Gusto, and the now publicly traded companies PagerDuty, Square, Lyft and Palantir.

A Stanford PhD in computer science, Ling insists that by working as a lone GP — one supported by three principals — he can continue getting into more hot deals, too. “It’s important because you can make decisions much more quickly, whereas in partnerships, you have to get a partner looped in, and all those days can cost you an investment opportunity.”

Having a powerful network is surely helpful, too. Ling says that roughly 100 limited partners make up Bling’s investor base, and that these individuals are largely the heads of product, the heads of growth, and even the founders of many major startups. Among Bling’s backers, for example, are Affirm CEO Max Levchin, Yelp CEO Jeremy Stoppelman and Quora CEO Adam D’Angelo.

Such contacts matter because when they see reports who are leaving to start new things, they will ostensibly point Bling in the founders’ direction. As for possible conflicts of interest, Ling is clear that there is a “wall, in that our LPs don’t receive any proprietary confidential information about a company unless its CEO says, ‘I want to meet these five to seven people’ who are investors in the fund.”

In the meantime, Ling is continuing to write checks, saying that in seed stage deals, Bling’s investments typically range from $400,000 to $1 million for a 10% to 12% stake in a company, and that for later-stage deals, he’s writing checks of between $1 million and $3 million.
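For context, those seed terms imply post-money valuations you can back out with simple check-size-over-ownership arithmetic — an illustration of the stated ranges, not reported figures:

```python
# Implied post-money valuations from the stated seed terms
# (simple check / stake arithmetic, not reported figures).
for check, stake in [(400_000, 0.12), (1_000_000, 0.10)]:
    post_money = check / stake
    print(f"${check:,} for {stake:.0%} -> ${post_money:,.0f} post-money")
# $400,000 for 12% -> $3,333,333 post-money
# $1,000,000 for 10% -> $10,000,000 post-money
```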

If you’re curious, some of the later-stage bets in Bling’s portfolio include the micro-mobility company Lime; Tempo, a home fitness company that involves a wall-mounted screen and is focused on weight lifting; and Vise, which automates aspects of investment management for financial advisers using artificial intelligence.

More nascent bets include InFeedo, a four-year-old, Gurgaon, India-based company that’s focused on employee retention; Sprout Therapy, a year-old, Bay Area startup that’s using tech to expand healthcare access to autistic children; and Hermeus, a 2.5-year-old, Atlanta, Ga.-based company attempting to build a Mach 5 aircraft that would be capable of making the trip from New York to London in just 90 minutes. (Bling has written checks into both Hermeus’s seed and Series A rounds.)

If it seems like Bling is investing all over the place — at least within the U.S. — it is.

Ling credits his background, where he worked for some of the world’s largest consumer-facing companies but where, internally, he was developing commerce and SaaS tools for the companies’ small and medium-size business customers. Indeed, Ling says some of the only areas that are off-limits for Bling are “rockets, ag tech, biotech or crypto, because we don’t have a comparative advantage in those things.”

If Bling is “pitched on a biotech startup from London, that’s because every biotech investor and every London-based investor has already passed and we’re the dumb money,” he says with a laugh.

As for whether Bling will stay headquartered in the Bay Area, Ling says he’s not sure; he’s considering a move to either Austin or Miami, like a growing number of other founders and investors. He’s worried about the state of San Francisco right now, he suggests. But also, after this very strange year, he’s maybe ready for a change.

From Ling’s perspective, it doesn’t really matter. There’s “still a lot of white space in tech,” no matter where one is investing.

03 Dec 2020

Pave raises millions to bring transparency to startup compensation

Compensation within private venture-backed startups can be a confusing minefield that, if unsuccessfully navigated, can lead to inconsistent salaries and the kind of ambiguity that breeds an unhappy workforce.

Pave, a San Francisco-based startup that recently graduated from Y Combinator, is aiming to end the pay and equity gap with a software tool it developed to make it easier for companies to track, measure and communicate how and what they pay their employees.

The question is whether Silicon Valley, which has a history of pay inequity and gender disparities, is ready for that kind of transparency?

Investors certainly think so. Andreessen Horowitz has poured millions into Pave’s $16 million Series A round, at a post-money valuation of $75 million, confirming our reports from August. The round also includes the a16z Cultural Leadership Fund, Bessemer Venture Partners, Bezos Expeditions (the personal investment company of Jeff Bezos), Dash Fund and Y Combinator.

Kristina Shen, a general partner at a16z, will be joining the board. Marc Andreessen will take a board observer seat.

A rebrand and re-focus

Pave, known until now as Trove, is trying to build an online market of data and real-time tools that bring more fairness in compensation to the startup world. The tools allow a company to track, measure and ultimately communicate compensation on an employee-by-employee basis. It does so by integrating HR tools such as Workday, Carta and Greenhouse into one unified service that CEO Matt Schulman says takes a customer only five minutes to set up.

The service can then help companies figure out how to manage their employees’ pay, from promotion cycles and compensation adjustments to how to award a bonus and how much equity to grant a new employee.

Employees, meanwhile, can see data on their entire compensation package as well as predictive analytics on how they can grow their stake in the company. The tool is called Total Rewards, and its closest competitor, Welcome (which raised $6 million this week) launched a tool with the same name, and same goal.

Pave’s Total Rewards Portal for employees.

Schulman says that all startups struggle with figuring out stock options, equity, benchmarking data and promotion cycles because it’s an offline (and cumbersome) process. Clear communication about these details, though, helps with both hiring and retention.
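As a hypothetical illustration of what moving that benchmarking online involves — not Pave’s actual methodology, and with invented figures — comparing one employee’s pay against anonymized market data is essentially a percentile lookup:

```python
# Hypothetical benchmarking sketch; not Pave's actual methodology.
from bisect import bisect_left

def salary_percentile(salary: float, benchmark: list[float]) -> float:
    # `benchmark` is an anonymized list of salaries for the same role,
    # level and market; the result is the employee's rank within it.
    ranked = sorted(benchmark)
    return 100 * bisect_left(ranked, salary) / len(ranked)

# Invented market data for one role/level.
market = [130_000, 142_000, 150_000, 155_000, 163_000, 170_000, 185_000]
print(f"{salary_percentile(160_000, market):.0f}th percentile")  # 57th percentile
```

The hard part, as the article notes, isn’t the math; it’s persuading companies to contribute trustworthy data in the first place.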

Pave’s biggest challenge is convincing its startup customers to share data on their payment structures. While data is anonymized so employees can’t see their colleagues’ salaries, it does require buy-in from a company to track potential inequity in the first place.

“I imagine there will be some late adopters that are not fully aligned with that vision at first,” Schulman admits. “How can we really change how compensation works as something that has been stagnant for decades upon decades? That’s not an easy challenge.” Right now, Pave is working with companies on a case by case basis to see how much they want to communicate with employees. Long-term, Schulman wants there to be a standard.

Is the industry ready to be benchmarked?

And the founder is optimistic that he can get there. Schulman pointed to Carta, a cap management tool, as an example of widespread adoption.

“There were companies that at first resisted Carta, and they were not comfortable putting all of their records into one centralized database,” he said. “Now, it’s ubiquitous. Every company uses Carta among venture-backed companies.”

But even Carta has struggled with what it wants other companies to do: pay their employees fairly. Carta is currently facing a lawsuit from its former vice president of marketing, Emily Kramer, for gender discrimination. In the lawsuit, Kramer notes that she was paid $50,000 less than her peers, and her equity grant was one-third the number of shares granted to her male counterparts. The company also laid off 16% of its employees, citing a lack of new customers.

If Carta, valued at $3 billion, has difficulties, then an early-stage startup such as Pave will also come up against big hurdles around transparency. The startup is hoping that its new industry-wide benchmark project will help kickstart the conversation and nudge companies in the right direction.

Launching today, Pave has teamed up with the portfolio companies of Bessemer Venture Partners, NEA, Redpoint Ventures and YC to gather compensation data. The data, which is opt-in, will allow Pave to release a compensation benchmark survey to show how companies pay their employees. The survey will be public but will aggregate all company responses, so there is no way to see which company is doing better than others.

Other platforms, such as Glassdoor and AngelList, have tried to measure pay across roles. Schulman says that “companies don’t trust that data” because it’s crowdsourced and therefore carries a survey bias.

The tool would help companies go from doing a D&I analysis once a year to being able to do it consistently, “so they don’t drift away from a fair and equitable state,” he said.

While Pave tries to convince other startups to share intimate information, as a company it is still figuring out how to do the same. The company declined to share the diversity break-down of its team, which grew from five to 13 employees in just months and has a 30-person target by end of year. Based on LinkedIn, Pave’s team skews white and male.

A push from the rise of remote work might make transparency happen sooner rather than later. The rise of distributed workforces has forced companies to start asking questions around compensation, Schulman said.

“How do you pay your San Francisco engineer who wants to move to Wyoming?” Schulman said. “That’s the question that’s on everyone’s mind.” The shift is making compensation a mainstream conversation, and the company has found interest in its service from companies including Allbirds, Checkr, Tide and Allbase. Schulman says early adopters have been bullish about transparency.

Once Pave can figure out how to support venture-backed startups, it’s looking outwards to other geographies and types of businesses.

“There’s 3 billion humans in the world that work in a part of the labor market,” he said. “And right now it’s a black box in how they’re compensated.”

03 Dec 2020

Google now lets anyone contribute to Street View using AR and an app

An update to Google’s Street View app on Android will now let anyone contribute their photos to help enhance Google Maps, the company announced this morning. Using a “connected photos” tool in the new version of the Street View app, users are able to record a series of images as they move down the street or a path. The feature requires an ARCore-compatible Android device, and for the time being, will only support image capture and upload in select geographic regions.

ARCore is Google’s platform for building augmented reality experiences. It works by allowing the phone to sense its environment, including the size and location of all types of surfaces, the position of the phone in relation to the world around it, and the lighting conditions of the environment. This is supported on a variety of Android devices running Android 7.0 (Nougat) or higher.

Meanwhile, Google’s Street View app has been around for half a decade. Initially, it was designed to allow users to share their own panoramic photos to improve the Google Maps experience. But as phones have evolved, so has the app.

The updated version of the Street View app allows users to capture images using ARCore — the same AR technology Google uses for its own Live View orientation experiences in Maps, which helps phones “see” various landmarks so users can get their bearings.

After the images are published in the Street View app, Google will then automatically rotate, position and create a series of connected photos using those images, and put them in the correct place on Google Maps so others can see them.
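Conceptually, “connecting” a series of photos means ordering the captures and linking neighbors that are close in time and space. The sketch below is a simplified illustration of that idea only; Google’s actual pipeline also estimates rotation and refines positions:

```python
# Simplified sketch of linking a photo sequence into a "connected" path.
# Google's real processing also rotates and repositions imagery; this
# just chains neighboring captures by time and GPS distance.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Photo:
    timestamp: float   # seconds since capture started
    lat: float
    lon: float

def haversine_m(a: Photo, b: Photo) -> float:
    # Great-circle distance between two captures, in meters.
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(h))

def connect(photos: list[Photo], max_gap_m: float = 10.0) -> list[tuple[int, int]]:
    # Sort by capture time, then link each photo to the next one if the
    # walker or driver didn't jump more than `max_gap_m` between frames.
    # Returned pairs are indices into the time-ordered sequence.
    ordered = sorted(photos, key=lambda p: p.timestamp)
    return [(i, i + 1) for i in range(len(ordered) - 1)
            if haversine_m(ordered[i], ordered[i + 1]) <= max_gap_m]
```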

It will also apply the same privacy protections to these contributed photos as it does to its own Street View images (the ones captured by driving the Street View car around). This includes blurring people’s faces and license plates, and allowing users to report imagery and other content for review, if needed.

Image Credits: Google

The new system of connected photos won’t be as polished as Google’s own Street View images, but it does make publishing to Street View more accessible. The image capturing process no longer requires a 360-degree camera or other equipment mounted to the top of a car, for example. And that means users who live in more remote regions will be able to contribute to Street View without needing anything more than a supported Android phone and an internet connection.

Google says it will still default to showing its own Street View imagery when it’s available, which will be indicated with a solid blue line. But in the case where there’s no Street View option, the contributed connected photos will appear in the Street View layer as a dotted blue line instead.

Image Credits: Google

The company will also use the data in the photos to update Google Maps with the names and addresses of businesses that aren’t already in the system, including their posted hours, if that’s visible on a store sign, for instance.

During early tests, users captured photos using this technology in Nigeria, Japan and Brazil.

Today, Google says it’s officially launching the connected photos feature in beta in the Street View app. During this public beta period, users will be able to try the feature in Toronto, Canada; New York, NY; and Austin, TX, along with Nigeria, Indonesia and Costa Rica. More regions will be supported in the future as the test progresses, Google says.