Year: 2020

08 Oct 2020

Enhanced computer vision, sensors raise manufacturing stakes for robots as a service

For more than two decades, robotics market commentaries have predicted a shift, particularly in manufacturing, from traditional industrial manipulators to a new generation of mobile, sensing robots, called “cobots.” Cobots are agile assistants that use internal sensors and AI processing to operate tools or manipulate components in a shared workspace, while maintaining safety.

It hasn’t happened. Companies have successfully deployed cobots, but the rate of adoption is lagging behind expectations.

According to the International Federation of Robotics (IFR), cobots sold in 2019 made up just 3% of total industrial robot installations. A report published by Statista projects that cobots’ market share will advance to 8.5% in 2022. That is a fraction of the forecast in a February 2018 study cited by the Robotic Industries Association, which predicted that by 2025, 34% of new robots sold in the U.S. would be cobots.

To see a cobot in action, here’s the Kuka LBR iiwa. To ensure safe operation, cobots come with built-in constraints, like limited strength and speed. Those limitations have also limited their adoption.

As cobots’ market share languishes, standard industrial robots are being retrofitted with computer vision technology, allowing for collaborative work combining the speed and strength of industrial robots with the problem-solving skills and finesse of humans.

This article will document the declining interest in cobots, the reasons for it and the technology that is replacing them. We report on two firms developing computer vision technology for standard robots and describe how developments in 3D vision and so-called “robots as a service” (yes, RaaS) are defining this faster-growing second generation of robots that can work alongside humans.

What are robotics sensing platforms?

08 Oct 2020

Blissfully expands from SaaS management into wider IT services aimed at midmarket

When Blissfully launched in 2016, it was focused on helping companies understand their SaaS usage inside their organizations, but over time the company has seen that there is a wider need, especially in midmarket companies, and today it announced it was expanding into broader IT management.

Company co-founder and CEO Ariel Diaz says that the startup began by helping companies track SaaS usage, eventually expanded into employee onboarding and offboarding, and is now moving into a broader set of IT services.

“Our vision when starting a company was really that IT is being redefined in the age of SaaS. So step one was to help with everything around managing SaaS. And step two is what does that mean in terms of the broader IT management vision,” Diaz told TechCrunch.

Blissfully believed that SaaS was going to take up a bigger and bigger part of IT in terms of mindshare, spend and management, and it turned out to be right. Now, the company feels the time is right to expand its original idea to encompass more of the IT management function.

That has resulted in a newly expanded platform, released today, that includes not only the SaaS management components it has been providing all along, but also four new categories.

For starters they are offering IT asset management. “We are now offering the ability to track not just SaaS applications, but all your IT assets including hardware devices and traditional software,” Diaz said.

Next, they are including help desk management and ticketing capabilities to handle requests that fall outside of their SaaS management workflows. In addition, they are adding role-based access control to allow different people access to various IT management services, which is increasingly essential during the pandemic as people are being forced to troubleshoot and manage various IT issues from home. Finally, the startup is opening up its APIs so that IT can tap into that and build customized functionality or workflows on top of the Blissfully platform.
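Role-based access control of the kind described above boils down to a mapping from roles to permitted actions, checked before each operation. Here is a minimal sketch in Python; the role and action names are illustrative, not Blissfully’s actual API:

```python
# Minimal sketch of role-based access control: map each role to a set of
# permitted actions and check a user's role before allowing an operation.
ROLE_PERMISSIONS = {
    "admin":    {"manage_saas", "manage_assets", "manage_tickets", "use_api"},
    "it_agent": {"manage_assets", "manage_tickets"},
    "employee": {"open_ticket"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role may perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("it_agent", "manage_tickets"))  # True
print(is_allowed("employee", "use_api"))         # False
```

Unknown roles simply map to an empty permission set, so the check fails closed.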

Diaz believes that the company has reached a point of maturity when it comes to SaaS management, and they saw a need in the midmarket to provide these additional IT services that larger organizations tend to get from a company like ServiceNow.

The new services will be available starting today from Blissfully.

08 Oct 2020

Millennial Media’s Paul Palmieri launches Tradeswell, a startup promising to fix e-commerce margins

A new startup called Tradeswell said it’s using artificial intelligence to help direct-to-consumer and e-commerce brands build healthier businesses.

The company is led by Paul Palmieri, who previously took mobile advertising company Millennial Media public and then sold it to TechCrunch’s corporate parent AOL (now Verizon Media). Afterwards, Palmieri founded Grit Capital Partners, but he told me he decided to join Tradeswell as a co-founder and CEO because he was so excited about the vision.

Palmieri said that just as Millennial helped independent app developers get smarter about advertising, Tradeswell gives upstart e-commerce companies the data they need to compete with “the big platform behemoths.”

It’s no secret that a number of direct-to-consumer companies have struggled to make a profit due to challenging unit economics. Palmieri suggested that one reason for this is the fragmentation of their tools and data.

“If you’re selling something like Campbell’s Soup, you want to figure out, how is your tomato soup business and your chicken soup business?” Palmieri said. “Today, brands are saying, ‘How’s my Amazon business? How’s my Shopify business? How’s my Shopify business on Instagram?'”

So rather than relying on those platforms for data, Palmieri suggested brands want an independent platform that they trust to bring everything together, “where it’s a combination of a Bloomberg terminal plus a trading platform.”

Tradeswell’s AI focuses on seven key areas of an e-commerce business: marketing, retail, inventory, logistics, forecasting, lifetime value and financials. Palmieri suggested that in some cases (like ad buying), Tradeswell will replace existing software, while in others it will integrate.

“Think of us as a neural AI layer, where [a brand] might have different platform relationships, which are the fingers, and we’re the AI brain,” he said. “We’re giving brands insights and forecasts: If you make this change, we anticipate XYZ will happen.”

In some cases, like the aforementioned advertising, Tradeswell can also support full automation, so that merchants don’t have to worry about “setting up and tearing down hundreds of campaigns.”

The key, Palmieri said, is that the platform has access to the business’ full financials, so it can optimize for net margins, rather than simply driving the most impressions or clicks or sales.

While Tradeswell is only coming out of stealth mode today, it’s already been working with more than 100 brands. For example, Steve Tracy of Red Monkey Foods and San Francisco Salt Company said in a statement that the startup’s “unique, comprehensive, algorithmic approach has helped us grow sales, identify commercialization opportunities and forecast far more accurately.”

08 Oct 2020

Instacart raises $200M more at $17.7B valuation

Instacart announced today that it has raised $200 million in a new funding round featuring prior investors. D1 Capital and Valiant Peregrine Fund led the investment. Instacart is now worth $17.7 billion, post-money, or $17.5 billion pre-money. The plan is to use the funding to focus on introducing new features and tools to improve the customer experience, and further support Instacart’s enterprise and ads businesses, according to a blog post.
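The relationship between those two valuation figures is simple arithmetic: the post-money valuation is the pre-money valuation plus the new capital raised. Checking the article’s numbers:

```python
# Post-money valuation = pre-money valuation + capital raised.
# Figures from the round, in billions of dollars.
pre_money = 17.5
raised = 0.2  # the $200 million round
post_money = pre_money + raised
print(round(post_money, 1))  # 17.7, matching the reported post-money valuation
```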

Earlier in 2020, Instacart raised $100 million in July and $225 million in June. The June round valued the company at around $13.7 billion, meaning that the unicorn’s new funding round — raised just months later — came at a much higher price.

Instacart, like some other tech and tech-enabled businesses, has seen demand for its service expand during the pandemic. It’s not hard to trace a connection between COVID-19 and its business results, as folks wanting to stay at home have turned to on-demand services to keep themselves safe.

The growth shown by Uber’s food delivery business is another example of this trend.

Instacart’s valuation has more than doubled since its 2018 Series F, when it was worth around $7.9 billion. The pace at which Instacart has created paper value is impressive, though its IPO plans appear murky from the outside, and how much of its COVID bump it will retain when the pandemic ends is not yet clear.

The startup famously turned a profit during one month in Q2, worth around $10 million, per The Information. The same report indicated that Instacart lost around $300 million in 2019. What the company’s full-year profitability profile will look like is not known.

TechCrunch sent a number of questions to the firm, including if it has had any further profitable months in 2020, and how quickly it grew in Q3 2020. The company’s spokespeople did not answer those questions.

“Today’s investment is a testament to the strong conviction our existing investors have in the strength of our teams and the important role Instacart plays for customers, partners, and the entire grocery ecosystem,” Instacart CEO Apoorva Mehta said in a press release. “I’m incredibly proud of our team’s work to scale our business this past year and rise to meet the unprecedented consumer demand and growth.”

Instacart is one of the companies caught up in a regulatory war after California passed AB5, which changed the state’s rules on gig workers. A voter proposition — Prop 22 — that would keep rideshare drivers and delivery workers classified as independent contractors is coming up for a vote in California. Instacart is in favor of the proposition, along with Uber, Lyft, DoorDash and Postmates (now owned by Uber).

Uber, Lyft, Instacart and DoorDash have collectively contributed $184,008,361.46 to the Yes on 22 campaign. Those contributions have been monetary, non-monetary and have come in the form of loans. In September, the four companies each committed another $17.5 million to Yes on Prop 22 in monetary contributions. Of all the measures on this November’s ballot, Yes on Prop 22 has received the most contributions, according to California’s Fair Political Practices Commission.

Beyond Prop 22, Instacart is facing a lawsuit from Washington, D.C. Attorney General Karl A. Racine that alleges the company charged customers millions of dollars in “deceptive service fees” and failed to pay hundreds of thousands of dollars’ worth of sales tax. The suit seeks restitution for customers who paid those service fees, as well as back taxes and interest on taxes owed to D.C. Specifically, it alleges that from September 2016 to April 2018, Instacart misled customers into thinking the 10% service fee was a tip for the delivery person.

Meanwhile, amid the pandemic and wildfires in California, workers have demanded personal protective equipment and better pay, and, most recently, disaster relief.

08 Oct 2020

Waymo starts to open driverless ride-hailing service to the public

Waymo, the Google self-driving-project-turned-Alphabet unit, is beginning to open up its driverless ride-hailing service to the public.

The company said that starting today members of its Waymo One service will be able to take family and friends along on their fully driverless rides in the Phoenix area. Existing Waymo One members will have the first access to the driverless rides — terminology that means no human behind the wheel. However, the company said that in the next several weeks more people will be welcomed directly into the service through its app, which is available on Google Play and the App Store.

Waymo said that 100% of its rides will be fully driverless — which it has deemed its “rider only” mode. That 100% claim requires a bit of unpacking. The public shouldn’t expect hundreds of Waymo-branded Chrysler Pacifica minivans — no human behind the wheel — to suddenly inundate the entire 600-plus square miles of the greater Phoenix area.

Waymo has about 600 vehicles in its fleet. About 300 to 400 of those are in the Phoenix area. Waymo wouldn’t share exact numbers of how many of these vehicles would be dedicated to driverless rides. However, Waymo CEO John Krafcik explained to TechCrunch in a recent interview that there will be various modes operating in the Phoenix area. Some of these will be “rider only,” while other vehicles will still have trained safety operators behind the wheel. Some of the fleet will also be used for testing.

“We’re just ready from every standpoint,” Krafcik told TechCrunch. “And how do we know we’re ready? We’ve had our wonderful group of early riders, who’ve helped us hone the service, obviously not from a safety standpoint because we’ve had the confidence on the safety side for some time, but rather more for the fit of the product itself.” He added that these early riders helped the company determine if the product was “delivering satisfaction and delight for them.”

Later this year, Waymo will relaunch rides with a trained vehicle operator to add capacity and allow it to serve a larger geographic area. Krafcik said the company is in the process of adding in-vehicle barriers between the front row and rear passenger cabin for in-vehicle hygiene and safety.

Waymo operates in an area of about 100 square miles. The driverless or “rider only” service area that will be offered to Waymo One members is about 50 square miles, Krafcik said.

Despite the various caveats, this is still a milestone — one of many the company has achieved in the past decade. The past five years have been particularly packed, starting with Steve Mahan, who is legally blind, taking the “first” driverless ride in the company’s Firefly prototype on Austin’s city streets in 2015. More than a dozen journalists experienced driverless rides in 2017 on a closed course at Waymo’s testing facility in Castle. Then last November, TechCrunch took one of the first driverless rides in a Waymo Pacifica minivan along the public streets of a Phoenix suburb.

The company scaled its commercial product even as these demos and testing continued. In 2017, Waymo launched its early rider program, which let vetted members of the public, who had signed non-disclosure agreements, hail its self-driving cars in the Phoenix area. Those autonomous vehicles all had human safety operators behind the wheel.

Waymo then launched Waymo One, a self-driving ride-hailing service aimed at public use, no NDA strings attached. But again, those rides all had human safety operators in the driver’s seat, ready to take over if needed. Waymo slowly moved its early rider program members into the more open Waymo One service. It also started experimenting with charging for rides and expanded its footprint, or geofenced service area. The Waymo One service (with human safety operators) covers about 100 square miles in Phoenix suburbs like Chandler.

The first meaningful signs that Waymo was ready to put people in vehicles without human safety operators popped up last fall when members of its early rider program received an email indicating that driverless rides would soon become available.

And they did. These driverless rides were limited and free, and importantly, still fell under the early rider program, which had that extra NDA protection. Waymo slowly scaled the program until about 5% to 10% of its total rides in 2020 were fully driverless for its exclusive group of early riders under NDA. Then COVID-19 hit and the service was halted. The company has continued testing with its safety drivers in Arizona and California. That has raised some concerns among those workers about the dual risk of catching COVID-19 and dealing with air quality issues caused by wildfires in California.

Waymo said it has added new safety protocols due to COVID-19, including requiring users to wear masks, having hand sanitizer in all vehicles and conducting what Krafcik described as a cabin flush — essentially a four- to fivefold increase in air volume sent through the vehicle — after every ride.

Krafcik also said Waymo will soon add the all-electric Jaguar I-Pace to the mix, first testing them on public roads and then adding the vehicles to the early rider program.

08 Oct 2020

General Motors finally gets serious about in-car tech, taps the Unreal Engine for next-gen interface

The upcoming GMC Hummer EV will feature a new in-car user interface powered by the Unreal Engine. This powerful platform underpins the latest video games and is well-suited to provide vehicle occupants with a dynamic and robust experience.

Epic Games made the announcement yesterday while unveiling the latest development tools for its human-machine interface program.

Here’s the ugly truth: General Motors’ current crop of in-car user interfaces is among the worst on the market. That’s surprising for one of the world’s leading auto manufacturers, but the infotainment systems in Chevy, Buick, GMC and Cadillac vehicles constantly underwhelm me: they are slow, boring and lack the advanced features found in competing vehicles.

The Unreal Engine is a powerful platform and should provide GM’s engineers with plenty of space to accommodate the latest features and interfaces car shoppers can find elsewhere. As Epic Games, the company behind the Unreal Engine, explains, the platform features a comprehensive set of developer tools that should improve designers’ and engineers’ workflow.

This news doubles down on the notion that the Hummer EV is a pivotal product for General Motors. The company unveiled the project at the beginning of 2020 with a Super Bowl ad spot and has since revealed little about the upcoming electric vehicle.

It’s unclear if GM intends to use the Unreal Engine in additional vehicles.

08 Oct 2020

The Zebra reaches $100M run rate, turns profitable as insurtech booms

From a cluster of insurance marketplace startups raising capital earlier this year, to neoinsurance provider Lemonade going public this summer at a strong valuation, Hippo’s huge new round and Root’s impending unicorn IPO, 2020 has proven to be a busy year for startups and other growth-oriented private tech companies focused on insurance.

That news cycle continues today, with The Zebra announcing that it has reached a roughly $100 million run rate, and, perhaps even more notably, that it has turned profitable.

TechCrunch most recently covered the car and home insurance marketplace startup in February, when it raised the first $38.5 million of an Accel-led Series C eventually worth $43.5 million. As we noted at the time, the startup joined “Insurify ($23 million), Gabi ($27 million) and Policygenius ($100 million) in raising new capital this year.”

The Zebra released a number of financial performance metrics as part of its Series C cycle, including that it recorded revenues of $37 million in 2019, and that it had reached a $60 million annual run rate around the time of its Series C. The Zebra also said that it could double in size this year, putting it above a $100 million run rate by the end of 2020.

With that history in hand, let’s talk about the company’s more recent performance.

A changing market

According to the company, The Zebra recorded net revenue of $6 million in May 2020. That number grew to around $8 million in September. For those of you able to multiply, $8 million times 12 is $96 million, or a hair under $100 million. According to a call with The Zebra’s CEO Keith Melnick, the company’s September was very close to $8.3 million, a figure that would put it on a $100 million run rate.
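The run-rate math here is simply one month’s net revenue annualized. Using the CEO’s September figure:

```python
# Annualized run rate = monthly net revenue x 12.
monthly_net_revenue = 8.3  # September 2020 net revenue, in $ millions (per Melnick)
run_rate = monthly_net_revenue * 12
print(round(run_rate, 1))  # 99.6, i.e. roughly a $100 million run rate
```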

Given that our $100 million ARR club has a history of granting startups a little wiggle room when it comes to their size, it seems perfectly fine to say that The Zebra has reached a revenue scale of $100 million; at its current rate of growth, even if its final September revenue tally is a hair light, the company should reach a nine-figure topline pace in October.

According to Melnick, while the bulk of The Zebra’s revenue isn’t recurring, a growing portion of it is. Per the CEO, around 2-5% of The Zebra’s revenue was recurring last year, a figure that he said is up to around 10% today. (If The Zebra binds an insurance policy itself, and that policy is renewed, its commissions can recur.)

What drove the company’s quick 2020 growth? In part, the insurance market changed, with insurance networks that depended on in-person sales seeing their ability to drive business slow thanks to COVID-19. Insurance marketplaces like The Zebra stepped in to assist, helping move some offline demand online. Melnick detailed that dynamic to TechCrunch, adding that when certain advertising channels saw demand fall, his company was able to leverage inexpensive inventory.

A number of factors appear to have added to The Zebra’s rapid growth thus far in 2020. Our next question is whether other, related players in the insurtech startup space have seen similar acceleration. More on that in a few days.

Finally, regarding The Zebra, the company said that it is now profitable. Of course, profit is a squishy word in 2020, so we wanted to know precisely what the company meant by the statement. Per the company’s CEO, it is generating positive net income, the gold standard for profitability, as the metric is inclusive of all costs, including the non-cash expenses that startups tend to strip out of their numbers to make the results look better than they really are.

If other players in the insurtech space are surfing similar trajectories, all that capital that went into the sector around the start of the year is going to appear prescient.

08 Oct 2020

Greycroft, Lerer Hippeau and Audible back audio measurement startup Veritonic

Veritonic is announcing that it has raised $3.2 million in Series A funding led by Greycroft, with participation from Lerer Hippeau and Amazon-owned audiobook service Audible.

CEO Scott Simonelli, who founded the New York startup with COO Andrew Eisner and CTO Kevin Marshall, told me that his goal is to create a new category of “audio intelligence” — namely, measuring and predicting the effectiveness of any piece of audio content or advertising.

The company is focused on marketing initially, with its first product, Creative Measurement, analyzing any audio ad and showing marketers how it scores compared to similar content, as well as identifying which parts of the audio are most effective. And Veritonic is launching a new product, Competitive Intelligence, which helps businesses see how and where their competitors are spending on advertising and provides alerts when those competitors launch a new ad.

Simonelli said that until now, audio measurement has been limited to things like creating audience panels with a few hundred people, which simply doesn’t scale given the enormous growth in the audio market.

Veritonic, on the other hand, has analyzed thousands of audio files, correlating the content with data about how people responded and using that analysis to predict how people will respond to new audio. Simonelli said the company can add more “fuel” by going out and gathering more human response data, but even without additional data, it can provide an instant prediction of an ad or campaign’s effectiveness.

Simonelli also noted that Veritonic has spent the past five years developing technology that’s specifically attuned to the challenges of measuring audio effectiveness — like the fact that audio is experienced over time and, even more than other media, needs to be memorable.

“We can look at a sonic profile and predict and evaluate how somebody is going to respond,” he said.

The ultimate goal, he added, is to create the “benchmark for audio advertising,” which means working with a variety of players in the industry. For example, he said that when you look at other audio investments in Greycroft’s portfolio (such as podcast network Wondery or podcast analytics company Podsights): “Veritonic makes every one of those audio investments more valuable.”

Veritonic’s made pretty good progress on that goal already, with partners including Pandora, SiriusXM and NPR, and brand clients like Pepsi, Visa and Subway. It was previously backed by Newark Venture Partners (whose founder Don Katz previously founded Audible).

“We are excited to be a part of Veritonic’s continued growth and success,” said Greycroft’s Alan Patricof in a statement. “I’m personally very passionate about the future of voice, and the team at Veritonic deeply understands how to use audio to drive recall, stickiness and brand awareness — which is hugely important in a highly-competitive consumer brand landscape.”

Simonelli added that Veritonic will use the new funding to expand its data science and sales teams. Eventually, he hopes to start analyzing non-advertising content as well — for example, since Audible is an investor, he said, “Analyzing every audiobook on the planet is something we’re ready for and excited to do.”

08 Oct 2020

Microsoft is building a price comparison engine into its Edge browser

With its Edge browser now stable, Microsoft’s current focus for its Chromium-based browser is to build features that differentiate it from the competition.

With the holiday season coming up fast (though who knows what that will actually look like this year), it’s maybe no surprise that one of the first new features the company is announcing is a price comparison tool as part of its ‘Collections’ bookmarking service. That was always an obvious next step, but it’s nice to see Microsoft add some more functionality here.

Also coming to Edge is the general availability of its integration between Collections and Pinterest, as well as a new screenshot tool for capturing web content, improved PDF support and an update to its Teleparty extension for streaming TV shows in sync with your friends and chatting about them in your browser’s sidebar.

In addition, you can now also start free video meetings with your friends and family (or co-workers), right from the browser through an integration with Microsoft’s Meet Now service. You can have up to 50 people in these video chats, share screens and record these sessions. While this is rolling out in Edge first, it’s also coming to Outlook on the web and the Windows 10 taskbar in the next few weeks.

You can’t say Microsoft held back on new features with this release, but the highlight is surely the new price comparison engine.

“We’ve been talking about how Collections is a great feature for anyone who wants to do research — whether that’s research in education or work, but a lot of people do research for shopping,” said Divya Kumar, Microsoft’s director of product management for its browser and search tools. “We’ve really started to talk about this rhythm of, ‘okay, if users drop things into Collections, we should really be smart enough to give you the data that you’re looking for.’ This felt like a really natural next step for us to do.”

As long as Edge — through its connection with Microsoft Bing’s existing price comparison engine — recognizes that you’re saving a product page, maybe from Amazon or Best Buy, it’ll show you the option to compare prices right in the browser’s toolbar. The next logical step is for the team to add alerts when prices change, and Kumar tells me that this is on the roadmap, together with several other features the team wasn’t ready to discuss yet.
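A browser could make that “is this a product page?” call in any number of ways. As an illustrative sketch (not Microsoft’s actual implementation, and with made-up retailer patterns), one simple heuristic matches the saved URL’s host and path against known retail sites:

```python
from urllib.parse import urlparse

# Hypothetical heuristic: treat a URL as a product page if its host is a
# known retailer and its path contains a product-listing marker.
RETAIL_HOSTS = {"www.amazon.com", "www.bestbuy.com", "www.walmart.com"}
PRODUCT_PATH_MARKERS = ("/dp/", "/site/", "/ip/")

def looks_like_product_page(url: str) -> bool:
    parts = urlparse(url)
    return parts.netloc in RETAIL_HOSTS and any(
        marker in parts.path for marker in PRODUCT_PATH_MARKERS
    )

print(looks_like_product_page("https://www.amazon.com/dp/B08XYZ"))  # True
print(looks_like_product_page("https://example.com/blog/post"))     # False
```

A real engine would presumably also inspect page content and structured product metadata, not just the URL.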

Microsoft says it does not get affiliate fees when you buy through one of the links in Collections.

Talking about shopping, the team is also taking its Bing Rebates cashback program out of beta now (after shutting down a somewhat similar program a while back). The company signed up the likes of Walmart, Expedia, Walgreens and Nvidia for this program (though Nvidia only gives you a whopping 0.5% cashback). Still, it may just get some people to use Bing, though you have to sign up as a Microsoft Rewards member to participate.

“Rebates is a great part of the shopping story that we’re trying to land in terms of enabling smarter shopping experiences in the browser,” said Kumar.

In addition, through its Give with Bing program, you can now use your Microsoft Rewards points to donate to charitable organizations, and until the end of the year, Microsoft will match your gift. This is live in the U.S., UK, Canada, Australia, France, Italy, Germany and Spain.

As somebody who works on the web and takes screenshots all day, the updated screenshotting tool is also worth a look. Edge could already help you take screenshots, but until now, all you could do was copy what was on your screen. Now, you can also grab content from further down the page and then save it or share it directly from Edge.

If you’re an iOS user and have switched to Edge there — or thought about it — the news here is that you can now select Edge as your default browser there, a feature Apple finally enabled with the launch of iOS 14.

08 Oct 2020

Headroom, which uses AI to supercharge videoconferencing, raises $5M

Videoconferencing has become a cornerstone of how many of us work these days — so much so that one leading service, Zoom, has graduated into verb status because of how much it’s getting used.

But does that mean videoconferencing works as well as it should? Today, a new startup called Headroom is coming out of stealth, tapping into a battery of AI tools — computer vision, natural language processing and more — on the belief that the answer to that question is a clear — no bad WiFi interruption here — “no.”

Headroom not only hosts videoconferences, but then provides transcripts, summaries with highlights, gesture recognition, optimized video quality and more. Today it’s announcing that it has raised a seed round of $5 million as it gears up to launch its freemium service into the world.

You can sign up to the waitlist to pilot it, and get other updates here.

The funding is coming from Anna Patterson of Gradient Ventures (Google’s AI venture fund); Evan Nisselson of LDV Capital (a specialist VC backing companies building visual technologies); Yahoo founder Jerry Yang, now of AME Cloud Ventures; Ash Patel of Morado Ventures; Anthony Goldbloom, the co-founder and CEO of Kaggle.com; and Serge Belongie, Cornell Tech associate dean and professor of computer vision and machine learning.

It’s an interesting group of backers, but that might be because the founders themselves have a pretty illustrious background with years of experience using some of the most cutting-edge visual technologies to build other consumer and enterprise services.

Julian Green — a British transplant — was most recently at Google, where he ran the company’s computer vision products, including the Cloud Vision API that was launched under his watch. He came to Google by way of its acquisition of his previous startup Jetpac, which used deep learning and other AI tools to analyze photos to make travel recommendations. In a previous life, he was one of the co-founders of Houzz, another kind of platform that hinges on visual interactivity.

Russian-born Andrew Rabinovich, meanwhile, spent the last five years at Magic Leap, where he was the head of AI, and before that, the director of deep learning and the head of engineering. Before that, he too was at Google, as a software engineer specializing in computer vision and machine learning.

You might think that leaving their jobs to build an improved videoconferencing service was an opportunistic move, given the huge surge of use that the medium has had this year. Green, however, tells me that they came up with the idea and started building it at the end of 2019, when the term “Covid-19” didn’t even exist.

“But it certainly has made this a more interesting area,” he quipped, adding that it did make raising money significantly easier, too. (The round closed in July, he said.)

Magic Leap had long been in limbo — AR and VR have proven incredibly tough to build businesses around, especially in the short to medium term, even for a startup with hundreds of millions of dollars in VC backing — and could probably have used some more interesting ideas to pivot to. And Google is Google, with everything in tech having an endpoint in Mountain View. Given all that, it’s curious that the pair decided to strike out on their own to build Headroom rather than pitch the tech to their respective previous employers.

Green said the reasons were two-fold. The first has to do with the efficiency of building something when you are small. “I enjoy moving at startup speed,” he said.

And the second has to do with the challenges of building things on legacy platforms versus fresh, from the ground up.

“Google can do anything it wants,” he replied when I asked why he didn’t think of bringing these ideas to the team working on Meet (or Hangouts if you’re a non-business user). “But to run real-time AI on video conferencing, you need to build for that from the start. We started with that assumption,” he said.

The first iteration of the product will include features that will automatically take transcripts of the whole conversation, with the ability to use the video replay to edit the transcript if something has gone awry; offer a summary of the key points that are made during the call; and identify gestures to help shift the conversation.

And Green tells me that they are already also working on features that will be added into future iterations. When the videoconference uses supplementary presentation materials, those can also be processed by the engine for highlights and transcription too.

And another feature will optimize the pixels that you see for much better video quality, which should come in especially handy when you or the person/people you are talking to are on poor connections.

“You can understand where and what the pixels are in a video conference and send the right ones,” he explained. “Most of what you see of me and my background is not changing, so those don’t need to be sent all the time.”
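The idea Green describes — sending only the pixels that changed between frames — can be sketched in a few lines. This is a toy illustration of the concept, not Headroom’s actual pipeline:

```python
import numpy as np

def changed_pixels(prev: np.ndarray, curr: np.ndarray, threshold: int = 10):
    """Return a mask of pixels that differ between frames, and their new values."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    mask = diff > threshold
    return mask, curr[mask]

prev = np.zeros((4, 4), dtype=np.uint8)  # static background
curr = prev.copy()
curr[1, 1] = 200                         # one pixel changed, e.g. a moving speaker

mask, values = changed_pixels(prev, curr)
print(int(mask.sum()))  # 1: only one pixel needs to be re-sent
```

Real video codecs do something far more sophisticated (motion estimation over blocks rather than per-pixel thresholds), but the bandwidth saving comes from the same observation: most of the frame doesn’t change.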

All of this taps into some of the more interesting aspects of sophisticated computer vision and natural language algorithms. Creating a summary, for example, relies on technology that can suss out not just what you are saying, but which parts of what you or someone else is saying are the most important.
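As a rough illustration of the extractive end of that problem, sentences can be scored by how much of the conversation’s high-frequency vocabulary they contain, with the top scorers kept as the summary. Headroom presumably uses far richer language models, but a minimal sketch looks like this:

```python
from collections import Counter

def top_sentence(text: str) -> str:
    """Return the sentence whose words are, on average, most frequent overall."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    # Word frequencies across the whole text stand in for "importance."
    freq = Counter(w.lower() for s in sentences for w in s.split())

    def score(sentence: str) -> float:
        words = sentence.split()
        return sum(freq[w.lower()] for w in words) / len(words)

    return max(sentences, key=score)

notes = "We shipped the beta. The beta needs beta testers. Lunch was good"
print(top_sentence(notes))  # The beta needs beta testers
```

The sentence repeating the conversation’s dominant topic wins; modern summarizers replace the frequency count with learned representations but keep the same select-the-salient-parts structure.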

And if you’ve ever been on a videocall and found it hard to make it clear you’ve wanted to say something, without straight-out interrupting the speaker, you’ll understand why gestures might be very useful.

But they can also come in handy if a speaker wants to know if he or she is losing the attention of the audience: the same tech that Headroom is using to detect gestures for people keen to speak up can also be used to detect when they are getting bored or annoyed and pass that information on to the person doing the talking.

“It’s about helping with EQ,” he said, with what I’m sure was a little bit of his tongue in his cheek, but then again we were on a Google Meet, and I may have misread that.

And that brings us to why Headroom is tapping into an interesting opportunity. At their best, when they work, tools like these not only supercharge videoconferences, but they have the potential to solve some of the problems you may have come up against in face-to-face meetings, too. Building software that actually might be better than the “real thing” is one way of making sure that it can have staying power beyond the demands of our current circumstances (which hopefully won’t be permanent circumstances).