Year: 2019

26 Nov 2019

New Amazon capabilities put machine learning in reach of more developers

Today, Amazon announced a new approach that it says will put machine learning technology in reach of more developers and line of business users. Amazon has been making a flurry of announcements ahead of its re:Invent customer conference next week in Las Vegas.

While the company offers plenty of tools for data scientists to build machine learning models and process, store and visualize data, it wants to put that capability directly in the hands of developers with the help of the popular database query language, SQL.

By taking advantage of tools like Amazon QuickSight, Aurora and Athena in combination with SQL queries, developers can have much more direct access to machine learning models and underlying data without any additional coding, says Matt Wood, VP of artificial intelligence at AWS.

“This announcement is all about making it easier for developers to add machine learning predictions to their products and their processes by integrating those predictions directly with their databases,” Wood told TechCrunch.

For starters, Wood says developers can take advantage of Aurora, the company’s MySQL- and PostgreSQL-compatible database, to build a simple SQL query into an application, which will automatically pull the data into the application and run whatever machine learning model the developer associates with it.

The second piece involves Athena, the company’s serverless query service. As with Aurora, developers can write a SQL query — in this case, against any data store — and based on a machine learning model they choose, return a set of data for use in an application.
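The announcement doesn’t spell out the exact Aurora or Athena syntax, but the underlying pattern — exposing a model behind an ordinary SQL function so a plain SELECT returns predictions — can be sketched with SQLite’s user-defined functions as a rough, runnable stand-in (the `lead_score` stub below is invented for illustration; it is not an AWS API):

```python
import sqlite3

# Stand-in for a deployed model endpoint (e.g. one hosted on SageMaker).
def lead_score(visits, purchases):
    """Toy scoring stub: more activity -> higher score, capped at 1.0."""
    return min(1.0, 0.1 * visits + 0.3 * purchases)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE leads (name TEXT, visits INT, purchases INT)")
conn.executemany("INSERT INTO leads VALUES (?, ?, ?)",
                 [("alice", 12, 2), ("bob", 1, 0)])

# Register the "model" as a SQL function, then call it from a plain query --
# the same shape as invoking an ML model directly from a SELECT statement.
conn.create_function("lead_score", 2, lead_score)
rows = conn.execute(
    "SELECT name, lead_score(visits, purchases) AS score "
    "FROM leads ORDER BY score DESC").fetchall()
print(rows)  # [('alice', 1.0), ('bob', 0.1)]
```

The point is the shape of the integration: the query author never touches the model’s plumbing, only a function name inside SQL.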

The final piece is QuickSight, Amazon’s data visualization tool. After using one of the other tools to return a set of data, developers can build visualizations from it inside whatever application they are creating.

“By making sophisticated ML predictions more easily available through SQL queries and dashboards, the changes we’re announcing today help to make ML more usable and accessible to database developers and business analysts. Now anyone who can write SQL can make — and importantly use — predictions in their applications without any custom code,” Amazon’s Matt Asay wrote in a blog post announcing these new capabilities.

Asay added that this approach is far easier than what developers had to do in the past to achieve this. “There is often a large amount of fiddly, manual work required to take these predictions and make them part of a broader application, process or analytics dashboard,” he wrote.

As an example, Wood offers a lead-scoring model you might use to pick the most likely sales targets to convert. “Today, in order to do lead scoring you have to go off and wire up all these pieces together in order to be able to get the predictions into the application,” he said. With this new capability, you can get there much faster.

“Now, as a developer I can just say that I have this lead scoring model which is deployed in SageMaker, and all I have to do is write literally one SQL statement that I do all day long into Aurora, and I can start getting back that lead scoring information. And then I just display it in my application and away I go,” Wood explained.

As for the machine learning models, these can come pre-built from Amazon, be developed by an in-house data science team or purchased in a machine learning model marketplace on Amazon, says Wood.

Today’s announcements from Amazon are designed to simplify machine learning and data access, reducing the amount of coding required to get from query to answer.

26 Nov 2019

Android’s Ambient Mode will soon come to ‘select devices’

You’ve probably heard murmurs about Google’s forthcoming Ambient Mode for Android. The company first announced this feature, which essentially turns an Android device into a smart display while it’s charging, in September. Now, in a Twitter post, Google confirmed that it will launch soon, starting with a number of select devices that run Android 8.0 or later.

At the time, Google said Ambient Mode was coming to the Lenovo Smart Tab M8 HD and Smart Tab tablets, as well as the Nokia 7.2 and 6.2 phones. According to The Verge, it’ll also come to Sony, Nokia, Transsion and Xiaomi phones, though Google’s own Pixels aren’t on the company’s list yet.

“The ultimate goal for proactive Assistant is to help you get things done faster, anticipate your needs and accomplish your tasks as quickly and as easily as possible,” said Google Assistant product manager Arvind Chandrababu in the announcement. “It’s fundamentally about moving from an app-based way of doing things to an intent-based way of doing things. Right now, users can do most things with their smartphones, but it requires quite a bit of mental bandwidth to figure out, hey, I need to accomplish this task, so let me backtrack and figure out all the steps that I need to do in order to get there.”

Those are pretty lofty goals. In practice, what this means, for now, is that you will be able to set an alarm with just a few taps from the ambient screen, see your upcoming appointments, turn off your connected lights and see a slideshow of your images in the background. I don’t think that any of those tasks really consumed a lot of mental bandwidth in the first place, but Google says it has more proactive experiences planned for the future.

26 Nov 2019

Leading robotics VCs talk about where they’re investing

The Valley’s affinity for robotics shows no signs of cooling. Technical enhancements through innovations like AI/ML, compute power and big data utilization continue to drive new performance milestones, efficiencies and use cases.

Despite the old saying, “hardware is hard,” investment in the robotics space continues to expand. Money is pouring in across robotics’ billion-dollar sub verticals, including industrial and labor automation, drone delivery, machine vision and a wide range of others.

According to data from PitchBook and Crunchbase, 2018 saw new highs for the number of venture deals and total invested capital in the space, with roughly $5 billion in investment across nearly 400 deals. With robotics well on its way to set new investment peaks again in 2019, we asked 13 leading VCs who work at firms spanning early to growth stages to share what’s exciting them most and where they see opportunity in the sector:

Participants discuss the compelling business models for robotics startups (such as “Robots as a Service”), current valuations, growth tactics and key robotics KPIs, while also diving into key trends in industrial automation, human replacement, transportation, climate change, and the evolving regulatory environment.

Shahin Farshchi, Lux Capital

Which trends in robotics are you most excited about from an investing perspective?

The opportunity to unlock human superpowers:

  • Increase productivity to enhance creativity leading to new products and businesses.
  • Automating dangerous tasks and eliminating undesirable, dangerous jobs in mining, manufacturing, and shipping/logistics.
  • Making driving, the deadliest mode of transport, 100% safe.

How much time are you spending on robotics right now? Is the market under-heated, overheated, or just right?

  • Three-quarters of the new opportunities I look at involve some sort of automation.
  • The market for robot startups attempting direct human labor replacement, floor-sweeping, and dumb-waiter robots, and robotic lawnmowers and vacuums is OVER heated (too many startups).
  • The market for robot startups that assist human workers, increase human productivity, and automate undesirable human tasks is UNDER heated (not enough startups).

Are there startups that you wish you would see in the industry but don’t? Plus any other thoughts you want to share with TechCrunch readers.

I want to see more founders that are building robotics startups that:

  • Solve LATENT pain points in specific, well-understood industries (vs. building a cool robot that can do cool things).
  • Focus on increasing HUMAN productivity (vs. trying to replace humans).
  • Are solving for building interesting BUSINESSES (vs. emphasizing cool robots).

Kelly Chen, DCVC

Three years ago, the most compelling companies to us in the industrial space were in software. We now spend significantly more time in verticalized AI and hardware. Robotic companies we find most exciting today are addressing key driver areas of (1) high labor turnover and shortage and (2) new research around generalization on the software side. For many years, we have seen some pretty impressive science projects out of labs, but once you take these into the real world, they fail. In these changing environmental conditions, it’s crucial that robots work effectively in-the-wild at speeds and economics that make sense. This is an extremely difficult combination of problems, and we’re now finally seeing it happen. A few verticals we believe will experience a significant overhaul in the next 5 years include logistics, waste, micro-fulfillment, and construction.

With this shift in robotic capability, we’re also seeing a shift in customer sentiment. Companies who are used to buying outright machines are now more willing to explore RaaS (Robot as a Service) models for compelling robotic solutions – and that repeat revenue model has opened the door for some formerly enterprise software-only investors. On the other hand, companies exploring robotics in place of tasks with high labor shortages, such as trucking or agriculture, are more willing to explore per hour or per unit pick models.

Adoption won’t be overnight, but in the medium term, we are very enthusiastic about the ways robotics will transform industries. We do believe investing in this space requires the right technical know-how and network to evaluate and support companies, so momentum investors looking to dip their hand into a hot space may be disappointed.

Rob Coneybeer, Shasta Ventures

We’re entering the early stages of the golden age of robotics. Robotics is already a huge, multibillion-dollar market – but today that market is dominated by industrial robotics, such as welding and assembly robots found on automotive assembly lines around the world. These robots repeat basic tasks, over and over, and are usually separated by caged walls from humans for safety. However, this is rapidly changing. Advances in perception, driven by deep learning, machine vision and inexpensive, high-performance cameras allow robots to safely navigate the real world, escape the manufacturing cages, and closely interact with humans.

I think the biggest opportunities in robotics are those which attack enormous markets where it’s difficult to hire and retain labor. One great example is long-haul trucking. Highway driving represents one of the easiest problems for autonomous vehicles, since the lanes tend to be well-marked, the roads have gentle curves, and all traffic runs in the same direction. In the United States alone, long haul trucking is a multi-hundred billion dollar market every year. The customer set is remarkably scalable with standard trailer sizes and requirements for shipping freight. Yet at the same time, trucking companies have trouble hiring and retaining drivers. It’s the perfect recipe for robotic opportunity.

I’m intrigued by agricultural robots. I’ve seen dozens of companies attacking every part of the farming equation – from field clearing and preparation, to seeding, to weeding, applying fertilizer, and eventually harvesting. I think there’s a lot of value to be “harvested” here by robots, especially since seasonal field labor is becoming harder to find and increasingly expensive. One enormous challenge in this market, however, is that growing seasons mean that the robotic machinery has a lot of downtime and the cost of equipment isn’t as easily amortized in other markets with higher utilization. The other big challenge is that fields are very, very tough on hardware and electronics due to environmental conditions like rain, dust and mud.

There are a ton of important problems to be solved in robotics. The biggest open challenges in my mind are locomotion and grasping. Specifically, I think that for in-building applications, robots need to be able to do all the things humans can do – opening and closing doors, climbing stairs, and picking items off of shelves and putting them down gently. Plenty of startups have tackled subsets of these problems, but to date no one has built a generalized solution. To be fair, to get to parity with humans on generalized locomotion and grasping, it’s probably going to take another several decades.

Overall, I feel like the funding environment for robotics is about right, with a handful of overfunded areas (like autonomous passenger vehicles). I think that the most overlooked near-term opportunity in robotics is teleoperation. Specifically, pairing fully automated robotic operations with occasional human remote operation of individual robots. Starship Technologies is a perfect example of this. Starship is actively deploying local delivery robots around the world today. Their first major deployment is at George Mason University in Virginia. They have nearly 50 active robots delivering food around the campus. They’re autonomous most of the time, but when they encounter a problem or obstacle they can’t solve, a human operator in a teleoperation center manually controls the robot remotely. At the same time, Starship tracks and prioritizes these problems for engineers to solve, incrementally reducing the number of problems the robots can’t solve on their own. I think people view robotics as a “zero or one” solution when in fact there’s a world where humans and robots work together for a long time.
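The fallback loop Coneybeer describes — autonomous by default, escalating to a human only on failure, and logging each failure so engineers can prioritize fixes — is simple to sketch. The task names and toy handlers below are invented for illustration, not Starship’s actual system:

```python
from collections import Counter

failure_log = Counter()  # which obstacles trip the robots up most often

def autonomous_step(task):
    # Toy autonomy: the robot can handle everything except blocked crosswalks.
    return task != "blocked_crosswalk"

def teleop_step(task):
    # Stand-in for a human operator driving the robot remotely.
    return True

def run(task):
    if autonomous_step(task):
        return "autonomous"
    failure_log[task] += 1   # queue the failure for engineers to prioritize
    teleop_step(task)
    return "teleoperated"

results = [run(t) for t in ["sidewalk", "blocked_crosswalk", "sidewalk"]]
print(results)                    # ['autonomous', 'teleoperated', 'autonomous']
print(failure_log.most_common(1)) # [('blocked_crosswalk', 1)]
```

Over time, fixes driven by `failure_log` shrink the set of tasks that ever reach the human operator, which is exactly the “humans and robots work together” trajectory described above.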

26 Nov 2019

NYSE proposes big change to direct listings

The New York Stock Exchange filed paperwork this morning with the U.S. Securities and Exchange Commission to allow companies to raise capital as part of a direct listing.

Direct listings are a way for companies to go public by selling existing shares held by insiders, employees and investors directly to the market, rather than the traditional method of issuing new shares. Direct listings have become increasingly popular since Spotify’s 2018 exit, which allowed its employees immediate liquidity, removed preferred access from bankers and allowed for market-driven price discovery. Companies, like Spotify, that opt to complete a direct listing are able to bypass the financial roadshow, thus avoiding some of Wall Street’s exorbitant fees. Historically, however, these companies have not been able to raise fresh capital as part of the process.

The NYSE’s new proposal seeks to change that. Specifically, the stock exchange plans to amend Chapter One of the Listed Company Manual, which outlines the NYSE’s initial listing requirements for companies completing initial public offerings or direct listings. If the amendment is approved (the NYSE is subject to the regulatory oversight of the SEC), companies going public on the NYSE will be permitted to raise capital through a direct listing.

The proposed hybrid model is likely to appeal to Silicon Valley tech startups, which have grown more familiar with the innovative route to the public markets following Spotify and Slack’s direct listings. On the backs of these exits, tech industry leaders have touted direct listings as the latest and greatest path to the public markets. Venture capitalist Bill Gurley, in particular, has encouraged companies to consider the method. Meanwhile, Silicon Valley darling Airbnb, which has stated its intent to go public in 2020, is said to be considering a direct listing rather than a traditional IPO.

Gurley, who has expressed his discontent with bankers’ inability to adequately price IPOs, recently hosted a one-day conference focused on direct listings titled “Direct Listings: A Simpler and Superior Alternative to the IPO.” The event was attended by members of tech’s elite, including Sequoia Capital’s Mike Moritz and Spotify chief financial officer Barry McCarthy.

“Most people are afraid of backlash from the banks so they don’t speak out,” Gurley told CNBC earlier this year of his decision to publicly advocate for direct listings. “I’m at a point in my career where I can handle the heat.”

26 Nov 2019

Gorgias raises $14M to help e-commerce companies deliver faster (and more lucrative) customer service

Gorgias, a startup offering artificial intelligence tools for customer service and support, is announcing that it has raised $14 million in Series A funding.

Co-founder and CEO Romain Lapeyre told me that the startup (whose name is pronounced “gorgeous”) is taking advantage of a broader shift as brands are looking to sell directly to consumers, rather than going through intermediaries like Amazon — for example, he pointed to Nike’s recent decision to pull its products from Amazon.

As brands make this change, Lapeyre (pictured above with his co-founder and CTO Alex Plugaro) said they need a “bundle of tools” to build their online business, and “each little part of the bundle is separate.” So they might create a store with Shopify, accept payments via Stripe — and naturally, Lapeyre believes they should be handling their customer support through Gorgias.

The product integrates with Shopify, using AI and customer data to automate responses to basic questions like, “What’s my tracking number?” By doing this, the business can free customer service representatives from spending most of their time responding to these routine requests, and the customers get faster answers.
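Gorgias hasn’t published how its matching works, but the idea — answer only the most routine questions automatically from order data, and hand everything else to a human — can be sketched crudely. The keywords, email, tracking number and order fields below are all invented for illustration:

```python
# Toy order store, keyed by customer email (fields are hypothetical).
ORDERS = {"alice@example.com": {"tracking": "1Z999AA10123456784"}}

# Routine intents: if every keyword appears in the message, use the template.
ROUTINE = {
    ("tracking", "number"): lambda o: f"Your tracking number is {o['tracking']}.",
    ("where", "order"):     lambda o: f"Your order is in transit ({o['tracking']}).",
}

def auto_reply(email, message):
    """Answer a routine question automatically, or return None to escalate."""
    order = ORDERS.get(email)
    words = message.lower()
    for keywords, template in ROUTINE.items():
        if order and all(k in words for k in keywords):
            return template(order)
    return None  # escalate to a customer-service rep

print(auto_reply("alice@example.com", "What's my tracking number?"))
# Your tracking number is 1Z999AA10123456784.
print(auto_reply("alice@example.com", "The shoes don't fit, can I exchange?"))
# None
```

The `None` branch is the important one: anything the matcher isn’t sure about falls through to a person, which matches Lapeyre’s “the automation should just be the very basic questions.”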

Gorgias screenshot

“The automation should just be the very basic questions,” Lapeyre added.

But even when it comes to more complex queries, Gorgias also provides tools that help the customer service representatives to respond more quickly and to upsell customers on additional products and services — Lapeyre said they’re acting as “sales associates rather than customer service agents.”

It seems like this approach is becoming a reality at some of Gorgias’ 2,000 customers — the Groovelife customer service team gets paid a commission based on upselling. At Steve Madden, meanwhile, the customer service team is using automation to respond to 20% of tickets.

Gorgias previously raised $1.5 million in seed funding. The new round was led by Flex Capital, with participation from SaaStr, Alven, CRV, Amplify Partners and Eric Yuan.

Lapeyre said Gorgias will use the money to build out the product with new features while also bringing on more merchants.

26 Nov 2019

Disney+ adds ‘Continue Watching’ feature

The launch of Disney+ has been successful in terms of sign-ups and adoption, but the user experience wasn’t quite up to par with what you get from other streaming services out the gate. Most noticeable was the lack of an easy way to pick up streaming where you left off – but that changes with a new “Continue Watching” section being added to the app’s homepage across all platforms where Disney+ is available as of today.

It should show up automatically as a new fourth row, under the “Originals” section. It behaves just as you’d expect, giving you a list of in-progress movies and shows, with a progress bar and the amount of time remaining. Tapping any of the images will jump right back into that content at the place where you left off, and the resume feature works across your logged-in devices.
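Under the hood, a resume feature like this only needs a per-account (not per-device) map from title to last playback position, synced server-side so every logged-in device reads the same value. A minimal sketch of that state — the field names are invented, not Disney’s API:

```python
from dataclasses import dataclass

@dataclass
class Progress:
    position_s: int   # seconds watched so far
    duration_s: int   # total runtime in seconds

    @property
    def remaining_s(self):
        return self.duration_s - self.position_s

    @property
    def percent(self):
        # Drives the progress bar shown under each thumbnail.
        return round(100 * self.position_s / self.duration_s)

# Keyed by (account, title) rather than by device, so resuming works
# from any device the account is logged into.
watch_state = {}
watch_state[("acct42", "The Mandalorian S1E3")] = Progress(1080, 2460)

p = watch_state[("acct42", "The Mandalorian S1E3")]
print(p.percent, p.remaining_s)  # 44 1380
```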

Turns out that this feature was supposed to be live at launch but was removed temporarily prior to the service going live so that the service’s engineers could focus on making sure other elements worked as intended for consumers. Disney+ still had its share of launch issues, including temporary inaccessibility due to overwhelming volume.

26 Nov 2019

Instagram founders join $30M raise for Loom work video messenger

Why are we all trapped in enterprise chat apps if we talk 6X faster than we type, and our brain processes visual info 60,000X faster than text? Thanks to Instagram, we’re not as camera-shy anymore. And everyone’s trying to remain in flow instead of being distracted by multi-tasking.

That’s why now is the time for Loom. It’s an enterprise collaboration video messaging service that lets you send quick clips of yourself so you can get your point across and get back to work. Talk through a problem, explain your solution, or narrate a screenshare. Some engineering hocus pocus sees videos start uploading before you finish recording so you can share instantly viewable links as soon as you’re done.
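The “uploading before you finish recording” trick is just streaming the recording in chunks as they are produced, instead of waiting for a finished file. A thread-and-queue sketch of that overlap — the chunk strings and in-memory “uploader” are stand-ins for real encoded video and a network PUT, not Loom’s implementation:

```python
import queue
import threading

chunks = queue.Queue()
uploaded = []

def uploader():
    # Consumes chunks as soon as they exist; None signals end of recording.
    while (chunk := chunks.get()) is not None:
        uploaded.append(chunk)   # stand-in for uploading one chunk

t = threading.Thread(target=uploader)
t.start()

# "Record": each chunk is handed to the uploader the moment it is captured,
# so by the time recording stops, most of the video is already on the server
# and the share link is viewable almost immediately.
for i in range(5):
    chunks.put(f"chunk-{i}")
chunks.put(None)
t.join()

print(uploaded)  # ['chunk-0', 'chunk-1', 'chunk-2', 'chunk-3', 'chunk-4']
```

Because recording and uploading run concurrently, the only work left when the user clicks “stop” is flushing the final chunk — which is why the link feels instantly shareable.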

“What we felt was that more visual communication could be translated into the workplace and deliver disproportionate value,” co-founder and CEO Joe Thomas tells me. He actually conducted our whole interview over Loom, responding to emailed questions with video clips.

Launched in 2016, Loom is finally hitting its growth spurt. It’s up from 1.1 million users and 18,000 companies in February to 1.8 million people at 50,000 businesses sharing 15 million minutes of Loom videos per month. Remote workers are especially keen on Loom, since it gives them face-to-face time with colleagues without the annoyance of scheduling synchronous video calls. “80% of our professional power users had primarily said that they were communicating with people that they didn’t share office space with,” Thomas notes.

A smart product, swift traction, and a shot at riding the consumerization of enterprise trend have secured Loom a $30 million Series B. The round, which is being announced later today, was led by prestigious SaaS investor Sequoia and joined by Kleiner Perkins, Figma CEO Dylan Field, Front CEO Mathilde Collin, and Instagram co-founders Kevin Systrom and Mike Krieger.

“At Instagram, one of the biggest things we did was focus on extreme performance and extreme ease of use, and that meant optimizing every screen, doing really creative things about when we started uploading, optimizing everything from video codec to networking,” Krieger says. “Since then I feel like some products have managed to try to capture some of that, but few as much as Loom did. When I first used Loom I turned to Kevin, who was my Instagram co-founder, and said, ‘oh my god, how did they do that? This feels impossibly fast.'”

Systrom concurs about the similarities, saying “I’m most excited because I see how they’re tackling the problem of visual communication in the same way that we tried to tackle that at Instagram.” Loom is looking to double-down there, potentially adding the ability to Like and follow videos from your favorite productivity gurus or sharpest co-workers.

Loom is also prepping some of its most requested features. The startup is launching an iOS app next month with Android coming the first half of 2020, improving its video editor with blurring for hiding your bad hair day and stitching to connect multiple takes. New branding options will help external sales pitches and presentations look right. What I’m most excited for is transcription, which is also slated for the first half of next year through a partnership with another provider, so you can skim or search a Loom. Sometimes even watching at 2X speed is too slow.

But the point of raising a massive $30 million Series B just a year after Loom’s $11 million Kleiner-led Series A is to nail the enterprise product and sales process. To date, Loom has focused on a bottom-up distribution strategy similar to Dropbox. It tries to get so many individual employees to use Loom that it becomes a team’s default collaboration software. Now it needs to grow up so it can offer the security and permissions features IT managers demand. Loom for teams is rolling out in beta access this year before officially launching in early 2020.

Loom’s bid to become essential to the enterprise, though, is its team video library. This will let employees organize their Looms into folders of a knowledge base so they can explain something once on camera, and everyone else can watch whenever they need to learn that skill. No more redundant one-off messages begging for a team’s best employees to stop and re-teach something. The Loom dashboard offers analytics on who’s actually watching your videos. And integration directly into popular enterprise software suites will let recipients watch without stopping what they’re doing.

To build out these features Loom has already grown to a headcount of 45. It’s also hired away former head of growth at Dropbox Nicole Obst, head of design for Slack Joshua Goldenberg, and VP of commercial product strategy for Intercom Matt Hodges.

Still, the elephants in the room remain Slack and Microsoft Teams. Right now, they’re mainly focused on text messaging with some additional screensharing and video chat integrations. They’re not building Loom-style asynchronous video messaging…yet. “We want to be clear about the fact that we don’t think we’re in competition with Slack or Microsoft Teams at all. We are a complementary tool to chat,” Thomas insists. But given the similar productivity and communication ethos, those incumbents could certainly opt to compete.


Hodges, Loom’s head of marketing, tells me, “I agree Slack and Microsoft could choose to get into this territory, but what’s the opportunity cost for them in doing so? It’s the classic build vs. buy vs. integrate argument.” Slack bought screensharing tool Screenhero, but partners with Zoom and Google for video chat. Loom will focus on being easy to integrate so it can plug into would-be competitors. And Hodges notes that “Delivering asynchronous video recording and sharing at scale is non-trivial. Loom holds a patent on its streaming, transcoding, and storage technology, which has proven to provide a competitive advantage to this day.”

The tea leaves point to video invading more and more of our communication, so I expect rivals to Loom — both startups and copycat features — will crop up. While it has the head start, Loom needs to move as fast as it can. “It’s really hard to maintain focus to deliver on the core product experience that we set out to deliver versus spreading ourselves too thin. And this is absolutely critical,” Thomas tells me.

One thing that could set Loom apart? A commitment to financial fundamentals. “When you grow really fast, you can sometimes lose sight of what is the core reason for a business entity to exist, which is to become profitable. . . Even in a really bold market where cash can be cheap, we’re trying to keep profitability at the top of our minds.”