26 Nov 2019

Gift Guide: 10 suitcase-friendly gifts for frequent flyers

Welcome to TechCrunch’s 2019 Holiday Gift Guide! Need help with gift ideas? We’re here to help! We’ll be rolling out gift guides from now through the end of December, so check back regularly.

Once again, TechCrunch has asked me to put together a list of travel-friendly gadgets, and once again, I find myself between back-to-back international flights. If nothing else, all of the travel for this gig has made me much better at figuring out what to pack and what to leave behind.

There’s a science to traveling. It takes a lot of trial and error to maintain your sanity through 12-hour flights, power through jet lag and generally make it through in one piece. Common wisdom posits that it’s the journey, not the destination, that counts, but when it comes to the rigors of being in the air and on the road half your life, the journey is usually the worst part.

I’ve narrowed the list down to ten, but there was plenty of stuff I could have included. A good spare battery pack is always a must; there are a million good, cheap ones online. I’ve been carrying around an OmniCharge, myself. Ditto for a solid wheeled suitcase. You can spend a ton of money on an Away bag, or you can buy three bags at a fraction of the price.

Chromebooks are worth a look, as well, for battery life alone. I took the Pixelbook Go with me to China earlier this month. They’re also great for security reasons. Oh, and I won’t include it because it’s a terrible gift, but far and away my most important travel companion is a pack of Wet Ones hand wipes. Hand sanitizer is fine, but if you’re not wiping down your phone a couple of times a day while traveling, you’re not doing yourself any favors.

Amazon Kindle Oasis 

I’ve mostly attempted to avoid repeats from last year, but I couldn’t not include the Oasis. An e-reader is absolutely invaluable for travel. The Oasis is the best on the market, and the new version (while largely the same) gets a few tweaks, including an E Ink display with a faster refresh rate and adjustable colors on the front light to reduce blue light. That’s key when attempting to adjust to a new time zone. Also, shoutout to Calibre, fantastic open-source e-book software I use to convert EPUB files to read on my Kindle.

Anker PowerPort Atom 

Everyone knows the importance of traveling with a good battery pack. But what about a plug? Anker’s PowerPort Atom is tiny and can charge a 13-inch MacBook Pro or smaller. At 30W, it packs a punch for its size. That’s key for navigating crowded outlets, not to mention the plugs under your seat on the plane. I’ve been using a Pixelbook charger as a go-to plug, but the weight of the thing means it’s constantly falling out.

Apple iPad Pro

Obvious choice, I realize, but I’m actually not including the iPad for the obvious reason. The arrival of Sidecar on macOS Catalina has made the iPad Pro a terrific second screen for the Mac, in addition to all of its usual tablet functionality. I use it all the time for working on the road. Carrying an actual second monitor in a suitcase is an obvious non-starter, so the iPad works well in a pinch.

DJI Osmo Pocket

I wasn’t prepared for how much I’d love this little gimbal. It’s terrific when tethered to a phone or on its own. Smartphone cameras are really quite good these days, and I imagine most people leave the standalones at home for a trip. The Osmo’s small size makes it perfect for throwing in a backpack, and the things it can do are really quite stunning.

Dreamlight Ease

Full disclosure: I used to be too embarrassed to wear a sleep mask on a plane. Enough international flights, however, and you’ll start to get over it pretty quickly. The Dreamlight Ease is the best and most comfortable I’ve used to date. It’s made of form-fitting, stretchable material, combined with “3D facial mapping” tech that comfortably conforms to the face without letting light in underneath. I actually wear it at home sometimes. Also, the side padding makes it much more comfortable to lean your head against the plane window.

LARQ Self Cleaning Water Bottle

Plastic bottles are bad. This much we know. And SFO recently banned their sale. Bring a water bottle to stay hydrated on the road and maybe cut down on some waste in the process. The $100 price tag is pretty lofty as far as these things go, but as someone who recently found a small forest growing in my metal water bottle, the addition of a self-cleaning element seems worth the cost.

Nintendo Switch Lite/Switch Online

I’ve been waiting for a Switch Lite ever since the original Switch was unveiled three years back. It’s the perfect size for travel, and the attached Joy-Cons mean you don’t have to worry about them coming loose in your bag. It’s great for flights and lonely hotel stays. The battery life leaves a bit to be desired, but it’s otherwise a fantastic handheld. Pair it with the ridiculously cheap Switch Online, and you’ve got access to a ton of original NES and SNES titles. A Link to the Past, anyone?

Sony Earbuds WF-1000XM3

I admit I chose these before having a chance to properly test out the similarly priced AirPods Pro. That said, I can still wholeheartedly recommend Sony’s, based on terrific sound quality and noise cancellation that’s perfect for the plane. Powerbeats Pro get a hearty recommendation as well, for battery life and comfort. Just don’t forget to bring a wired set for the plane entertainment system.

Timbuk2 Never Check Backpack

I tested this backpack on a trip to Tokyo around this time last year, and I’m still smitten. It’s easily the best travel backpack I’ve ever owned, courtesy of plenty of pockets and an expandable body. It’s well made, rugged and nice to look at, making for a perfect carry-on companion.

Tripley Compression Packing Cubes

Okay, okay, not a gadget, I realize, but invaluable nonetheless. When I first started traveling a lot for work, I was wavering between packing cubes and compression bags. One is great for organization and keeping clothes (relatively) unwrinkled. The other is terrific for space. Tripley’s product is a nice compromise between the two. Just make sure not to get that zipper jammed.

26 Nov 2019

Disney’s cringe-worthy Baby Yoda merch goes on sale

Who could have guessed an adorable, big-eyed baby Star Wars alien would have generated a ton of demand for toys? Apparently not Disney, which today started to sell merchandise based on The Child from new Disney+ show The Mandalorian, commonly known as “Baby Yoda”. The shirts, bags, mugs, and phone cases all feel…forced, like Disney rushed to print them on CafePress.

“The laziest merch ever,” one TechCrunch staffer said. “If only there was 40 years of Star Wars merchandise as a precedent. They would sell ten billion Yoda Beanie Babies,” quipped another. The lack of a plush doll, baby clothes, chew-safe rubber toys for tots and dogs, or original artwork indicates Disney was so busy getting its streaming service off the ground that it didn’t realize it already had a mascot. Yoda backpacks have been a hit for decades. Where’s the Yoda BabyBjörn chest pack?

Just because the little green bundle of joy isn’t technically ‘Baby Yoda’, since The Mandalorian is set after the real Yoda’s death in Return of the Jedi, doesn’t mean Disney isn’t exploiting the term for SEO. “He may look like a ‘Baby Yoda,’ but this lovable creature is referred to as ‘The Child,’” Disney notes on all the product pages.

The Disney entertainment empire has suffered these failures to predict demand before. Frozen merchandise sold out everywhere as tykes around the world screamed “Let It Go,” and Guardians of the Galaxy Vol. 2’s Baby Groot also saw demand outstrip supply until Disney started sticking the tiny tree on everything. Hopefully it won’t be long until we can get a magnetic The Child shoulder buddy so he can ride around with us like we’re his Bobasitter.

26 Nov 2019

FedEx robot sent packing by NYC

FedEx’s autonomous delivery bot got a cold reception from New York City officials.

After the company’s SameDay Bots — named Roxo — popped up on New York City streets last week, Mayor Bill de Blasio and transportation officials delivered a sharp response: Get out.

FedEx told TechCrunch that the bots were there for a preview party for its Small Business Saturday event and are not testing in New York. Even this promotional event was too much for city officials concerned with congestion and bots taking jobs from humans.

After reports of the bot sightings, the mayor tweeted that FedEx didn’t receive permission to deploy the robots; he also criticized the company for using a bot to perform a task that a New Yorker could do. The New York Department of Transportation has sent FedEx a cease-and-desist letter ordering it to stop operating the bots, which TechCrunch has viewed.

The letter informs FedEx that its bots violate several vehicle and traffic laws, including the prohibition on motor vehicles on sidewalks. Vehicles approved to operate on sidewalks must receive a special exemption and be registered.

FedEx has been experimenting with autonomous delivery bots. Postmates and Amazon also have been testing autonomous delivery robots.

FedEx first unveiled its SameDay Bot in February 2019. The company said at the time it planned to work with AutoZone, Lowe’s, Pizza Hut, Target, Walgreens and Walmart to figure out how autonomous robots might fit into its delivery business. The idea was for FedEx to provide a way for retailers to accept orders from nearby customers and deliver them by bot directly to customers’ homes or businesses the same day.

FedEx said its initial tests would involve deliveries between selected FedEx Office locations. Ultimately, the FedEx bot will complement the FedEx SameDay City service, which operates in 32 markets and 1,900 cities.

The company has tested the bots in Memphis, Tennessee, as well as Plano and Frisco, Texas, and Manchester, New Hampshire, according to a spokesperson.

The SameDay Bot has its roots in the iBot. The FedEx bot was developed in collaboration with DEKA Development & Research Corp. and its founder Dean Kamen, who invented the Segway and the iBot wheelchair.

DEKA built upon the power base of the iBot, an FDA-approved mobility device for the disabled population, to develop FedEx’s product.

The FedEx bot is equipped with sensing technology such as LiDAR and multiple cameras, which, when combined with machine learning algorithms, should allow the device to detect and avoid obstacles and plot a safe path, all while following the rules of the road (or sidewalk).

26 Nov 2019

Cocoon’s social app for close friends gets VC backing to chase Path’s dream

You may have heard the pitch before: Facebook, Twitter and Instagram aren’t homes for your real friends anymore because they’re too big, too commercial and too influencer-y. The result is that your most important relationships have been relegated to the lowest common denominator tool on your phone: your texting app.

Cocoon, a startup from a couple of ex-Facebook employees that went through YC earlier this year, is hoping to create the dedicated software that you use for that most important group chat in your life. The iOS-only app is a bit of a cross between Life360, Slack and Path.

While Life360 is the app for concerned parents, Cocoon wants to be the app for curious long-distance families who want to check on their family and closest friends more easily. The app is structured around a Slack channel-like feed where photo, text and location updates can be pushed alongside threaded replies. Like Life360, you can also access a dashboard of a group’s users and see where they are located in the world and whether they’re at home or work, based on group-designated locations. It’s the app’s focus on close friends that has drawn comparisons to Dave Morin’s oft-loved social networking app Path.

“I am always super open and welcome to comparisons to Path because I loved it and it was totally an awesome app,” co-founder Alex Cornell tells TechCrunch. “When you look at our narratives and what we’re trying to accomplish — the goals of supporting close friends and family — there is a lot of similarity there. But at the core, our solution is actually quite different.”

That core difference, the founders tell me, is that Cocoon isn’t a social network. People are signing up to be in this small group with a few close friends or family members, but the groups are closed and users aren’t (currently) logging into multiple groups.

“The main thing with a network is like that people aren’t necessarily all connected to one another, it’s asymmetrical so my friends aren’t friends with your friends and when I post a photo, you’re seeing comments from people you don’t know,” Cornell adds.

There are some clean parallels to other consumer apps, but the biggest competitor to Cocoon is what goes down in the small groups you have in iMessage or any of your other chat apps. Cocoon wants to be a properly interfaced social network inside a group chat, where everything is for the group’s benefit only. A lot is still in flux just one day after launch, and the founders are hoping they can learn more about what people want from the app from its earliest users.

Like Path, the startup has a noble goal, but a social app with dramatically lessened network effects certainly seems like it might have some sustainability issues. The app is currently free, but the founders say they won’t be selling any user data or surfacing ads, hoping instead to add a subscription pricing model to sustain the business. “It’s definitely top of mind and something that we want to do sooner rather than later,” CEO Sachin Monga tells us.

The company has a bit of cash to sustain things on its own for a while. Cocoon wrapped a $3 million seed round in May led by Lerer Hippeau, with Y Combinator, Susa Ventures, Norwest Venture Partners, Advancit Capital, Foundation Capital, iNovia, Shrug Capital and SV Angel also participating.

26 Nov 2019

Studs aims to modernize the ear piercing experience for Gen Z teens

A startup called Studs wants to reinvent the ear-piercing experience for Generation Z. Today, consumers have only two options when they want their ears pierced: the traditional “mall piercing” experience that uses piercing guns often wielded by novices, or professional piercing parlors, whose wide range of services often means only a limited selection of ear jewelry is available. Studs, instead, aims to combine brick-and-mortar storefronts for needle piercing with an online retail destination where customers can shop for after-care items, single earrings, collections, earscapes, and more.

The company has now opened its first retail store in New York’s Nolita neighborhood as well as its online shopping site, and plans to expand to more physical locations by 2020.

The idea for Studs comes from entrepreneurs Anna Harman and Lisa Bubbers, both of whom have backgrounds with in-person service startups. Harman, now Studs CEO, was previously the Chief Customer Officer at Walmart’s personal shopping service Jetblack, as well as Head of Operations at in-home closet organizing startup Fitz. Bubbers, now CMO at Studs, was previously VP of Marketing at interior design startup Homepolish.

Harman believes the market for ear piercing is split between the offline retailers who do the piercings themselves — either at mall shops or tattoo and piercing parlors — and the online retail side of the business, which makes it difficult to develop a relationship with customers.

“Earring retail is an entirely separate entity becoming increasingly dominated by [direct-to-consumer] brands exclusively leveraging Instagram ads to target and engage with consumers. It’s more competitive than ever to capture customers in the multi-billion dollar fashion jewelry industry,” Harman explains. “Without an authentic offline service to build a meaningful customer relationship and capture data for re-engagement, it’s almost impossible to survive in today’s retail climate as an online-only brand,” she says.

Studs, on the other hand, aims to connect the experience of getting pierced to the next intuitive step of purchasing earrings, Harman adds. 

“We give consumers an easy way to navigate their piercing and jewelry options and are the first to combine a brick and mortar retail experience with an e-commerce platform, so customers can seamlessly continue the experience,” she says.

Like professional tattoo and piercing parlors, Studs only employs professionals who are trained to pierce with needles, not guns. The cost ranges from $35 for one hole to $50 for two, on any part of the ear. Piercing jewelry is $30 to $180 per earring, while Studs’ fashion jewelry is $14-$175 per earring.

After getting pierced at Studs, customers are then directed to the website for after-care information and resources, as well as a shoppable destination for buying new products. In addition, the site is open to anyone — not just those who already got pierced at Studs’s shop.

In addition to traditional earring options, the website offers “earscapes,” which are personalized combinations of piercings where you mix and match different jewelry to create unique looks, often across a larger number of holes going up the ear. Studs also collaborates on collections with indie designers like Susan Alexandria, Yumono, and Man Repeller. At launch, it’s offering an exclusive collection from Anna Sheffield, the founder and designer at NYC jewelry brand Bing Bang.

Though open to anyone of any gender, Studs was designed with the goal of better catering to Gen Z teenagers who are getting pierced for the first time or perhaps adding piercings further up the ear. Studs says its “sweet spot” is anyone ages 14 to 25. However, parents can bring in a child as young as 8 to get pierced.

In other words, it’s a step up from a store like Claire’s, where parents are often turned off by the use of piercing guns wielded by non-professionals. Instead, it offers the safer, more hygienic, and more precise needles that many of today’s first-time piercers prefer.

The NYC area store is meant to test out this concept, but if all goes well, future locations may involve in-mall shops, kiosks, or even mobile units.

To date, the startup has raised $3 million in funding led by First Round Capital, with participation from Lerer Hippeau and other angel investors. The company plans to use the funds for its retail locations, enhancing its e-commerce site, and expanding its team.

The Studs Studio, located at 12 Prince St. in NYC, opened alongside the Studs website on November 19.

26 Nov 2019

New Amazon capabilities put machine learning in reach of more developers

Today, Amazon announced a new approach that it says will put machine learning technology in reach of more developers and line-of-business users. Amazon has been making a flurry of announcements ahead of its re:Invent customer conference next week in Las Vegas.

While the company offers plenty of tools for data scientists to build machine learning models and process, store and visualize data, it wants to put that capability directly in the hands of developers with the help of the popular database query language, SQL.

By taking advantage of tools like Amazon QuickSight, Aurora and Athena in combination with SQL queries, developers can have much more direct access to machine learning models and underlying data without any additional coding, says Matt Wood, VP of artificial intelligence at AWS.

“This announcement is all about making it easier for developers to add machine learning predictions to their products and their processes by integrating those predictions directly with their databases,” Wood told TechCrunch.

For starters, Wood says developers can take advantage of Aurora, the company’s MySQL- (and Postgres-) compatible database, to build a simple SQL query into an application, which will automatically pull the data into the application and run whatever machine learning model the developer associates with it.
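To make that concrete, here is a minimal sketch of what the flow could look like from application code, not AWS’s documented API surface. It assumes an Aurora MySQL cluster where a database administrator has already mapped a SQL function (the hypothetical predict_churn below) to a deployed SageMaker model via Aurora’s machine learning integration; the host, table and column names are likewise hypothetical.

    # A minimal sketch: assumes an Aurora MySQL cluster where a SQL function
    # (predict_churn) has already been mapped to a SageMaker endpoint through
    # Aurora's ML integration. Host, table and column names are hypothetical.
    import pymysql

    conn = pymysql.connect(
        host="my-aurora-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
        user="app_user",
        password="app_password",
        database="crm",
    )

    try:
        with conn.cursor() as cur:
            # The prediction comes back like any other column; the application
            # never talks to the model directly.
            cur.execute(
                """
                SELECT customer_id,
                       predict_churn(age, plan, monthly_spend) AS churn_score
                FROM customers
                ORDER BY churn_score DESC
                LIMIT 10
                """
            )
            for customer_id, churn_score in cur.fetchall():
                print(customer_id, churn_score)
    finally:
        conn.close()

The point Wood is making is visible in the query itself: the model invocation is just another SQL function call, so no extra plumbing sits between the database and the application.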

The second piece involves Athena, the company’s serverless query service. As with Aurora, developers can write a SQL query — in this case, against any data store — and based on a machine learning model they choose, return a set of data for use in an application.
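Athena expresses the same idea inline in the query: its ML integration lets a query declare an external function backed by a SageMaker endpoint. The sketch below, using boto3’s Athena client, is a hedged illustration rather than canonical syntax; the endpoint, S3 output bucket and table names are hypothetical.

    # Illustrative sketch of Athena's ML integration using boto3. The
    # USING EXTERNAL FUNCTION ... SAGEMAKER clause mirrors the feature as
    # announced; endpoint, bucket and table names are hypothetical.
    import time

    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    query = """
    USING EXTERNAL FUNCTION score_lead(visits INT, spend DOUBLE)
        RETURNS DOUBLE
        SAGEMAKER 'lead-scoring-endpoint'
    SELECT lead_id, score_lead(visits, spend) AS score
    FROM sales.leads
    ORDER BY score DESC
    LIMIT 25
    """

    run = athena.start_query_execution(
        QueryString=query,
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    )
    query_id = run["QueryExecutionId"]

    # Poll until the query finishes, then print the result rows (Athena's
    # first returned row is the column header).
    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        results = athena.get_query_results(QueryExecutionId=query_id)
        for row in results["ResultSet"]["Rows"][1:]:
            print([col.get("VarCharValue") for col in row["Data"]])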

The final piece is QuickSight, which is Amazon’s data visualization tool. Using one of the other tools to return some set of data, developers can use that data to create visualizations based on it inside whatever application they are creating.

“By making sophisticated ML predictions more easily available through SQL queries and dashboards, the changes we’re announcing today help to make ML more usable and accessible to database developers and business analysts. Now anyone who can write SQL can make — and importantly use — predictions in their applications without any custom code,” Amazon’s Matt Asay wrote in a blog post announcing these new capabilities.

Asay added that this approach is far easier than what developers had to do in the past to achieve this. “There is often a large amount of fiddly, manual work required to take these predictions and make them part of a broader application, process or analytics dashboard,” he wrote.

As an example, Wood offers a lead-scoring model you might use to pick the most likely sales targets to convert. “Today, in order to do lead scoring you have to go off and wire up all these pieces together in order to be able to get the predictions into the application,” he said. With this new capability, you can get there much faster.

“Now, as a developer I can just say that I have this lead scoring model which is deployed in SageMaker, and all I have to do is write literally one SQL statement that I do all day long into Aurora, and I can start getting back that lead scoring information. And then I just display it in my application and away I go,” Wood explained.

As for the machine learning models, these can come pre-built from Amazon, be developed by an in-house data science team or purchased in a machine learning model marketplace on Amazon, says Wood.

Today’s announcements from Amazon are designed to simplify machine learning and data access, and to reduce the amount of coding needed to get from query to answer.

26 Nov 2019

Android’s Ambient Mode will soon come to ‘select devices’

You’ve probably heard murmurs about Google’s forthcoming Ambient Mode for Android. The company first announced this feature, which essentially turns an Android device into a smart display while it’s charging, in September. Now, in a Twitter post, Google confirmed that it will launch soon, starting with a number of select devices that run Android 8.0 or later.

At the time, Google said Ambient Mode was coming to the Lenovo Smart Tab M8 HD and Smart Tab tablets, as well as the Nokia 7.2 and 6.2 phones. According to The Verge, it’ll also come to Sony, Nokia, Transsion and Xiaomi phones, though Google’s own Pixels aren’t on the company’s list yet.

“The ultimate goal for proactive Assistant is to help you get things done faster, anticipate your needs and accomplish your tasks as quickly and as easily as possible,” said Google Assistant product manager Arvind Chandrababu in the announcement. “It’s fundamentally about moving from an app-based way of doing things to an intent-based way of doing things. Right now, users can do most things with their smartphones, but it requires quite a bit of mental bandwidth to figure out, hey, I need to accomplish this task, so let me backtrack and figure out all the steps that I need to do in order to get there.”

Those are pretty lofty goals. In practice, what this means, for now, is that you will be able to set an alarm with just a few taps from the ambient screen, see your upcoming appointments, turn off your connected lights and see a slideshow of your images in the background. I don’t think that any of those tasks really consumed a lot of mental bandwidth in the first place, but Google says it has more proactive experiences planned for the future.

26 Nov 2019

Leading robotics VCs talk about where they’re investing

The Valley’s affinity for robotics shows no signs of cooling. Technical enhancements through innovations like AI/ML, compute power and big data utilization continue to drive new performance milestones, efficiencies and use cases.

Despite the old saying, “hardware is hard,” investment in the robotics space continues to expand. Money is pouring in across robotics’ billion-dollar sub-verticals, including industrial and labor automation, drone delivery, machine vision and a wide range of others.

According to data from Pitchbook and Crunchbase, 2018 saw new highs for the number of venture deals and total invested capital in the space, with roughly $5 billion in investment coming from nearly 400 deals. With robotics well on its way to setting new investment peaks again in 2019, we asked 13 leading VCs who work at firms spanning early to growth stages to share what’s exciting them most and where they see opportunity in the sector:

Participants discuss the compelling business models for robotics startups (such as “Robots as a Service”), current valuations, growth tactics and key robotics KPIs, while also diving into key trends in industrial automation, human replacement, transportation, climate change, and the evolving regulatory environment.

Shahin Farshchi, Lux Capital

Which trends are you most excited about in robotics from an investing perspective?

The opportunity to unlock human superpowers:

  • Increasing productivity to enhance creativity, leading to new products and businesses.
  • Automating dangerous tasks and eliminating undesirable, dangerous jobs in mining, manufacturing, and shipping/logistics.
  • Making the deadliest mode of transport, driving, 100% safe.

How much time are you spending on robotics right now? Is the market under-heated, overheated, or just right?

  • Three-quarters of the new opportunities I look at involve some sort of automation.
  • The market for robot startups attempting direct human labor replacement, such as floor-sweeping and dumb-waiter robots and robotic lawnmowers and vacuums, is OVERheated (too many startups).
  • The market for robot startups that assist human workers, increase human productivity, and automate undesirable human tasks is UNDERheated (not enough startups).

Are there startups that you wish you would see in the industry but don’t? Plus any other thoughts you want to share with TechCrunch readers.

I want to see more founders that are building robotics startups that:

  • Solve LATENT pain points in specific, well-understood industries (vs. building a cool robot that can do cool things).
  • Focus on increasing HUMAN productivity (vs. trying to replace humans).
  • Are solving for building interesting BUSINESSES (vs. emphasizing cool robots).

Kelly Chen, DCVC

Three years ago, the most compelling companies to us in the industrial space were in software. We now spend significantly more time in verticalized AI and hardware. The robotics companies we find most exciting today are addressing two key drivers: (1) high labor turnover and shortages and (2) new research around generalization on the software side. For many years, we have seen some pretty impressive science projects out of labs, but once you take these into the real world, they fail. In these changing environmental conditions, it’s crucial that robots work effectively in the wild at speeds and economics that make sense. This is an extremely difficult combination of problems, and we’re now finally seeing it happen. A few verticals we believe will experience a significant overhaul in the next 5 years include logistics, waste, micro-fulfillment, and construction.

With this shift in robotic capability, we’re also seeing a shift in customer sentiment. Companies that are used to buying machines outright are now more willing to explore RaaS (Robot as a Service) models for compelling robotic solutions – and that repeat-revenue model has opened the door for some formerly enterprise-software-only investors. On the other hand, companies exploring robotics for tasks with high labor shortages, such as trucking or agriculture, are more willing to explore per-hour or per-unit-pick models.

Adoption won’t be overnight, but in the medium term, we are very enthusiastic about the ways robotics will transform industries. We do believe investing in this space requires the right technical know-how and network to evaluate and support companies, so momentum investors looking to dip their hand into a hot space may be disappointed.

Rob Coneybeer, Shasta Ventures

We’re entering the early stages of the golden age of robotics. Robotics is already a huge, multibillion-dollar market – but today that market is dominated by industrial robotics, such as welding and assembly robots found on automotive assembly lines around the world. These robots repeat basic tasks, over and over, and are usually separated from humans by caged walls for safety. However, this is rapidly changing. Advances in perception, driven by deep learning, machine vision and inexpensive, high-performance cameras, allow robots to safely navigate the real world, escape the manufacturing cages, and closely interact with humans.

I think the biggest opportunities in robotics are those which attack enormous markets where it’s difficult to hire and retain labor. One great example is long-haul trucking. Highway driving represents one of the easiest problems for autonomous vehicles, since the lanes tend to be well-marked, the roads have gentle curves, and all traffic runs in the same direction. In the United States alone, long-haul trucking is a multi-hundred-billion-dollar market every year. The customer set is remarkably scalable, with standard trailer sizes and requirements for shipping freight. Yet at the same time, trucking companies have trouble hiring and retaining drivers. It’s the perfect recipe for robotic opportunity.

I’m intrigued by agricultural robots. I’ve seen dozens of companies attacking every part of the farming equation – from field clearing and preparation, to seeding, to weeding, applying fertilizer, and eventually harvesting. I think there’s a lot of value to be “harvested” here by robots, especially since seasonal field labor is becoming harder to find and increasingly expensive. One enormous challenge in this market, however, is that growing seasons mean the robotic machinery has a lot of downtime, so the cost of equipment isn’t as easily amortized as in other markets with higher utilization. The other big challenge is that fields are very, very tough on hardware and electronics due to environmental conditions like rain, dust and mud.

There are a ton of important problems to be solved in robotics. The biggest open challenges in my mind are locomotion and grasping. Specifically, I think that for in-building applications, robots need to be able to do all the things humans can do – specifically opening and closing doors, climbing stairs, and picking items off of shelves and putting them down gently. Plenty of startups have tackled subsets of these problems, but to date no one has built a generalized solution. To be fair, getting to parity with humans on generalized locomotion and grasping will probably take another several decades.

Overall, I feel like the funding environment for robotics is about right, with a handful of overfunded areas (like autonomous passenger vehicles). I think the most overlooked near-term opportunity in robotics is teleoperation: pairing fully automated robotic operations with occasional human remote operation of individual robots. Starship Technologies is a perfect example of this. Starship is actively deploying local delivery robots around the world today. Its first major deployment is at George Mason University in Virginia, where nearly 50 active robots deliver food around the campus. They’re autonomous most of the time, but when they encounter a problem or obstacle they can’t solve, a human operator in a teleoperation center manually controls the robot remotely. At the same time, Starship tracks and prioritizes these problems for engineers to solve, slowly and incrementally reducing the number of problems the robots can’t solve on their own. I think people view robotics as a “zero or one” solution when in fact there’s a world where humans and robots work together for a long time.