Author: azeeadmin

12 Apr 2021

Chinese autonomous vehicle startup WeRide scores permit to test driverless cars in San Jose

WeRide, the Chinese autonomous vehicle startup that recently raised $310 million, has received a permit to test driverless vehicles on public roads in San Jose, California. WeRide is the seventh company, following AutoX, Baidu, Cruise, Nuro, Waymo and Zoox, to receive a driverless testing permit.

In the early days of autonomous vehicle development, testing permits required human safety drivers behind the wheel. Some 56 companies have an active permit to test autonomous vehicles with a safety driver. Driverless testing permits, in which a human operator is not behind the wheel, have become the new milestone and a required step for companies that want to launch a commercial robotaxi or delivery service in the state.

The California DMV, the agency that regulates autonomous vehicle testing in the state, said the permit allows WeRide to test two autonomous vehicles without a driver behind the wheel on specified streets within San Jose. WeRide has had a permit to test autonomous vehicles with safety drivers behind the wheel since 2017. WeRide is also restricted in how and when it tests these vehicles. The driverless vehicles are designed to operate on roads with posted speed limits not exceeding 45 miles per hour. Testing will be conducted during the day Monday through Friday, but will not occur in heavy fog or rain, according to the DMV.

To reach driverless testing status in California, companies have to meet a number of safety, registration and insurance requirements. Any company applying for a driverless permit must provide evidence of insurance or a bond equal to $5 million, verify the vehicles are capable of operating without a driver, meet Federal Motor Vehicle Safety Standards or have an exemption from the National Highway Traffic Safety Administration, and certify that the vehicles are capable of SAE Level 4 or 5 autonomy. The test vehicles must be continuously monitored, and companies must train remote operators on the technology.

Driverless testing permit holders must also report to the DMV any collisions involving a driverless test vehicle within 10 days and submit an annual report of disengagements.

While the vast majority of WeRide’s operations are in China, the permit does signal its continued interest in the United States. WeRide, which is headquartered in Guangzhou, China, maintains R&D and operation centers in Beijing, Shanghai, Nanjing, Wuhan, Zhengzhou and Anqing, as well as in Silicon Valley. The startup, which was founded in 2017, received a permit in February to operate a ride-hailing operation in Guangzhou.

The company is one of China’s most-funded autonomous vehicle technology startups with backers that include bus maker Yutong, Chinese facial recognition company SenseTime and Alliance Ventures, the strategic venture capital arm of Renault-Nissan-Mitsubishi. Other WeRide investors include CMC Capital Partners, CDB Equipment Manufacturing Fund, Hengjian Emerging Industries Fund, Zhuhai Huajin Capital, Flower City Ventures, Tryin Capital, Qiming Venture Partners, Sinovation Ventures and Kinzon Capital.

12 Apr 2021

Tech and auto execs tackle global chip shortage at White House summit

A collection of tech and auto industry executives met with the White House to discuss solutions for the worldwide chip shortage Monday.

CEOs from Google, Intel, HP, Dell, Ford, and General Motors attended the virtual summit on semiconductors and resilience in supply chains. National security adviser Jake Sullivan, Secretary of Commerce Gina Raimondo and National Economic Council Director Brian Deese hosted the meeting, which President Biden also attended briefly.

Ahead of the summit, Intel CEO Pat Gelsinger said he hoped the U.S. could increase its semiconductor production to encompass a third of all chips sold in the U.S. Intel is in discussions to make chips designed specifically for automakers within its own facilities, a project that could take some pressure off of supply lines.

An ongoing dearth of the tiny, high-tech components used in everything from car entertainment systems to smartphones has stretched supply thin. Consumers have been feeling it for months. Soaring demand has made new gaming consoles and graphics cards scarce, even months after some of those devices are released. But with semiconductors omnipresent in devices these days, the supply shortages are disrupting industries well beyond gaming.

President Biden signed an Executive Order taking aim at the supply issues in February. That order initiated a 100-day review of supply chains for semiconductors as well as advanced batteries like those found in electric vehicles, key minerals required for tech products, and pharmaceuticals and their ingredients.

Biden noted that the chip shortages have “caused delays in productions of automobiles and has resulted in reduced hours for American workers.” He also cited supply shortages for PPE during the early months of the pandemic, when many health workers were forced to work without proper protection.

The order also kicks off a longer review in cooperation with industry leaders that will look for solutions that can be implemented right away to alleviate ongoing supply chain issues.

Supply chain issues for tech components also highlight tensions with China, a fact that Sullivan’s presence at the White House summit makes clear. Biden cited concerns around “longterm competitiveness” as one motivation for undergoing a major audit of supply chains for critical tech components.

Sen. Mark Warner (D-VA) has called the shortage “a national security issue as well as an economic one,” citing the need for semiconductors in defense tech.

Warner has emphasized the need for legislative solutions that would move the U.S. toward self-reliance and push back on China’s influence, pointing to a semiconductor production bill he introduced with Sen. John Cornyn (R-TX) last summer.

Biden previously said that the administration will work toward solutions to the current shortfall of the critical chips and will be leaning on political allies “to ramp up production to help us resolve the bottlenecks we face now.”

12 Apr 2021

Apple and Google will both attend Senate hearing on app store competition

After it looked like Apple might no-show, the company has committed to sending a representative to a Senate antitrust hearing on app store competition later this month.

Last week, Senators Amy Klobuchar (D-MN) and Mike Lee (R-UT) put public pressure on the company to attend the hearing, which will be held by the Senate Judiciary Subcommittee on Competition Policy, Antitrust, and Consumer Rights. Klobuchar chairs that subcommittee, and has turned her focus toward antitrust worries about the tech industry’s most dominant players.

The hearing, which Google will also attend, will delve into the impact of Apple and Google’s control over “the cost, distribution, and availability of mobile applications” on consumers, app developers, and competition.

App stores are one corner of tech that looks the most like a duopoly, a perception that Apple’s high profile battle with Fortnite-maker Epic is only elevating. Meanwhile, with a number of state-level tech regulation efforts brewing, Arizona is looking to relieve developers from Apple and Google’s hefty cut of app store profits.

In a letter last week, Klobuchar and Lee, the subcommittee’s ranking member, accused Apple of “abruptly” deciding that it wouldn’t send a witness to the hearing, which is set for April 21.

“Apple’s sudden change in course to refuse to provide a witness to testify before the Subcommittee on app store competition issues in April, when the company is clearly willing to discuss them in other public forums, is unacceptable,” the lawmakers wrote.

By Monday, that pressure had apparently done its work, with Apple agreeing to attend the hearing. Apple didn’t respond to a request for comment.

While the lawmakers are counting Apple’s acquiescence as a win, that doesn’t mean the company will be sending its chief executive. Major tech CEOs have been called before Congress more often over the last few years, but those appearances might have diminishing returns.

Tech CEOs, Apple’s Tim Cook included, are thoroughly trained in the art of saying little when pressed by lawmakers. Dragging in a CEO might work as a show of force, but tech execs generally reveal little over the course of their lengthy testimonies, particularly when a hearing isn’t accompanied by a deeper investigation.

12 Apr 2021

Atomico’s talent partners share 6 tips for early-stage people ops success

In the earliest stages of building a startup, it can be hard to justify focusing on anything other than creating a great product or service and meeting the needs of customers or users. However, there are still a number of surefire measures that any early-stage company can and should put in place to achieve “people ops” success as they begin scaling, according to venture capital firm Atomico‘s talent partners, Caro Chayot and Dan Hynes.

You need to recruit for what you need, but you also need to think about what is coming down the line.

As members of the VC’s operational support team, both work closely with companies in the Atomico portfolio to “find, develop and retain” the best employees in their respective fields, at various stages of the business. They’re operators at heart, and they bring a wealth of experience from time spent prior to entering VC.

Before joining Atomico, Chayot led the EMEA HR team at Twitter, where she helped scale the business from two to six markets and grew the team from 80 based in London to 500 across the region. Prior to that, she worked at Google in people ops for nine years.

Hynes was responsible for talent and staffing at well-known technology companies including Google, Cisco and Skype. At Google, he grew the EMEA team from 60 based in London to 8,500 across Europe by 2010, and at Skype, he led a talent team that scaled from 600 to 2,300 in three years.

Caro Chayot’s top 3 tips

1. Think about your long-term org design (18 months down the line) and hire back from there

When most founders think about hiring, they think about what they need now and the gaps that exist in their team at that moment. Dan and I help founders see things a little differently. You need to recruit for what you need, but you also need to think about what is coming down the line. What will your company look like in a year or 18 months? Functions and team sizes will depend on the sector — whether you are building a marketplace, a SaaS business or a consumer company. Founders also need to think about how the employees they hire now can develop over the next 18 months. If you hire people who are at the top of their game now, they won’t be able to grow into the employees you need in the future.

2. Spend time defining what your culture is. Use that for hiring and everything else people-related

If org design is the “what,” then culture is the “how.” It’s about laying down values and principles. It may sound fluffy, but capturing what it means to work at your company is key to hiring and retaining the best talent. You can use clearly articulated values at every stage of talent-building to shape your employer brand. What do you want potential employees to feel when they see your website? What do you want to look for in the interview process to make sure you are hiring people who are additive to the culture? How do you develop people and compensate them? These are all expressions of culture.

12 Apr 2021

Docugami’s new model for understanding documents cuts its teeth on NASA archives

You hear so much about data these days that you might forget that a huge amount of the world runs on documents: a veritable menagerie of heterogeneous files and formats holding enormous value yet incompatible with the new era of clean, structured databases. Docugami plans to change that with a system that intuitively understands any set of documents and intelligently indexes their contents — and NASA is already on board.

If Docugami’s product works as planned, anyone will be able to take piles of documents accumulated over the years and near-instantly convert them to the kind of data that’s actually useful to people.

Because it turns out that running just about any business ends up producing a ton of documents. Contracts and briefs in legal work, leases and agreements in real estate, proposals and releases in marketing, medical charts, etc. Not to mention the various formats: Word docs, PDFs, scans of paper printouts of PDFs exported from Word docs, and so on.

Over the last decade there’s been an effort to corral this problem, but movement has largely been on the organizational side: put all your documents in one place, share and edit them collaboratively. Understanding the document itself has pretty much been left to the people who handle them, and for good reason — understanding documents is hard!

Think of a rental contract. We humans understand when the renter is named as Jill Jackson, that later on, “the renter” also refers to that person. Furthermore, in any of a hundred other contracts, we understand that the renters in those documents are the same type of person or concept in the context of the document, but not the same actual person. These are surprisingly difficult concepts for machine learning and natural language understanding systems to grasp and apply. Yet if they could be mastered, an enormous amount of useful information could be extracted from the millions of documents squirreled away around the world.

What’s up, .docx?

Docugami founder Jean Paoli says they’ve cracked the problem wide open, and while it’s a major claim, he’s one of few people who could credibly make it. Paoli was a major figure at Microsoft for decades, and among other things helped create the XML format — you know all those files that end in x, like .docx and .xlsx? Paoli is at least partly to thank for them.

“Data and documents aren’t the same thing,” he told me. “There’s a thing you understand, called documents, and there’s something that computers understand, called data. Why are they not the same thing? So my first job [at Microsoft] was to create a format that can represent documents as data. I created XML with friends in the industry, and Bill accepted it.” (Yes, that Bill.)

The formats became ubiquitous, yet 20 years later the same problem persists, having grown in scale with the digitization of industry after industry. But for Paoli the solution is the same. At the core of XML was the idea that a document should be structured almost like a webpage: boxes within boxes, each clearly defined by metadata — a hierarchical model more easily understood by computers.
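That hierarchical model is easiest to see in miniature. The element names below are invented for illustration, but the shape is the point: every piece of the document sits inside a labeled box, with metadata saying what it is.

```xml
<lease>
  <parties>
    <landlord>Acme Properties</landlord>
    <renter>Jill Jackson</renter>
  </parties>
  <terms>
    <rent currency="USD">1500</rent>
    <duration unit="months">12</duration>
  </terms>
</lease>
```

A computer reading this doesn’t need to infer who “the renter” is; the box says so.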

Illustration showing a document corresponding to pieces of another document.

Image Credits: Docugami

“A few years ago I drank the AI kool-aid, got the idea to transform documents into data. I needed an algorithm that navigates the hierarchical model, and they told me that the algorithm you want does not exist,” he explained. “The XML model, where every piece is inside another, and each has a different name to represent the data it contains — that has not been married to the AI model we have today. That’s just a fact. I hoped the AI people would go and jump on it, but it didn’t happen.” (“I was busy doing something else,” he added, to excuse himself.)

The lack of compatibility with this new model of computing shouldn’t come as a surprise — every emerging technology carries with it certain assumptions and limitations, and AI has focused on a few other, equally crucial areas like speech understanding and computer vision. The approach taken there doesn’t match the needs of systematically understanding a document.

“Many people think that documents are like cats. You train the AI to look for their eyes, for their tails… documents are not like cats,” he said.

It sounds obvious, but it’s a real limitation: advanced AI methods like segmentation, scene understanding, multimodal context, and such are all a sort of hyper-advanced cat detection that has moved beyond cats to detect dogs, car types, facial expressions, locations, etc. Documents are too different from one another, or in other ways too similar, for these approaches to do much more than roughly categorize them.

And as for language understanding, it’s good in some ways but not in the ways Paoli needed. “They’re working sort of at the English language level,” he said. “They look at the text but they disconnect it from the document where they found it. I love NLP people, half my team is NLP people — but NLP people don’t think about business processes. You need to mix them with XML people, people who understand computer vision, then you start looking at the document at a different level.”

Docugami in action

Illustration showing a person interacting with a digital document.

Image Credits: Docugami

Paoli’s goal couldn’t be reached by adapting existing tools (beyond mature primitives like optical character recognition), so he assembled his own private AI lab, where a multi-disciplinary team has been tinkering away for about two years.

“We did core science, self-funded, in stealth mode, and we sent a bunch of patents to the patent office,” he said. “Then we went to see the VCs, and SignalFire basically volunteered to lead the seed round at $10 million.”

Coverage of the round didn’t really get into the actual experience of using Docugami, but Paoli walked me through the platform with some live documents. I wasn’t given access myself and the company wouldn’t provide screenshots or video, saying it is still working on the integrations and UI, so you’ll have to use your imagination… but if you picture pretty much any enterprise SaaS service, you’re 90 percent of the way there.

As the user, you upload any number of documents to Docugami, from a couple dozen to hundreds or thousands. These enter a machine understanding workflow that parses the documents, whether they’re scanned PDFs, Word files, or something else, into an XML-esque hierarchical organization unique to the contents.

“Say you’ve got 500 documents, we try to categorize it in document sets, these 30 look the same, those 20 look the same, those 5 together. We group them with a mix of hints coming from how the document looked, what it’s talking about, what we think people are using it for, etc,” said Paoli. Other services might be able to tell the difference between a lease and an NDA, but documents are too diverse to slot into pre-trained ideas of categories and expect it to work out. Every set of documents is potentially unique, and so Docugami trains itself anew every time, even for a set of one. “Once we group them, we understand the overall structure and hierarchy of that particular set of documents, because that’s how documents become useful: together.”
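Docugami hasn’t disclosed how that grouping works, but the clustering step Paoli describes can be approximated in a few lines: score each pair of documents for similarity and pool the lookalikes. The TF-IDF weighting, similarity threshold, and toy documents below are purely illustrative, not Docugami’s method.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build simple TF-IDF vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter()  # document frequency per term
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log(n / df[t] + 1) for t in tf})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_documents(docs, threshold=0.3):
    """Greedily assign each document to the first group it resembles."""
    vecs = tfidf_vectors(docs)
    groups = []  # each group: list of document indices
    for i, v in enumerate(vecs):
        for group in groups:
            if cosine(v, vecs[group[0]]) >= threshold:
                group.append(i)
                break
        else:
            groups.append([i])
    return groups

# Two leases and one NDA, as toy token lists
docs = [
    "this lease agreement between landlord and tenant".split(),
    "lease agreement the tenant shall pay the landlord".split(),
    "mutual non disclosure agreement between the parties".split(),
]
print(group_documents(docs))  # groups the two leases together: [[0, 1], [2]]
```

Docugami’s real grouping mixes layout, content, and usage signals, but the “these 30 look the same” intuition maps onto exactly this kind of similarity scoring.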

Illustration showing a document being turned into a report and a spreadsheet.

Image Credits: Docugami

That doesn’t just mean it picks up on header text and creates an index, or lets you search for words. The data that is in the document, for example who is paying whom, how much and when, and under what conditions, all that becomes structured and editable within the context of similar documents. (It asks for a little input to double check what it has deduced.)

It can be a little hard to picture, but now just imagine that you want to put together a report on your company’s active loans. All you need to do is highlight the information that’s important to you in an example document — literally, you just click “Jane Roe” and “$20,000” and “5 years” anywhere they occur — and then select the other documents you want to pull corresponding information from. A few seconds later you have an ordered spreadsheet with names, amounts, dates, anything you wanted out of that set of documents.
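Docugami’s actual extraction runs on its trained models, but the “highlight once, extract everywhere” workflow can be mimicked with ordinary pattern matching. The field names, patterns, and loan snippets here are invented for illustration only.

```python
import re

# Hypothetical patterns generalized from one highlighted example document.
FIELD_PATTERNS = {
    "borrower": re.compile(r"lends to ([A-Z][a-z]+ [A-Z][a-z]+)"),
    "amount":   re.compile(r"(\$[\d,]+)"),
    "term":     re.compile(r"(\d+ years?)"),
}

def extract_fields(document):
    """Pull each field's first match out of a loan document."""
    row = {}
    for field, pattern in FIELD_PATTERNS.items():
        match = pattern.search(document)
        row[field] = match.group(1) if match else None
    return row

loans = [
    "Acme Bank lends to Jane Roe the sum of $20,000 over 5 years.",
    "Acme Bank lends to John Poe the sum of $7,500 over 3 years.",
]
report = [extract_fields(doc) for doc in loans]
print(report[0]["borrower"], report[1]["amount"])  # Jane Roe $7,500
```

The difference, of course, is that Docugami claims to infer the equivalent of those patterns from a single click, across scans and formats, rather than having someone write regexes.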

All this data is meant to be portable too, of course — there are integrations planned with various other common pipes and services in business, allowing for automatic reports, alerts if certain conditions are reached, automated creation of templates and standard documents (no more keeping an old one around with underscores where the principals go).

Remember, this is all half an hour after you uploaded them in the first place, no labeling or pre-processing or cleaning required. And the AI isn’t working from some preconceived notion or format of what a lease document looks like. It’s learned all it needs to know from the actual docs you uploaded — how they’re structured, where things like names and dates figure relative to one another, and so on. And it works across verticals and uses an interface anyone can figure out in a few minutes. Whether you’re in healthcare data entry or construction contract management, the tool should make sense.

The web interface where you ingest and create new documents is one of the main tools, while the other lives inside Word. There Docugami acts as a sort of assistant that’s fully aware of every other document of whatever type you’re in, so you can create new ones, fill in standard information, comply with regulations, and so on.

Okay, so processing legal documents isn’t exactly the most exciting application of machine learning in the world. But I wouldn’t be writing this (at all, let alone at this length) if I didn’t think this was a big deal. This sort of deep understanding of document types can be found here and there among established industries with standard document types (such as police or medical reports), but have fun waiting until someone trains a bespoke model for your kayak rental service. But small businesses have just as much value locked up in documents as large enterprises — and they can’t afford to hire a team of data scientists. And even the big organizations can’t do it all manually.

NASA’s treasure trove

Image Credits: NASA

The problem is extremely difficult, yet to humans seems almost trivial. You or I could glance through 20 similar documents and pull out a list of names and amounts easily, perhaps even in less time than it takes for Docugami to crawl them and train itself.

But AI, after all, is meant to imitate and exceed human capacity, and it’s one thing for an account manager to do monthly reports on 20 contracts — quite another to do a daily report on a thousand. Yet Docugami accomplishes both equally easily — which is where it fits into both the enterprise system, where scaling this kind of operation is crucial, and NASA, which is buried under a backlog of documentation from which it hopes to glean clean data and insights.

If there’s one thing NASA’s got a lot of, it’s documents. Its reasonably well maintained archives go back to its founding, and many important ones are available by various means — I’ve spent many a pleasant hour perusing its cache of historical documents.

But NASA isn’t looking for new insights into Apollo 11. Through its many past and present programs, solicitations, grant programs, budgets, and of course engineering projects, it generates a huge amount of documents — being, after all, very much a part of the federal bureaucracy. And as with any large organization with its paperwork spread over decades, NASA’s document stash represents untapped potential.

Expert opinions, research precursors, engineering solutions, and a dozen more categories of important information are sitting in files searchable perhaps by basic word matching but otherwise unstructured. Wouldn’t it be nice for someone at JPL to get it in their head to look at the evolution of nozzle design, and within a few minutes have a complete and current list of documents on that topic, organized by type, date, author, and status? What about the patent advisor who needs to provide a NIAC grant recipient information on prior art — shouldn’t they be able to pull those old patents and applications up with more specificity than a simple keyword search allows?

The NASA SBIR grant, awarded last summer, isn’t for any specific work, like collecting all the documents of such and such a type from Johnson Space Center or something. It’s an exploratory or investigative agreement, as many of these grants are, and Docugami is working with NASA scientists on the best ways to apply the technology to their archives. (One of the best applications may be to the SBIR and other small business funding programs themselves.)

Another SBIR grant with the NSF differs in that, while at NASA the team is looking into better organizing tons of disparate types of documents with some overlapping information, at NSF they’re aiming to better identify “small data.” “We are looking at the tiny things, the tiny details,” said Paoli. “For instance, if you have a name, is it the lender or the borrower? The doctor or the patient name? When you read a patient record, penicillin is mentioned, is it prescribed or prohibited? If there’s a section called allergies and another called prescriptions, we can make that connection.”
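Paoli’s penicillin example boils down to letting the section a term appears under determine its label, which is easy to sketch. The heading heuristic and toy record below are assumptions for illustration, not Docugami’s implementation.

```python
def label_by_section(lines, term):
    """Tag each occurrence of `term` with the section heading above it."""
    section = None
    labels = []
    for line in lines:
        stripped = line.strip()
        if stripped.endswith(":"):  # treat "Allergies:" etc. as headings
            section = stripped.rstrip(":").lower()
        elif term.lower() in stripped.lower():
            labels.append(section)
    return labels

record = [
    "Allergies:",
    "  Penicillin",
    "Prescriptions:",
    "  Amoxicillin 500mg",
    "  Penicillin V 250mg",
]
print(label_by_section(record, "penicillin"))  # → ['allergies', 'prescriptions']
```

The same drug name means opposite things depending on which box of the document’s hierarchy it lands in — exactly the “tiny details” the NSF work targets.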

“Maybe it’s because I’m French”

When I pointed out the rather small budgets involved with SBIR grants and how his company couldn’t possibly survive on these, he laughed.

“Oh, we’re not running on grants! This isn’t our business. For me, this is a way to work with scientists, with the best labs in the world,” he said, while noting many more grant projects were in the offing. “Science for me is a fuel. The business model is very simple – a service that you subscribe to, like Docusign or Dropbox.”

The company is only just now beginning its real business operations, having made a few connections with integration partners and testers. But over the next year it will expand its private beta and eventually open it up — though there’s no timeline on that just yet.

“We’re very young. A year ago we were like five, six people, now we went and got this $10M seed round and boom,” said Paoli. But he’s certain that this is a business that will be not just lucrative but will represent an important change in how companies work.

“People love documents. Maybe it’s because I’m French,” he said, “but I think text and books and writing are critical — that’s just how humans work. We really think people can help machines think better, and machines can help people think better.”

12 Apr 2021

How to choose and deploy industry-specific AI models

As artificial intelligence becomes more advanced, previously cutting-edge — but generic — AI models are becoming commonplace, such as Google Cloud’s Vision AI or Amazon Rekognition.

While effective in some use cases, these solutions do not suit industry-specific needs right out of the box. Organizations that seek the most accurate results from their AI projects will simply have to turn to industry-specific models.

Any team looking to expand its AI capabilities should first apply its data and use cases to a generic model and assess the results.

There are a few ways that companies can generate industry-specific results. One would be to adopt a hybrid approach — taking an open-source generic AI model and training it further to align with the business’ specific needs. Companies could also look to third-party vendors, such as IBM or C3, and access a complete solution right off the shelf. Or — if they really needed to — data science teams could build their own models in-house, from scratch.

Let’s dive into each of these approaches and how businesses can decide which one works for their distinct circumstances.

Generic models alone often don’t cut it

Generic AI models like Vision AI or Rekognition and open-source ones from TensorFlow or Scikit-learn often fail to produce sufficient results when it comes to niche use cases in industries like finance or the energy sector. Many businesses have unique needs, and models that don’t have the contextual data of a certain industry will not be able to provide relevant results.

Building on top of open-source models

At ThirdEye Data, we recently worked with a utility company to tag and detect defects in electric poles by using AI to analyze thousands of images. We started off using Google Vision API and found that it was unable to produce our desired results: the precision and recall values of the models were completely unusable. The models were unable to read the characters within the tags on the electric poles 90% of the time because they couldn’t handle the nonstandard font and varying background colors used in the tags.
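For readers unfamiliar with the metrics: precision is the share of a model’s detections that are correct, and recall is the share of the actual targets it finds. The numbers below are illustrative, not ThirdEye’s, but they show how a model that misses 90% of tags produces unusable values.

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Precision: share of detections that are correct.
    Recall: share of actual targets that were detected."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Illustrative only: of 100 real tags, the model reads 10 correctly
# and raises 5 false alarms along the way.
p, r = precision_recall(true_positives=10, false_positives=5, false_negatives=90)
print(round(p, 3), round(r, 3))  # 0.667 0.1
```

A recall of 0.1 means nine out of ten tags go unread, which is why the generic model was a non-starter here.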

So, we took base computer vision models from TensorFlow and optimized them to the utility company’s precise needs. After two months of developing AI models to detect and decipher tags on the electric poles, and another two months of training these models, the results are displaying accuracy levels of over 90%. These will continue to improve over time with retraining iterations.

Any team looking to expand its AI capabilities should first apply its data and use cases to a generic model and assess the results. Open-source algorithms that companies can start off with can be found on AI and ML frameworks like TensorFlow, Scikit-learn or Microsoft Cognitive Toolkit. At ThirdEye Data, we used convolutional neural network (CNN) algorithms on TensorFlow.

Then, if the results are insufficient, the team can extend the algorithm by training it further on their own industry-specific data.

12 Apr 2021

GV partner Terri Burns is joining us to judge the Startup Battlefield

One of the best parts of TechCrunch Disrupt is the Startup Battlefield competition, and one of the most important pieces of the Startup Battlefield is our lineup of expert judges — they’re the ones the founders are trying to impress. Once the demos and presentations are done, the judges need to think quickly and ask probing questions about each startup. And then, of course, they choose the winner who gets to take home $100k and the Disrupt Cup.

This year, at our second virtual Startup Battlefield, GV partner Terri Burns will be joining us as one of our judges. Burns joined the firm (formerly known as Google Ventures) in 2017 as a principal, then was promoted to partner last year — making her the first Black female partner at GV, and its youngest partner as well.

Burns previously worked as a developer evangelist and front-end engineer at Venmo and an associate product manager at Twitter. At GV, she’s invested in high school social app HAGS and social audio app Locker Room.

During an interview about her role last fall, Burns told us she’s interested in backing Gen Z founders, and she pointed to HAGS as a good example of a product that was “built by and for Gen Z.”

“That generation is coming to an age where they are building and they are creating and they are at the forefront of the cultural landscape,” she said. “So to find founders and builders and engineers, and designers who are part of that generation and building for their own demographic, I think it’s just a new wave of entrepreneurship and builders that are coming into technology and in Silicon Valley.”

Disrupt 2021 runs September 21-23 and will be 100% virtual this year. Get your pass to attend with the rest of the TechCrunch community for less than $100 if you secure your seat before next month. Applications to compete in the Startup Battlefield are also open now until May 13.

12 Apr 2021

UiPath’s first IPO pricing could be a warning to late-stage investors

A few months back, robotic process automation (RPA) unicorn UiPath raised a huge $750 million round at a valuation of around $35 billion. The capital came ahead of the company’s expected IPO, so its then-new valuation helped provide a measuring stick for where its eventual flotation could price.

UiPath then filed to go public. But today the company released its first IPO price range, and it fails to value the company where its final private backers expected.

In an S-1/A filing, UiPath disclosed that it expects its IPO to price between $43 and $50 per share. Using a simple share count of 516,545,035, the company would be worth $22.2 billion to $25.8 billion at the lower and upper extremes of its expected price interval. Neither of those numbers is close to what it was worth, in theory, just a few months ago.
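The simple valuation figures above follow directly from multiplying the share count by each end of the expected price range. A quick sketch of that arithmetic:

```python
# Simple share count from UiPath's S-1/A filing.
SHARES = 516_545_035


def simple_valuation(price_per_share: float) -> float:
    """Implied market cap, in billions of dollars, at a given IPO price."""
    return SHARES * price_per_share / 1e9


low, high = simple_valuation(43), simple_valuation(50)
print(f"${low:.1f}B to ${high:.1f}B")  # $22.2B to $25.8B
```

Either end of that range sits well below the roughly $35 billion valuation from the company’s final private round.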

According to IPO watching group Renaissance Capital, UiPath is worth up to $26.0 billion on a fully diluted basis. That’s not much more than its simple valuation.

For UiPath, its initial IPO price interval is a disappointment, though the company could see an upward revision in its valuation before it does sell shares and begin to trade. But more to the point, the company’s private-market valuation bump followed by a quick public-market correction stands out as a counter-example to something that we’ve seen so frequently in recent months.

Is UiPath’s first IPO price interval another indicator that the IPO market is cooling?

Remember Roblox?

If you think back to the end of 2020, Roblox decided to cancel its IPO and pursue a direct listing instead. Why? Because a few companies like Airbnb had gone public at what appeared to be strong valuation marks only to see their values rocket once they began to trade. So, Roblox decided to raise a huge amount of private capital, and then direct list.

12 Apr 2021

Biden’s cybersecurity dream team takes shape

President Biden has named two former National Security Agency veterans to senior government cybersecurity positions, including the first national cyber director.

The appointments, announced Monday, land after the discovery of two cyberattacks linked to foreign governments earlier this year — the Russian espionage campaign that planted backdoors in software made by U.S. technology company SolarWinds to hack into at least nine federal agencies, and the mass exploitation of Microsoft Exchange servers linked to hackers backed by China.

Jen Easterly, a former NSA official under the Obama administration who helped to launch U.S. Cyber Command, has been nominated as the new head of CISA, the cybersecurity advisory unit housed under Homeland Security. CISA has been without a head for six months, since then-President Trump fired director Chris Krebs — whom Trump had appointed to lead the agency in 2018 — for disputing Trump’s false claims of election hacking.

Biden has also named former NSA deputy director John “Chris” Inglis as national cyber director, a new position created by Congress late last year to be housed in the White House, charged with overseeing the defense and cybersecurity budgets of civilian agencies.

Inglis is expected to work closely with Anne Neuberger, who in January was appointed as the deputy national security adviser for cyber on the National Security Council. Neuberger, a former NSA executive and its first director of cybersecurity, was tasked with leading the government’s response to the SolarWinds attack and Exchange hacks.

Biden has also nominated Rob Silvers, a former Obama-era assistant secretary for cybersecurity policy, to serve as undersecretary for strategy, policy, and plans at Homeland Security. Silvers was recently floated for the top job at CISA.

Both Easterly’s and Silvers’ positions are subject to Senate confirmation. The appointments were first reported by The Washington Post.

Former CISA director Krebs praised the appointments as “brilliant picks.” Dmitri Alperovitch, a former CrowdStrike executive and chair of Silverado Policy Accelerator, called the appointments the “cyber equivalent of the dream team.” In a tweet, Alperovitch said: “The administration could not have picked three more capable and experienced people to run cyber operations, policy and strategy alongside Anne Neuberger.”

Neuberger was succeeded at the NSA by Rob Joyce, a former White House cybersecurity czar, who returned earlier this year from a stint at the U.S. Embassy in London to serve as the agency’s new cybersecurity director.

Last week, the White House asked Congress for $110 million in new funding for next year to help Homeland Security improve its defenses and hire more cybersecurity talent. CISA hemorrhaged senior staff last year after several executives were fired by the Trump administration or left for the private sector.