Author: azeeadmin

22 Apr 2021

To sell or not to sell: Lessons from a bootstrapped CEO

The clock begins ticking on a startup the day the doors open. Regardless of a young company’s struggles or success, sooner or later the question of when, how or whether to sell the enterprise presents itself. It’s possibly the biggest question an entrepreneur will face.

For founders who self-funded (bootstrapped) their startup, a boardroom full of additional factors comes into play. Some are the same as for investor-funded firms, but many are unique.

Put happiness at the center of the decision, and let your intuition — the instincts that made you the person you are today — be your guide.

After 18 years of bootstrapping a BI software firm into a business that now serves 28,000 companies and three million users in 75 countries, here’s what I’ve learned about myself, my company, entrepreneurship and when to grab for that brass ring.

Profitable or bust

Starting a software company 7,900 miles southwest of Silicon Valley requires some forethought and not a small amount of crazy. When we opened, it didn’t occur to us that one could have an idea and then go knock on someone’s door and ask for money.

Bootstrapping forced us to be a bit more creative about how we would go about building our company. In the early days, it was a distraction from growth, because we were doing other revenue-generating activities like consulting, development work and whatever else we could find to keep ourselves afloat while we built Yellowfin. It meant we couldn’t be 100% focused on our idea.

However, it also meant we had to generate income from our new company from Day One — something funded companies don’t have to do. We never got into the mindset that it was okay to burn lots of cash and then cross our fingers and hope that it worked.

22 Apr 2021

Forerunner’s Eurie Kim will share why she invested in Oura on Extra Crunch Live

When it comes to building a successful startup, biotech and hardware happen to be two of the most difficult verticals in the tech industry. But Oura is doing it anyway.

The health and fitness tracking ring has been used in a number of studies around COVID-19 and been worn by NBA and WNBA players to help prevent outbreaks in the league. Oura has raised nearly $50 million from investors including Lifeline Ventures, Bold Capital Partners and Forerunner.

So it should come as no surprise that we’re thrilled to have Forerunner’s Eurie Kim and Oura CEO Harpreet Rai join us on a forthcoming episode of Extra Crunch Live.

Kim is herself a former entrepreneur and joined Forerunner in 2012. She sits on the boards of The Farmer’s Dog, Curology, Attabotics, Oura, Eclipse and Juni, among others, and landed on the Midas Brink List in 2020.

Rai, for his part, is CEO at Oura, where he leads a team of over 150 employees. Before Oura, Rai was a portfolio manager at Eminence Capital for nine years.

On Extra Crunch Live, the duo will talk about how Oura went about raising its $28 million Series B round and why Kim took a bet on the startup. We’ll also ask about tactical advice for founders looking to fundraise and grow their businesses.

Anyone can join the live event, which goes down on April 28 at noon PDT/3 p.m. EDT. REGISTER FOR FREE HERE!

22 Apr 2021

Apple downplays complaints about App Store scams in antitrust hearing

Apple was questioned on its inability to rein in subscription scammers on its App Store during yesterday’s Senate antitrust hearing. The tech giant has argued that one of the reasons it requires developers to pay App Store commissions is to help Apple fight marketplace fraud and protect consumers. But developers claim Apple is doing very little to stop obvious scams that are now raking in millions and impacting consumer trust in the overall subscription economy, as well as in their own legitimate, subscription-based businesses.

One developer in particular, Kosta Eleftheriou, has made it his mission to highlight some of the most egregious scams on the App Store. Functioning as a one-man bunco squad, Eleftheriou regularly tweets out examples of apps that are leveraging fake reviews to promote their harmful businesses.

Some of the more notable scams he’s uncovered as of late include a crypto wallet app that scammed a user out of his life savings (~$600,000) in bitcoin; a kids game that actually contained a hidden online casino; and a VPN app scamming users out of $5 million per year. And, of course, there’s the scam that lit the fire in the first place: a competitor to Eleftheriou’s own Apple Watch app that he alleges scammed users out of $2 million per year, after stealing his marketing materials, cloning his app and buying fake reviews to make the scammer’s app look like the better choice.

Eleftheriou’s tweets have caught the attention of the larger app developer community, who now email him other examples of scams they’ve uncovered. Eleftheriou more recently took his crusade a step further by filing a lawsuit against Apple over the revenue he’s lost to App Store scammers.

Though Eleftheriou wasn’t name-checked in yesterday’s antitrust hearing, his work certainly was.

In a line of questioning from Georgia’s Senator Jon Ossoff, Apple’s Chief Compliance Officer Kyle Andeer was asked why Apple was not able to locate scams, given that these fraudulent apps are, as Ossoff put it, “trivially easy to identify as scams.”

He asked why Apple has to rely on “open source reporting and journalists” to find the app scams — a reference that likely, at least in part, referred to Eleftheriou’s recent activities.

Eleftheriou himself has said there’s not much to his efforts. You simply find the apps generating the most revenue and then check them for suspicious user reviews and high subscription prices. When you find both, you’ve probably uncovered a scam.
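As a purely illustrative sketch of that heuristic (the data fields, thresholds and app names below are hypothetical, not drawn from any real App Store listing), the check might look something like this in Python:

```python
# Rough sketch of the heuristic; all app data and thresholds here are hypothetical.
SUSPICIOUS_WEEKLY_PRICE = 8.0  # USD; an arbitrary "unusually expensive" cutoff

def looks_like_scam(app):
    """Flag apps that pair a pricey subscription with a polarized rating histogram,
    a common signature of bought 5-star reviews on top of real 1-star complaints."""
    pricey = app["weekly_price_usd"] >= SUSPICIOUS_WEEKLY_PRICE
    r = app["ratings"]  # review counts keyed by star rating
    polarized = r[5] > 5 * (r[2] + r[3] + r[4]) and r[1] > (r[2] + r[3] + r[4])
    return pricey and polarized

# Hypothetical entries standing in for a scraped top-grossing chart.
top_grossing = [
    {"name": "KeyboardPro", "weekly_price_usd": 9.99,
     "ratings": {1: 800, 2: 20, 3: 25, 4: 60, 5: 9000}},
    {"name": "HonestNotes", "weekly_price_usd": 1.99,
     "ratings": {1: 50, 2: 120, 3: 400, 4: 900, 5: 1500}},
]
print([a["name"] for a in top_grossing if looks_like_scam(a)])  # -> ['KeyboardPro']
```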

Andeer demurred, responding to Ossoff’s questions by saying that Apple has invested “tens of millions, hundreds of millions of dollars” in hardening and improving the security of its App Store.

“Unfortunately, security and fraud is a cat and mouse game. Any retailer will tell you that. And so we’re constantly working to improve,” Andeer said. He also claimed Apple was investing in more resources and technologies to catch wrongdoers, and noted that the App Store rejected thousands of apps every year for posing a risk to consumers.

The exec then warned that if Apple wasn’t the intermediary, the problem would be even worse.

“…No one is perfect, but I think what we’ve shown over and over again that we do a better job than others. I think the real risks of opening up the iPhone to sideloading or third-party app stores is that this problem will only multiply. If we look at other app stores out there, we look at other distribution platforms, it scares us.”

Ossoff pressed on, noting the sideloading questions could wait, and inquired again about the scam apps.

“Apple is making a cut on those abusive billing practices, are you not?,” he asked.

Andeer said he didn’t believe that was the case.

“If we find fraud — if we find a problem, we’re able to rectify that very quickly. And we do each and every day,” he said.

But to what extent Apple was profiting from the App Store scams was less clear. Ossoff wanted to know if Apple refunded “all” of its revenues derived from the scam billing practices — in other words, if every customer who ever subscribed got their money back when a scam was identified.

Andeer’s answer was a little vague, however, as it could be interpreted to mean Apple refunds customers who report the scam or file a complaint — procedures it already has in place today. Instead of saying that Apple refunds “all customers” when scams are identified, he carefully worded his response to say Apple worked to make sure “the customer” is made whole.

“Senator, that’s my understanding. There’s obviously a dedicated team here at Apple who works this each and every day. But my understanding is that we work hard to make sure the customer is in a whole position. That’s our focus at the end of the day. If we lose the trust of our customers, that’s going to hurt us,” he said.

For what it’s worth, Eleftheriou wasn’t buying it.

“Apple’s non-answers to Senator Ossoff’s great questions in yesterday’s hearing should anger all of us. They did not offer any explanation for why it’s so easy for people like me to keep finding multi-million-dollar scams that have been going on unchecked on the App Store for years. They also gave no clear answer to whether they’re responsible for fraudulent activity in their store,” he told TechCrunch.

“Apple appears to profit from these scams, instead of refunding all associated revenues back to affected users when they belatedly take some of these down. We’ve been letting Apple grade their own homework for over a decade. I urge the committee to get to the bottom of these questions, including Apple’s baffling decision years ago to remove the ability for users to flag suspicious apps on the App Store,” Eleftheriou added.

Apple did not provide a comment.

22 Apr 2021

5 emerging use cases for productivity infrastructure in 2021

When the world flipped upside down last year, nearly every company in every industry was forced to implement a remote workforce in just a matter of days — they had to scramble to ensure employees had the right tools in place and customers felt little to no impact. While companies initially adopted solutions for employee safety, rapid response and short-term air cover, they are now shifting their focus to long-term, strategic investments that empower growth and streamline operations.

As a result, categories that make up productivity infrastructure — cloud communications services, API platforms, low-code development tools, business process automation and AI software development kits — grew exponentially in 2020. This growth was boosted by an increasing number of companies prioritizing tools that support communication, collaboration, transparency and a seamless end-to-end workflow.

Productivity infrastructure is on the rise and will continue to be front and center as companies evaluate what their future of work entails and how to maintain productivity, rapid software development and innovation with distributed teams.

According to McKinsey & Company, the pandemic accelerated the share of digitally enabled products by seven years, and “the digitization of customer and supply-chain interactions and of internal operations by three to four years.” As demand continues to grow, companies are taking advantage of the benefits productivity infrastructure brings to their organization both internally and externally, especially as many determine the future of their work.

Automate workflows and mitigate risk

Developers rely on platforms throughout the software development process to connect data, process it, increase their go-to-market velocity and stay ahead of the competition with new and existing products. They have enormous amounts of end-user data on hand, and productivity infrastructure can remove barriers to access, integrate and leverage this data to automate the workflow.

Access to rich interaction data combined with pre-trained ML models, automated workflows and configurable front-end components enables developers to drastically shorten development cycles. Through enhanced data protection and compliance, productivity infrastructure safeguards critical data and mitigates risk while reducing time to ROI.

As the post-pandemic workplace begins to take shape, how can productivity infrastructure support enterprises where they are now and where they need to go next?

22 Apr 2021

At Basis Set Ventures, merging venture capital and software development yields a new $165 million fund

When Xuezhao Lan first formed Basis Set Ventures, the goal was to leverage technology to give venture capital investing superpowers.

From the earliest days, when Lan hired former TechCrunch reporter John Mannes and then built out the team with data scientist Rachel Wong, former Upfront Ventures partner Chang Xu and former vice president of growth Sheila Vashee, the focus was as much on technology products as it was on dollar investments and advisory services.

Together with Niniane Wang, a former advisor who now serves as the firm’s chief technology officer, Basis Set has grounded its decision-making in the team’s technical bona fides. And product development isn’t something the firm simply pays lip service to: Basis Set ships about three different updates to a massive suite of internal and external products every week, according to the partnership.

That product development is one reason why the firm has managed to stay relatively lean and why it was two times oversubscribed when it went out to raise its second fund, a $165 million vehicle, which closed recently.

The fund isn’t that much larger than the $136 million Basis Set had previously raised, but Lan thinks it’s the right size for her goals, which are to return massive amounts of capital to her limited partners.

Internal technology programs like the company’s Pascal system have allowed Basis Set to review roughly 9,000 software deals that fit the firm’s core thesis: artificial intelligence-enabled software-as-a-service and business-to-business companies.


Over the next year, those technology services are going to start paying dividends, according to Lan, in the form of a couple of initial public offerings that will soon make their way to market.

And Basis Set doesn’t limit itself geographically, thanks to the coverage support that its software provides. “Pascal is a major asset for finding companies outside of the Bay Area,” said Lan.

Once those companies are identified and in the portfolio, the startups have access to a tool called HyperGrowth, which links tech companies with mentors from the Bay Area to help those companies scale. It’s another example of Basis Set’s product-driven approach, said Lan.

“When I started BSV, we’re a pretty technical team and technical people. Every single person came back to us to ask how to grow, how to do sales, how to do tech, how to make the first hires, introductions to customers, introductions to advisors. The number one need for companies is go to market,” said Lan. “Over time we started scaling that efforts. Introductions manually, and then holding events, and then bringing in a growth partner, Sheila. We built this tool where hundreds of advisors would opt in to be connected to a portfolio company.”

Currently Basis Set has at least three different programs aimed at recruiting and nurturing talent that typical Bay Area firms haven’t traditionally focused on. The first is its Persistence platform, which is designed to help women developers and founders network, nurture connections and foster ideas. The firm also has a service it calls Founder Superpowers to help entrepreneurs identify and develop areas of strength while looking for additional tools to augment their capabilities.

“We had a data science team very early and we already started automating a bunch of things. These are the ones that made it to the end of the finish line. Because of the way we operate we are constantly iterating,” said Lan.

Another key factor for the company is trying to find a more diverse set of founders with different backgrounds from the typical Silicon Valley biography, Mannes said.

“We talk a lot about how to find founders outside of the Bay Area… the entrepreneurs who aren’t hanging out at Blue Bottle 24-7,” said Mannes. The Persistence platform, the Founder Superpowers tool and the work that Basis Set does with Dev Color, an organization designed to support under-represented members of the tech community, are all manifestations of that, Mannes and Lan both said.


Companies in the firm’s current portfolio include: Ergeon, which helps simplify the process of staffing for the construction industry; the autonomous weed picking and crop dusting agtech startup, Farmwise; and Quince, which Basis Set has dubbed the anti-Amazon for its factory direct sales model. 

Meanwhile, companies like the conversational machine learning company, Rasa; the privacy and compliance automation toolkit developer, DataGrail; the workflow automation software developer for deskless workers, Workstream; and Assembled, which provides tech infrastructure support for teams, have all raised significant follow-on funding.

All of these investments are undergirded by the technical team’s work and the collaboration with investors, the firm stressed.

“We’re in lockstep about what we need to build and we’re literally shipping every day,” said Vashee. “That’s what best in class looks like and that’s what we try to achieve… It’s the Iron Man suit. That’s what we’re looking to build with products here. Our goal is to automate every part of the process that we can while building the empathy with our founders.”

22 Apr 2021

Fraud prevention platform Sift raises $50M at over $1B valuation, eyes acquisitions

With the increase of digital transacting over the past year, cybercriminals have been having a field day.

In 2020, complaints of suspected internet crime surged by 61%, to 791,790, according to the FBI’s 2020 Internet Crime Report. Those crimes — ranging from personal and corporate data breaches to credit card fraud, phishing and identity theft — cost victims more than $4.2 billion.

For companies like Sift — which aims to predict and prevent fraud online even more quickly than cybercriminals adopt new tactics — that increase in crime also led to an increase in business.

Last year, the San Francisco-based company assessed risk on more than $250 billion in transactions, double what it did in 2019. The company has several hundred customers, including Twitter, Airbnb, Twilio, DoorDash, Wayfair and McDonald’s, as well as a global data network of 70 billion events per month.

To meet the surge in demand, Sift said today it has raised $50 million in a funding round that values the company at over $1 billion. Insight Partners led the financing, which included participation from Union Square Ventures and Stripes.

While the company would not reveal hard revenue figures, President and CEO Marc Olesen said that business has tripled since he joined the company in June 2018. Sift was founded out of Y Combinator in 2011, and has raised a total of $157 million over its lifetime.

The company’s “Digital Trust & Safety” platform aims to help merchants not only fight all types of internet fraud and abuse, but to also “reduce friction” for legitimate customers. There’s a fine line apparently between looking out for a merchant and upsetting a customer who is legitimately trying to conduct a transaction.

Sift uses machine learning and artificial intelligence to automatically surmise whether an attempted transaction or interaction with a business online is authentic or potentially problematic.
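For readers curious what that looks like in practice, here’s a toy sketch of ML-based risk scoring in Python (purely illustrative, with made-up features and data, and not a representation of Sift’s actual models):

```python
# Toy illustration of ML-based transaction risk scoring; not Sift's actual system.
from sklearn.linear_model import LogisticRegression

# Made-up features per order: [value in $100s, account age in years, billing-country mismatch]
X_train = [[1, 3, 0], [9, 0, 1], [2, 5, 0], [8, 0, 1], [3, 2, 0], [7, 0, 1]]
y_train = [0, 1, 0, 1, 0, 1]  # 1 = confirmed fraud, 0 = legitimate

model = LogisticRegression().fit(X_train, y_train)

new_order = [[6, 0, 1]]  # large order, brand-new account, mismatched billing country
risk = model.predict_proba(new_order)[0][1]
print(f"fraud risk score: {risk:.2f}")  # a merchant might block or review above some threshold
```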


One of the things the company has discovered is that fraudsters are often not working alone.

“Fraud vectors are no longer siloed. They are highly innovative and often working in concert,” Olesen said. “We’ve uncovered a number of fraud rings.”

Olesen shared a couple of examples of how the company thwarted fraud incidents last year. One recent example involved money laundering in which fraudsters tested stolen debit and credit cards through fake donation sites at guest checkout.

“By making small donations to themselves, they laundered that money and at the same time tested the validity of the stolen cards so they could use it on another site with significantly higher purchases,” he said.

In another case, the company uncovered fraudsters using Telegram, a social media site, to make services available, such as food delivery, with stolen credentials.

The data that Sift has accumulated since its inception helps the company “act as the central nervous system for fraud teams.” Sift says that its models become more intelligent with every customer that it integrates.

Insight Partners Managing Director Jeff Lieberman, who is a Sift board member, said his firm initially invested in Sift in 2016 because even at that time, it was clear that online fraud was “rapidly growing.” It was growing not just in dollar amounts, he said, but in the number of methods cybercriminals used to steal from consumers and businesses.

“Sift has a novel approach to fighting fraud that combines massive data sets with machine learning, and it has a track record of proving its value for hundreds of online businesses,” he wrote via email.

When Olesen and the Sift team started the recent process of fundraising, Insight actually approached them before they started talking to outside investors “because both the product and business fundamentals are so strong, and the growth opportunity is massive,” Lieberman added.

“With more businesses heavily investing in online channels, nearly every one of them needs a solution that can intelligently weed out fraud while ensuring a seamless experience for the 99% of transactions or actions that are legitimate,” he wrote. 

The company plans to use its new capital primarily to expand its product portfolio and to scale its product, engineering and sales teams.

Sift also recently tapped Eu-Gene Sung — who has worked in financial leadership roles at Integral Ad Science, BSE Global and McCann — to serve as its CFO.

As to whether that means an IPO is in Sift’s future, Olesen said that Sung’s experience taking companies through a growth phase like the one Sift is experiencing would be valuable. The company is also for the first time looking to potentially do some M&A.

“When we think about expanding our portfolio, it’s really a buy/build partner approach,” Olesen said.

22 Apr 2021

Google Fi turns 6 and gets a new unlimited plan

Google Fi, Google’s cell network, is turning six today and to celebrate, the team is launching a new pricing plan, dubbed ‘Simply Unlimited’ starting at $60 per month for a single line (down to $30 per line for 3 lines or more). The new plan features unlimited calls and texts in the U.S., plus unlimited data and texting in the U.S., Canada and Mexico.


You may recall that Fi’s original promise was a single, affordable pay-as-you-go plan where you would pay a fixed price per month for the basic call and texting service and then pay an extra $10 per GB of data you used per billing cycle, capped at $80 per month. In 2019, Google then turned this into what is essentially an unlimited plan, dubbed Fi Unlimited, starting at $70 per month for a single line, with discounts for additional lines.

The new ‘Simply Unlimited’ plan is a pared-down version of the original Unlimited plan, which is now called the Unlimited Plus plan (yeah, that’s a lot of names). That plan still has a lot of extra features that power users likely won’t be willing to give up for a slightly lower price. In addition to everything in the new Simply Unlimited plan, it still features free international calls to more than 50 countries and international data in more than 200 destinations, plus full-speed hotspot tethering and 100GB of Google One cloud storage.

The Flexible plan is also still an option, with its base fee of $20 per month for texting and calling for a single line (down to $17 per month for three lines) and $10 per GB of data, no matter whether you use it abroad or at home — or for hotspot tethering. Google says that’s the plan to choose if you’re mostly on WiFi — as most of us are right now.
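To make the comparison concrete, here’s a quick back-of-the-envelope calculation using only the single-line prices quoted above (taxes, fees and any data-cost caps Google may apply are ignored):

```python
# Single-line prices from the plans described above, ignoring taxes and fees.
def flexible_cost(gb_used):
    """Flexible plan: $20 base for calls/texts plus $10 per GB of data used."""
    return 20 + 10 * gb_used

SIMPLY_UNLIMITED = 60  # flat monthly price for one line

for gb in range(1, 6):
    f = flexible_cost(gb)
    verdict = ("Flexible wins" if f < SIMPLY_UNLIMITED
               else "tie" if f == SIMPLY_UNLIMITED
               else "Simply Unlimited wins")
    print(f"{gb} GB: ${f} vs ${SIMPLY_UNLIMITED} -> {verdict}")
# Flexible stays cheaper up to about 4 GB a month, where the two plans cost the same.
```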

Basically, if you’re not planning to use your phone outside of North America, the new Simply Unlimited plan looks like a good deal that, depending on your use case, compares favorably with similarly priced plans from other carriers. If international data is important to you, though, the pricier Unlimited Plus plan remains the one to pick.


22 Apr 2021

Alexa von Tobel will join Disrupt 2021 as a Startup Battlefield judge

Alexa von Tobel, co-founder and managing partner of Inspired Capital, will be joining TechCrunch Disrupt 2021, taking place September 21-23, to help judge the startups competing in Startup Battlefield. NOTE: Applications are now open, so don’t hesitate to throw your hat in the ring here!

Prior to Inspired Capital, Alexa founded LearnVest in 2008 with the goal of helping women in particular make better investments and learn financial planning. After raising $75 million in venture capital and growing the service to 1.5 million users, LearnVest was acquired by Northwestern Mutual in May 2015 for $250 million.

Following the acquisition, Alexa joined the management team of Northwestern Mutual as the company’s first chief digital officer. She later assumed the role of chief innovation officer, a position in which she oversaw Northwestern Mutual’s venture arm.

Alexa, who holds a Certified Financial Planner designation, is also The New York Times-bestselling author of “Financially Fearless,” which debuted in December 2013, and its follow-up, “Financially Forward,” which arrived in May 2019. She is also the host of “The Founders Project with Alexa von Tobel,” a weekly podcast with Inc. that highlights entrepreneurs.

Alexa is a member of the 2016 Class of Henry Crown Fellows and an inaugural member of President Obama’s Ambassadors for Global Entrepreneurship. She has been honored with numerous recognitions, including: a Forbes Magazine cover story, Fortune’s 40 Under 40, Fortune’s Most Powerful Women, Inc. Magazine’s 30 Under 30 and World Economic Forum’s Young Global Leader.

Alexa recently joined us at TechCrunch Early Stage, where she led a breakout session on financial planning targeted specifically at startups. Join us at Disrupt this September and get your ticket for under $100 for a limited time!

22 Apr 2021

Proctorio sued for using DMCA to take down a student’s critical tweets

A university student is suing exam proctoring software maker Proctorio to “quash a campaign of harassment” against critics of the company, including an accusation that the company misused copyright laws to remove his tweets that were critical of the software.

The Electronic Frontier Foundation, which filed the lawsuit this week on behalf of Miami University student Erik Johnson, who also does security research on the side, accused Proctorio of having “exploited the DMCA to undermine Johnson’s commentary.”

Twitter hid three of Johnson’s tweets after Proctorio filed a copyright takedown notice under the Digital Millennium Copyright Act, or DMCA, alleging that the tweets violated the company’s copyright.

Schools and universities have increasingly leaned on proctoring software during the pandemic to invigilate student exams, albeit virtually. Students must install their school’s choice of proctoring software, which grants access to the student’s microphone and webcam to spot potential cheating. But students of color have complained that the software fails to recognize non-white faces, and the software also requires high-speed internet access, which many low-income households don’t have. If a student fails these checks, the student can end up failing the exam.

Despite this, Vice reported last month that some students are easily cheating on exams that are monitored by Proctorio. Several schools have banned or discontinued using Proctorio and other proctoring software, citing privacy concerns.

Proctorio’s monitoring software is a Chrome extension, which, unlike most desktop software, can be easily downloaded and its source code examined for bugs and flaws. Johnson examined the code and tweeted what he found — including under what circumstances a student’s test would be terminated if the software detected signs of potential cheating, and how the software monitors for suspicious eye movements and abnormal mouse clicking.
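To give a sense of how low the bar for that kind of inspection is, here’s a minimal sketch in Python; the extension directory (a typical Linux Chrome profile path) and the keywords are placeholders, not anything taken from Proctorio’s actual code:

```python
# Minimal sketch (not Johnson's actual workflow): Chrome keeps installed extensions
# unpacked inside the browser profile, so their JavaScript can be read and searched
# like any other text. Path is a typical Linux location; keywords are placeholders.
from pathlib import Path

EXTENSIONS_DIR = Path.home() / ".config/google-chrome/Default/Extensions"
KEYWORDS = ["webcam", "eye", "terminate", "mousemove"]

for js_file in EXTENSIONS_DIR.rglob("*.js"):
    text = js_file.read_text(errors="ignore")
    hits = [kw for kw in KEYWORDS if kw in text]
    if hits:
        print(f"{js_file}: {', '.join(hits)}")
```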

Johnson’s tweets also contained links to snippets of the Chrome extension’s source code on Pastebin.

Proctorio claimed at the time, via its crisis communications firm Edelman, that Johnson violated the company’s rights “by copying and posting extracts from Proctorio’s software code on his Twitter account.” But Twitter reinstated Johnson’s tweets after finding Proctorio’s takedown notice “incomplete.”

“Software companies don’t get to abuse copyright law to undermine their critics,” said Cara Gagliano, a staff attorney at the EFF. “Using pieces [of] code to explain your research or support critical commentary is no different from quoting a book in a book review.”

The complaint argues that Proctorio’s “pattern of baseless DMCA notices” had a chilling effect on Johnson’s security research work, amid fears that “reporting on his findings will elicit more harassment.”

“Copyright holders should be held liable when they falsely accuse their critics of copyright infringement, especially when the goal is plainly to intimidate and undermine them,” said Gagliano. “We’re asking the court for a declaratory judgment that there is no infringement to prevent further legal threats and takedown attempts against Johnson for using code excerpts and screenshots to support his comments.”

The EFF alleges that this is part of a wider pattern Proctorio uses to respond to criticism. Last year, Proctorio CEO Mike Olsen posted a student’s private chat logs on Reddit without their permission, later setting his Twitter account to private following the incident. Proctorio is also suing Ian Linkletter, a learning technology specialist at the University of British Columbia, after he posted tweets critical of the company’s proctoring software.

The lawsuit is filed in Arizona, where Proctorio is headquartered. Olsen did not respond to a request for comment.

22 Apr 2021

To ensure inclusivity, the Biden administration must double down on AI development initiatives

The National Security Commission on Artificial Intelligence (NSCAI) issued a report last month delivering an uncomfortable public message: America is not prepared to defend or compete in the AI era. It leads to two key questions that demand our immediate response: Will the U.S. continue to be a global superpower if it falls behind in AI development and deployment? And what can we do to change this trajectory?

Left unchecked, seemingly neutral artificial intelligence (AI) tools can and will perpetuate inequalities and, in effect, automate discrimination. Tech-enabled harms have already surfaced in credit decisions, health care services, and advertising.

To prevent this recurrence and growth at scale, the Biden administration must clarify current laws pertaining to AI and machine learning models — both in terms of how we will evaluate use by private actors and how we will govern AI usage within our government systems.

The administration has put a strong foot forward, from key appointments in the tech space to issuing an Executive Order on its first day in office that established an Equitable Data Working Group. This has comforted skeptics concerned both about the U.S. commitment to AI development and about ensuring equity in the digital space.

But that will be fleeting unless the administration shows strong resolve in making AI funding a reality and establishing leaders and structures necessary to safeguard its development and use.

Need for clarity on priorities

There has been a seismic shift at the federal level in AI policy and in stated commitments to equality in tech. A number of high profile appointments by the Biden administration — from Dr. Alondra Nelson as Deputy of OSTP, to Tim Wu at the NEC, to (our former senior advisor) Kurt Campbell at the NSC — signal that significant attention will be paid to inclusive AI development by experts on the inside.

The NSCAI final report includes recommendations that could prove critical to enabling better foundations for inclusive AI development, such as creating new talent pipelines through a U.S. Digital Service Academy to train current and future employees.

The report also recommends establishing a new Technology Competitiveness Council led by the Vice President. This could prove essential in ensuring that the nation’s commitment to AI leadership remains a priority at the highest levels. It makes good sense to have the administration’s leadership on AI spearheaded by VP Harris in light of her strategic partnership with the President, her tech policy savvy and her focus on civil rights.

The U.S. needs to lead by example

We know AI is powerful in its ability to create efficiencies, such as plowing through thousands of resumes to identify potentially suitable candidates. But it can also scale discrimination, such as the Amazon hiring tool that prioritized male candidates or “digital redlining” of credit based on race.

The Biden administration should issue an Executive Order (EO) to agencies inviting ideation on ways AI can improve government operations. The EO should also mandate checks on AI used by the USG to ensure it’s not spreading discriminatory outcomes unintentionally.

For instance, there must be a routine schedule in place where AI systems are evaluated to ensure embedded, harmful biases are not resulting in recommendations that are discriminatory or inconsistent with our democratic, inclusive values — and reevaluated routinely given that AI is constantly iterating and learning new patterns.
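As one concrete example of the kind of check such a schedule could include, here’s a minimal sketch of the widely used “four-fifths” disparate impact ratio; it’s a single screening metric with made-up numbers, not a full audit or an official government standard:

```python
# Minimal sketch of the "four-fifths" disparate impact screen; numbers are made up.
def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes)  # share of positive decisions

def disparate_impact(protected_outcomes, reference_outcomes):
    """Ratio of selection rates; values below roughly 0.8 are a common red flag."""
    return selection_rate(protected_outcomes) / selection_rate(reference_outcomes)

# 1 = benefit granted, 0 = denied, for a protected group vs. a reference group.
protected = [1, 0, 0, 1, 0, 0, 0, 1]
reference = [1, 1, 0, 1, 1, 0, 1, 1]
print(round(disparate_impact(protected, reference), 2))  # 0.5 -> flag the system for review
```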

Putting a responsible AI governance system in place is particularly critical in the U.S. Government, which is required to offer due process protection when denying certain benefits. For instance, when AI is used to determine allocation of Medicaid benefits, and such benefits are modified or denied based on an algorithm, the government must be able to explain that outcome, aptly termed technological due process.

If decisions are delegated to automated systems without explainability, guidelines and human oversight, we find ourselves in the untenable situation where this basic constitutional right is being denied.

Likewise, the administration has immense power to ensure that AI safeguards by key corporate players are in place through its procurement power. Federal contract spending was expected to exceed $600 billion in fiscal 2020, even before including pandemic economic stimulus funds. The USG could effectuate tremendous impact by issuing a checklist for federal procurement of AI systems — this would ensure the government’s process is both rigorous and universally applied, including relevant civil rights considerations.

Protection from discrimination stemming from AI systems

The government holds another powerful lever to protect us from AI harms: its investigative and prosecutorial authority. An Executive Order instructing agencies to clarify applicability of current laws and regulations (e.g., ADA, Fair Housing, Fair Lending, Civil Rights Act, etc.) when determinations are reliant on AI-powered systems could result in a global reckoning. Companies operating in the U.S. would have unquestionable motivation to check their AI systems for harms against protected classes.

Low-income individuals are disproportionately vulnerable to many of the negative effects of AI. This is especially apparent with regard to credit and loan creation, because they are less likely to have access to traditional financial products or the ability to obtain high scores based on traditional frameworks. This then becomes the data used to create AI systems that automate such decisions.

The Consumer Financial Protection Bureau (CFPB) can play a pivotal role in holding financial institutions accountable for discriminatory lending processes that result from reliance on discriminatory AI systems. The mandate of an EO would be a forcing function for statements on how AI-enabled systems will be evaluated, putting companies on notice and better protecting the public with clear expectations on AI use.

There is a clear path to liability when an individual acts in a discriminatory way and a due process violation when a public benefit is denied arbitrarily, without explanation. Theoretically, these liabilities and rights would transfer with ease when an AI system is involved, but a review of agency action and legal precedent (or rather, the lack thereof) indicates otherwise.

The administration is off to a good start, such as rolling back a proposed HUD rule that would have made legal challenges against discriminatory AI essentially unattainable. Next, federal agencies with investigative or prosecutorial authority should clarify which AI practices would fall under their review and which current laws would be applicable — for instance, HUD for illegal housing discrimination; CFPB on AI used in credit lending; and the Department of Labor on AI used in determinations made in hiring, evaluations and terminations.

Such action would have the added benefit of establishing a useful precedent for plaintiff actions in complaints.

The Biden administration has taken encouraging first steps signaling its intent to ensure inclusive, less discriminatory AI. However, it must put its own house in order by directing federal agencies to require that the development, acquisition and use of AI — internally and by those they do business with — be done in a manner that protects privacy, civil rights, civil liberties and American values.