
05 Feb 2021

The SAFE TECH Act would overhaul Section 230, but law’s defenders warn of major side effects

The first major Section 230 reform proposal of the Biden era is out. In a new bill, Senate Democrats Mark Warner, Mazie Hirono and Amy Klobuchar propose changes to Section 230 of the Communications Decency Act that would fundamentally change the 1996 law widely credited with cultivating the modern internet.

Section 230 is a legal shield that protects internet companies from liability for the user-generated content they host, from Facebook and TikTok to Amazon reviews and comments sections. The new proposed legislation, known as the SAFE TECH Act, would do a few different things to change how that works.

First, it would fundamentally alter the core language of Section 230 — and given how concise that snippet of language is to begin with, any change is a big change. Under the new language, Section 230 would no longer offer protections in situations where payment was involved.

Here’s the current version:

“No provider or user of an interactive computer service shall be treated as
the publisher or speaker of any information provided by another
information content provider.”

And here are the changes the SAFE TECH Act would make:

No provider or user of an interactive computer service shall be treated as
the publisher or speaker of any speech provided by another
information content provider, except to the extent the provider or user has
accepted payment to make the speech available or, in whole or in part, created
or funded the creation of the speech.

(B) (c)(1)(A) shall be an affirmative defense to a claim alleging that an interactive computer service provider is a publisher or speaker with respect to speech provided by another information content provider that an interactive computer service provider has a burden of proving by a preponderance of the evidence.

That might not sound like much, but it could be a massive change. This bit of the SAFE TECH Act appears to be targeting advertising. In a tweet promoting the bill, Sen. Warner called online ads “a key vector for all manner of frauds and scams,” so homing in on platform abuses in advertising is the ostensible goal here. But under the current language, it’s possible that many other kinds of paid services could be affected, from Substack, Patreon and other kinds of premium online content to web hosting.

“A good lawyer could argue that this covers many different types of arrangements that go far beyond paid advertisements,” Jeff Kosseff, a cybersecurity law professor at the U.S. Naval Academy who authored a book about Section 230, told TechCrunch. “Platforms accept payments from a wide range of parties during the course of making speech ‘available’ to the public. The bill does not limit the exception to cases in which platforms accept payments from the speaker.”

Internet companies big and small rely on Section 230 protections to operate, but some of them might have to rethink their businesses if rules proposed in the new bill come to pass. Oregon Senator Ron Wyden, one of Section 230’s original authors, noted that the new bill has some good intentions, but he issued a strong caution against the blowback its unintended consequences could cause.

“Unfortunately, as written, it would devastate every part of the open internet, and cause massive collateral damage to online speech,” Wyden told TechCrunch, likening the bill to a full repeal of the law with added confusion from a cluster of new exceptions.

“Creating liability for all commercial relationships would cause web hosts, cloud storage providers and even paid email services to purge their networks of any controversial speech,” Wyden said.

Fight for the Future Director Evan Greer echoed the sentiment that the bill is well intentioned but shared the same concerns. “… Unfortunately this bill, as written, would have enormous unintended consequences for human rights and freedom of expression,” Greer said.

“It creates a huge carveout in Section 230 that impacts not only advertising but essentially all paid services, such as web hosting and CDNs, as well as small services like Patreon, Bandcamp, and Etsy.”

Given its focus on advertising and instances in which a company has accepted payment, the bill might be both too broad and too narrow at once to offer effective reform. While online advertising, particularly political advertising, has become a hot topic in recent discussions about cracking down on platforms, the vast majority of violent conspiracies, misinformation, and organized hate is the result of organic content, not the stuff that’s paid or promoted. It also doesn’t address the role of algorithms, a particular focus of a narrow Section 230 reform proposal in the House from Reps. Anna Eshoo (D-CA) and Tom Malinowski (D-NJ).

The other part of the SAFE TECH Act, which attracted buy-in from a number of civil rights organizations including the Anti-Defamation League, the Center for Countering Digital Hate and Color Of Change, does address some of those ills. By amending Section 230, the new bill would open internet companies to more civil liability in some cases, giving victims of cyberstalking, targeted harassment, discrimination and wrongful death the opportunity to file lawsuits against those companies rather than blocking those kinds of suits outright.

The SAFE TECH Act would also create a carve-out allowing individuals to seek court orders in cases where an internet company’s handling of material it hosts could cause “irreparable harm,” as well as allowing lawsuits in U.S. courts against American internet companies for human rights abuses abroad.

In a press release, Sen. Warner said the bill was about updating the 1996 law to bring it up to speed with modern needs:

“A law meant to encourage service providers to develop tools and policies to support effective moderation has instead conferred sweeping immunity on online providers even when they do nothing to address foreseeable, obvious and repeated misuse of their products and services to cause harm,” Warner said.

There’s no dearth of ideas about reforming Section 230. Among them: the bipartisan PACT Act from Sens. Brian Schatz (D-HI) and John Thune (R-SD), which focuses on moderation transparency and providing less cover for companies facing federal and state regulators, and the EARN IT Act, a broad bill from Sens. Lindsey Graham (R-SC) and Richard Blumenthal (D-CT) that 230 defenders and internet freedom advocates regard as unconstitutional, overly broad and disastrous.

With a number of proposed 230 reforms already floating around, it’s far from guaranteed that a bill like the SAFE TECH Act will prevail after there’s been ample time to dissect its potentially sweeping implications.

05 Feb 2021

The Founder Institute’s VC Lab is a free training program for budding venture capitalists

The Founder Institute isn’t just trying to help entrepreneurs launch new startups — with its new VC Lab, the accelerator says it’s also hoping to fuel the launch of 1,000 new venture capital funds.

FI co-founder Jonathan Greechan described this as an attempt to bring more “alignment” to the startup ecosystem.

“The thinking here is that if we can better align the startup and funding ecosystems towards impact, then we can create companies that are truly positive for humanity,” Greechan said.

In fact, FI has already held two sessions of the VC Lab, one in the spring of last year (which Greechan compared to a minimum viable product) and one in the summer-fall (which he compared to a beta test).


While many of the funds created through those sessions are still in the fundraising process and cannot be disclosed publicly yet, Greechan noted that one-third of them are being created by partners from underrepresented backgrounds and that more than half are focused on impact. He pointed to Pacer Ventures — a $3 million fund targeting startups in sub-Saharan Africa — as an example of the kind of firm VC Lab is meant to support.

“What we found was that a lot of the issues that these new fund managers have are pretty similar to the issues that new entrepreneurs have,” Greechan said. “They don’t understand the sequence of steps to actually get them off the ground.”

VC Lab is meant to help them understand those steps, with a curriculum that includes webinars, virtual office hours and conversations on Slack. Participants are also required to take The Mensarius Oath, which commits them to a number of values including endeavoring “to help create positive outcomes for all of humanity” and opposing “any abuse of power that leads to unfair advantage, seduction, corruption or mistreatment.”

The program is free, although participants can also pay $5 a month for access to “premium office hours.” The Founder Institute/VC Lab isn’t participating in the funds financially.

When I asked about the business model, Greechan said, “We’re not looking at this necessarily as the next phase of our business. We’ve been fostering this community of [people building] interesting and impactful companies, and now we’re making sure that we build a correlated community of people funding impactful companies. It’s not for charity, but there’s long-term benefits that we see for all of our grads.”

05 Feb 2021

Microsoft PAC blacklists election objectors and shifts lobbying weight towards progressive organizations

After “pausing” political giving to any politician who voted to overturn the 2020 election, Microsoft has clarified changes to its lobbying policy, doubling down on its original intention and changing gears with an eye towards funding impactful organizations.

Microsoft, along with most other major companies in the tech sector and plenty of others, announced a halt to political donations in the chaotic wake of the Capitol riots and subsequent partisan clashes over the legitimacy of the election.

At the time, Microsoft said that it often pauses donations during the transition to a new Congress, but in this case it would not resume them “until after it assesses the implications of last week’s events” and “consult[s] with employees.”

Assessing and consulting can take a long time, especially in matters of allocating cash in politics, but Microsoft seems to have accomplished its goal in relatively short order. In a series of sessions over the last two weeks involving over 300 employees who contribute to the PAC, the company arrived at a new strategy that reflects its priorities.

In short, the company is blacklisting any senator, representative, government official or organization that voted for or supported the attempt to overturn the election. Fortunately there doesn’t seem to be a lot of grey area here, which simplifies the process somewhat. This restriction will remain in place until the 2022 election — which, frighteningly, happens next year.

In fact, as an alternative to donating to individual candidates and politicians in the first place, the PAC will establish a new fund to “support organizations that promote public transparency, campaign finance reform, and voting rights.”

More details on this are forthcoming, but it’s a significant change from direct support of candidates to independent organizations. One hardly knows what a candidate’s fund goes to (Super Bowl ads, this time of year), but giving half a million bucks to a group challenging voter suppression and gerrymandering in a hotly contested district can make a big difference. (Work like this on a large scale helped tip Georgia from red to blue, for instance, and it didn’t happen overnight, or for free.)

There’s even a hint of a larger change in the offing, as Microsoft’s communications head Frank X. Shaw suggests in a blog post that “we believe there is an opportunity to learn and work together” with like-minded companies and PACs. If that isn’t a sly invitation to create a coalition of the like-minded I don’t know what is.

The company also will be changing the name of the PAC to the Microsoft Corporation Voluntary PAC to better communicate that it’s funded by voluntary contributions from employees and stakeholders and isn’t just a big corporate lobbying slush fund.

As we saw around the time of the original “pause,” and indeed with many other actions in the tech industry over the last year, it’s likely that one large company (in this case Microsoft) getting specific with its political moves will trigger more who just didn’t want to be the first to go. It’s difficult to predict exactly what the long-term ramifications of these changes will be (as they are still quite general and tentative) but it seems safe to say that the political funding landscape of the next election period will look quite a bit different from the last one.

05 Feb 2021

‘Orwellian’ AI lie detector project challenged in EU court

A legal challenge was heard today in Europe’s Court of Justice in relation to a controversial EU-funded research project using artificial intelligence for facial ‘lie detection’ with the aim of speeding up immigration checks.

The transparency lawsuit against the EU’s Research Executive Agency (REA), which oversees the bloc’s funding programs, was filed in March 2019 by Patrick Breyer, MEP of the Pirate Party Germany and a civil liberties activist — who has successfully sued the Commission before over a refusal to disclose documents.

He’s seeking the release of documents on the ethical evaluation, legal admissibility, marketing and results of the project. He’s also hoping to establish the principle that publicly funded research must comply with EU fundamental rights, and to help avoid public money being wasted on AI ‘snake oil’ in the process.

“The EU keeps having dangerous surveillance and control technology developed, and will even fund weapons research in the future. I hope for a landmark ruling that will allow public scrutiny and debate on unethical publicly funded research in the service of private profit interests,” said Breyer in a statement following today’s hearing. “With my transparency lawsuit, I want the court to rule once and for all that taxpayers, scientists, media and Members of Parliament have a right to information on publicly funded research — especially in the case of pseudoscientific and Orwellian technology such as the ‘iBorderCtrl video lie detector’.”

The court has yet to set a decision date on the case but Breyer said the judges questioned the agency “intensively and critically for over an hour” — and revealed that documents relating to the AI technology involved, which have not been publicly disclosed but had been reviewed by the judges, contain information such as “ethnic characteristics”, raising plenty of questions.

The presiding judge went on to query whether it wouldn’t be in the interests of the EU research agency to demonstrate that it has nothing to hide by publishing more information about the controversial iBorderCtrl project, per Breyer.

AI ‘lie detection’

The research in question is controversial because the notion of an accurate lie detector machine remains science fiction, and with good reason: There’s no evidence of a ‘universal psychological signal’ for deceit.

Yet this AI-fuelled commercial R&D ‘experiment’ to build a video lie detector scored over €4.5M/$5.4M in EU research funding under the bloc’s Horizon 2020 scheme. The trial entailed testers being asked to respond to questions put to them by a virtual border guard as a webcam scanned their facial expressions, with the system seeking to detect what an official EC summary of the project describes as “biomarkers of deceit” in an effort to score the truthfulness of their facial expressions (yes, really).

The iBorderCtrl project ran between September 2016 and August 2019, with the funding spread between 13 private or for-profit entities across a number of Member States (including the UK, Poland, Greece and Hungary).

Public research reports the Commission said would be published last year, per a written response to Breyer’s questions challenging the lack of transparency, do not appear to have seen the light of day yet.

Back in 2019 The Intercept was able to test out the iBorderCtrl system for itself. The video lie detector falsely accused its reporter of lying, judging that she had given four false answers out of 16 and giving her an overall score of 48. According to the report, a policeman who assessed the results said that score would have triggered a suggestion from the system that she be subject to further checks (though she was not, as the system was never run for real during border tests).

The Intercept said it had to file a data access request — a right that’s established in EU law — in order to obtain a copy of the reporter’s results. Its report quoted Ray Bull, a professor of criminal investigation at the University of Derby, who described the iBorderCtrl project as “not credible,” given the lack of evidence that monitoring microgestures on people’s faces is an accurate way to measure lying.

“They are deceiving themselves into thinking it will ever be substantially effective and they are wasting a lot of money. The technology is based on a fundamental misunderstanding of what humans do when being truthful and deceptive,” Bull also told it.

The notion that AI can automagically predict human traits if you just pump in enough data is distressingly common — just look at recent attempts to revive phrenology by applying machine learning to glean ‘personality traits’ from face shape. So a face-scanning AI ‘lie detector’ sits in a long and ignoble anti-scientific ‘tradition’.

In the 21st century it’s frankly incredible that millions of euros of public money are being funnelled into rehashing terrible old ideas — before you even consider the ethical and legal blindspots inherent in the EU funding research that runs counter to fundamental rights set out in the EU’s charter. When you consider all the bad decisions involved in letting this fly it looks head-hangingly shameful.

The granting of funds to such a dubious application of AI also appears to ignore all the (good) research that has been done showing how data-driven technologies risk scaling bias and discrimination.

We can’t know for sure, though, because only very limited information has been released about how the consortia behind iBorderCtrl assessed ethics considerations in their experimental application — which is a core part of the legal complaint.

The challenge in front of the European Court of Justice in Luxembourg poses some very awkward questions for the Commission: Should the EU be pouring taxpayer cash into pseudoscientific ‘research’? Shouldn’t it be trying to fund actual science? And why does its flagship research program — the jewel in the EU crown — have so little public oversight?

The fact that a video lie detector made it through the EU’s ‘ethics self-assessment’ process, meanwhile, suggests the claimed ‘ethics checks’ aren’t worth a second glance.

“The decision on whether to accept [an R&D] application or not is taken by the REA after Member States representatives have taken a decision. So there is no public scrutiny, there is no involvement of parliament or NGOs. There is no [independent] ethics body that will screen all of those projects. The whole system is set up very badly,” says Breyer.

“Their argument is basically that the purpose of this R&D is not to contribute to science or to do something for public good or to contribute to EU policies but the purpose of these programs really is to support the industry — to develop stuff to sell. So it’s really supposed to be an economical program, the way it has been devised. And I think we really actually need a discussion about whether this is right, whether this should be so.”

“The EU’s about to regulate AI and here it is actually funding unethical and unlawful technologies,” he adds.

No external ethics oversight

Not only does it look hypocritical for the EU to be funding rights-hostile research but — critics contend — it’s a waste of public money that could be spent on genuinely useful research (be it for a security purpose or, more broadly, for the public good; and for furthering those ‘European values’ EU lawmakers love to refer to).

“What we need to know and understand is that research that will never be used because it doesn’t work or it’s unethical or it’s illegal, that actually wastes money for other programs that would be really important and useful,” argues Breyer.

“For example in the security program you could maybe do some good in terms of police protective gear. Or maybe in terms of informing the population in terms of crime prevention. So you could do a lot of good if these means were used properly — and not on this dubious technology that will hopefully never be used.”

The latest incarnation of the EU’s flagship research and innovation program, which takes over from Horizon 2020, has a budget of ~€95.5BN for the 2021-2027 period. And driving digital transformation and developments in AI are among the EU’s stated research funding priorities. So the pot of money available for ‘experimental’ AI looks massive.

But who will be making sure that money isn’t wasted on algorithmic snake oil — and dangerous algorithmic snake oil in instances where the R&D runs so clearly counter to the EU’s own charter of fundamental human rights?

The European Commission declined multiple requests for spokespeople to talk about these issues but it did send some on-the-record points (below), and some background information regarding access to documents, which is a key part of the legal complaint.

The Commission’s on-the-record statements on ‘ethics in research’ started with the claim that “ethics is given the highest priority in EU funded research”.

“All research and innovation activities carried out under Horizon 2020 must comply with ethical principles and relevant national, EU and international law, including the Charter of Fundamental Rights and the European Convention on Human Rights,” it also told us, adding: “All proposals undergo a specific ethics evaluation which verifies and contractually obliges the compliance of the research project with ethical rules and standards.”

It did not elaborate on how a ‘video lie detector’ could possibly comply with EU fundamental rights — such as the right to dignity, privacy, equality and non-discrimination.

And it’s worth noting that the European Data Protection Supervisor (EDPS) has raised concerns about misalignment between EU-funded scientific research and data protection law, writing in a preliminary opinion last year: “We recommend intensifying dialogue between data protection authorities and ethical review boards for a common understanding of which activities qualify as genuine research, EU codes of conduct for scientific research, closer alignment between EU research framework programmes and data protection standards, and the beginning of a debate on the circumstances in which access by researchers to data held by private companies can be based on public interest”.

On the iBorderCtrl project specifically the Commission told us that the project appointed an ethics advisor to oversee the implementation of the ethical aspects of research “in compliance with the initial ethics requirement”. “The advisor works in ways to ensure autonomy and independence from the consortium,” it claimed, without disclosing who the project’s (self-appointed) ethics advisor is.

“Ethics aspects are constantly monitored by the Commission/REA during the execution of the project through the revision of relevant deliverables and carefully analysed in cooperation with external independent experts during the technical review meetings linked to the end of the reporting periods,” it went on, adding that: “A satisfactory ethics check was conducted in March 2019.”

It did not provide any further details about this self-regulatory “ethics check”.

“The way how it works so far is basically some expert group that the Commission sets up with propose/call for tender,” says Breyer, discussing how the EU’s research program is structured. “It’s dominated by industry experts, it doesn’t have any members of parliament in there, it only has — I think — one civil society representative in it, so that’s falsely composed right from the start. Then it goes to the Research Executive Agency and the actual decision is taken by representatives of the Member States.

“The call [for research proposals] itself doesn’t sound so bad if you look it up — it’s very general — so the problem really was the specific proposal that they proposed in response to it. And these are not screened by independent experts, as far as I understand it. The issue of ethics is dealt with by self-assessment. So basically the applicant is supposed to indicate whether there is a high ethical risk involved in the project or not. And only if they indicate so will experts — selected by the REA — do an ethics assessment.

“We don’t know who’s been selected, we don’t know their opinions — it’s also being kept secret — and if it turns out later that a project is unethical it’s not possible to revoke the grant.”

The hypocrisy charge comes in sharply here because the Commission is in the process of shaping risk-based rules for the application of AI. And EU lawmakers have been saying for years that artificial intelligence technologies need ‘guardrails’ to make sure they’re applied in line with regional values and rights.

Commission EVP Margrethe Vestager has talked about the need for rules to ensure artificial intelligence is “used ethically” and can “support human decisions and not undermine them”, for example.

Yet EU institutions are simultaneously splashing public funds on AI research that would clearly be unlawful if implemented in the region, and which civil society critics decry as obviously unethical given the lack of scientific basis underpinning ‘lie detection’.

In an FAQ section of the iBorderCtrl website, the commercial consortium behind the project concedes that real-world deployment of some of the technologies involved would not be covered by the existing EU legal framework — adding that this means “they could not be implemented without a democratic political decision establishing a legal basis”.

Or, put another way, such a system would be illegal to actually use for border checks in Europe without a change in the law. Yet European taxpayer funding was nonetheless ploughed in.

A spokesman for the EDPS declined to comment on Breyer’s case specifically but he confirmed that its preliminary opinion on scientific research and data protection is still relevant.

He also pointed to further related work which addresses a recent Commission push to encourage pan-EU health data sharing for research purposes — where the EDPS advises that data protection safeguards should be defined “at the outset” and also that a “thought through” legal basis should be established ahead of research taking place.

“The EDPS recommends paying special attention to the ethical use of data within the [health data sharing] framework, for which he suggests taking into account existing ethics committees and their role in the context of national legislation,” the EU’s chief data supervisor writes, adding that he’s “convinced that the success of the [health data sharing plan] will depend on the establishment of a strong data governance mechanism that provides for sufficient assurances of a lawful, responsible, ethical management anchored in EU values, including respect for fundamental rights”.

tl;dr: Legal and ethical use of data must be the DNA of research efforts — not a check-box afterthought.

Unverifiable tech

In addition to a lack of independent ethics oversight of research projects that gain EU funding, there is — currently and worryingly for supposedly commercially minded research — no way for outsiders to independently verify (or, well, falsify) the technology involved.

In the case of the iBorderCtrl tech no meaningful data on the outcomes of the project has been made public and requests for data sought under freedom of information law have been blocked on commercial interest grounds.

Breyer has been trying without success to obtain information about the results of the project since it finished in 2019. The Guardian reported in detail on his fight back in December.

Under the legal framework wrapping EU research he says there’s only a very limited requirement to publish information on project outcomes — and only long after the fact. His hope is thus that the Court of Justice will agree ‘commercial interests’ can’t be used to over-broadly deny disclosure of information in the public interest.

“They basically argue there is no obligation to examine whether a project actually works so they have the right to fund research that doesn’t work,” he tells TechCrunch. “They also argue that basically it’s sufficient to exclude access if any publication of the information would damage the ability to sell the technology — and that’s an extremely wide interpretation of commercially sensitive information.

“What I would accept is excluding information that really contains business secrets like source code of software programs or internal calculations or the like. But that certainly shouldn’t cover, for example, if a project is labelled as unethical. It’s not a business secret but obviously it will harm their ability to sell it — but obviously that interpretation is just outrageously wide.”

“I’m hoping that this [legal action] will be a precedent to clarify that information on such unethical — and also unlawful if it were actually used or deployed — technologies, that the public right to know takes precedence over the commercial interests to sell the technology,” he adds. “They are saying we won’t release the information because doing so will diminish the chances of selling the technology. And so when I saw this then I said well it’s definitely worth going to court over because they will be treating all requests the same.”

Civil society organizations have also been thwarted in attempts to get detailed information about the iBorderCtrl project. The Intercept reported in 2019 that researchers at the Milan-based Hermes Center for Transparency and Digital Human Rights used freedom of information laws to obtain internal documents about the iBorderCtrl system, for example, but the hundreds of pages they got back were heavily redacted — with many completely blacked out.

“I’ve heard from [journalists] who have tried in vain to find out about other dubious research projects that they are massively withholding information. Even stuff like the ethics report or the legal assessment — that’s all stuff that doesn’t contain any commercial secrets, as such,” Breyer continues. “It doesn’t contain any source code, nor any sensitive information — they haven’t even released these partially.

“I find it outrageous that an EU authority [the REA] will actually say we don’t care what the interest is in this because as soon as it could diminish sales then we will withhold the information. I don’t think that’s acceptable, both in terms of taxpayers’ interests in knowing about what their money is being used for but also in terms of the scientific interest in being able to test/to verify these experiments on the so called ‘deception detection’ — which is very contested if it really works. And in order to verify or falsify it scientists of course need to have access to the specifics about these trials.

“Also democratically speaking if ever the legislator wants to decide on the introduction of such a system or even on the framing of these research programs we basically need to know the details — for example, what was the number of false positives? How well does it really work? Does it have a discriminatory effect because it works less well on certain groups of people, such as facial recognition technology? That’s all stuff that we really urgently need to know.”

Regarding access to documents related to EU-funded research the Commission referred us to Regulation no. 1049/2001 — which it said “lays down the general principles and limits” — though it added that “each case is analysed carefully and individually”.

However, the Commission’s interpretation of the regulations of the Horizon program appears to entirely exclude the application of freedom of information rules — at least in the iBorderCtrl project case.

Per Breyer, they limit public disclosure to a summary of the research findings, which can be published some three or four years after the completion of the project.

“You’ll see an essay of five or six pages in some scientific magazine about this project and of course you can’t use it to verify or falsify the technology,” he says. “You can’t see what exactly they’ve been doing — who they’ve been talking to. So this summary is pretty useless scientifically and to the public and democratically and it takes ages. So I hope that in the future we will get more insight and hopefully a public debate.”

The EU research program’s legal framework is secondary legislation. So Breyer’s argument is that a blanket clause about protecting ‘commercial interests’ should not be able to trump fundamental EU rights to transparency. But of course it will be up to the court to decide.

“I think I stand some good chance especially since transparency and access to information is actually a fundamental right in the EU — it’s in the EU charter of fundamental rights. And this Horizon legislation is only secondary legislation — they can’t deviate from the primary law. And they need to be interpreted in line with it,” he adds. “So I think the court will hopefully say that this is applicable and they will do some balancing in the context of the freedom of information which also protects commercial information but subject to prevailing public interests. So I think they will find a good compromise and hopefully better insight and more transparency.

“Maybe they’ll blacken out some parts of the document, redact some of it but certainly I hope that in principle we will get access to that. And thereby also make sure that in the future the Commission and the REA will have to hand over most of the stuff that’s been requested on this research. Because there’s a lot of dubious projects out there.”

A better system of research project oversight could start by having the committee that decides on funding applications be composed not mostly of industry and EU Member State representatives (who of course will always want EU cash to come to their region) but also of parliamentary representatives, more civil society representatives and scientists, per Breyer.

“It should have independent participants and those should be the majority,” he says. “That would make sense to steer the research activities in the direction of public good, of compliance with our values, of useful research — because what we need to know and understand is research that will never be used because it doesn’t work or it’s unethical or it’s illegal, that wastes money for other programs that would be really important and useful.”

He also points to a new EU research program being set up that’s focused on defence — under the same structure, lacking proper public scrutiny of funding decisions or information disclosure, noting: “They want to do this for defence as well. So that will be even about lethal technologies.”

To date the only disclosures around iBorderCtrl have been a few parts of the technical specifications of its system and some of a communications report, per Breyer, who notes that both were “heavily redacted”.

“They don’t say for example which border agencies they have introduced this system to, they don’t say which politicians they’ve been talking to,” he says. “The interesting thing actually is that part of this funding is also presenting the technology to border authorities in the EU and politicians. Which is very interesting because the Commission keeps saying look this is only research; it doesn’t matter really. But in actual fact they are already using the project to promote the technology and the sales of it. And even if this is never used at EU borders funding the development will mean that it could be used by other governments — it could be sold to China and Saudi Arabia and the like.

“And also the deception detection technology — the company that is marketing it [a Manchester-based company called Silent Talker Ltd] — is also offering it to insurance companies, or to be used on job interviews, or maybe if you apply for a loan at a bank. So this idea that an AI system would be able to detect lies risks being used in the private sector very broadly and since I’m saying that it doesn’t work at all and it’s basically a lottery lots of people risk having disadvantages from this dubious technology.”

“It’s quite outrageous that nobody prevents the EU from funding such ‘voodoo’ technology,” he adds.

The Commission told us that “The Intelligent Portable Border Control System” (aka iBorderCtrl) “explored new ideas on increasing efficiency, convenience and security of land border crossing”, and like all security research projects it was “aimed at testing new ideas and technologies to address security challenges”.

“iBorderCtrl was not expected to deliver ready-made technologies or products. Not all research projects lead to the development of technologies with real-world applications. Once research projects are over, it is up to Member States to decide whether they want to further research and/or develop solutions studied by the project,” it also said. 

It also pointed out that specific application of any future technology “will always have to respect EU and national law and safeguards, including on fundamental rights and the EU rules on the protection of personal data”.

However Breyer also calls foul on the Commission seeking to deflect public attention by claiming ‘it’s only R&D’ or that it’s not deciding on the use of any particular technology. “Of course factually it creates pressure on the legislator to agree to something that has been developed if it turns out to be useful or to work,” he argues. “And also even if it’s not used by the EU itself it will be sold somewhere else — and so I think the lack of scrutiny and ethical assessment of this research is really scandalous. Especially as they have repeatedly developed and researched surveillance technologies — including mass surveillance of public spaces.”

“They have projects on bulk data collection and processing of Internet data. The security program is very problematic because they do research into interferences with fundamental rights — with the right to privacy,” he goes on. “There are no limitations really in the program to rule out unethical methods of mass surveillance or the like. And not only are there no material limitations but also there is no institutional set-up to be able to exclude such projects right from the beginning. And then even once the programs have been devised and started they will even refuse to disclose access to them. And that’s really outrageous and as I said I hope the court will do some proper balancing and provide for more insight and then we can basically trigger a public debate on the design of these research schemes.”

Pointing again to the Commission’s plan to set up a defence R&D fund under the same industry-centric decision-making structure — with a “similarly deficient ethics appraisal mechanism” — he notes that while there are some limits on EU research being able to fund autonomous weapons, other areas could make bids for taxpayer cash — such as weapons of mass destruction and nuclear weapons.

“So this will be hugely problematic and will have the same issue of transparency, all the more of course,” he adds.

On transparency generally, the Commission told us it “always encourages projects to publicise as much as possible their results”. While, for iBorderCtrl specifically, it said more information about the project is available on the CORDIS website and the dedicated project website.

If you take the time to browse to the ‘publications’ page of the iBorderCtrl website you’ll find a number of “deliverables” — including an “ethics advisor”; the “ethics advisor’s first report”; an “ethics of profiling, the risk of stigmatization of individuals and mitigation plan”; and an “EU wide legal and ethical review report” — all of which are listed as “confidential”.

05 Feb 2021

Why these co-founders turned their sustainability podcast into a VC-backed business

When Laura Wittig and Liza Moiseeva met as guests on a podcast about sustainable fashion, they jibed so well together that they began a podcast of their own: Good Together. Their show’s goal was to provide listeners with a place to learn how to be eco-conscious consumers, but with baby steps.

Wittig thinks the non-judgmental environment (one that doesn’t knock a consumer for not being zero-waste overnight) is the show’s biggest differentiator. “Then, people were emailing us and asking how they can be on our journey beyond being a listener,” Wittig said. Now, over a year after launching the show, the co-hosts are turning validation from listeners into the blueprint for a standalone business: Brightly.

Brightly is a curated platform that sells vetted eco-friendly goods and shares tips about conscious consumerism. While the startup is launching with more than 200 products from eco-friendly brands, such as Sheets & Giggles and Juice Beauty, the long-term vision is to launch its own line of Brightly-branded products. The starting lineup will include two to four products in the home space.

To get those products out by the holiday season, Brightly tells TechCrunch that it has raised $1 million in venture funding from investors, including Tacoma Venture Fund, Keeler Investments, Odile Roujol (a FAB Ventures backer and former L’Oréal CEO) and Female Founders Alliance.

The funding caps off a busy 12 months for Brightly. The startup has gone through Snap’s Yellow accelerator, an in-house effort from the social media company that began in 2018. As part of the program, Snap invests $150,000 in each Yellow startup for an equity stake. The company also did Ready Set Raise, an equity-free accelerator put on by Female Founders Alliance, in the fall.

With new funding, Brightly is seeking to take a Glossier-style approach to become the next big brand in commerce: gather a community by recommending great products, then turn the strategy on its head and get your superfans to buy in-house products under the same brand.

“We have access to a community of women who are beating our door down to shop directly with us and have exclusive products made for them,” Wittig said.

Brightly wants to be more than a “boring storefront” one could quickly whip up on Shopify or Amazon, Wittig says.

The company’s curation process, which every product goes through before being listed on the platform, is extensive. The startup makes sure that every product is created with sustainable and ethical supply chain processes and sustainable material. The team also interviews every brand’s founders to understand the genesis of any product that lives on the Brightly platform. The co-founders also weigh the durability and longevity of products, adopting what Wittig sees as a “Wirecutter approach.”

“It’s more like, ‘why would we pick an ethically produced leather handbag over something that might be made not from leather but wouldn’t last too long necessarily,’ ” she said. “These are the conversations we have with our audience, because the term eco-friendly is very much our grayscale.”


More than 250,000 people come to Brightly, either through its app or website, every day, according to Wittig. The startup monetizes largely through brand partnerships and getting those users in front of paid products.


The monetization strategy is similar to what you might find in a podcast: affiliate links or product placement mid-episode. But while the co-founders are relying on this strategy right now, they see the opportunity to create their own e-commerce company as larger and more lucrative.

“The billion-dollar opportunity is not with that,” Wittig said. “The value will be [in] going direct commerce and selling our picks of ethical sustainable goods.”

Marking the transition from podcasting about eco-friendly goods to creating them in-house is a strong pivot.

Beyond creating a line of its own products, Brightly is thinking about how to partner on white-label sustainable products. Another option, Wittig said, is to partner with big corporations to get products on their shelves with colors and customization for Brightly. An example of an ideal partnership would be Reformation’s recent partnership with Blueland.

Wittig declined to share more details on how they plan to win, but likened the strategy to that of Goop or Glossier, two companies that started with content arms and drew their community into a commerce platform.

“It’s not going to be a Thrive Market where there are hundreds and thousands of sustainable goods on there. It’s going to be much more curated,” she said.

COVID-19 has helped the startup further validate the need for a platform that unites a conscious consumer community.

“We are all so aware of the purchasing power we have,” she said. “As consumers we go out and support small businesses by getting coffee on the go. But before, we did not think twice about getting everything from Amazon.”

The conversation with investors hasn’t been as simple, the co-founder said. Investors continue to be “hands off” about community-based platforms because they are unsure they will work. Wittig says that many bearish investors have placed bets on singular direct-to-consumer brands, such as Away or Blueland.

“Those investors know the rising costs of customer acquisition, and see what happens when you don’t have a community that surrounds our business,” she said.

Brightly is betting that the future of commerce brands has to start with a go-to-market, and then bring in the end product, instead of the other way around. The end goal here for Brightly is attracting, and generating excitement from, Gen Z and millennial shoppers. To do so, Wittig says that Brightly is experimenting with ways to implement socialization aspects into the shopping experience.

Leslie Feinzaig, the founder of Female Founders Alliance, said that what’s special about Brightly is that it “demonstrated demand before building for it.”

“I think a lot of people today could build software to connect people and sell things, but very few people could get thousands of fanatical followers to actually engage with each other and make that software useful,” Feinzaig said. “Brightly built that community with matchsticks and tape.”

05 Feb 2021

BeGreatTV to offer MasterClass-like courses taught by Black and brown innovators

BeGreatTV, an online education platform featuring Black and brown instructors, recently closed a $450K pre-seed round from Stand Together Ventures Lab, Arlan Hamilton, Tiffany Haddish and others.

The goal with BeGreatTV is to enable anyone to learn from talented Black and brown innovators and leaders, founder and CEO Cortney Woodruff told TechCrunch.

“When you think of being a Black or brown person or individual who wants to learn from a Black or brown person, there’s nothing that really exists that gives you a glossary of every business vertical and where you see representation at every level in a well put together way,” Woodruff said. “That alone makes our market a lot larger because there are just so many verticals where no one has really invested in or shown before.”

The courses are designed to teach folks how to execute and succeed in a particular industry and to better understand the business side of it, while also teaching “you how to deal with the socioeconomic and racial injustices that come with being the only one in the room. Whether you are a Black man or woman who wants to get into the makeup industry, there will always be a lot of biases in the world.”

When BeGreatTV launches in a couple of months (the plan is to launch in April), the platform will feature at least 10 courses — each with around 15 episodes — focused on arts, entertainment, beauty and more. At launch, courses will be available from Sir John, a celebrity makeup artist for L’Oreal and Beyoncé’s personal makeup artist, BeGreatTV co-founder Cortez Bryant, who was also Lil Wayne and Drake’s manager, as well as Law Roach, Zendaya’s stylist.

Hamilton and Haddish will also teach their own respective courses on business and entertainment, Woodruff said. So far, BeGreatTV has produced more than 40 episodes that range anywhere from three to 15 minutes each.


Each course will cost $64.99, and the plan is to eventually offer an all-access subscription model once BeGreatTV beefs up its offerings a bit more. BeGreatTV shares royalties with its instructors.

“Ultimately, the platform can include a more diverse casting of instructors that aren’t just Black and brown,” Woodruff said. But for now, he said, the idea is to “reverse the course of ‘Now this is our first Black instructor’ but ‘now this is the first white instructor'” on the platform.

BeGreatTV’s team consists of just 15 people, but includes heavy hitters like Cortez Bryant and actor Jesse Williams. Currently, BeGreatTV is working on closing its seed round and anticipates a six-figure user base by the end of the year.

MasterClass is perhaps BeGreatTV’s biggest competitor. With classes taught by the likes of Gordon Ramsay, Shonda Rhimes and David Sedaris, it’s no wonder MasterClass has become worth more than $800 million. The company’s $180 annual subscription fee accounts for all of its revenue.

“If you benchmark [BeGreatTV] to MasterClass, we are finding individuals that are not only the best at what they do in the world, but often times these individuals have broken barriers because often times they were the first to do it,” Woodruff said. “And do it without having people who look like them.”


05 Feb 2021

Myanmar’s new military government is now blocking Twitter

Myanmar’s new military government has ordered local telecom operators, internet gateways and other internet service providers to block Twitter and Instagram, days after imposing a similar blackout on Facebook to ensure “stability” in the Southeast Asian nation.

Several users from Myanmar confirmed that they were unable to access Twitter. NetBlocks, which tracks global internet usage, further reported that multiple networks in the country had started to block the American social network.

Friday’s order comes as thousands of Myanmar citizens joined Twitter this week to protest the new military government, which seized power by detaining civilian leader Aung San Suu Kyi and other democratically elected leaders of her National League for Democracy, a party that won by a landslide last year.

Twitter did not immediately respond to a request for comment.

This is a developing story. More to follow…

05 Feb 2021

Lightspeed’s Gaurav Gupta and Grafana’s Raj Dutt discuss pitch decks, pricing and how to nail the narrative

Before he was a partner at Lightspeed Venture Partners, Gaurav Gupta had his eye on Grafana Labs, the company that supports open-source analytics platform Grafana. But Raj Dutt, Grafana’s co-founder and CEO, played hard to get.

This week on Extra Crunch Live, the duo explained how they came together for Grafana’s Series A — and eventually, its Series B. They also walked us through Grafana’s original Series A pitch deck before Gupta shared the aspects that stood out to him and how he communicated those points to the broader partnership at Lightspeed.

Gupta and Dutt also offered feedback on pitch decks submitted by audience members and shared their thoughts about what makes a great founder presentation, pulling back the curtain on how VCs actually consume pitch decks.

We’ve included highlights below as well as the full video of our conversation.

We record new episodes of Extra Crunch Live each Wednesday at 12 p.m. PST/3 p.m. EST/8 p.m. GMT. Check out the February schedule here.

Episode breakdown:

  • How they met — 2:00
  • Grafana’s early pitch deck — 12:00
  • The enterprise ecosystem — 25:00
  • The pitch deck teardown — 32:00

How they met

As soon as Gupta joined Lightspeed in June 2019, he began pursuing Dutt and Grafana Labs. He texted, called and emailed, but he got little to no response. Eventually, he made plans to go meet the team in Stockholm but, even then, Dutt wasn’t super responsive.

The pair told the story with smiles on their faces. Dutt said that not only was he disorganized and not entirely sure of his own travel plans to see his co-founder in Stockholm, Grafana wasn’t even raising. Still, Gupta persisted and eventually sent a stern email.

“At one point, I was like ‘Raj, forget it. This isn’t working’,” recalled Gupta. “And suddenly he woke up.” Gupta added that he got mad, which “usually does not work for VCs, by the way, but in this case, it kind of worked.”

When they finally met, they got along. Dutt said they were able to talk shop due to Gupta’s experience inside organizations like Splunk and Elastic. Gupta described the trip as a whirlwind, where time just flew by.

“One of the reasons that I liked Gaurav is that he was a new VC,” explained Dutt. “So to me, he seemed like one of the most non-VC VCs I’d ever met. And that was actually quite attractive.”

To this day, Gupta and Dutt don’t have weekly standing meetings. Instead, they speak several times a week, conversing organically about industry news, Grafana’s products and the company’s overall trajectory.

Grafana’s early pitch deck

Dutt shared Grafana’s pre-Series A pitch deck — which he actually sent to Gupta and Lightspeed before they met — with the Extra Crunch Live audience. But as we know now, it was the conversations that Dutt and Gupta had (eventually) that provided the spark for that deal.

05 Feb 2021

Learn about the importance of accessible product design at TechCrunch Sessions: Justice

When you are able to navigate a world that is designed for you, it’s easy to avoid thinking about that design at all. It can be very different if you are disabled.

At TechCrunch Sessions: Justice on March 3, we will examine the importance of ensuring accessible product design from the beginning. We’ll ask how the social and medical models of disability influence technological evolution. Integrating the expertise of disabled technologists, makers, investors, scientists and software engineers into the DNA of your company from the very beginning is vital to the pursuit of a functioning and equitable society. It could also mean you don’t leave money on the table.

Join us at TechCrunch Sessions: Justice for a wide-ranging discussion as we attempt to answer these questions and further explore inclusive design with Cynthia Bennett, Mara Mills and Srin Madipalli.

Cynthia Bennett is a post-doc at Carnegie Mellon University’s Human-Computer Interaction Institute, as well as a researcher at Apple. Her research focuses on human-computer interaction, accessibility and Disability Studies, and, she says on her website, spans “the critique and development of HCI theory and methods to designing emergent accessible interactions with technology.” Her research includes Biographical Prototypes: Reimagining Recognition and Disability in Design and The Promise of Empathy: Design, Disability, and Knowing the “Other.”

Mara Mills is an associate professor of Media, Culture, and Communication at New York University and a co-founder and co-director of the NYU Center for Disability Studies. Mills’ research focuses on sound studies, disability studies and history. (You can hear her discuss the intersection of artificial intelligence and disability with Meredith Whittaker, co-founder of the AI Now Institute and Minderoo Research Professor at NYU, and Sara Hendren, professor at Olin College of Engineering and author of the recently published What Can a Body Do: How We Meet the Built World, on the TechCrunch Mixtape podcast here.)

Srin Madipalli is an investor and co-founder of Accomable, an online platform that helped users find accessible vacation properties, which he sold to Airbnb. His advocacy work focuses on disability inclusion in the workplace, as well as advising tech companies on accessibility. Be sure to snag your tickets for just $5 here.

Make sure you can join us for this conversation and more at TC Sessions: Justice on March 3. Secure your seat now!


05 Feb 2021

How the GameStop stonkathon helped Robinhood raise $3.4B last week

Robinhood has shown an impressive ability to raise enormous amounts of capital in the past few weeks to ensure it has the funds needed to allow users to trade and, presumably, provide it with enough cash until it goes public. Raising $3.4 billion so quickly is an extraordinary feat.

But how the company managed to get investors to wire money with such alacrity has been a curiosity: What about Robinhood was so compelling that giving it a multi-billion dollar injection was such an obvious decision?


The Exchange explores startups, markets and money. Read it every morning on Extra Crunch, or get The Exchange newsletter every Saturday.


We got a whiff of it when we parsed Robinhood’s Q4 2020 payment for order flow (PFOF) data, which showed the discount trading service growing nicely from its Q3 results. Robinhood’s PFOF revenue growth had slowed in sequential terms in the third quarter of 2020, but the final quarter iced near-term concerns that the unicorn’s growth days were behind it.

But then the company gave us a little more, a few charts that I think better explain why Robinhood was able to raise so much money so quickly.

Equities and options volumes go up

Robinhood was able to raise lots more cash very quickly because the company’s PFOF revenue driver likely went into overdrive during the mess that was the GameStop period. This is somewhat obvious, as many people were trading.

But thanks to a new chart from the company posted on its own blog, we now know that Robinhood’s PFOF incomes were likely spiking to all-time highs.

Here’s the chart the company published, which I have loosely marked with quarterly intervals. Per Robinhood, the green line is “Robinhood equities and options trading volumes over a longer time horizon, through last week:”