Year: 2019

29 Oct 2019

Extra Crunch expands into Poland, the Netherlands, Belgium, Italy and Austria

We’re excited to announce that Extra Crunch is now available to readers in Austria, Belgium, Italy, the Netherlands, and Poland. That adds to our existing European support in Germany, France, Spain and the U.K.

Extra Crunch is a membership program from TechCrunch that features how-tos and interviews on company building, intelligence on the most disruptive opportunities for startups, discounts on TechCrunch events, an experience on TechCrunch.com that’s free of banner ads and more. We recently launched two new community perks for annual and two-year Extra Crunch members, including opportunities to claim $1,000 in AWS credits and 100,000 Brex Rewards points.

If you are thinking about coming to Disrupt Berlin with us in December, you can save by joining Extra Crunch. Annual and two-year Extra Crunch members can save 20% on all TechCrunch events, including Disrupt Berlin.

You can sign up or learn more about Extra Crunch here.

As a token of appreciation to our European readers, we’re running a special discount on annual and two-year Extra Crunch membership plans. The promotion starts today and runs until November 30. Here’s how you can claim the discounts:

  • Head to our signup page
  • Select an annual or two-year membership plan
  • During the signup process enter the promo code ECEUROPE1130 and hit “Apply”
  • Complete the remaining steps in the signup process

Please note that this code can only be used for readers signing up for Extra Crunch in Austria, Belgium, France, Germany, Italy, the Netherlands, Poland, Spain and UK. The discount can only be applied to annual and two-year subscriptions.

Thanks to everyone that voted on where to expand next. If you haven’t voted and you want to see Extra Crunch in your local country, let us know here. We’re hoping to have support for Romania ready within a few weeks, and possibly a few more European countries later this year. In 2020 we’re looking to expand beyond Europe and North America.

29 Oct 2019

Tiqets, a platform for booking museums and other attractions, raises $60M led by Airbnb

Airbnb is best known for being the place you go to book a place to stay when travelling that’s not a hotel. Today, it’s making an investment in a startup that points to its bigger interest in being a go-to destination for experiences and all things tourism, too. Tiqets, a startup out of Amsterdam that has built a platform for booking tickets for museums and other attractions, has raised $60 million in a Series C round led by the travel giant to expand its platform and wider business. Tiqets has sold millions of tickets in over 60 countries to date, it says.

The investment also includes backing from previous investors Investion and HPE Growth, and it brings the total raised by Tiqets to $100 million. Luuc Elzinga, the CEO and co-founder of Tiqets, said the startup is not disclosing its valuation but said it was “really happy” with the number.

This is, for now, a financial investment for Airbnb rather than a strategic one. In other words, the two companies have yet to work together, said Elzinga, although that is the hope longer term. “Airbnb will be involved in the business,” he said, “and that’s interesting because we can also learn from how they scaled.”

Scaled is almost an understatement. Starting as a modest marketplace for people to offer spare rooms and sofas to travellers, Airbnb has become one of tech’s most outsized startups, raising $4.4 billion in venture funding, valued at $31 billion, and on the road to an IPO in 2020.

While the vast majority of that growth has come by way of people posting and booking accommodation in private homes, Airbnb’s interest in Tiqets underscores how the company is itself extending the revenue it can make per user by building out the longer tail of services it offers to its users beyond booking travel accommodation: current offerings include Experiences (in-city, one-day activities), Adventures (multi-day guided tours), and restaurant listings.

“Travelers are seeking out a diverse range of experiences when they visit a new city,” said Airbnb Art and Culture Director Philippe Magid, in a statement. “The Tiqets team has effectively used new technology to connect travelers to communities and we are excited to support their work.”

In the wider tourism and travel industry, museums and attractions revenues are estimated to be worth some $160 billion, with ticketing accounting for $60 billion of that.

The gap in the market that Tiqets is targeting is the shift we’ve seen in how and why people — both tourists but also those visiting museums and attractions in their own home towns — purchase tickets to go to these places.

While some are still willing to line up outside a venue for hours, others are opting to go online to buy in advance and use mobile tickets to speed up the process. Museums and most other attractions are not often the places that come to mind when you think “technology”, and Tiqets provides a service that lets them meet the demands of more digitally savvy visitors.

Museums and other attractions are now gradually starting to think of how to use this to their advantage.

There was a Very British uproar in the 1980s when London’s Victoria & Albert Museum ran an ad campaign about how it was an “ace caff with a quite nice museum attached.” (It is a beautiful cafeteria.) But nowadays those cafes, and the ever-present gift shops, are some of their biggest revenue spinners, so offering tickets online reduces some of the friction in getting people into venues, and into better moods, to spend more money later. And, if users “check the box,” the venues can also build their marketing databases to boot.

Tiqets, founded in 2014, is still a relatively young business. Elzinga said it works with some 3,000 museum groups and attractions — with its customer list including some of the world’s most-visited institutions, such as the Louvre in Paris.

It also partners with some 2,500 travel agencies and portals — Ctrip is another big customer — that integrate with Tiqets’ APIs to upsell customers with tickets to venues after they have booked their trips. Some 35% of its revenues currently come by way of these third-party deals.
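The article doesn’t document Tiqets’ actual partner API, but a post-booking upsell integration of the kind described above might look something like this purely illustrative sketch (every venue, field and price below is invented):

```python
# Purely illustrative sketch of a post-booking upsell flow of the kind
# described above. Every venue, field and price here is invented; this
# is NOT Tiqets' actual API.

BOOKED_TRIP = {"city": "Paris", "travellers": 2}

# A partner would normally fetch this catalog via the API; it is
# inlined here so the example is self-contained.
MOCK_CATALOG = {
    "Paris": [
        {"venue": "Louvre", "price_eur": 17.0, "mobile_ticket": True},
        {"venue": "Musee d'Orsay", "price_eur": 14.0, "mobile_ticket": True},
    ],
}

def upsell_offers(trip, catalog):
    """Return ticket offers for the booked destination."""
    offers = []
    for item in catalog.get(trip["city"], []):
        offers.append({
            "venue": item["venue"],
            "total_eur": item["price_eur"] * trip["travellers"],
            "skip_the_line": item["mobile_ticket"],
        })
    return offers

offers = upsell_offers(BOOKED_TRIP, MOCK_CATALOG)
print(offers[0]["venue"])  # Louvre
```

In practice the portal would surface these offers on the booking-confirmation page, which is where the 35% third-party revenue share comes from.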

Tiqets’ plans for the investment will be to expand its coverage to more attractions, and to extend deeper into smaller towns beyond the big cities where it’s currently most active, specifically building out better self-service technology (not unlike Airbnb’s for hosts) to make it easier to onboard to its platform. It’s now available in 14 languages, so adding more localisation is also on the cards.

One area where Tiqets does not plan to expand is into performance or seated event ticketing a la Ticketmaster or Eventbrite.

“Our focus and opportunity will continue to be museums and attractions,” Elzinga said. Part of the reason for this is that so many of them continue to provide paper-based, kiosk-issued tickets, but also because of the rise of the blockbuster, and the new awareness of public safety in crowded, high profile, iconic tourist spots. “It’s getting more complex, with timed entry and issues like crowd management.”

Longer term, it will be interesting to see how and if Airbnb works more closely with Tiqets. Both are targeting what is a massive opportunity. Travel and tourism are some $8.8 trillion and will account for 10.4% of global GDP in 2019, according to one estimate.

29 Oct 2019

Facebook is failing to prevent another human rights tragedy playing out on its platform, report warns

A report by campaign group Avaaz examining how Facebook’s platform is being used to spread hate speech in the Assam region of North East India suggests the company is once again failing to prevent its platform from being turned into a weapon to fuel ethnic violence.

Assam has a long-standing Muslim minority population but ethnic minorities in the state look increasingly vulnerable after India’s Hindu nationalist government pushed forward with a National Register of Citizens (NRC), which has resulted in the exclusion from that list of nearly 1.9 million people — mostly Muslims — putting them at risk of statelessness.

In July the United Nations expressed grave concern over the NRC process, saying there’s a risk of arbitrary expulsion and detention, with those excluded being referred to Foreigners’ Tribunals where they have to prove they are not “irregular”.

At the same time, the UN warned of the rise of hate speech in Assam being spread via social media — saying this is contributing to increasing instability and uncertainty for millions in the region. “This process may exacerbate the xenophobic climate while fuelling religious intolerance and discrimination in the country,” it wrote.

There’s an awful sense of deja-vu about these warnings. In March 2018 the UN criticized Facebook for failing to prevent its platform being used to fuel ethnic violence against the Rohingya people in the neighboring country of Myanmar — saying the service had played a “determining role” in that crisis.

Facebook’s response to devastating criticism from the UN looks like wafer-thin crisis PR to paper over the ethical cracks in its ad business, given the same sorts of alarm bells are being sounded again, just over a year later. (If we measure the company by the lofty goals it attached to a director of human rights policy job last year — when Facebook wrote that the responsibilities included “conflict prevention” and “peace-building” — it’s surely been an abject failure.)

Avaaz’s report on hate speech in Assam takes direct aim at Facebook’s platform, saying it’s being used as a conduit for whipping up anti-Muslim hatred.

In the report, entitled Megaphone for Hate: Disinformation and Hate Speech on Facebook During Assam’s Citizenship Count, the group says it analysed 800 Facebook posts and comments relating to Assam and the NRC, using keywords from the immigration discourse in Assamese, assessing them against the three tiers of prohibited hate speech set out in Facebook’s Community Standards.

Avaaz found that at least 26.5% of the posts and comments constituted hate speech. These posts had been shared on Facebook more than 99,650 times — adding up to at least 5.4 million views for violent hate speech targeting religious and ethnic minorities, according to its analysis.
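As a quick sanity check, the proportions reported above are internally consistent:

```python
# Sanity check of the figures Avaaz reports: 26.5% of the 800
# analysed posts and comments works out to roughly 212 items,
# in line with the 213 "clearest examples" the group says it
# flagged directly to Facebook.
total_analysed = 800
hate_speech_share = 0.265

hate_speech_items = round(total_analysed * hate_speech_share)
print(hate_speech_items)  # 212

# Average views per share implied by the reported totals:
views = 5_400_000
shares = 99_650
print(round(views / shares))  # 54
```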

Bengali Muslims are a particular target on Facebook in Assam, per the report, which found comments referring to them as “criminals,” “rapists,” “terrorists,” “pigs,” and “dogs”, among other dehumanizing terms.

In further disturbing comments there were calls for people to “poison” daughters, and legalise female foeticide, as well as several posts urging “Indian” women to be protected from “rape-obsessed foreigners”.

Avaaz suggests its findings are just a drop in the ocean of hate speech that it says is drowning Assam via Facebook and other social media. But it accuses Facebook directly of failing to provide adequate human resources to police hate speech spread on its dominant platform.

Commenting in a statement, Alaphia Zoyab, senior campaigner, said: “Facebook is being used as a megaphone for hate, pointed directly at vulnerable minorities in Assam, many of whom could be made stateless within months. Despite the clear and present danger faced by these people, Facebook is refusing to dedicate the resources required to keep them safe. Through its inaction, Facebook is complicit in the persecution of some of the world’s most vulnerable people.”

Its key complaint is that Facebook continues to rely on AI to detect hate speech which has not been reported to it by human users — using its limited pool of (human) content moderator staff to review pre-flagged content, rather than proactively detect it.

Facebook founder Mark Zuckerberg has previously said AI has a very long way to go to reliably detect hate speech. Indeed, he’s suggested it may never be able to do that.

In April 2018 he told US lawmakers it might take five to ten years to develop “AI tools that can get into some of the linguistic nuances of different types of content to be more accurate, to be flagging things to our systems”, while admitting: “Today we’re just not there on that.”

That amounts to an admission that in regions such as Assam — where inter-ethnic tensions are being whipped up in a politically charged atmosphere that’s also encouraging violence — Facebook is essentially asleep on the job. The job of enforcing its own ‘Community Standards’ and preventing its platform being weaponized to amplify hate and harass the vulnerable, to be clear.

Avaaz says it flagged 213 of “the clearest examples” of hate speech which it found directly to Facebook — including posts from an elected official and pages of a member of an Assamese rebel group banned by the Indian Government. The company removed 96 of these posts following its report.

It argues there are similarities in the type of hate speech being directed at ethnic minorities in Assam via Facebook and that which targeted at Rohingya people in Myanmar, also on Facebook, while noting that the context is different. But it did also find hateful content on Facebook targeting Rohingya people in India.

It is calling on Facebook to do more to protect vulnerable minorities in Assam, arguing it should not rely solely on automated tools for detecting hate speech — and should instead apply a “human-led ‘zero tolerance’ policy” against hate speech, starting by beefing up moderators’ expertise in local languages.

It also recommends Facebook launch an early warning system within its Strategic Response team, again based on human content moderation — and do so for all regions where the UN has warned of the rise of hate speech on social media.

“This system should act preventatively to avert human rights crises, not just reactively to respond to offline harm that has already occurred,” it writes.

Other recommendations include that Facebook should:

  • Correct the record on false news and disinformation by notifying, and providing corrections from fact-checkers to, each and every user who has seen content deemed false or purposefully misleading, including if the disinformation came from a politician
  • Be transparent about all page and post takedowns by publishing its rationale on the Facebook Newsroom, so the issue of hate speech is given prominence and publicity proportionate to the size of the problem on Facebook
  • Agree to an independent audit of hate speech and human rights on its platform in India

“Facebook has signed up to comply with the UN Guiding Principles on Business and Human Rights,” Avaaz notes. “Which require it to conduct human rights due diligence such as identifying its impact on vulnerable groups like women, children, linguistic, ethnic and religious minorities and others, particularly when deploying AI tools to identify hate speech, and take steps to subsequently avoid or mitigate such harm.”

We reached out to Facebook with a series of questions about Avaaz’s report and also how it has progressed its approach to policing inter-ethnic hate speech since the Myanmar crisis — including asking for details of the number of people it employs to monitor content in the region.

Facebook did not provide responses to our specific questions. It just said it does have content reviewers who are Assamese and who review content in the language, as well as reviewers who have knowledge of the majority of official languages in India, including Assamese, Hindi, Tamil, Telugu, Kannada, Punjabi, Urdu, Bengali and Marathi.

In 2017 India overtook the US as the country with the largest “potential audience” for Facebook ads, with 241M active users, per figures it reports to advertisers.

Facebook also sent us this statement, attributed to a spokesperson:

We want Facebook to be a safe place for all people to connect and express themselves, and we seek to protect the rights of minorities and marginalized communities around the world, including in India. We have clear rules against hate speech, which we define as attacks against people on the basis of things like caste, nationality, ethnicity and religion, and which reflect input we received from experts in India. We take this extremely seriously and remove content that violates these policies as soon as we become aware of it. To do this we have invested in dedicated content reviewers, who have local language expertise and an understanding of India’s longstanding historical and social tensions. We’ve also made significant progress in proactively detecting hate speech on our services, which helps us get to potentially harmful content faster.

But these tools aren’t perfect yet, and reports from our community are still extremely important. That’s why we’re so grateful to Avaaz for sharing their findings with us. We have carefully reviewed the content they’ve flagged, and removed everything that violated our policies. We will continue to work to prevent the spread of hate speech on our services, both in India and around the world.

Facebook did not tell us exactly how many people it employs to police content for an Indian state with a population of more than 30 million people.

Globally the company maintains it has around 35,000 people working on trust and safety, less than half of whom (~15,000) are dedicated content reviewers. But with such a tiny content reviewer workforce for a global platform with 2.2BN+ users posting night and day all around the world, there’s no plausible way for it to stay on top of its hate speech problem.
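A back-of-the-envelope calculation using the figures above illustrates the scale problem:

```python
# Back-of-the-envelope scale check: ~15,000 dedicated content
# reviewers for a platform with 2.2 billion+ users.
users = 2_200_000_000
reviewers = 15_000

print(users // reviewers)  # 146666, i.e. roughly 147,000 users per reviewer
```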

Certainly not in every market it operates in. Which is why Facebook leans so heavily on AI — shrinking the cost to its business but piling content-related risk onto everyone else.

Facebook claims its automated tools for detecting hate speech have got better, saying that in Q1 this year it increased the proactive detection rate for hate speech to 65.4% — up from 58.8% in Q4 2017 and 38% in Q2 2017.

However it also says it only removed 4 million pieces of hate speech globally in Q1. Which sounds incredibly tiny vs the size of Facebook’s platform and the volume of content that will be generated daily by its millions and millions of active users.

Without tools for independent researchers to query the substance and spread of content on Facebook’s platform it’s simply not possible to know how many pieces of hate speech are going undetected. But — to be clear — this unregulated company still gets to mark its own homework. 

In just one example of how Facebook is able to shrink perceptions of the volume of problematic content it’s hosting: of the 213 pieces of content related to Assam and the NRC that Avaaz judged to be hate speech and reported to Facebook, it removed fewer than half (96).

Yet Facebook also told us it takes down all content that violates its community standards — suggesting it is applying a far more dilute definition of hate speech than Avaaz. Unsurprising for a US company whose nascent crisis PR content review board‘s charter includes the phrase “free expression is paramount”. But for a company that also claims to want to prevent conflict and peace-build it’s rather conflicted, to say the least. 

As things stand, Facebook’s self-reported hate speech performance metrics are meaningless. It’s impossible for anyone outside the company to quantify or benchmark platform data. Because no one except Facebook has the full picture — and it’s not opening its platform for external audit. Even as the impacts of harmful, hateful stuff spread on Facebook continue to bleed out and damage lives around the world.

29 Oct 2019

Israeli seed fund Remagine is financing media’s AI revolution

While large entertainment companies scramble to catch up to streaming content platforms, more fundamental upheaval is headed their way as a result of technological advances in artificial intelligence and 5G. 

Former ProSiebenSat.1 executive Kevin Baxpehler (based in Tel Aviv) and former Google Ventures partner Eze Vidra (based in London) launched Remagine Ventures earlier this year with a $35 million fund that bridges the gap between technologists at the forefront of change and the largest owners of content.

Backed by a roster of multi-billion-dollar media companies in Europe, Asia and the U.S. as its limited partners, their firm operates independently (and focuses on financial return) but aims to provide strategic value to portfolio companies and insight into the future for its LPs. Vidra referred to it as “a multi-corporate Google Ventures type of model.”

The firm’s focus on entertainment technologies has a B2B bent, with a geographic focus on Israel as its primary hub and with most of its initial portfolio selling to enterprise media companies. That makes Remagine’s ability to guide entrepreneurs through the halls of traditional media giants highly relevant; it also means it can gauge whether traditional media companies are likely to buy a startup’s product before they invest.

I spoke with Baxpehler and Vidra to learn more about their playbook and why they believe a wave of entertainment tech companies is about to come out of Israel. Here’s the transcript of our conversation (edited for length and clarity):

Eric Peckham: Are there specific investment theses within entertainment that you are hunting for startups in?

Kevin Baxpehler: Our investment thesis is based on two main drivers: new advancements in so-called AI technologies — specifically deep learning, computer vision and NLP — coupled with new consumer trends such as esports, visual search, and engaging with computer-generated imagery (CGI) like Lil Miquela.

We believe that recent technological developments such as GANs (generative adversarial networks), coupled with powerful new computing capabilities like new microprocessor chips and 5G, will change how brands, consumers and stars/influencers all interact. It creates tremendous opportunities to invest.

Eze Vidra: Remagine Ventures invests independently in seed and pre-seed startups at the intersection of entertainment, tech, data and commerce. Seed investing is particularly hard for corporates to do directly (because of a combination of reasons including speed, signaling risk and the challenges of deal flow for corporates) so we specialise at that stage by sourcing real time feedback from the market. 

We are seeing industries and disciplines converge and find the intersections to be the most ripe areas of opportunity. For example, content + commerce, AI + entertainment, gaming + live stream tech giving us esports as a cultural phenomenon changing consumer behaviour.

Give me some examples of what startups at these intersection points will look like.

Vidra: The two core tenets of our thesis are 1) changing consumer behavior — for example, how esports is moving young viewers to engage with gaming — and 2) new technologies that make new forms of entertainment possible, primarily driven by AI.

Our portfolio company Syte is an image-recognition and computer-vision company that recognizes the products inside images and videos with a very high degree of accuracy. They are working with top retailers globally and Samsung selected them to power the Bixby assistant and is rolling them out globally. It’s been tried before, but the difference with Syte’s product is the level of accuracy. 

We invested in HourOne, which is a synthetic video company using generative adversarial networks to generate video without the camera. It has multiple use cases, from reducing the cost of video production to programmatic video, to text-to-speech to gaming. 

Another example is Vault, which uses deep learning to predict the success of scripted projects, whether movies or TV shows, down to the box office opening, Rotten Tomatoes scores, the probability of there being a season two, the demographics most impacted, etc. So bringing a more data-driven approach to marketing films and shows.

Being vertically-focused means that we can attract relevant dealflow from both entrepreneurs and co-investors. As we evaluate startups, we look for interesting teams that are leveraging new technology (or taking an interesting consumer angle) that can scale and we focus on helping them open doors internationally. 

To what extent is your interest focused on startups selling their technology to enterprise media companies versus startups building tools for the broader landscape of small content creators?

29 Oct 2019

Facebook unveils its first foray into personal digital healthcare tools

Nearly a year and a half after the Cambridge Analytica scandal reportedly scuttled Facebook’s fledgling attempts to enter the healthcare market, the social media giant is launching a tool called “Preventive Health” to prompt its users to get regular checkups and connect them to service providers.

The architect of the new service is Dr. Freddy Abnousi, the head of the company’s healthcare research, who was previously linked to an earlier skunkworks initiative that would collect anonymized hospital data and use a technique called “hashing” to match the data to individuals that exist in both data sets — for research, according to CNBC reporting.
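Neither the CNBC reporting cited above nor Facebook has published the technical details of that matching scheme, but the general technique of comparing hashed identifiers rather than raw ones can be sketched as follows (all data and field choices below are invented for illustration):

```python
import hashlib

def match_key(name, dob):
    """Derive a pseudonymous key from normalised identifying fields.

    If both parties hash the same normalised fields, equal keys
    indicate the same individual without either side exchanging
    raw identifiers.
    """
    normalised = f"{name.strip().lower()}|{dob}"
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

# Two datasets held by different parties (invented example data):
hospital_records = {match_key("Jane Doe", "1980-04-02"): {"condition": "x"}}
platform_records = {match_key(" JANE DOE ", "1980-04-02"): {"user_id": 42}}

# The records match on the hashed key, never on the raw name:
overlap = hospital_records.keys() & platform_records.keys()
print(len(overlap))  # 1
```

Note that plain hashing of low-entropy identifiers like names can be reversed by dictionary attack, which is one reason such matching schemes draw privacy scrutiny.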

Working with the American Cancer Society, the American College of Cardiology, the American Heart Association and the Centers for Disease Control and Prevention, Facebook is developing a series of digital prompts that will encourage users to get a standard battery of tests that’s important for ensuring the health of populations of a certain age.

The company’s initial focus is on the top two leading causes of death in the U.S.: heart disease and cancer — along with the flu, which affects millions of Americans each year.

“Heart disease is the number one killer of men and women around the world and in many cases it is 100% preventable. By incorporating prevention reminders into platforms people are accessing every day, we’re giving people the tools they need to be proactive about their heart health,” said Dr. Richard Kovacs, the president of the American College of Cardiology, in a statement.

Users who want to access Facebook’s Preventive Health tools can search in the company’s mobile app to find which checkups are recommended by the company’s partner organizations based on the age and gender of a user.

The tool allows Facebookers to mark when the tests are completed, set reminders to schedule future tests and tell people in their social network about the tool.

Facebook will even direct users to resources on where to have the tests. One thing that the company will not do, Facebook assures potential users, is collect the results of any test.

“Health is particularly personal, so we took privacy and safety into account from the beginning. For example, Preventive Health allows you to set reminders for your future checkups and mark them as done, but it doesn’t provide us, or the health organizations we’re working with, access to your actual test results,” the company wrote in a statement. “Personal information about your activity in Preventive Health is not shared with third parties, such as health organizations or insurance companies, so it can’t be used for purposes like insurance eligibility.”

The company said that people can also use the new health tool to find locations that administer flu shots.

“Flu vaccines can have wide-ranging benefits beyond just preventing the disease, such as reducing the risk of hospitalization, preventing serious medical events for some people with chronic diseases, and protecting women during and after pregnancy,” said Dr. Nancy Messonnier, Director, National Center for Immunization and Respiratory Diseases, CDC, in a statement. “New tools like this will empower users with instant access to information and resources they need to become a flu fighter in their own communities.”

29 Oct 2019

ZOMG there’s a new ‘The Mandalorian’ trailer (now with more Werner Herzog)

There’s probably no more hotly anticipated series from any new streaming service than “The Mandalorian” on Disney+ — and now the good folks at Disney have given us a new trailer to hypothesize about.

There’s more action, more world-building, and much much more Werner Herzog (who could ask for anything more?).

The Lucasfilm team has been relatively mum about the details of the new live-action Star Wars series that Jon Favreau created for Disney+.

What we do know is that the series will star Pedro Pascal (he of the glorious “Game of Thrones” guest turn as Oberyn Martell), who will star as a “lone Mandalorian gunfighter in the outer reaches of the galaxy.”

Mandalorians, a group of warriors whose ranks included Jango and Boba Fett, are notorious bounty hunters and it looks like Pascal’s character will be no different.

Other cast members include Gina Carano, Giancarlo Esposito, Nick Nolte and the aforementioned Herzog.

Directors for the show include Dave Filoni, Bryce Dallas Howard, and Taika Waititi (whose work on Marvel’s Thor: Ragnarok was incredible).

As the production values from the trailer indicate, it appears “The Mandalorian” is well worth the $100 million price tag for its 10-episode run.

Disney+ aired the first “Mandalorian” trailer back in August.

28 Oct 2019

Will the quantum economy change your business?

Google and NASA have demonstrated that quantum computing isn’t just a fancy trick, but almost certainly something actually useful — and they’re already working on commercial applications. What does that mean for existing startups and businesses? Simply put: nothing. But that doesn’t mean you can ignore it forever.

There are three main points that anyone concerned with the possibility of quantum computing affecting their work should understand.

1. It’ll be a long time before anything really practical comes out of quantum computing.

Google showed that quantum computers are not only functional, but apparently scalable. That doesn’t mean they’re scaling right now, though. And even if they were, it wouldn’t mean there’s anything useful you can do with them.

What makes quantum computing effective is that it’s completely different from classical computing — and that also makes creating the software and algorithms that run on it essentially a completely unexplored space.

There are theories, of course, and some elementary work on how to use these things to accomplish practical goals. But we are only just now arriving at the time when such theories can be tested at the most basic levels. The work that needs to happen isn’t so much “bringing to market” as “fundamental understanding.”

Although it’s tempting to equate the beginning of quantum computing to the beginning of digital computing, in reality they are very different. Classical computing, with its 1s and 0s and symbolic logic, actually maps readily onto human thought processes and ways of thinking about information — with a little abstraction, of course.

Quantum computing, on the other hand, is very different from how humans think about and interact with data. It doesn’t make intuitive sense, and not only because we haven’t developed the language for it. Our minds really just don’t work that way!
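To make that contrast concrete, here is a toy state-vector sketch (a plain NumPy illustration added for this piece, not code from Google's hardware or any real quantum SDK): a classical bit is definitely 0 or 1, while a qubit carries amplitudes for both at once, and only measurement probabilities ever come out.

```python
import numpy as np

# Illustrative sketch only: a single qubit modeled as a 2-component
# state vector, in contrast with a classical bit's definite 0 or 1.

ket0 = np.array([1.0, 0.0])                    # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

state = H @ ket0                 # equal superposition of |0> and |1>
probs = np.abs(state) ** 2       # Born rule: measurement probabilities

print(probs)  # [0.5 0.5] -- either outcome is equally likely
```

The point of the sketch is that nothing in classical intuition corresponds to `state` itself; you only ever observe samples drawn from `probs`, which is part of why programming these machines is such unexplored territory.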

So although even I can now claim to have operated a quantum computer (technically true), there are remarkably few people in the world who can say they can do so deliberately in pursuit of a specific problem. That means progress will be slow (by tech industry standards) and very limited for years to come as the basics of this science are established and the ideas of code and data that we have held for decades are loosened.

2. Early applications will be incredibly domain-specific and not generalizable.

A common misunderstanding of quantum computing is that it amounts to extremely fast parallel processing. Now, if someone had invented a device that performed supercomputer-like operations faster than any actual supercomputer, that would be an entirely different development and, frankly, a much more useful one. But that isn’t the case.

As an engineer explained to me at Google’s lab, not only are quantum computers good at completely different things, they’re incredibly bad at the things classical computers do well. If you wanted to do arithmetic like addition and multiplication, it would be much better and faster to use an abacus.

Part of the excitement around quantum computing is learning which tasks a qubit-based system is actually good at. There are theories, but as mentioned before, they’re untested. It remains to be seen whether a given optimization problem or probability space navigation is really suitable for this type of computer at all.

What they are pretty sure about so far is that there are certain very specific tasks that quantum computers will trivialize — but it isn’t something general like “compression and decompression” or “sorting databases.” It’s things like evaluating a galaxy of molecules in all possible configurations and conformations to isolate high-probability interactions.

As you can imagine, that isn’t very useful for an enterprise security company. On the other hand, it could be utterly transformative for a pharmacology or materials company. Do you run one of those? Then in all likelihood, you are already investing in this kind of research and are well aware of the possibilities quantum brings to the table.

But the point is these applications will not only be very few in number, but difficult to conceptualize, prove, and execute. Unlike something like a machine learning agent, this isn’t a new approach that can easily be tested and iterated — it’s an entirely new discipline which people can only now truly begin to learn.

28 Oct 2019

Facebook staff demand Zuckerberg limit lies in political ads

Submit campaign ads to fact checking, limit microtargeting, cap spending, observe silence periods, or at least warn users. These are the solutions Facebook employees put forward in an open letter pleading with CEO Mark Zuckerberg and company leadership to address misinformation in political ads.

The letter, obtained by the New York Times’ Mike Isaac, insists that “Free speech and paid speech are not the same thing . . . Our current policies on fact checking people in political office, or those running for office, are a threat to what FB stands for.” The letter was posted to Facebook’s internal collaboration forum a few weeks ago.

The sentiments echo what I called for in a TechCrunch opinion piece on October 13th calling on Facebook to ban political ads. Unfettered misinformation in political ads on Facebook lets politicians and their supporters spread inflammatory and inaccurate claims about their views and their rivals while racking up donations to buy more of these ads.

The social network can still offer freedom of expression to political campaigns on their own Facebook Pages while limiting the ability of the richest and most dishonest to pay to make their lies the loudest. We suggested that if Facebook won’t drop political ads, they should be fact checked and/or use an array of generic “vote for me” or “donate here” ad units that don’t allow accusations. We also criticized how microtargeting of communities vulnerable to misinformation and instant donation links make Facebook ads more dangerous than equivalent TV or radio spots.

The Facebook CEO, Mark Zuckerberg, testified before the House Financial Services Committee on Wednesday, October 23, 2019, in Washington, D.C. (Photo by Aurora Samperio/NurPhoto via Getty Images)

Over 250 employees of Facebook’s 35,000 staffers have signed the letter, which declares, “We strongly object to this policy as it stands. It doesn’t protect voices, but instead allows politicians to weaponize our platform by targeting people who believe that content posted by political figures is trustworthy.” It suggests the current policy undermines Facebook’s election integrity work, confuses users about where misinformation is allowed, and signals Facebook is happy to profit from lies.

The solutions suggested include:

  1. Don’t accept political ads unless they’re subject to third-party fact checks
  2. Use visual design to more strongly differentiate between political ads and organic non-ad posts
  3. Restrict microtargeting for political ads, including the use of Custom Audiences, since microtargeting hides ads from the public scrutiny that Facebook claims keeps politicians honest
  4. Observe pre-election silence periods for political ads to limit the impact and scale of misinformation
  5. Limit ad spending per politician or candidate, with spending by them and their supporting political action committees combined
  6. Make it more visually clear to users that political ads aren’t fact-checked

A combination of these approaches could let Facebook stop short of banning political ads without allowing rampant misinformation or having to police individual claims.

Zuckerberg had stood resolute on the policy despite backlash from the press and lawmakers, including Representative Alexandria Ocasio-Cortez (D-NY). She left him tongue-tied during congressional testimony when she asked exactly what kinds of misinfo were allowed in ads.

But then on Friday, Facebook blocked an ad designed to test its limits by falsely claiming Republican Lindsey Graham had voted for Ocasio-Cortez’s Green New Deal, which he actually opposes. Facebook also told Reuters it will fact-check PAC ads.

One sensible approach for politicians’ ads would be for Facebook to ramp up fact-checking, starting with presidential candidates until it has the resources to scan more. Those fact-checked as false should receive an interstitial warning blocking their content rather than just a “false” label. That could be paired with giving political ads a bigger disclaimer, without making them more prominent overall, and only allowing targeting by state.

Deciding on potential spending limits and silence periods would be messier. Low limits could even the playing field, and broad silence periods, especially during voting, could prevent voter suppression. Perhaps these specifics should be left to Facebook’s upcoming independent Oversight Board, which acts as a supreme court for moderation decisions and policies.

Zuckerberg’s core argument for the policy is that over time, history bends toward more speech, not censorship. But that succumbs to a utopian fallacy: it assumes technology evenly advantages the honest and the dishonest. In reality, sensational misinformation spreads much further and faster than level-headed truth. Microtargeted ads with thousands of variants undercut and overwhelm the democratic apparatus designed to punish liars, while partisan news outlets counter attempts to call them out.

Zuckerberg wants to avoid Facebook becoming the truth police. But as we and Facebook’s own employees have put forward, there are progressive approaches to limiting misinformation if he’s willing to step back from his philosophical orthodoxy.

The full text of the letter from Facebook employees to leadership about political ads can be found below, via the New York Times:

We are proud to work here.

Facebook stands for people expressing their voice. Creating a place where we can debate, share different opinions, and express our views is what makes our app and technologies meaningful for people all over the world.

We are proud to work for a place that enables that expression, and we believe it is imperative to evolve as societies change. As Chris Cox said, “We know the effects of social media are not neutral, and its history has not yet been written.”

This is our company.

We’re reaching out to you, the leaders of this company, because we’re worried we’re on track to undo the great strides our product teams have made in integrity over the last two years. We work here because we care, because we know that even our smallest choices impact communities at an astounding scale. We want to raise our concerns before it’s too late.

Free speech and paid speech are not the same thing.

Misinformation affects us all. Our current policies on fact checking people in political office, or those running for office, are a threat to what FB stands for. We strongly object to this policy as it stands. It doesn’t protect voices, but instead allows politicians to weaponize our platform by targeting people who believe that content posted by political figures is trustworthy.

Allowing paid civic misinformation to run on the platform in its current state has the potential to:

— Increase distrust in our platform by allowing similar paid and organic content to sit side-by-side — some with third-party fact-checking and some without. Additionally, it communicates that we are OK profiting from deliberate misinformation campaigns by those in or seeking positions of power.

— Undo integrity product work. Currently, integrity teams are working hard to give users more context on the content they see, demote violating content, and more. For the Election 2020 Lockdown, these teams made hard choices on what to support and what not to support, and this policy will undo much of that work by undermining trust in the platform. And after the 2020 Lockdown, this policy has the potential to continue to cause harm in coming elections around the world.

Proposals for improvement

Our goal is to bring awareness to our leadership that a large part of the employee body does not agree with this policy. We want to work with our leadership to develop better solutions that both protect our business and the people who use our products. We know this work is nuanced, but there are many things we can do short of eliminating political ads altogether.

These suggestions are all focused on ad-related content, not organic.

1. Hold political ads to the same standard as other ads.

a. Misinformation shared by political advertisers has an outsized detrimental impact on our community. We should not accept money for political ads without applying the standards that our other ads have to follow.

2. Stronger visual design treatment for political ads.

a. People have trouble distinguishing political ads from organic posts. We should apply a stronger design treatment to political ads that makes it easier for people to establish context.

3. Restrict targeting for political ads.

a. Currently, politicians and political campaigns can use our advanced targeting tools, such as Custom Audiences. It is common for political advertisers to upload voter rolls (which are publicly available in order to reach voters) and then use behavioral tracking tools (such as the FB pixel) and ad engagement to refine ads further. The risk with allowing this is that it’s hard for people in the electorate to participate in the “public scrutiny” that we’re saying comes along with political speech. These ads are often so micro-targeted that the conversations on our platforms are much more siloed than on other platforms. Currently we restrict targeting for housing and education and credit verticals due to a history of discrimination. We should extend similar restrictions to political advertising.

4. Broader observance of the election silence periods

a. Observe election silence in compliance with local laws and regulations. Explore a self-imposed election silence for all elections around the world to act in good faith and as good citizens.

5. Spend caps for individual politicians, regardless of source

a. FB has stated that one of the benefits of running political ads is to help more voices get heard. However, high-profile politicians can out-spend new voices and drown out the competition. To solve for this, if you have a PAC and a politician both running ads, there would be a limit that would apply to both together, rather than to each advertiser individually.

6. Clearer policies for political ads

a. If FB does not change the policies for political ads, we need to update the way they are displayed. For consumers and advertisers, it’s not immediately clear that political ads are exempt from the fact-checking that other ads go through. It should be easily understood by anyone that our advertising policies about misinformation don’t apply to original political content or ads, especially since political misinformation is more destructive than other types of misinformation.

Therefore, the section of the policies should be moved from “prohibited content” (which is not allowed at all) to “restricted content” (which is allowed with restrictions).

We want to have this conversation in an open dialog because we want to see actual change.

We are proud of the work that the integrity teams have done, and we don’t want to see that undermined by policy. Over the coming months, we’ll continue this conversation, and we look forward to working towards solutions together.

This is still our company.

28 Oct 2019

Spider eyes inspire a new kind of depth-sensing camera

As robots and gadgets continue to pervade our everyday lives, they increasingly need to see in 3D — but as evidenced by the notch in your iPhone, depth-sensing cameras are still pretty bulky. A new approach inspired by how some spiders sense the distance to their prey could change that.

Jumping spiders don’t have room in their tiny, hairy heads for structured light projectors and all that kind of thing. Yet they have to see where they’re going and what they’re grabbing in order to be effective predators. How do they do it? As is usually the case with arthropods, in a super weird but interesting way.

Instead of having multiple eyes capturing a slightly different image and taking stereo cues from that, as we do, each of the spider’s eyes is in itself a depth-sensing system. Each eye is multi-layered, with transparent retinas seeing the image with different amounts of blur depending on distance. The differing blurs from different eyes and layers are compared in the spider’s small nervous system and produce an accurate distance measurement — using incredibly little in the way of “hardware.”

Researchers at Harvard have created a high-tech lens system that uses a similar approach, producing the ability to sense depth without traditional optical elements.

The “metalens” created by electrical engineering professor Federico Capasso and his team detects an incoming image as two similar ones with different amounts of blur, as the spider’s eye does. These images are compared using an algorithm also like the spider’s — at least in that it is very quick and efficient — and the result is a lovely little real-time, whole-image depth calculation.
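The paper describes the team’s actual algorithm; as a rough illustration of the underlying depth-from-defocus principle, here is a toy sketch of my own (not the Harvard code), using the fact that for Gaussian blur, the difference between two differently blurred views, divided by the image Laplacian, recovers the blur difference — the quantity that encodes depth in a real system.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

# Toy depth-from-differential-defocus sketch (not the paper's metalens
# algorithm). The heat equation gives, for Gaussian blur of std sigma,
# d(image)/d(sigma^2) = 0.5 * Laplacian(image), so the per-pixel ratio
# (img_b - img_a) / Laplacian estimates (sigma_b^2 - sigma_a^2) / 2.

rng = np.random.default_rng(0)
scene = gaussian_filter(rng.random((64, 64)), 2)  # synthetic textured scene

sigma_a, sigma_b = 1.0, 1.5            # two blur levels (stand-ins for the two views)
img_a = gaussian_filter(scene, sigma_a)
img_b = gaussian_filter(scene, sigma_b)

# Laplacian at an intermediate blur level, for a centered estimate
sigma_mid = np.sqrt((sigma_a**2 + sigma_b**2) / 2)
lap = laplace(gaussian_filter(scene, sigma_mid))

mask = np.abs(lap) > 1e-4              # skip pixels where the Laplacian is ~0
est = np.median((img_b - img_a)[mask] / lap[mask])
# Theory predicts est ~= (sigma_b**2 - sigma_a**2) / 2 = 0.625
```

In a real camera the blur difference varies with distance to each scene point, so this per-pixel ratio becomes a depth map; the appeal of the approach is that it needs only two images and cheap local arithmetic, not a projector or stereo rig.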

The process is not only efficient, meaning it can be done with very little computing hardware and power, but it can be extremely compact: the one used for this experiment was only 3 millimeters across.

This means it could be included not just on self-driving cars and industrial robots but on small gadgets, smart home items, and of course phones — it probably won’t replace Face ID, but it’s a start.

The paper describing the metalens system will be published today in the Proceedings of the National Academy of Sciences.