Year: 2018

29 May 2018

Snapchat preps Snapkit platform to bring camera, login to other apps

Snapchat is secretly planning the launch of its first full-fledged developer platform, currently called Snapkit. The platform’s prototypes indicate it will let other apps offer a ‘login with Snapchat’ option, use the Bitmoji avatars it acquired, and host a version of Snap’s full-featured camera software that can share back to Snapchat. Multiple sources confirm Snap Inc is currently in talks with several app developers to integrate Snapkit.

The platform could breathe new life into plateauing Snapchat by colonizing the mobile app ecosystem with its login buttons and content. Facebook used a similar strategy to become a ubiquitous utility with tentacles touching everyone’s business. But teens, long skeptical of Facebook and unsettled by the recent Cambridge Analytica scandal, could look to Snapchat for a privacy-safe way to log in to other apps without creating a new username and password.

We’ve reached out to Snapchat and are awaiting a response to our request for comment on the news.

Snapchat is making a big course correction in its strategy here after years of rejecting outside developers. In 2014, unofficial apps that let you surreptitiously save Snaps but required your Snapchat credentials caused data breaches, leading the company to reiterate its ban on using them. It also shut off sharing from a popular third-party music video sharing app called Mindie. In fact, Snap’s terms of service still say “You will not use or develop any third-party applications that interact with the Services or other users’ content or information without our written consent.”

A year ago I wrote that “Snap’s anti-developer attitude is an augmented liability” since it would be tough to populate the physical world with AR experiences unless Snap recruited the kind of outside help Facebook had started lining up. By December, Snapchat had launched Lens Studio, which lets brands and developers build limited AR content for the app. And it’s been building out its cadre of marketing and analytics partners that brands can work with.

Yet until now, Snapchat hadn’t created functionality that developers could use in their own apps. Snapkit will change that. We don’t know when it will be announced or launched, or who will be the initial developers who take advantage of it. But with Snapchat slipping to its lowest user growth rate ever after being pummeled by competition from Facebook and Instagram, the company needs more than a puppy face filter to regain the spotlight.

SnapPlat

According to sources familiar with Snap’s discussions with potential developers, Snapkit’s login with Snapchat feature is designed to let users sign up for new apps with their Snapchat credentials instead of creating new ones. Since Snap, unlike Facebook, doesn’t collect much personal info about you, there’s less data to worry about accidentally handing to developers or having them misuse. Displaying its branded button on various apps’ signup pages could lure in new Snapchat users or reengage lapsed ones. It’s also the key to developing tighter ties between Snap and other apps.
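
To make the idea concrete, a third-party integration of such a login button would presumably resemble a standard OAuth 2.0 authorization-code flow, sketched below. This is purely illustrative: Snapkit has not been publicly announced, so the endpoint URLs, scope names and parameters here are hypothetical placeholders rather than any real Snap API.

```typescript
// Hypothetical sketch of an OAuth 2.0 authorization-code flow for a
// "Login with Snapchat" button. All URLs, scopes and parameter names
// are placeholders; Snapkit's real API was not public at time of writing.

const AUTH_URL = "https://accounts.example-snap.com/oauth/authorize"; // hypothetical
const TOKEN_URL = "https://accounts.example-snap.com/oauth/token";    // hypothetical

// Step 1: send the user to the provider's consent screen.
function buildLoginUrl(clientId: string, redirectUri: string, state: string): string {
  const params = new URLSearchParams({
    client_id: clientId,
    redirect_uri: redirectUri,
    response_type: "code",
    scope: "user.display_name user.bitmoji.avatar", // hypothetical scopes
    state,
  });
  return `${AUTH_URL}?${params.toString()}`;
}

// Step 2: after the redirect back, exchange the one-time code for an access token.
async function exchangeCode(
  code: string,
  clientId: string,
  clientSecret: string,
  redirectUri: string
): Promise<unknown> {
  const res = await fetch(TOKEN_URL, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code,
      client_id: clientId,
      client_secret: clientSecret,
      redirect_uri: redirectUri,
    }),
  });
  return res.json(); // e.g. { access_token, refresh_token, expires_in }
}
```

The appeal for developers is that the heavy lifting (account creation, identity) happens on Snap’s side; the third-party app only ever sees the minimal profile data the token’s scopes grant.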

One benefit of another app knowing who you are on Snapchat which the company plans to provide with Snapkit is the ability to bring your Bitmoji avatar with you. Snapchat acquired Bitmoji’s parent company Bitstrips for just $64.2 million in 2016, but the cartoonish personalized avatar app has been a staple of the top 10 chart since. It remains one of Snapchat’s most differentiated offerings, as Facebook has only recently begun work on its clone called Facebook Avatars.

While Bitmoji has offered a keyboard full of your avatar in different scenes, Snapkit could make it easy to add yours as stickers on photos or in other ways in third-party apps. Seeing them across the mobile universe could inspire more users to create their own Bitmoji lookalike.

Snapchat is also working on a way for developers to integrate its editing tool-laden and AR-equipped camera into their own apps. Instead of reinventing the wheel to permit visual sharing, and inevitably building a poor knockoff, apps could just add Snapchat’s polished camera. The idea is that the photos and videos shot with the camera could then be used in that app as well as shared back to Snapchat. Similar to Facebook and Instagram Stories opening up to posts from third parties, this could inject fresh forms of content into Snapchat at a time when usage is slipping.

Launching a platform also means Snapchat will take on new risks, as third-parties with access to user data could be breached. Snap will also have to convince developers that making it easier for its 191 million daily users to join their apps is worth the engineering resources, given how that community is dwarfed by the multi-billion user Google and Facebook login systems.

Snapchat has struggled to get out of Facebook’s shadow despite inventing or acquiring what would become some of the hottest trends in social. Yet Snap Inc could develop alliances with a platform that leverages its differentiators — a teen audience that doesn’t care for Facebook, inherent privacy, and custom avatars. Through an army of developers, Snapchat might find the firepower to challenge the blue empire.

29 May 2018

Bird is reportedly raising $150 million at $1 billion valuation

Bird, the electric scooter company that first launched in Los Angeles, is reportedly raising $150 million in new financing led by Sequoia Capital, according to Bloomberg. The round would value the company at $1 billion.

This comes after Bird raised $100 million on a $300 million valuation back in March. That same month, Bird deployed its scooters in San Francisco, San Jose and Washington, D.C. Rival Lime, meanwhile, is reportedly trying to raise up to $500 million.

In San Francisco, Bird, Lime and Spin must remove their scooters from the streets by June 4. The San Francisco Municipal Transportation Agency will notify companies sometime in June whether they have received a permit to operate; those that do will be able to redeploy their scooters.

I’ve reached out to Bird and will update this story if I hear back.

29 May 2018

Facebook’s policy on white supremacy plays right into a racist agenda

In an ongoing series over at Motherboard, we’re learning quite a bit about how Facebook polices hate speech and hate organizations on its platform. Historically, the company has been far less than transparent about its often inconsistent censorship practices, even as white supremacist content — and plenty of other forms of hate targeted at marginalized groups — runs rampant on the platform.

Now we know more about why. For one, according to a series of internal slides on white supremacy, Facebook walks a fine line that arguably doesn’t exist at all. According to these post-Charlottesville training documents, the company opted to officially differentiate between white nationalism and white supremacy, allowing the former and forbidding the latter.

White nationalism gets the green light

Facebook appears to take the distinction between white nationalism and white supremacy seriously, but many white nationalists don’t, opting for the slightly more benign term only to soften their image. This is a well-documented phenomenon, as anyone who has spent time in these online circles can attest. It’s also the first sentence of the Anti-Defamation League (ADL) entry on white nationalism:

White nationalism is a term that originated among white supremacists as a euphemism for white supremacy.

Eventually, some white supremacists tried to distinguish it further by using it to refer to a form of white supremacy that emphasizes defining a country or region by white racial identity and which seeks to promote the interests of whites exclusively, typically at the expense of people of other backgrounds.

As Motherboard reports, Facebook notes “overlaps with white nationalism/separatism” as a challenge in its relevant training notes section for white supremacy, adding that “Media reports also use the terms interchangeably (for example referring to David Duke as white supremacist even though he doesn’t explicitly identify himself as one).”

Facebook’s own articulation of white supremacy offers considerable concessions:

Although there doesn’t seem to be total agreement among academics on whether white supremacy always implies racial hatred, the fact that it is based on a racist premise is widely acknowledged. [original emphasis]

Most of Facebook’s slides on hate speech and hate groups read like an embarrassingly simplistic CliffsNotes, lacking nuance and revealing the company’s apparently slapdash approach to the issue of racial hate. Tellingly, some portions of Facebook’s training text copy Wikipedia’s own language verbatim.

Here are the first few sentences of the Wikipedia entry on white supremacy:

White supremacy or white supremacism is a racist ideology based upon the belief that white people are superior in many ways to people of other races and that therefore white people should be dominant over other races.

White supremacy has roots in scientific racism and it often relies on pseudoscientific arguments. Like most similar movements such as neo-Nazism, white supremacists typically oppose members of other races as well as Jews.

Facebook’s training note on white supremacy, with the differences from Wikipedia’s wording appearing in the final sentence:

White supremacy or white supremacism is a racist ideology based upon the belief that white people are superior in many ways to people of other races and that therefore white people should be dominant over other races. White supremacy has roots in scientific racism and it often relies on pseudoscientific arguments. Like most similar movements such as neo-Nazism, white supremacists typically oppose people of color, Jews and non-Protestants.

Facebook slides recreated by Motherboard

Bafflingly, Facebook also notes that “White nationalism and calling for an exclusively white state is not a violation for our policy unless it explicitly excludes other PCs [protected characteristics]”, which, by definition, a white state does.

According to slides recreated by Motherboard, Facebook asserts that “we don’t allow praise, support and representation of white supremacy as an ideology” but stipulates that it does “allow praise, support and representation” for both white nationalism and white separatism. [Again, emphasis theirs.]

Facebook further clarifies:

By the same token, we allow to call for the creation of white ethno-states (e.g. “The US should be a white-only nation”).

White supremacy versus white nationalism

By failing to recognize the political motivations behind white nationalism as an identity, Facebook legitimates white nationalism as something meaningfully distinct from white supremacy. While not all white nationalists call for the dream of a white ethnostate to be achieved through racial domination — and arguably the two could be studied distinctly from a purely academic perspective — they have far more in common than they have differences. Even with such thin sourcing, Facebook has devoted a surprising amount of language to differentiating the two.

In grappling with this question after Charlottesville, the Associated Press offered this clarification for its own coverage:

For many people the terms can be used almost interchangeably. Both terms describe groups that favor whites and support discrimination by race.

The AP also mentions the “subtle difference” that white supremacists believe whites to be superior.

For white nationalists, that attitude at times appears more implicit than explicit but that doesn’t mean it’s not there. From my own reading and considerable hours spent immersed online in white nationalist groups and forums, there is massive observable ideological overlap between the two groups. The instances in which white supremacists and white nationalists truly espouse wholly distinct ideologies are rare.

Further, it’s impossible to ignore that violence against non-whites is a central thread running throughout white nationalism, whether stated or implied. Imagining a white ethnostate that does not directly come about at the cost of the safety, wellbeing and financial security of racial minorities is pure fantasy — a fantasy Facebook is apparently content to entertain in pretending that the “white state” would not “explicitly exclude” anyone based on the protected characteristic of race.

The Southern Poverty Law Center (SPLC) defines white nationalism in similarly broad strokes, tying it directly to white supremacy and stating that “white nationalist groups espouse white supremacist or white separatist ideologies, often focusing on the alleged inferiority of nonwhites.”

The SPLC, an organization devoted to studying hate, explains the expedient fallacy of the white ethnostate as a nonviolent goal:

These racist aspirations are most commonly articulated as the desire to form a white ethnostate — a calculated idiom favored by white nationalists in order to obscure the inherent violence of such a radical project. Appeals for the white ethnostate are often disingenuously couched in proclamations of love for members of their own race, rather than hatred for others.

Apparently, Facebook ignored most dissenting definitions linking white nationalist goals directly to white supremacy. Naively or not, the company bought into white supremacy’s slightly more palatable public-facing image in shaping its policy platforms. In sourcing its policies, Facebook was apparently content to pick and choose which points supported its decision to allow white nationalism on its platform while supposedly casting out white supremacy.

“White nationalist groups espouse white separatism and white supremacy,” the Wikipedia page that Facebook drew from states. “Critics argue that the term ‘white nationalism’ and ideas such as white pride exist solely to provide a sanitized public face for white supremacy, and that most white nationalist groups promote racial violence.”

Sadly, for anyone who has watched many virulent strains of racism flourish and even organize on Facebook, the company’s shoddily crafted internal guidance on white supremacy comes as little surprise. Nor does the fact that the company failed to dedicate even a sliver of its considerable resources to understanding the nuance of white supremacist movements, aims and language.

We reached out to Facebook to see if these alarmingly reductive policies on racial hate have evolved in recent months (these materials are less than a year old), but the company only pointed us to the broad public-facing “Community Standards.” Any further detail on the actual implementation of policies around hate remains opaque.

Though it may have learned some harsh lessons in 2018, for Facebook, opacity is always the best policy.

29 May 2018

Google starts migrating all G Suite users from Hangouts to Meet

Google today announced that, starting now, it is moving all G Suite users on its recommended release schedule from its Hangouts video chat service to Hangouts Meet, the more enterprise-ready version of Hangouts. The move will likely take about a month or so.

This means all newly created Calendar invites will now link to Meet video meetings, though Google stresses that all previously created meetings won’t be affected by this.

For the time being, admins can still opt out of this transition and some may want to do so, simply because Meet doesn’t support Internet Explorer and Safari right now, for example (support for Firefox launched last week). But starting in the second half of 2018, Google expects that Meet will have full parity with Hangouts, including support for these browsers, and, at that time, Meet will become the default for these stragglers, too.

Google’s overall messaging strategy remains as confusing as always. The original plan was to position Allo and Duo as its consumer text and video chat apps while Meet and the more Slack-like Hangouts Chat played to its enterprise users.

Allo was a total failure, however, and its preferred messaging app now seems to be Chat, which will offer support for the new RCS standard and give Android users a more iMessage-like experience — or at least that’s what Google hopes. But to complicate matters, Hangouts is also still around in the consumer sphere and has generally done quite well. As for Duo, I don’t know anybody who uses it, but it’s hanging in there.

At least if you’re a business user, though, the story is pretty simple: Meet is your new video chat service and Hangouts Chat is Google’s chat service for you. And then when you get home, you can just use iMessage or Facebook Messenger or WhatsApp or WeChat or Viber or Signal or Telegram or, if you insist, Hangouts.

29 May 2018

Tradeshift raises $250M Series E, Goldman Sachs joins the round

Tradeshift, a late-stage supply chain payments and marketplaces startup which recently added blockchain to its armoury, has today raised $250 million in a Series E funding round led by Goldman Sachs and Public Sector Pension Investment Board (PSP Investments). Additional participation comes from HSBC, H14, GP Bullhound, and Gray Swan, a new venture company established by Tradeshift’s founders. The new round of financing brings Tradeshift’s total funding to more than $400 million. The company claims its valuation has now passed $1.1 billion.

The new investors join existing backers, including HSBC, American Express Ventures, the CreditEase Fintech Investment Fund, Notion Capital, Santander InnoVentures, and others. Mikkel Hippe Brun, Tradeshift’s GM of China and Co-Founder, will join Tradeshift’s board of directors.

Tradeshift CEO and Co-Founder Christian Lanng said in a statement: “We have always believed that the future of supply chains is 100 percent digital and that connecting trade is just the first step to a digitally connected economy. This investment will enable us to continue our rapid growth and consolidate our leadership position.”

“Given the rapid increase in B2B online transactions, online marketplaces are no longer just for consumers. Tradeshift has established itself as a leader in supply chain commerce by enabling corporations around the globe to take greater control of their supply chains,” said Darren Cohen, global head of Principal Strategic Investments at Goldman Sachs.

29 May 2018

Students confront the unethical side of tech in ‘Designing for Evil’ course

Whether it’s surveilling or deceiving users, mishandling or selling their data, or engendering unhealthy habits or thoughts, tech these days is not short on unethical behavior. But it isn’t enough to just say “that’s creepy.” Fortunately, a course at the University of Washington is equipping its students with the philosophical insights to better identify — and fix — tech’s pernicious lack of ethics.

“Designing for Evil” just concluded its first quarter at UW’s Information School, where prospective creators of apps and services like those we all rely on daily learn the tools of the trade. But thanks to Alexis Hiniker, who teaches the class, they are also learning the critical skill of inquiring into the moral and ethical implications of those apps and services.

What, for example, is a good way of going about making a dating app that is inclusive and promotes healthy relationships? How can an AI imitating a human avoid unnecessary deception? How can something as invasive as China’s proposed citizen scoring system be made as user-friendly as possible?

I talked to all the student teams at a poster session held on UW’s campus, and also chatted with Hiniker, who designed the course and seemed pleased at how it turned out.

The premise is that the students are given a crash course in ethical philosophy that acquaints them with influential ideas, such as utilitarianism and deontology.

“It’s designed to be as accessible to lay people as possible,” Hiniker told me. “These aren’t philosophy students — this is a design class. But I wanted to see what I could get away with.”

The primary text is Harvard philosophy professor Michael Sandel’s popular book Justice, which Hiniker felt combined the various philosophies into a readable, integrated format. After ingesting this, the students grouped up and picked an app or technology that they would evaluate using the principles described, and then prescribe ethical remedies.

As it turned out, finding ethical problems in tech was the easy part — and fixes for them ranged from the trivial to the impossible. Their insights were interesting, but I got the feeling from many of them that there was a sort of disappointment at the fact that so much of what tech offers, or how it offers it, is inescapably and fundamentally unethical.

I found the students fell into one of three categories.

Not fundamentally unethical (but could use an ethical tune-up)

WebMD is of course a very useful site, but it was plain to the students that it lacked inclusivity: its symptom checker is stacked against non-English-speakers and those who might not know the names of symptoms. The team suggested a more visual symptom reporter, with a basic body map and non-written symptom and pain indicators.

Hello Barbie, the doll that chats back to kids, is certainly a minefield of potential legal and ethical violations, but there’s no reason it can’t be done right. With parental consent and careful engineering it can be kept in line with privacy laws, but the team said that it still failed some tests of keeping the dialogue with kids healthy and parents informed. The scripts for interaction, they said, should be public — which is obvious in retrospect — and audio should be analyzed on device rather than in the cloud. Lastly, a set of warning words or phrases indicating unhealthy behaviors could warn parents of things like self-harm while keeping the rest of the conversation secret.

WeChat Discover allows users to find others around them and see recent photos they’ve taken — it’s opt-in, which is good, but it can be filtered by gender, promoting a hookup culture that the team said is frowned on in China. It also obscures many user controls behind multiple layers of menus, which may cause people to share location when they don’t intend to. Some basic UI fixes were proposed by the students, and a few ideas on how to combat the possibility of unwanted advances from strangers.

Netflix isn’t evil, but its tendency to promote binge-watching has robbed its users of many an hour. This team felt that some basic user-set limits like two episodes per day, or delaying the next episode by a certain amount of time, could interrupt the habit and encourage people to take back control of their time.

Fundamentally unethical (fixes are still worth making)

FakeApp is a way to face-swap in video, producing convincing fakes in which a politician or friend appears to be saying something they didn’t. It’s fundamentally deceptive, of course, in a broad sense, but really only if the clips are passed on as genuine. Watermarks visible and invisible, as well as controlled cropping of source videos, were this team’s suggestion, though ultimately the technology won’t yield to these voluntary mitigations. So really, an informed populace is the only answer. Good luck with that!

China’s “social credit” system is not actually, the students argued, absolutely unethical — that judgment involves a certain amount of cultural bias. But I’m comfortable putting it here because of the massive ethical questions it has sidestepped and dismissed on the road to deployment. Their highly practical suggestions, however, were focused on making the system more accountable and transparent. Contest reports of behavior, see what types of things have contributed to your own score, see how it has changed over time, and so on.

Tinder’s unethical nature, according to the team, was based on the fact that it was ostensibly about forming human connections but is very plainly designed to be a meat market. Forcing people to think of themselves as physical objects first and foremost in pursuit of romance is not healthy, they argued, and causes people to devalue themselves. As a countermeasure, they suggested having responses to questions or prompts be the first thing you see about a person. You’d have to swipe based on that before seeing any pictures. I suggested having some deal-breaker questions you’d have to agree on, as well. It’s not a bad idea, though open to gaming (like the rest of online dating).

Fundamentally unethical (fixes are essentially impossible)

The League, on the other hand, was a dating app that proved intractable to ethical guidelines. Not only was it a meat market, but it was a meat market where people paid to be among the self-selected “elite” and could filter by ethnicity and other troubling categories. Their suggestions of removing the fee and these filters, among other things, essentially destroyed the product. Unfortunately, The League is an unethical product for unethical people. No amount of tweaking will change that.

Duplex was taken on by a smart team that nevertheless clearly only started their project after Google I/O. Unfortunately, they found that the fundamental deception intrinsic in an AI posing as a human is ethically impermissible. It could, of course, identify itself — but that would spoil the entire value proposition. But they also asked a question I didn’t think to ask myself in my own coverage: why isn’t this AI exhausting all other options before calling a human? It could visit the site, send a text, use other apps and so on. AIs in general should default to interacting with websites and apps first, then to other AIs, then and only then to people — at which time it should say it’s an AI.


To me the most valuable part of all these inquiries was learning what hopefully becomes a habit: to look at the fundamental ethical soundness of a business or technology and be able to articulate it.

That may be the difference in a meeting between being able to say something vague and easily blown off, like “I don’t think that’s a good idea,” and describing a specific harm and reason why that harm is important — and perhaps how it can be avoided.

As for Hiniker, she has some ideas for improving the course should it be approved for a repeat next year. A broader set of texts, for one: “More diverse writers, more diverse voices,” she said. And ideally it could even be expanded to a multi-quarter course so that the students get more than a light dusting of ethics.

With any luck the kids in this course (and any in the future) will be able to help make those choices, leading to fewer Leagues and Duplexes and more COPPA-compliant smart toys and dating apps that don’t sabotage self-esteem.