Year: 2019

26 Sep 2019

Justice Department has issued draft rules on using consumer genetic data in investigations

The U.S. Department of Justice has issued a preliminary set of guidelines for how law enforcement agencies can use genetic information from consumer DNA analysis services in their investigations.

“Prosecuting violent crimes is a Department priority for many reasons, including to ensure public safety and to bring justice and closure to victims and victims’ families,” said Deputy Attorney General Jeffrey A. Rosen, in a statement. “We cannot fulfill our mission if we cannot identify the perpetrators. Forensic genetic genealogy gets us that much closer to being able to solve the formerly unsolvable. But we must not prioritize this investigative advancement above our commitments to privacy and civil liberties; and that is why we have released our Interim Policy – to provide guidance on maintaining that crucial balance.”

Most critically, the Department guidelines clearly state that a suspect “shall not be arrested based solely on a genetic association” generated by a genetic genealogical service.

If a suspect is identified using genetic information, the sample must be compared directly to the forensic profile already uploaded to the FBI’s Combined DNA Index System (CODIS).

Genetic information from a consumer service can only be used when a case involves an unsolved violent crime or sexual offenses and the forensic sample belongs to the person investigators believe to be the perpetrator or when a case involves the remains of a suspected homicide victim, according to the Justice Department.

Prosecutors have the ability to expand or authorize the use of genetic genealogical data beyond violent crimes when law enforcement is investigating crimes that present “a substantial and ongoing threat to public safety or national security.”

Genetic data from a consumer service can only be used after investigators have searched the FBI’s internal system and the collected samples that would be correlated with public information must be reviewed by a designated laboratory official, the Department of Justice said.

“The DLO must determine if the candidate forensic sample is from a single source contributor or is a deduced mixture. The DLO will also assess the candidate forensic sample’s suitability (e.g., quantity, quality, degradation, mixture status, etc.),” for comparison with publicly available genetic records. 

Under the new guidelines, law enforcement agencies can only search consumer genetic databases that provide explicit notifications to their users that law enforcement may use the services to investigate crimes or identify human remains. Investigators also have to receive consent from users of the genealogical service if their genetic information is going to be collected as part of an investigation (unless the consent would compromise the investigation).

These new guidelines follow a series of revelations from earlier in the year centering on the fact that DNA testing services had opened up their services to law enforcement agencies to aid in criminal investigations without their customers’ knowledge or consent.

At the heart of the story, was the decision by the genealogy service FamilyTreeDNA to open the genetic records of several million customers to law enforcement agencies without informing their customers. The story was first reported in January by BuzzFeed.

It wasn’t the first time that law enforcement had turned to genetic evidence to solve a crime. In April 2018, police arrested a man believed to be the “Golden State Killer,” in part thanks to DNA evidence collected from online DNA and genealogical databases. It was the first high-profile instance of public genetic information being used to solve a crime.

The ensuing outcry over FamilyTreeDNA’s decision brought new attention to the fact that the consumer genetic testing companies are largely unregulated and very few regulations exist governing how these companies can use information once a consumer has given their consent.

“We are nearing a de-facto national DNA database,” Natalie Ram, an assistant law professor at the University of Baltimore who specializes in bioethics and criminal justice, told BuzzFeed News at the time. “We don’t choose our genetic relatives, and I cannot sever my genetic relation to them. There’s nothing voluntary about that.”

26 Sep 2019

Privacy in a digital world

Technological progress has created a situation of severe tension and incompatibility between the right to privacy and the extensive data pooling on which the digital economy is based. This development requires new thinking about the substance of that right.

In the last decade, both governments and giant corporations have become data miners, collecting information about every aspect of our activities, behavior and lifestyle. New and inexpensive forms of data storage and the internet connectivity revolution — which now extends to just about everything, from smart appliances to nanobots inside people’s bodies — enable the constant transmission of big data from sensors and data-collection devices to central “brains”; the artificial intelligence revolution has made it possible to analyze the masses of data gathered in this way.

The intensive collection of data and the inherent advantages of the new technology have spawned the cynical idea that privacy is dead, and we might as well just get used to that fact. In what follows, I will describe three aspects of the right to privacy that have become especially relevant in the digital world. I will then demonstrate that not only is privacy still alive and kicking, but also that we should treat it with the respect it deserves as the most important of all human rights in the digital world.

The first perspective on privacy in the digital world is the idea that the appropriate reaction to the massive pooling of data is to enhance this right, so that we all have better control over our personal information. Individuals should be able to choose what space within their personal domain can be accessed by others and to control the manner, scope and timing of its exposure.

From this perspective, and in a different and more extreme fashion than with regard to other human rights, the borders of the right to privacy allow for compromise and flexibility. Thanks to this control, I — as an individual — have the right to view the content of databases containing information about me. Furthermore, no one is allowed to make any use of this information without my consent, except in extraordinary circumstances. I retain the privilege to agree to the terms of use before I download an app onto my cell phone or begin to use freeware — product categories whose economic model rests on commercializing my personal data.

This approach is reflected in the regulations that require my consent before others can use and process my personal data, ensure my access to data about myself and stipulate that I can have it deleted, corrected or transferred to a different company.

But there is one serious problem with this approach: It is utter fiction. It simply isn’t possible to speak about consent to violations of privacy in a world in which data is processed in many ways and for many purposes, some of which cannot be foreseen at the time when consent is granted. Furthermore, every beginning scholar of behavioral psychology will tell you that no one reads the terms of use, even when they are phrased concisely or displayed in large print — neither of which is the case, of course.

Were this not enough, there is also the psychological phenomenon of the “privacy paradox,” which refers to the discrepancy between the concept of privacy reflected in what users say (“I care deeply about my privacy”) and their actual behavior (“A free pizza? Fantastic! What information do you need?”)

The downside of the notion of privacy as control is that our control of our personal data is quite fictional. There is an overall problem — whereby commercial entities avail themselves of huge tranches of private information without having obtained real consent for doing so. This information, in turn, can be put to various uses, some of which are of value, while others pose serious threats to society.

Above all, we need to understand the limits of privacy as control. It is clear that the best approach would be to upgrade our digital literacy and learn how to deal with the situation; but the problems noted here make this idea only minimally relevant. Perhaps the solution is to start with clearer legislation — national or international — that defines reasonable and legitimate uses of personal information and requires companies to obtain the consent of the individual involved only when the proposed use does not fall into that category.

Somewhat paradoxically, the second approach to the right to privacy in a digital world relates to the most basic and classic connotation of the right to privacy — the “right to be left alone.” This refers to our right to preserve and protect our identity and maintain a safe and protected space around our body, thoughts, feelings, darkest secrets, lifestyle and intimate activities. A world with sensors and surveillance cameras all around us, along with recording devices and gadgets that are constantly monitoring what we do, has far-reaching psychological ramifications.

In the discourse on privacy, we tend to deal chiefly with questions of controlling the transmission or management of information after it has been collected, along with issues of data anonymization, security and encryption. But what we need at the present time is to ask whether there really is a commercial, business or public need to collect our private data so obsessively.

Against the clear advantages of technological progress, commercial convenience and even law enforcement, we must weigh the chilling effect on curiosity, on trust, on creativity, on intimate activity, on the ability to think outside the box — which is the critical spark to innovation.

What’s more, the essential feature of all digital personal assistants is the human traits (voice, face, language) with which their developers have endowed them. These devices are supposed to give us the feeling that there is another human being in the room. Researchers have shown that in contrast to our behavior with what we perceive as a machine (such as a computer or telephone), we react to humanized technology as if a real person were standing there. The right to be left alone will take on a whole new meaning, then, different from what it meant in the internet age.

The third approach to the right to privacy is the idea that privacy should make it impossible for commercial or government entities to combine our personal data with big data amassed from other people in order to construct precise personality, psychological and behavioral profiles through machine learning. This phenomenon, known as the “autonomy trap,” applies to information about emotional tendencies, insecurity, sexual orientation (even of persons still in the closet), fears and anxieties and more.

The problem is that the personality profile is used for retargeting advertisements of products or services or for other facets of influencing behavior — all of it in a way that is precisely tailored to the needs associated with the profile.

In a world in which it is possible to pool and analyze information about us in order to generate buying and behavior recommendations “just for you” (purchases on Amazon, shows on Netflix, navigation guides such as Waze), we in effect are unwittingly surrendering some of our decision-making autonomy to systems that know what is the best route to our destination and what we should eat. 

We also are exposed to attempts at individual persuasion tailored just for us, with a power, invasiveness and capacity that did not exist in the past. Think of the preference-learning algorithms that power devices such as personal assistants, whose purpose is to learn as much about us as possible — what we are interested in, who our friends are, our habits, our mood — and then to help us by sending messages, making phone calls, setting appointments, ordering products or making travel reservations.

We must remember that there is a slippery slope from using these techniques to collect personal information in order to offer products and services, to using the very same techniques to influence our thoughts — creating an autonomy trap around our beliefs, undermining our trust in democratic institutions and, in brief, manipulating elections.

The Cambridge Analytica scandal in the spring of 2018 — which took the lid off the exploitation of personal data in order to sway the elections in many countries — shows that the right to privacy goes far beyond individual control of information and extends to a threat to the very possibility of conducting a sound democratic process, and thus — of protecting all human rights.

And so, in the digital world, privacy must be seen as a crucially important right for us as a society, as a collective. At the conceptual level it needs to go through the same process of evolution as its older sibling, the right to freedom of expression. Just as freedom of expression started out as the right of individuals to scream to their heart’s content, and developed into a collective right that sustains a rich and functional public discourse so that we can engage in a healthy democratic process, so too privacy must grow and develop — from the right of individuals to trade in their own data, into a collective right of defense against autonomy traps, in the context of elections and mind control.

The laws governing commercial competition will have to develop ideas that see personal data as an independent market. Antitrust agencies will have to look at the concentration of the personal data held by a single entity.

By the same token, the laws on election propaganda will have to regulate what types of personal information may not be exploited in campaigns, and determine whether there are techniques whose persuasive and manipulative powers are so great that they should be banned.

Privacy is not dead. In fact, it has become our most basic right and must be protected. Without individual privacy there is no meaning to an individual’s life, and without privacy, democracy loses all meaning.

26 Sep 2019

At the sixth annual Pear Demo Day, weather balloons, branded credit cards, and lots of top degrees

Pear, a Palo Alto-based seed stage fund that has made its name through early bets on Guardant Health, DoorDash, Memebox, and Gusto, among others, hosted its sixth annual demo day this week in what proved to be a scorchingly hot afternoon in Woodside, California — not that invitees were put off by the heat.

Hundreds of investors showed up at a sprawling public estate and surrounding gardens to see the dozen teams that Pear spent the summer working with, each of them less than nine months old, according to Pear, and many incorporated only in recent months. (Each has also received less than $200,000 so far from Pear and no other institutional investment.)

While some are sure to evolve into other ideas or dissolve into other endeavors, the group as a whole gave those gathered food for thought and a first look at some very solid talent.

Following are the companies that presented:

1) Windborne: Founded by three Stanford grads and another from Harvard, this startup aims to improve the accuracy of weather data where it’s currently limited, like over oceans, by using weather balloons that could allow the team to do things like tell shipping companies which route to take to minimize fuel burn. CEO Paige Brown also says their system can fly 60 times longer than existing solutions and for the same price. The more specific claim: that in a single $350 flight, a Windborne balloon can fly for more than five days and travel a quarter of the way around the world, collecting direct measurements in places no one else can.

The team apparently bonded as engineers in the Stanford Student Space Initiative and they’ve all worked at SpaceX.

2) Guild: This one was started by two Stanford grads and helps companies make branded credit cards. Why would they bother? Because, the startup claims, branded credit cards are a lot more lucrative — increasing spending by 20 percent, cutting churn by roughly half, and generating $50 per year of profit per customer. Co-founder Michael Spelfogel says he knows whereof he speaks, having tried, unsuccessfully, to launch a branded credit card while at Lyft.

He also says the idea is to partner with sports teams first.

3) Polimorphic: Started by two computer scientists out of MIT, this startup is building a “civic media platform” meant to help politicians communicate with constituents in an interactive way. The platform basically invites constituents to express their views directly to political and government leaders, while giving campaigns, civic groups, and governments a way to engage with those individuals (though the latter have to pay to do so). It’s a meaningful market, they argue, saying that campaign spending has been growing by 50 percent between major election cycles, with $9 billion spent in 2016 alone.

Of course, because this was a demo day, the founders also talked about their traction, saying they already have three letters of intent, and volunteering that they’re in early talks with three presidential campaigns.

4) Gradio: Launched by graduates of Stanford, Georgia Institute of Technology, NYU and MIT, Gradio says it speeds up the process of collecting and labeling data for use with AI and machine learning. The “Gradio data engine” corrects mislabeled data, identifies and removes “low value” data, and highlights the highest-value data. It’s a smart pitch, considering that acquiring and labeling data right now requires tons of human labor and often pricey domain expertise — and that, even so, something like one in five data points is mislabeled at a typical AI company.
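Gradio hasn’t disclosed how its data engine works, but a common way to surface likely mislabeled points is to flag examples whose given label conflicts with the labels of their nearest neighbors. This is a hypothetical sketch of that general technique, not Gradio’s actual algorithm; all names and data here are made up:

```python
# Toy mislabel detection: flag points whose label disagrees with the
# majority label of their k nearest neighbors (illustrative only).

def flag_suspect_labels(points, labels, k=3):
    """Return indices whose label conflicts with their k nearest neighbors."""
    suspects = []
    for i, p in enumerate(points):
        # Distance from this point to every other labeled point
        dists = sorted(
            (abs(p - q), labels[j]) for j, q in enumerate(points) if j != i
        )
        neighbor_labels = [lab for _, lab in dists[:k]]
        majority = max(set(neighbor_labels), key=neighbor_labels.count)
        if majority != labels[i]:
            suspects.append(i)
    return suspects

# Two value clusters; index 4 is labeled "B" but sits among the "A" cluster.
points = [1.0, 1.1, 1.2, 1.3, 1.4, 9.0, 9.1, 9.2, 9.3]
labels = ["A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(flag_suspect_labels(points, labels))  # -> [4]
```

Production systems typically use model confidence rather than raw distances, but the idea is the same: disagreement between a point’s label and its context makes it a candidate for review.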

As for who will use the technology, the founders say they’re targeting companies in the natural language processing space first.

5) Sympto Health: Launched by two founders from UC San Diego (one who graduated, one who dropped out to build Sympto), this startup is trying to tackle a universal problem: patients very often forget clinical instructions, and when that happens, they sometimes wind up being readmitted to the hospital.

Sympto ties into a care facility’s existing systems and workflows and sends “patient engagement” messages — things like surgery checklists, pre-appointment questionnaires, etc. — to minimize missed information and unnecessary readmissions. It says its patient-engagement service has already landed the company two enterprise contracts worth $300,000, too.

6) Smarty: This startup was founded by a single person with multiple degrees (HBS, MIT) who previously worked as a software engineer at Yammer.

What she has built: an automation tool that’s focused on business tasks like scheduling meetings, making introductions, and finding flights for out-of-town meetings. The tool is being made available first to users of G Suite and Office 365 (which have 200 million paying users combined), who are being asked to pay Smarty $20 a month for its workflow automation tool. Eventually, though, it aims to be its own client.

7) Impct: Started by two MBAs from National Chengchi University and another from Stanford, Impct is making what it calls snacks for good. It’s not that they’re more healthful than other options; instead, the idea is for companies to buy these white-label snacks for their offices, then re-invest a percentage of their sales into social responsibility programs chosen by employees. The thinking is that employees want their kombucha; why not spend on snack bars and drinks that give back?

8) Learn to Win: Started by two Stanford MBAs who say traditional learning management systems fall short of the needs of high-performance teams, Learn to Win is a “micro learning” training program that’s right now being used by 100 sports organizations; it also has a signed contract with the Air Combat Command to train fighter pilots.

What the program ostensibly offers: content that’s presented in a visual and easy-to-use content authoring engine, the ability to deploy mobile active learning content to users, and the ability to quickly evaluate results and iterate.

Next on the startup’s to-do list: enticing other entities with training challenges, including in the commercial airline industry, at oil and gas companies, and within police and fire departments.

9) Fanimal: Founders with degrees from Stanford, Columbia University and UC Berkeley (and who’ve worked at Boston Consulting Group, Gunderson Dettmer, and Hackbright Academy) decided to come together to tackle two annoying problems associated with buying tickets for live events: high fees, and that feeling when you buy tickets for a group of people . . . then need to chase them down for reimbursement.

With Fanimal, everyone in a social group pays individually and receives their own tickets, and there are no hidden fees. Instead, Fanimal makes money by adding a “small markup” to tickets. Since launching a few weeks ago, they’ve sold more than $31,000 in tickets.

10) Xilis: A Stanford PhD and a PhD from UNC Chapel Hill who are both now Duke University professors focused on oncology and precision health came together for this company out of their acute awareness that when someone is diagnosed with cancer, finding the right treatment frequently takes months and often comes with countless side effects. To speed along the process, their company, Xilis, uses “micro-organoids” to make thousands of 3D replicas of a patient’s tumor in about six days, which the company says can be used to test for drug compatibility faster.

They say it works, too. At least, the cofounders, Xiling Shen and David Hsu, say they’ve tested the technology with 12 patients, with a 100% success rate in predicting how a tumor will respond to medication.

11) Equipped: Founded by two Stanford grads who’ve worked variously for the NBA, Tesla and Amazon, Equipped has an interesting proposal. What if, instead of lugging an oversize umbrella to the beach or bringing a soccer ball to the park, you could get these things where they make sense — from on-demand equipment lockers at the beach or outside a park — renting what you need, then returning it?

Nike seems to like the idea. CEO Dan Mandelman says the sports retail giant is paying them $200,000 for six lockers in LA, with the cities of Burlingame, San Ramon, and Redwood City currently implementing pilot programs.

12) Maker: Two Stanford MBAs with marketing and management consultant experience have created a marketplace for small batch wines.

Maker finds small/independent wineries, cans their product under the Maker label, then delivers to the end customer.

By the way, you can get a flavor of Pear’s demo day here if you’re curious.

 

26 Sep 2019

‘We are seeing volume and interest in Peloton explode,’ says company president on listing day

This morning, Peloton (NASDAQ: PTON), the tech-enabled stationary bicycle and fitness content streaming company, raised $1.2 billion in its NASDAQ initial public offering. Despite dropping more than 10% in its first day of trading — ultimately closing down 11% at $25.84 per share — the IPO was a bona fide success. Peloton, once denied (over and over again) by VC skeptics, now has hundreds of millions of dollars to take its business into a new era. One in which the media, hardware, software, logistics and social company attempts to become a generation-defining company akin to Apple.

In 2012 — six years after Soul Cycle opened its first cycling studio on New York’s Upper East Side and two years before a Soul Cycle founder, Ruth Zukerman, jumped ship to launch her own indoor cycling business, Flywheel Sports — a man by the name of John Foley made the ambitious, some might say foolish, decision to start a company that would sell these exercise bikes direct-to-consumer. That way, you could take a Soul Cycle class, in essence, in the comfort of your own home. Even better, technology would improve the experience.

As my colleague Josh Constine recently described it, these bikes come outfitted with a 22-inch Android screen, transforming an outdated exercising experience and bringing it into 2019: “It makes lazy people like me work out. That’s the genius of the Peloton bicycle. All you have to do is Velcro on the shoes and you’re trapped. You’ve eliminated choice and you will exercise,” Constine writes.

Peloton’s ability to get people to exercise — a feature driven by its talented instructors (some of whom were poached from competitor Flywheel Sports) — ultimately had venture capital investors funneling roughly $1 billion into the business. Today, Peloton operates dozens of showrooms across the U.S., counts 1.4 million total community members — defined as any individual who has a Peloton account — and over 500,000 paying subscribers. Why? Because the company, as stated in its IPO prospectus, “sells happiness.”

“Peloton is so much more than a Bike — we believe we have the opportunity to create one of the most innovative global technology platforms of our time,” writes Foley. “It is an opportunity to create one of the most important and influential interactive media companies in the world; a media company that changes lives, inspires greatness, and unites people.”

Peloton’s flagship product, a tech-enabled stationary bike.

Peloton’s community coupled with the high margins on sales of its $2,245 bikes had the company reporting $915 million in total revenue for the year ending June 30, 2019, an increase of 110% from $435 million in fiscal 2018 and $218.6 million in 2017. Its losses, meanwhile, hit $245.7 million in 2019, up significantly from a reported net loss of $47.9 million last year.

What’s next for Peloton? The opportunities are endless, given the company’s firm seat at the intersection of hardware, software, media content and more. A third product may be in the works, as may expansion to international markets or new instructors. Peloton is going after a massive market ripe for disruption. What’s certain is that we’ll see a whole lot of cash flowing into fitness tech copycats in the next couple of years.

Peloton, following a number of lukewarm consumer IPOs (Uber), nearly doubled its valuation to $8.1 billion this morning after pricing its IPO at the top of its range, $29 per share. To answer some of our most burning questions, we chatted with Peloton’s president William Lynch, the former CEO of Barnes & Noble, about the float.

The following conversation has been edited for length and clarity.

Peloton president and former Barnes & Noble CEO William Lynch.


Kate Clark: What’s next for Peloton?
William Lynch: We now have over a billion in capital to fuel more growth, especially in the area of product innovation.

26 Sep 2019

DoorDash confirms data breach affected 4.9 million customers, workers and merchants

DoorDash has confirmed a data breach.

The food delivery company said in a blog post Thursday that 4.9 million customers, delivery workers and merchants had their information stolen by hackers.

The breach happened on May 4, the company said, but added that customers who joined after April 5, 2018 are not affected by the breach.

It’s not clear why it took almost five months for DoorDash to publicly reveal the breach. A spokesperson for DoorDash did not immediately comment.

Users who joined the platform before April 5, 2018 had their name, email and delivery addresses, order history, phone numbers, and hashed and salted passwords stolen.
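DoorDash hasn’t said which hashing scheme it used, but “hashed and salted” generally means the stolen values can’t simply be looked up in a precomputed table, and identical passwords don’t produce identical stored values. A minimal sketch of the general approach, using only Python’s standard library (the scheme and parameters here are illustrative, not DoorDash’s):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    """Derive a slow, salted hash; store (salt, digest), never the password."""
    if salt is None:
        salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("letmein", salt, digest))  # False

# The same password hashed for two different users yields different digests,
# because each user gets a fresh random salt:
salt2, digest2 = hash_password("hunter2")
print(digest == digest2)  # False
```

The per-user salt is what makes bulk cracking expensive: an attacker with the stolen table has to attack each user’s hash separately rather than reuse one rainbow table across the whole dump.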

The company also said consumers had the last four digits of their payment cards taken, though full numbers and card verification values (CVV) were not. Both delivery workers and merchants had the last four digits of their bank account numbers stolen.

Around 100,000 delivery workers also had their driver’s license information stolen in the breach.

The news comes almost exactly a year after DoorDash customers complained that their accounts had been hacked. The company at the time denied a data breach and claimed attackers were running credential stuffing attacks, in which hackers take lists of stolen usernames and passwords and try them on other sites that use the same passwords. But many of the customers we spoke to said their passwords were unique to DoorDash, ruling out such an attack.

When asked at the time, DoorDash could not explain how the affected accounts were breached.

26 Sep 2019

Facebook tries hiding Like counts to fight envy

If their post has lots of Likes, you feel jealous. If your post doesn’t get enough Likes, you feel embarrassed. And when you just chase Likes, you distort your life seeking moments that score them, or censor it fearing you won’t look popular without them.

That’s why Facebook is officially starting to hide Like counts on posts, first in Australia starting tomorrow, September 27th. A post’s author can still see the count, but it’s hidden from everyone else, who will only be able to see who — but not how many people — gave a thumbs-up or other reaction.
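The visibility rule as described is simple: the author sees the full tally, while everyone else sees only who reacted. A hypothetical sketch of that logic (Facebook’s actual implementation is of course not public; all names and structures here are made up):

```python
def reaction_display(post, viewer_id, preview=3):
    """Authors see the count; other viewers see only a few reactor names."""
    names = [r["name"] for r in post["reactions"]]
    if viewer_id == post["author_id"]:
        return {"count": len(names), "names": names}
    # Non-authors get a sample of names with no total attached
    return {"count": None, "names": names[:preview]}

post = {
    "author_id": 1,
    "reactions": [{"name": n} for n in ["Ada", "Grace", "Alan", "Edsger"]],
}
print(reaction_display(post, viewer_id=1)["count"])  # 4 (author sees tally)
print(reaction_display(post, viewer_id=2)["count"])  # None (count hidden)
```

The interesting design choice is that the count isn’t deleted, just scoped: the same data renders differently depending on who is looking.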

The launch of the hidden Like counts test makes available what we reported Facebook was privately prototyping earlier this month, as spotted in its Android code by reverse engineering master Jane Manchun Wong. The test will run in parallel to Instagram’s own hidden Like count test, which we also scooped, and which first ran in Canada in April before expanding to six more countries in July.

“We are running a limited test where like, reaction, and video view counts are made private across Facebook,” a Facebook spokesperson tells me. “We will gather feedback to understand whether this change will improve people’s experiences.” If the test improves people’s sense of well-being without tanking user engagement, it could expand to more countries or even roll out to everyone.

Facebook’s goal here is to make people comfortable expressing themselves. It wants users to focus on the quality of what they share and how it connects them with people they care about, not just the number of people who hit the thumbs-up.


Comment counts will still be displayed, as will the most common types of reactions left on a post, plus the faces and names of some people who Liked it.

But without a big number on friends’ posts that could make users feel insignificant, or a low number on their own posts announcing their poor reception, users might feel more carefree on Facebook. The removal could also reduce herd mentality, encouraging users to decide for themselves if they enjoyed a post rather than just blindly clicking to concur with everyone else.

As I wrote two years ago, a collection of studies identifies the harm Facebook can do. They found that while chatting with friends and participating in comment threads on Facebook made people feel better, passively scrolling and Liking could lead to spirals of envy and declines in perceived well-being. Users would compare their seemingly boring lives to the well-Liked, glamorous moments shared by friends or celebrities and conclude they were lesser.

One concern is that Facebook Pages that have large followings and often get more Likes than individual users’ posts could miss out on extra engagement and reach without that herd mentality.

But if Facebook wants to build a social network people continue using for another 15 years, it has to put their well-being first — above brands, above engagement, and above ad dollars.

26 Sep 2019

Tesla V10.0 car software update adds Smart Summon, Netflix/YouTube, Spotify, karaoke and more

Tesla is rolling out a new software update that adds a slew of new features to its cars. These include the new ‘Smart Summon’ feature, which allows cars equipped with the optional $5,000 Full Self-Driving package to drive themselves out of a parking spot and across the lot to collect you.

This is one of the most advanced semi self-driving features that Tesla has yet released to the general public, and the company still says you should use it only in parking lots and only when you have a clear view of your car. The company also notes that you’re ultimately responsible for the vehicle, so definitely be aware of what’s going on with the car and its surroundings if you’re planning to use this one – and you can stop the car remotely should you feel the need to. Smart Summon has been out in a limited preview beta for some customers, but now it’s rolling out to all vehicles whose owners have purchased the FSD option.

Other new features included in this update include the much-requested native Spotify support, which is available to all Spotify Premium account-holders across all markets where it’s available. That should go a long way towards satisfying Tesla owners who have been less than satisfied with playing audio via Bluetooth from this extremely popular streaming music option. In China, Tesla is also rolling out Ximalaya, a podcast and audiobook streaming service.

Tesla Theater Mode, also new in version 10.0, connects your infotainment system to your Netflix, YouTube and Hulu accounts (including Hulu + Live TV if you subscribe to that tier), giving you access to streaming video from all these platforms while the car is safely in park. In China, the automaker is also adding iQiyi and Tencent Video, and it says it’ll be adding more options globally “over time” to supplement these offerings. The new Theater Mode will also provide access to Tesla vehicle tutorials for owners to watch in-car, again only while parked.

A lot of these updates focus on entertainment options, including the new “Car-aoke” mode, which, as you might have guessed, adds an in-car karaoke experience that includes a “massive” library of music and lyrics, Tesla says, with multiple languages supported. Singing along on road trips has long made do with low-tech options, but official support might encourage more amateur James Cordens.

Last but not least for new entertainment features, there’s the launch of the Cuphead port on Tesla Arcade, the in-car gaming software Tesla launched earlier this year. Cuphead is a cult smash hit indie game, with an iconic art style reminiscent of early Disney animation, and this is definitely a nod to Tesla’s core geek audience (and probably a treat for the Musk man himself). Again, this is only available while parked in case you were worried about distracted driving.

Tesla also added some new navigation features that suggest interesting restaurants and sightseeing opportunities along your route, which could lead to some more spontaneous adventures. There’s also a new file-system tweak that separates videos captured by the car’s cameras in Dashcam and Sentry Mode, making them easier for users to find; they’ll be auto-deleted when there’s a need to free up storage.

This is a big ol’ update packed with new features, and it’s rolling out over the air to vehicles beginning this week. As mentioned a couple of places above, you might see some slight differences from region to region, but Tesla says you can also check out the updates in-store at its showrooms if you want a sneak preview.

26 Sep 2019

MediaRadar’s new product helps event organizers maximize sales

MediaRadar CEO Todd Krizelman describes his company as having “a very specific objective, which is to help media salespeople sell more advertising” by providing them with crucial data. And with today’s launch of MediaRadar Events, Krizelman hopes to do something similar for event organizers.

These customer groups might actually be one and the same, as plenty of companies (including TechCrunch) see both advertising and events as part of their business. In fact, Krizelman said customer demand “basically pushed us into this business.”

He also suggested that after years of seeing traditional ad dollars shift into digital, “the money is now moving out of digital into events.”

If you’re organizing a trade show, you can use MediaRadar Events to learn about the overall size of the market, and then see who’s been purchasing sponsorships and exhibitor booths at similar events.

The product doesn’t just tell you who to reach out to, but how much these companies have paid for booths and sponsorships in the past, whether there are seasonal patterns in their conference spending and how that spending fits into their overall marketing budget — after all, Krizelman said, “In 2019, very few companies are siloed by media format as a buyer or a seller. Anyone doing that is putting their business at risk.”

He also described collecting the data needed to power MediaRadar Events as “much more complicated than we expected,” which is why it took the team two years to build the product. He said the data comes from three sources: some of it is posted publicly by event organizers, some of it is shared directly by organizers with MediaRadar, and in some cases members of the MediaRadar team attend the events themselves.

MediaRadar Events supports a wide range of events, although Krizelman acknowledged that it doesn’t have data for every industry. For example, he suggested that a convention for coin-operated laundromat owners might be “too niche” (though he hastened to add that he meant no offense to the laundromat business).

In a statement, James Ogle — chief financial officer at AdExchanger owner Access Intelligence — said:

Hosting events and the resulting revenue that comes from them is a big part of our business. However, the event space is getting more and more crowded and also more niche. Relevancy equals value, so we want to make sure our attendees are within the right target market for our exhibitors. MediaRadar provides critical transparency into the marketplace.

26 Sep 2019

Dating app maker Match sued by FTC for fraud

They’re just not that into you. Or maybe it was a bot? The U.S. Federal Trade Commission on Wednesday announced it has sued Match Group, the owner of just about all the dating apps — including Match, Tinder, OKCupid, Hinge, PlentyofFish, and others — for fraudulent business practices. According to the FTC, Match tricked hundreds of thousands of consumers into buying subscriptions, exposed customers to the risk of fraud, and engaged in other deceptive and unfair practices.

The suit focuses only on Match.com and boils down to this: Match.com didn’t just turn a blind eye to its massive bot and scammer problem, the FTC claims. It knowingly profited from it. And it made deceiving users a core part of its business practices.

The charges against Match are fairly significant.

The FTC says that most consumers aren’t aware that 25 to 30 percent of Match registrations per day come from scammers. This includes romance scams, phishing scams, fraudulent advertising, and extortion scams. During some months from 2013 to 2016, more than half the communications taking place on Match were from accounts the company identified as fraudulent.

Bots and scammers, of course, are a problem all over the web. The difference is that, in Match’s case, it indirectly profited from this, at consumers’ expense, the suit claims.

The dating app sent out marketing emails (i.e., the “You caught his eye” notices) to potential subscribers about new messages in the app’s inbox. However, it did so after it had already flagged the message’s sender as a suspected bot or scammer.


“We believe that Match.com conned people into paying for subscriptions via messages the company knew were from scammers,” said Andrew Smith, Director of the FTC’s Bureau of Consumer Protection. “Online dating services obviously shouldn’t be using romance scammers as a way to fatten their bottom line.”

From June 2016 to May 2018, Match’s own analysis found 499,691 consumers signed up for subscriptions within 24 hours of receiving an email touting the fraudulent communication, the FTC said. Some of these consumers joined Match only to find the message that brought them there was a scam. Others joined after Match deleted the scammers’ account, following its fraud review process. That left them to find the account that messaged them was now “unavailable.”

In all cases, the victims were now stuck with a subscription — and a hassle when they tried to cancel.

Because of Match’s allegedly “deceptive advertising, billing, and cancellation practices,” consumers would often try to reverse their charges through their bank. Match would then ban the users from the app.

Related to this, the FTC says Match also violated the Restore Online Shoppers’ Confidence Act (ROSCA) by failing to provide a simple way for customers to stop the recurring charges. One internal Match document from 2015 showed that canceling a subscription took more than six clicks, and the process often led consumers to think they had canceled when they had not.


And the suit alleges Match tricked people into free, six-month subscriptions by promising them they wouldn’t have to pay if they didn’t meet someone. It didn’t, however, adequately disclose that there were other, specific steps that had to be taken, involving how they had to use their subscription or redeem their free months.


Match, naturally, disputes the matter. It claims that it is, in fact, fighting fraud and that it handles 85% of potentially improper accounts in the first four hours, often before they become active. And it handles 96% of those fraudulent accounts within a day.

“For nearly 25 years Match has been focused on helping people find love, and fighting the criminals that try to take advantage of users. We’ve developed industry-leading tools and A.I. that block 96% of bots and fake accounts from our site within a day and are relentless in our pursuit to rid our site of these malicious accounts,” Match stated, in response to the news. “The FTC has misrepresented internal emails and relied on cherry-picked data to make outrageous claims and we intend to vigorously defend ourselves against these claims in court.”

The Match Group, as you may know, loves to have its day in court.

The FTC’s lawsuit isn’t the only one facing Match’s parent company because it doesn’t (allegedly) play fair.

A group of Tinder execs is currently suing Match and its controlling shareholder IAC for allegedly manipulating financial data to strip them of their stock options. That suit continues today, even though some plaintiffs had to drop out after Match snuck an arbitration clause into its employees’ recent compliance acknowledgments.

Now those former plaintiffs are acting as witnesses, and Match is trying to argue that the litigation funding agreement overcompensates them for their testimony in violation of the law. The judge called that motion a “smoke screen” and an attempt to “litigate [the plaintiffs] to death until they settle.”

The Match Group also got into it with Tinder’s rival Bumble, which it failed to acquire twice. It filed a lawsuit over infringed patents, which Bumble said was meant to bring down its valuation. Bumble then filed and later dropped its own $400M suit over Match fraudulently obtaining Bumble’s trade secrets.

In the latest lawsuit, the FTC is asking Match to pay back the “ill-gotten” money and wants to impose civil penalties and other relief. While the financial impacts may not be enough to take down a company with the resources of Match, the headlines from the trial could bring about an increase in negative consumer sentiment over Match and online dating in general. It’s a business that’s become commonplace and normalized in society, but also has a reputation of being a little scammy at times, too. This suit won’t help.

And given that Match Group operates a majority of the U.S.’s top dating apps, that could have a larger, trickle-down effect on its broader business.

The FTC suit is available below.