Year: 2018

05 Nov 2018

Instagram prototypes bully-proof moderated School Stories

Instagram is considering offering collaborative School Stories that only a certain school’s students can see or contribute to. And to make sure these Stories wouldn’t become bullying cesspools, Instagram’s code shows a warning that “School stories are manually reviewed to make sure the community is safe.” School Stories could create a fun space for kids to share with their peers beyond the prying eyes of their parents or strangers, though they could also exacerbate teen culture issues around envy and exclusionary social scenes.

The code was first discovered by TechCrunch’s top tipster Jane Manchun Wong. Instagram declined to comment on the record regarding the code, though previous discoveries from its code that it also declined to comment on, such as video calling and Nametag, went on to officially launch. Typically, though, Instagram confirms when it’s internally or externally testing features we spot, so if it ever decides to actually launch School Stories, it might not be for months or longer. It could also scrap the feature rather than risk the bullying issues and the investment in moderation.

In other news from Wong’s findings, Instagram is also prototyping a URL scheme for Stories so users could share deep links directly to their Stories outside of Instagram. That could be very powerful for influencers, public figures, and brands trying to build their audience with behind-the-scenes and day-to-day Stories content instead of just feed posts that already have URLs.

Instagram declined to comment on Stories URLs as well, so again, it could be a while before this rolls out, if ever. But marketers might especially love the idea of being able to funnel ad clicks or fans of their other social profiles to their Instagram Stories. You could imagine Stories links floating around Twitter, YouTube and more. Snapchat has never offered more than a deep link to user profiles, so this could be a way for Instagram to show it does more than just copy.
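Instagram hasn’t documented any format for these deep links, but a Stories URL scheme would presumably encode at least the username (and perhaps a specific story ID) into a canonical URL. A minimal sketch, with the `/stories/` path and parameters being pure assumptions for illustration:

```python
from typing import Optional
from urllib.parse import quote

def story_deep_link(username: str, story_id: Optional[str] = None) -> str:
    """Build a hypothetical deep link to a user's Story.

    The /stories/ path scheme here is an assumption for illustration;
    Instagram had not published any format at the time of writing.
    """
    base = f"https://instagram.com/stories/{quote(username)}"
    return f"{base}/{quote(story_id)}" if story_id else base
```

The appeal for marketers is exactly this shape: a stable, shareable URL that can be dropped into a tweet, a YouTube description or an ad, rather than a destination reachable only from inside the app.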

Instagram already allows users to contribute to public collaborative Stories around locations and hashtags, while Facebook offers them for Events and Groups. And Facebook Stories recently launched holiday Stories where friends can see collections of each other’s posts for Halloween or other big moments.

School Stories could build on the idea of Instagram sub-networks, which it first started testing last month with universities. Instagram uses signals from what you post about, your location and your network to invite users to join their university’s network. This lets them show off a line in their profile with their school, class year, major, sports team and/or Greek affiliation, and show up in a directory for the school so people can follow them or send DMs to their pending inbox.

Facebook was originally school network-based when it launched in 2004. Users could leave their content visible by default to everyone at their school. While Facebook and Instagram are a lot more careful with privacy these days, School Stories could bring back that feeling of in-group community where users can post things that might be irrelevant or confusing to outsiders.

05 Nov 2018

Is it legal to post a ‘votie’ in your state? Check this map

The 2018 midterm elections are tomorrow, and in addition to exciting changes to the political landscape, the day promises to bring millions of voties (pictures of completed ballots posted to social media) and, simultaneously, nationwide confusion over whether they’re legal or not. It depends on what state you’re in — so fear not and consult this map! Or the handy color-coded list below.

Now, some of you may be wondering why it would be illegal for someone to express their political opinion by sharing how they voted. Well, the problem isn’t that you are engaging in political speech, which is of course protected, but that you are publicly displaying your actual vote when the election process specifically prohibits that.

Secrecy in voting is meant to be empowering, not limiting. If no one knows how you vote and it is in fact illegal to provide proof one way or the other, you can’t be coerced or threatened into voting a certain way. In addition to this possibility, there is the more general threat of having recording devices active in a polling location where others may not want to have their voting process documented.

Essentially the integrity of the democratic process and the possibility of infringing on the private votes of others has been decided by some states, and not without considerable discussion and dissent, to take precedence over the free speech of the individual in this specific location and time. As exceptions to the First Amendment go, it’s a pretty narrow one.

Here’s a big version of the map to download and share.

All told, 18 states prohibit the practice, with varying degrees of breadth and severity. It might be a low-level misdemeanor or a more serious offense; it might be a blanket ban on electronics in polling places; there might be a legal challenge to the law; the state may be vigilant or never care to prosecute… but I’m simplifying them all to “illegal” because, one way or another, it’s against the law there on this election day.

21 states either have no law prohibiting the practice or explicitly allow it, making it “legal” assuming the picture is of you and/or your own ballot and not of someone else and theirs.

The remaining 11 states don’t fit neatly into either category. For instance, in some states you can freely share pictures of your completed mail-in ballot, but taking photos in or around polling places is disallowed. It’s not just unclear to me, but to lawyers and lawmakers as well; generally speaking, though, there’s some way to break a law with a votie. If you’re in an “unclear” state, the safest thing is not to do it, but if you must, check out the specifics at the bottom of this list at Law & Crime.

Without further ado, here’s the list:

  • Alabama: ILLEGAL
  • Alaska: ILLEGAL
  • Arizona: MAIL-IN BALLOTS OK
  • Arkansas: UNCLEAR
  • California: ILLEGAL (but not for long)
  • Colorado: ILLEGAL
  • Connecticut: LEGAL
  • Delaware: ILLEGAL
  • District of Columbia: LEGAL
  • Florida: ILLEGAL
  • Georgia: ILLEGAL
  • Hawaii: LEGAL
  • Idaho: LEGAL
  • Illinois: ILLEGAL
  • Indiana: LEGAL
  • Iowa: MAIL-IN BALLOTS OK
  • Kansas: LEGAL
  • Kentucky: LEGAL
  • Louisiana: LEGAL
  • Maine: LEGAL
  • Maryland: MAIL-IN BALLOTS OK
  • Massachusetts: UNCLEAR
  • Michigan: ILLEGAL
  • Minnesota: LEGAL
  • Mississippi: ILLEGAL
  • Missouri: UNCLEAR
  • Montana: LEGAL
  • Nebraska: LEGAL
  • Nevada: ILLEGAL
  • New Hampshire: LEGAL
  • New Jersey: ILLEGAL
  • New Mexico: ILLEGAL
  • New York: ILLEGAL
  • North Carolina: ILLEGAL
  • North Dakota: LEGAL
  • Ohio: UNCLEAR
  • Oklahoma: UNCLEAR
  • Oregon: LEGAL
  • Pennsylvania: DEPENDS ON COUNTY
  • Rhode Island: LEGAL
  • South Carolina: ILLEGAL
  • South Dakota: ILLEGAL
  • Tennessee: ILLEGAL
  • Texas: MAIL-IN BALLOTS OK
  • Utah: LEGAL
  • Vermont: LEGAL
  • Virginia: LEGAL
  • Washington: LEGAL
  • West Virginia: MAIL-IN BALLOTS OK
  • Wisconsin: ILLEGAL
  • Wyoming: LEGAL
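For anyone who wants to check programmatically, the list above reduces to a simple lookup table. A sketch with a handful of entries (fill in the remaining states from the list; the status strings mirror the labels used here):

```python
# Ballot-selfie legality by state, as of November 5, 2018.
# Only a few entries shown; populate the rest from the list above.
VOTIE_STATUS = {
    "Alabama": "ILLEGAL",
    "Arizona": "MAIL-IN BALLOTS OK",
    "Arkansas": "UNCLEAR",
    "Connecticut": "LEGAL",
    "Pennsylvania": "DEPENDS ON COUNTY",
}

def can_post_votie(state: str) -> bool:
    """True only where the practice is clearly legal; anything
    ambiguous (or an unknown state) is treated as a no."""
    return VOTIE_STATUS.get(state) == "LEGAL"
```

Treating everything except an explicit “LEGAL” as a no is the conservative reading, which matches the advice below.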

If it’s illegal or questionably legal in your state, file a votie at your own risk!

Using your phone to document voter suppression, malfunctioning voting machines and other problems is a potential exception to the rule. Use your best judgment and respect the privacy of others.

I am not a lawyer and this is not legal advice! This article is strictly informational and correct to the best of my knowledge as of November 5, 2018. Is something incorrect? Let us know in the comments and I’ll look into it! And no, I’m not going to stop trying to make “votie” happen. It’s going to happen!

more 2018 US Midterm Election coverage

05 Nov 2018

Hey look, it’s a new Barnes & Noble Nook in 2018

It’s true! It’s 2018, and Barnes & Noble just announced another Nook! You can preorder it today! All of these things are somehow simultaneously true. The Nook line has basically been a non-starter since 2016, back when the once ubiquitous bookseller offered up a dirt-cheap $50 model. Even back then it felt like a strange anachronism.

The pricing on the new model is more in line with what you’d expect from a budget tablet from, say, Amazon. The Nook 10.1 runs $130. Aside from the titular screen size (at a middling 224 ppi), there’s really not much to talk about with what will almost certainly be a run-of-the-mill budget Android tablet with 32GB of storage, two cameras and a headphone jack — which admittedly does qualify as a feature in 2018. Barnes & Noble is calling it a “game changer,” because that’s what people do in press releases.

“The new Nook 10.1 provides a complete reading and entertainment experience on our biggest display yet,” Chief Digital Officer Bill Wood says in the release. “The soft-touch feel and lightweight design make it a perfect holiday gift for readers who want to enjoy their favorite books for hours, while also being able to browse, watch shows, listen to music, or send emails all from one device. The NOOK 10.1 is truly a game changer for the Nook lineup.”

Games will be changed when the device hits what’s left of Barnes & Noble’s stores on November 14.

05 Nov 2018

Facebook opens its first small biz pop-up stores inside Macy’s

Facebook is bursting out of the ones and zeros into the physical realm with nine brick-and-mortar pop-up stores that will show off goods from 100 small businesses and online brands. Facebook organized the merchants to be part of The Market @ Macy’s, which first launched earlier this year to create temporary spaces for businesses. The merchants keep all their sales revenue, with Facebook and Macy’s taking no revenue share, and Facebook covered the one-time fee that Macy’s charges each merchant for the space. The stores feature News Feed post-themed displays complete with Like button imagery so it feels like you’re shopping Facebook in real life.

While Facebook doesn’t earn money directly from the stores, it could convince the small businesses and others like them to spend more on Facebook ads. Alongside recent tests of advanced Instagram analytics and instant Promote ads for Stories, Facebook wants to build a deeper relationship with small businesses so this long tail of advertisers sticks with it as sharing and marketing shift from the News Feed toward Stories and private messaging.

“All over the world people are running businesses, big and small, that have inspiring stories and we want to help them succeed. We are thrilled to be partnering with one of the world’s biggest retailers to bring some of those businesses to a physical store this holiday season,” writes Facebook’s Director of North America Marketing Michelle Klein. “Macy’s shoppers will have the chance to meet businesses like Love Your Melon that sells hats and apparel to help in the fight against pediatric cancer, or Charleston Gourmet Burger Company that started from a backyard barbecue and has expanded to reach customers in all 50 states.”

The nine pop-ups will be open from today until February 2nd in The Market @ Macy’s in NYC, Pittsburgh, Atlanta, Fort Lauderdale, San Antonio, Las Vegas, Los Angeles, San Francisco and Seattle. The brands come from across apparel, lifestyle, food, beauty and other verticals, and include Two Blind Brothers, Bespoke Post, Inspiralized, Mented Cosmetics and Link. Facebook seems taken with the idea of having a physical presence in the world, with the pop-ups coming just days before Facebook starts shipping its first hardware product, the Facebook Portal video chat device.

Facebook will also run a big ad campaign of its own in NYC’s Grand Central Station for the next three weeks to promote the pop-ups, featuring 600 ad units with 36 designs across 115 locations in the public transportation hub. It will also run ads for the stores on its own app and Instagram, and provide the merchants with complimentary digital ad design from Facebook’s Creative Shop.

Facebook’s revenue growth has been massively decelerating, dropping from 59 percent year-over-year in Q3 2016 to 49 percent in Q3 2017 to 33 percent in Q3 2018. That’s introduced uncharacteristic share price declines and volatility for Facebook’s historically stable business. Delighting small businesses with experiences like the pop-up stores could keep them loyal as Facebook’s ad formats shift toward more vivid and interactive formats that can be tough for budget-strapped merchants to adopt.

05 Nov 2018

Oath is dead. Long live Verizon Media Group/Oath

Friends, readers, internet browsers, lend me your ears;
I come to bury Oath, not to praise it.
The subsidiary brands that companies own live after them;
Their terrible rebranding is oft interred with their bones;
So let it be with Oath. The new Verizon chief executive,
Hans Vestberg, told you Oath was ambitious:
If it were so, it was a grievous ambition,
And grievously hath Oath answer’d it.
Here, under leave of Vestberg and the rest of Verizon’s leadership —

Come I to speak in Oath’s funeral.

It was TechCrunch’s parent company, distant and somewhat comical to me:
But Vestberg says it was ambitious;
And Vestberg is an honourable man.

Oath did merge Yahoo and AOL under one brand
Whose ad networks and media outlets (like TechCrunch and HuffPo) did the Verizon coffers fill:
Did this in Oath seem ambitious?
When that the Go90 staff have cried, Oath hath wept:
Ambition should be made of sterner stuff:
Yet Vestberg says it was ambitious;
And Vestberg is an honourable man.

Y’all saw that when the merger was first proposed
I said it was a really bad idea,
But my parent company didn’t listen to me: was this ambition?
Yet Vestberg says Oath was ambitious;
And, sure, he is an honourable man.

I write not to disprove what Verizon issued in a press release,
But here I am to speak what I do know.
Y’all thought the branding was terrible, not without cause:
So don’t let us stop you now from mocking it.
O judgment! You were lost when branding was left to brutish beasts,
And men with no reason. Bear with me;
My LOLZ are in the coffin there with Oath,
And I must pause till I can stop cackling.

Anyway, Oath is now going to be Verizon Media Group/Oath as part of a corporate restructuring undertaken by Verizon’s CEO, Vestberg. The company is going to operate under three different business units — a Consumer Group, led by Ronan Dunne, a current executive vice president of Verizon and president of Verizon Wireless; a Business Group, led by Tami Erwin, currently executive vice president of wireless operations — which will focus on government, small and medium businesses, large business customers, and operate the company’s telematics arm; and a Media Group / Oath, which will be led by Guru Gowrappan, currently Oath’s chief executive.


05 Nov 2018

Niantic overhauls Ingress to make it more welcoming for new players

Before there was Pokémon GO, there was Ingress. It was Niantic’s first game — and while it never became the overwhelmingly popular phenomenon that GO did, it’s undeniably what allowed GO to exist in the first place.

Now Niantic is taking another swing at it. The company has rebuilt Ingress from the ground up, with the goal of making it prettier, more immersive, and — most importantly — more accessible to new players. The new app will ship for iOS and Android later today.

Unfamiliar with Ingress? At its core, it shares its DNA with Pokémon GO; it’s a game that encourages you to walk around the real world, visit nearby landmarks and parks, and work together with your self-selected team (or, in Ingress’ terminology, your “faction”).

But Ingress is a good bit more… intense than GO (Ingress players like to poke at GO as being “Ingress Lite”). There are no cutesy monsters to collect or Pokéstops to spin; instead, you’re “hacking” portals (mostly the same real-world locations that act as Pokéstops) and “linking” them together in an effort to conquer as much of the map as you can for your faction. Link three portals, and everything in between becomes your team’s turf. It’s like capture the flag mashed up with one massive worldwide game of tug of war, with a bit of Matrix-y cyberpunk dressing slathered on top.

Ingress Prime, as version 2.0 is known, replaces the original Ingress app with one built on Unity — the same gaming engine that powers Pokémon GO and many thousands of other games.

If you’ve been playing Ingress for a while, many of the changes here are “quality of life”-type tweaks: the UI has been cleaned up, and they’ve added all sorts of shortcuts and gestures to make it faster to do things like attack nearby portals or manage your inventory. The new map interface is easier to pan and zoom around with one hand, with a one-finger control scheme that’ll feel pretty familiar for GO players. The new UI is bound to be a point of contention at first, if only because it means a bit of habit breaking for players who’ve spent hundreds to thousands of hours getting used to the old one, and, well, people don’t like change. Hopefully, they come around.

Speaking of those hours spent in Ingress already: your progress and badges carry over to Ingress Prime. If you’re Level 16 in the original Ingress, you’ll be Level 16 in Ingress Prime. New here, though, is the ability to “recurse”. Sort of like the “prestige” concept made popular by Call of Duty, recursing sets you back to level 1 to start the grind all over again, but with your myriad unlocks (your lifetime AP score, recharge distance, and inventory items) still in tow.

Niantic tells me that certain things moving forward will only be available to those who opt to recurse and start afresh, but didn’t elaborate on what those could be. (With many longtime players approaching Pokémon Go’s level cap of 40, I’d be quite surprised if a similar concept doesn’t make its way into GO eventually.)

It’s the players who are new to Ingress, though — or those who gave Ingress a glance before and were spooked away by the steep learning curve — that Niantic seems most interested in here.

Whereas the original Ingress just sort of dumped you into the thick of it, Ingress Prime offers a bit more handholding out of the gate. A plot-driven tutorial introduces new players to the concepts of portals, hacking and so on, all while starting to plant the seeds of the game’s backstory and lore. You’re introduced to the two factions and the rival AIs behind them, eventually being asked to choose a side.

I ran through a beta build of the game’s onboarding process last week, and, as someone who admittedly fits right into that “gave Ingress a glance and got spooked away” camp mentioned above, Ingress Prime does a much better job of clarifying what the heck is going on. It feels like it could use a bit more play testing (particularly in explaining when I’m doing the wrong thing), but it’s a big step forward. It doesn’t spoon feed you, but it does a much better job of getting the ball rolling.

(Pro tip: the game recommends using headphones, and I don’t think that’s just so you can hear things at the highest fidelity. With the tutorial’s voice-acted tracks talking about hacking systems and controlling minds, anyone playing in public sans earbuds is bound to get some preeeeetty weird looks.)

Once they’ve gotten a new player hooked, Niantic intends to go a bit harder with the aforementioned plot/lore this time around. A weekly live-action web series called the “Dunraven Project” will fill in the game’s backstory, while an anime series (which debuted in Japan in October with an English version coming to Netflix in 2019) is meant to explore the wider universe.

According to Niantic, Pokémon GO was downloaded nearly a billion times. Ingress, meanwhile, capped out at around 20 million downloads.

Will this overhaul get Ingress downloads up into the billions? Probably not. Pokémon GO had that powder keg spark of nostalgia and familiarity to draw in massive crowds right off the bat — but, built on someone else’s intellectual property, there are limits on what Niantic can do with GO and where GO can… er, go. By rebooting Ingress, Niantic is using IP it already owns and fully controls as a springboard; it’s striving to keep the existing player base happy while setting the game up to grow dramatically by lowering the barrier to entry and expanding the storyline. It’s a tough tightrope act to pull off, but Niantic seems to be starting out on a good foot here.

05 Nov 2018

Daily Digest: Technology and tyranny, lying to ourselves, and Spotify’s $1b repurchase

Want to join a conference call to discuss more about these thoughts? Email Arman at Arman.Tabatabai@techcrunch.com to secure an invite.

Hello! We are experimenting with new content forms at TechCrunch. This is a rough draft of something new. Provide your feedback directly to the authors: Danny at danny@techcrunch.com or Arman at Arman.Tabatabai@techcrunch.com if you like or hate something here.

Harari on technology and tyranny

Yuval Noah Harari, the noted author and historian famed for his work Sapiens, wrote a lengthy piece in The Atlantic entitled “Why Technology Favors Tyranny” that is quite interesting. I don’t want to address the whole piece (today), but I do want to discuss his view that humans are increasingly giving up their agency in favor of algorithms that make decisions for them.

Harari writes in his last section:

Even if some societies remain ostensibly democratic, the increasing efficiency of algorithms will still shift more and more authority from individual humans to networked machines. We might willingly give up more and more authority over our lives because we will learn from experience to trust the algorithms more than our own feelings, eventually losing our ability to make many decisions for ourselves. Just think of the way that, within a mere two decades, billions of people have come to entrust Google’s search algorithm with one of the most important tasks of all: finding relevant and trustworthy information. As we rely more on Google for answers, our ability to locate information independently diminishes. Already today, “truth” is defined by the top results of a Google search. This process has likewise affected our physical abilities, such as navigating space. People ask Google not just to find information but also to guide them around. Self-driving cars and AI physicians would represent further erosion: While these innovations would put truckers and human doctors out of work, their larger import lies in the continuing transfer of authority and responsibility to machines.

I am not going to lie: I completely dislike this entire viewpoint and direction of thinking about technology. Giving others authority over us is the basis of civilized society, whether that third-party is human or machine. It’s how that authority is executed that determines whether it is pernicious or not.

Harari brings up a number of points here though that I think deserve a critical look. First, there is this belief in an information monolith, that Google is the only lens by which we can see the world. To me, that is a remarkably rose-colored view of printing and publishing up until the internet age, when gatekeepers had the power (and the politics) to block public access to all kinds of information. Banned Books Week is in some ways quaint today in the Amazon Kindle era, but the fight to have books in public libraries was (and sometimes today is) real. Without a copy, no one had access.

That disintegration of gatekeeping is one reason among many why extremism in our politics is intensifying: there is now a much more diverse media landscape, and that landscape doesn’t push people back toward the center anymore, but rather pushes them further to the fringes.

Second, we don’t give up agency when we allow algorithms to submit their judgments on us. Quite the opposite in fact: we are using our agency to give a third-party independent authority. That’s fundamentally our choice. What is the difference between an algorithm making a credit card application decision, and a (human) judge adjudicating a contract dispute? In both cases, we have tendered at least some of our agency to another party to independently make decisions over us because we have collectively decided to make that choice as part of our society.

Third, Google, including Search and Maps, has empowered me to explore the world in ways that I wouldn’t have dreamed before. When I visited Paris the first time in 2006, I didn’t have a smartphone, and calling home was a $1/minute. I saw parts of the city, and wandered, but I was mostly taken in by fear — fear of going to the wrong neighborhood (the massive riots in the banlieues had only happened a few months prior) and fear of completely getting lost and never making it back. Compare that to today, where access to the internet means that I can actually get off the main tourist stretches peddled by guidebooks and explore neighborhoods that I never would have dreamed of doing before. The smartphone doesn’t have to be distracting — it can be an amazing tool to explore the real world.

I bring these different perspectives up because I think the “black box society,” as Frank Pasquale calls it in his book of the same name, is under unfair attack. Yes, there are problems with algorithms that need addressing, but are they worse or better than their human substitutes? When judges’ mealtimes can vastly affect the outcome of prisoners’ parole decisions, don’t we want algorithms to do at least some of the work for us?

Lying to ourselves

Photo: Getty Images / Siegfried Kaiser / EyeEm

Speaking of humans acting badly, I wrote a review over the weekend of The Elephant in the Brain, a book about how we use self-deception to ascribe better motives to our actions than our true intentions. As I wrote about the book’s thesis:

Humans care deeply about being perceived as prosocial, but we are also locked into constant competition, over status attainment, careers, and spouses. We want to signal our community spirit, but we also want to selfishly benefit from our work. We solve for this dichotomy by creating rationalizations and excuses to do both simultaneously. We give to charity for the status as well as the altruism, much as we get a college degree to learn, but also to earn a degree which signals to employers that we will be hard workers.

It’s a depressing perspective, but one that’s ultimately correct. Why do people wear Stanford or Berkeley sweatshirts if not to signal things about their fitness and career prospects? (Even pride in school is a signal to others that you are part of a particular tribe). One of the biggest challenges of operating in Silicon Valley is simply understanding the specific language of signals that workers there send.

Ultimately, though, I was underwhelmed by the book, because I felt that it didn’t lead to a broader sense of enlightenment, nor could I see how to change either my behavior or my perception of others’ behaviors as a result of reading it. That earned a swift rebuke from one of the authors last night on Twitter:

Okay, but here is the thing: of course we lie to ourselves. Of course we lie to each other. Of course PR people lie to make their clients look good, and try to come off as forthright as possible. The best salesperson is going to be the person who truly believes in the product they are selling, rather than the person who knows its weaknesses and scurries away when they are brought up. This book makes a claim, one I think is reasonable, that self-deception is the key ingredient: we can’t handle the cognitive load of lying all the time, so evolution has adapted us to handle lying with greater facility by not allowing us to realize that we are doing it.

Nowhere is this more obvious than in my previous career as a venture capitalist. Very few founders truly believe in their products and companies. I’m quite serious. You can hear the hesitation in their voices about the story, and you can hear the stress in their throats when they hit a key slide that doesn’t exactly align with the hockey stick they are selling. That’s okay, ultimately, because these companies were young, but if the founder of the company doesn’t truly believe, why should I join the bandwagon?

Confidence is ambiguous — are you confident because the startup truly is good, or is it because you are carefully masking your lack of enthusiasm? That’s what due diligence is all about, but what I do know is that a founder without confidence isn’t going to make it very far. Lying is wrong, but confidence is required — and the line between the two is very, very blurry.

Spotify may repurchase up to $1b in stock

Photo by Spencer Platt/Getty Images

Before the market opened this morning, Spotify announced plans to buy back stock starting in the fourth quarter of 2018. The company has been authorized to repurchase up to $1 billion worth of shares, and up to 10 million shares total. The exact cadence of the buybacks will depend on various market conditions, and will likely occur gradually until the repurchase program’s expiration date in April of 2021.
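The two caps interact: at $1 billion and 10 million shares, the dollar limit binds first whenever the average repurchase price is above $100 a share. A quick back-of-the-envelope check (the share price used here is illustrative, not Spotify’s actual execution price):

```python
def max_shares_repurchasable(avg_price: float,
                             dollar_cap: float = 1_000_000_000,
                             share_cap: int = 10_000_000) -> int:
    """Whichever cap is hit first limits the buyback."""
    return min(int(dollar_cap // avg_price), share_cap)

# At an illustrative $140/share, the dollar cap binds:
# $1B / $140 is roughly 7.14M shares, below the 10M-share ceiling.
```

With the stock trading well above $100 at the time, the practical ceiling on the program is the dollar figure, not the share count.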

The announcement comes on the back of Spotify’s quarterly earnings report last week, which led to weakness in the company’s stock price behind concerns over its outlook for subscriber, revenue and ARPU (Average Revenue Per User) growth, despite the company reporting stronger profitability than Wall Street’s expectations.

After its direct listing in April, Spotify saw its stock price shoot to over $192 a share in August. However, the stock has since lost close to $10 billion in market cap, driven in part by broader weakness in public tech stocks, as well as by fears about subscription pricing pressure and ARPU growth as more of Spotify’s users opt for discounted family or student subscription plans.

Per TechCrunch’s Sarah Perez:

…The company faces heavy competition these days – especially in the key U.S. market from Apple Music, as well as from underdog Amazon Music, which is leveraging Amazon’s base of Prime subscribers to grow. It also has a new challenge in light of the Sirius XM / Pandora deal.

The larger part of Spotify’s business is free users – 109 million monthly actives on the ad-supported tier. But its programmatic ad platform is currently only live in the U.S., U.K., Canada and Australia. That leaves Spotify room to grow ad revenues in the months ahead.

The strategic rationale for Spotify is clear despite early reports painting the announcement as a way to buoy a flailing stock price. With over $1 billion in cash sitting on its balance sheet and the depressed stock price, the company clearly views this as an affordable opportunity to return cash to shareholders at an attractive entry point when the stock is undervalued.

As for Spotify’s longer-term outlook from an investor standpoint, the company’s ARPU growth should not be viewed in isolation. In the past, Spotify has highlighted discounted or specialized subscriptions, like family and student subscriptions, as having a much stickier user base. And the company has seen its retention rates improving, with churn consistently falling since the company’s IPO.

The stock is up around 1.5% on the news on top of a small pre-market boost.

What’s next

  • We are still spending more time on Chinese biotech investments in the United States (Arman previously wrote a deep dive on this a week or two ago).
  • We are exploring the changing culture of Form D filings (startups seem to be increasingly foregoing disclosures of Form Ds on the advice of their lawyers)
  • India tax reform and how startups have taken advantage of it

Reading docket

05 Nov 2018

Samsung’s social logo teases a folding phone ahead of announcement

All of the major players have held events over the past few months, but hardware season still has a few last gasps left. The Samsung Developer Conference happening this week in San Francisco isn’t likely to be a major launching pad for consumer electronics, but the company is expected to offer a glimpse into what’s to come.

Samsung’s long been a fan of teasing out big news ahead of launch, and all subtlety has gone out the window with the folding logo the company’s adopted on social media. A report from Bloomberg, later backed up by The Wall Street Journal, has the company showing off a prototype of a phone with a foldable display this week.

The company is said to still be debating the specifics of the hardware at this late stage, and the product may only be glimpsed in the form of an on-screen render or prototype. For Samsung, the key is showing the world that it’s continuing to innovate after a lukewarm critical reception on its last couple of devices.

The company likely won’t be first to market with the screen folding tech. That honor looks set to go to Royole Corporation’s FlexPai handset, which is due out before the end of the year. Though from the looks of it, it won’t leave the best first impression.

05 Nov 2018

Security researchers have busted the encryption in several popular Crucial and Samsung SSDs

Researchers at Radboud University have found critical security flaws in several popular Crucial and Samsung solid state drives (SSDs), which they say can be easily exploited to recover encrypted data without knowing the password.

The researchers, who detailed their findings in a new paper out Monday, reverse engineered the firmware of several drives to find a “pattern of critical issues” across the device makers.

In the case of one drive, the master password used to decrypt the drive’s data was just an empty string and could be easily exploited by flipping a single bit in the drive’s memory. Another drive could be unlocked with “any password” by crippling the drive’s password validation checks.
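As a rough illustration of this class of bug — a toy model, not the drives’ actual firmware, and with hypothetical names throughout — the unlock routine can be imagined as a password check whose outcome hinges on one flag in memory and on a master password that happens to be empty:

```python
import hashlib

# Toy model of a flawed self-encrypting-drive unlock routine.
# All names and logic here are illustrative; real firmware is vendor-specific.

MASTER_PASSWORD = ""  # some drives effectively shipped with an empty master password

class ToyDrive:
    def __init__(self, user_password: str):
        self._user_hash = hashlib.sha256(user_password.encode()).digest()
        self.password_check_enabled = True  # modeled as a single bit in drive RAM

    def unlock(self, attempt: str) -> bool:
        # Bug 1: flip the validation flag and any password unlocks the drive.
        if not self.password_check_enabled:
            return True
        # Bug 2: the empty master password always unlocks it.
        if attempt == MASTER_PASSWORD:
            return True
        return hashlib.sha256(attempt.encode()).digest() == self._user_hash

drive = ToyDrive("correct horse battery staple")
print(drive.unlock("wrong"))            # False: the normal path rejects a bad password
print(drive.unlock(""))                 # True: empty master-password bypass
drive.password_check_enabled = False    # attacker flips one bit, e.g. via a debug port
print(drive.unlock("anything at all"))  # True: validation check crippled
```

The point of the sketch is that the user’s passphrase never factors into the encryption key itself — the key is released by a check the attacker can simply route around.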

That wouldn’t be much of a problem if an affected drive also used software encryption to secure its data. But the researchers found that in the case of Windows computers, often the default policy for BitLocker’s software-based drive encryption is to trust the drive — and therefore rely entirely on a device’s hardware encryption to protect the data. Yet, as the researchers found, if the hardware encryption is buggy, BitLocker isn’t doing much to prevent data theft.

In other words, users “should not rely solely on hardware encryption as offered by SSDs for confidentiality,” the researchers said.

Alan Woodward, a professor at the University of Surrey, said that the greatest risk to users is the drive’s security “failing silently.”

“You might think you’ve done the right thing enabling BitLocker but then a third party fault undermines your security, but you never know and never would know,” he said.

Matthew Green, a cryptography professor at Johns Hopkins, described the BitLocker flaw in a tweet as “like jumping out of a plane with an umbrella instead of a parachute.”

The researchers said that their findings are not yet finalized — pending a peer review. But the research was made public after the researchers disclosed the bugs to the drive makers in April.

Crucial’s MX100, MX200 and MX300 drives, Samsung’s T3 and T5 USB external disks, and Samsung 840 EVO and 850 EVO internal SSDs are known to be affected, but the researchers warned that many other drives may also be at risk.

The researchers criticized the device makers’ proprietary and closed-source cryptography, which they said — and proved — is “often shown to be much weaker in practice” than open source, auditable cryptographic implementations. “Manufacturers that take security seriously should publish their crypto schemes and corresponding code so that security claims can be independently verified,” they wrote.

The researchers recommend using software-based encryption, like the open source software VeraCrypt.

In an advisory, Samsung also recommended that users install encryption software to prevent any “potential breach of self-encrypting SSDs.” Crucial’s owner Micron is said to have a fix on the way, according to an advisory by the Netherlands’ National Cyber Security Center, though the advisory did not say when.

Micron did not immediately respond to a request for comment.

05 Nov 2018

Facebook’s election interference problem exponentially worse on eve of midterms, study suggests

An analysis of political advertisers running extensive campaigns on Facebook targeting users in the United States over the past six months has flagged a raft of fresh concerns about its efforts to tackle election interference — suggesting the social network’s self regulation is offering little more than a sham veneer of accountability.

Dig down and all sorts of problems and concerns become apparent, according to new research conducted by Jonathan Albright, of the Tow Center for Digital Journalism.

Where’s the recursive accountability?

Albright timed the research project to cover the lead up to the US midterms, which represent the next major domestic test of Facebook’s democracy-denting platform — though this time homegrown disinformation appears to be as much in the frame as Kremlin-funded election fiddling, if not more so.

The three-month project to delve into domestic US political muck spreading involved taking a thousand screenshots and collecting more than 250,000 posts, 5,000 political ads, and the historic engagement metrics for hundreds of Facebook Pages and Groups — using “a diverse set of tools and data resources”.

In the first of three Medium posts detailing his conclusions, Albright argues that, far from Facebook getting a handle on its political disinformation problem, the dangers appear to have “grown exponentially”.

The sheer scale of the problem is one major takeaway, with Albright breaking out his findings into three separate Medium posts on account of how extensive the project was, with each post focusing on a different set of challenges and concerns.

In the first post he zooms in on what he calls “Recursive Ad-ccountability” — or rather Facebook’s lack of it — looking at influential and verified Pages that have been running US political ad campaigns over the past six months, yet which he found being managed by accounts based outside the US.

Albright says he found “an alarming number” of these, noting how a Page’s roster of admins could apparently fluctuate widely, sometimes overnight — raising questions about how, or even whether, Facebook tracks Page administrator changes at this level so it can factor pertinent shifts into its political ad verification process.

Albright asserts that his findings highlight structural loopholes in Facebook’s political ad disclosure system — which, for example, only requires that one administrator for each Page get “verified” in order to be approved to run campaigns — but also “emphasize the fact that Facebook does not appear to have a rigid protocol in place to regularly monitor Pages running political campaigns after the initial verification takes place”.

So, essentially, it looks like Facebook doesn’t make regular checks on Pages after an initial (and also flawed) verification check — even, seemingly, when Pages’ administrative structure changes almost entirely. As Albright puts it, the company lacks “recursive accountability”.

Other issues of concern he flags include finding ad campaigns that had foreign Page managers using “information-seeking “polls” — aka sponsored posts asking their target audiences, in this case American Facebook users, to respond to questions about their ideologies and moral outlooks”.

Which sounds like a rather similar modus operandi to disgraced (and now defunct) data company Cambridge Analytica’s use of a quiz app running on Facebook’s platform to extract personal data and psychological insights on users (which it repurposed for its own political ad targeting purposes).

Albright also unearthed instances of influential Pages with foreign manager accounts that had run targeted political campaigns for durations of up to four months without any “paid for” label — a situation that, taking Facebook’s system at face value, shouldn’t even be possible. Yet there it was.

There are of course wider issues with ‘paid for’ labels, given they aren’t linked to accounts — making the entire system open to abuse and astroturfing, which Albright also notes.

“After finding these huge discrepancies, I found it difficult to trust any of Facebook’s reporting tools or historical Page information. Based on the sweeping changes observed in less than a month for two of the Pages, I knew that the information reported in the follow-ups was likely to be inaccurate,” he writes damningly in conclusion.

“In other words, Facebook’s political ad transparency tools — and I mean all of them — offer no real basis for evaluation. There is also no ability to know the functions and differential privileges of these Page “managers,” or see the dates managers are added or removed from the Pages.”

We’ve reached out to Facebook for comment, and to ask whether it intends to expand its ad transparency tools to include more information about Page admins. We’ll update this post with any response.

The company has made a big show of launching a disclosure system for political advertisers, seeking to run ahead of regulators. Yet its ‘paid for’ badge disclosure system for political ads has quickly been shown as trivially easy for astroturfers to bypass, for example…

The company has also made a big domestic PR push to seed the idea that it’s proactively fighting election disinformation ahead of the midterms — taking journalists on a tour of its US ‘election security war room‘, for example — even as political disinformation and junk news targeted at American voters continues being fenced on its platform…

The disconnect is clear.

Shadow organizing

In a second Medium post, dealing with a separate set of challenges but stemming from the same body of research, Albright suggests Facebook Groups are now playing a major role in the co-ordination of junk news political influence campaigns — with domestic online muck spreaders seemingly shifting their tactics.

He found bad actors moving from using public Facebook Pages (presumably as Facebook has responded to pressure and complaints over visible junk) to quasi-private Groups as a less visible conduit for seeding and fencing “hate content, outrageous news clips, and fear-mongering political memes”.

“It is Facebook’s Groups — right here, right now — that I feel represents the greatest short-term threat to election news and information integrity,” writes Albright. “It seems to me that Groups are the new problem — enabling a new form of shadow organizing that facilitates the spread of hate content, outrageous news clips, and fear-mongering political memes. Once posts leave these Groups, they are easily encountered, and — dare I say it — algorithmically promoted by users’ “friends” who are often shared group members — resulting in the content surfacing in their own news feeds faster than ever before. Unlike Instagram and Twitter, this type of fringe, if not obscene sensationalist political commentary and conspiracy theory seeding is much less discoverable.”

Albright flags how notorious conspiracy outlet Infowars remains on Facebook’s platform in a closed Group form, for instance. Even though Infowars has previously had some of its public videos taken down by Facebook for “glorifying violence, which violates our graphic violence policy, and using dehumanizing language to describe people who are transgender, Muslims and immigrants, which violates our hate speech policies”.

Facebook’s approach to content moderation typically involves only post-publication content moderation, on a case-by-case basis — and only when content has been flagged for review.

Within closed Facebook Groups with a self-selecting audience there’s arguably likely to be less chance of that.

“This means that in 2018, the sources of misinformation and origins of conspiracy seeding efforts on Facebook are becoming invisible to the public — meaning anyone working outside of Facebook,” warns Albright. “Yet, the American public is still left to reel in the consequences of the platform’s uses and is tasked with dealing with its effects. The actors behind these groups whose inconspicuous astroturfing operations play a part in seeding discord and sowing chaos in American electoral processes surely are aware of this fact.”

Some of the closed Groups he found seeding political conspiracies — content he argues is likely to break Facebook’s own standards — did not have any admins or moderators at all, something that is allowed by Facebook’s terms.

“They are an increasingly popular way to push conspiracies and disinformation. And unmoderated groups — often with tens of thousands of users interacting, sharing, and posting with one another without a single active administrator — are allowed [by Facebook],” he writes.

“As you might expect, the posts and conversations in these Facebook Groups appear to be even more polarized and extreme than what you’d typically find out on the “open” platform. And a fair portion of the activities appear to be organized. After going through several hundred Facebook Groups that have been successful in seeding rumors and in pushing hyper-partisan messages and political hate memes, I repeatedly encountered examples of extreme content and hate speech that easily violates Facebook’s terms of service and community standards.”

Albright couches this move by political disinformation agents from seeding content via public Pages to closed Groups as “shadow organizing”. And he argues that Groups pose a greater threat to the integrity of election discourse than other social platforms like Twitter, Reddit, WhatsApp, and Instagram — because they “have all of the advantages of selective access to the world’s largest online public forum”, and are functioning as an “anti-transparency feature”.

He notes, for example, that he had to use “a large stack of different tools and data-sifting techniques” to locate the earliest posts about the Soros caravan rumor on Facebook. (And “only after going through thousands of posts across dozens of Facebook Groups”; and only then finding “some” not all the early seeders.)

He also points to another win-win for bad actors using Groups as their distribution pipe of choice, pointing out they get to “reap all of the benefits of Facebook— including its free unlimited photo and meme image hosting, its Group-based content and file sharing, its audio, text, and video “Messenger” service, mobile phone and app notifications, and all the other powerful free organizing and content promoting tools, with few — if any — of the consequences that might come from doing this on a regular Page, or by sharing things out in the open”.

“It’s obvious to me there has been a large-scale effort to push messages out from these Facebook groups into the rest of the platform,” he continues. “I’ve seen an alarming number of influential Groups, most of which list their membership number in the tens of thousands of users, that seek to pollute information flows using suspiciously inauthentic but clearly human operated accounts. They don’t spam messages like what you’d see with “bots”; instead they engage in stealth tactics such as “replying” to other group members’ profiles with “information.”

“While automation surely plays a role in the amplification of ideas and shared content on Facebook, the manipulation that’s happening right now isn’t because of “bots.” It’s because of humans who know exactly how to game Facebook’s platform,” he concludes the second part of his analysis.

“And this time around, we saw it coming, so we can’t just shift the blame over to foreign interference. After the midterm elections, we need to look closely, and press for more transparency and accountability for what’s been happening due to the move by bad actors into Facebook Groups.”

The shift of political muck spreading from Pages to Groups means disinformation tracker tools that only scrape public Facebook content — such as the Oxford Internet Institute’s newly launched junk news aggregator — aren’t going to show a full picture. They can only give a snapshot of what’s being said on Facebook’s public layer.

And of course Facebook’s platform allows links to closed Group content to be posted elsewhere, such as in replies to comments, to lure in other Facebook users.

And, indeed, Albright says he saw bad actors engaging in what he dubs “stealth tactics” to quietly seed and distribute their bogus material.

“It’s an ingenious scheme: a political marketing campaign for getting the ideas you want out there at exactly the right time,” he adds. “You don’t need to go digging in Reddit, or 4 or 8 Chan, or crypto chats for these things anymore. You’ll see them everywhere in political Facebook Groups.”

The third piece of analysis based on the research — looking at Facebook’s challenges in enforcing its rules and terms of service — is slated to follow shortly.

Meanwhile this is the year Facebook’s founder, Mark Zuckerberg, made it his personal challenge to ‘fix the platform’.

Yet at this point, bogged down by a string of data scandals, security breaches and content crises, the company’s business essentially needs to code its own apology algorithm — given the volume of ‘sorries’ it’s now having to routinely dispense.

Late last week The Intercept reported that Facebook had allowed advertisers to target conspiracy theorists interested in “white genocide”, for example — triggering yet another Facebook apology.

Facebook also deleted the offending category. Yet it did much the same a year ago when a ProPublica investigation showed Facebook’s ad tools could be used to target people interested in “How to burn Jews”.

Plus ça change then. Even though the company said it would hire actual humans to moderate its AI-generated ad targeting categories. So it must have been an actual human who approved the ‘white genocide’ bullseye. Clearly, overworked, undertrained human moderators aren’t going to stop Facebook making more horribly damaging mistakes.

Not while its platform continues to offer essentially infinite ad-targeting possibilities — via the use of proxies and/or custom lookalike audiences — which the company makes available to almost anyone with a few dollars to put towards whipping up hate and social division, around their neo-fascist cause of choice, making Facebook’s business richer in the process.

The social network itself — its staggering size and reach — increasingly looks like the problem.

And fixing that will require a lot more than self-regulation.

Not that Facebook is the only social network being hijacked for malicious political purposes, of course. Twitter has a long running problem with Nazis appropriating its tools to spread hateful content.

And only last month, in a lengthy Twitter thread, Albright raised concerns over anti-semitic content appearing on (Facebook-owned) Instagram…

But Facebook remains the dominant social platform with the largest reach. And now its platform appears to be offering election fiddlers the perfect blend of mainstream reach plus unmoderated opportunity to skew political outcomes.

“It’s like the worst-case scenario from a hybrid of 2016-era Facebook and an unmoderated Reddit,” as Albright puts it.

The fact that other mainstream social media platforms are also embroiled in the disinformation mess doesn’t let Facebook off the hook. It just adds further fuel to calls for proper sector-wide regulation.