On Thursday, Democrats on the House Intelligence Committee released a massive new trove of Russian government-funded Facebook political ads targeted at American voters. While we’d seen a cross section of the ads before through prior releases from the committee, the breadth of ideological manipulation is on full display across the more than 3,500 newly released ads — and that doesn’t even count still unreleased unpaid content that shared the same divisive aims.
Russia sought to weaponize social media to drive a wedge between Americans and to sway the 2016 election. It created fake accounts, pages and communities to push divisive online content and videos, and to mobilize real Americans.
After viewing the ads, which stretch from 2015 to late 2017, some clear trends emerged.
Russia focused on black Americans
Many, many of these ads targeted black Americans. From the fairly large sample of ads that we reviewed, black Americans were clearly of particular interest, likely in an effort to escalate latent racial tensions.
Many of these ads appeared as memorials for black Americans killed by police officers. Others simply intended to stir up black pride, like one featuring an Angela Davis quote. One ad posted by “Black Matters” was targeted at Ferguson, Missouri residents in June 2015 and only featured the lyrics to Tupac’s “California Love.” Around this time, many ads targeted black Facebook users in Baltimore and the St. Louis area.
Some Instagram ads used Facebook profile information to target black voters interested in black power, Malcolm X and the New Black Panther Party. In the days leading up to November 8, 2016, other ads specifically targeted black Americans with anti-Clinton messaging.
Not all posts were divisive (though most were)
While most ads played into obvious ideological agendas, those posts were occasionally punctuated by more neutral content. The less controversial or call-to-action style posts were likely designed to buffer the politically divisive content, helping to build out and grow an account over time.
For accounts that grew over the course of multiple years, some “neutral” posts were likely useful for making them appear legitimate and building trust among followers. Some posts targeting LGBT users and other identity-based groups just shared positive messages specific to those communities.
Ads targeted media consumers and geographic areas
Some ads we came across targeted Buzzfeed readers, though they were inexplicably more meme-oriented and not political in nature. Others focused on Facebook users that liked the Huffington Post’s Black Voices section or Sean Hannity.
Many ads targeting black voters targeted major U.S. cities with large black populations (Baltimore and New Orleans, for example). Other geo-centric ads tapped into Texas pride and called on Texans to secede.
Conservatives were targeted on many issues
We already knew this from the ad previews, but the new collection of ads makes it clear that conservative Americans across a number of interest groups were regularly targeted. This targeting concentrated on stirring up patriotic and sometimes nationalist sentiment with anti-Clinton, gun rights, anti-immigrant and religious stances. Some custom-made accounts spoke directly to veterans and conservative Christians. Libertarians were also separately targeted.
Events rallied competing causes
Among the Russian-bought ads, event-based posts became fairly frequent in 2016. The day after the election, an event called for an anti-Trump rally in Union Square even as another ad called for Trump supporters to rally outside Trump tower. In another instance, the ads promoted both a pro-Beyoncé and anti-Beyoncé event in New York City.
Candidate ads were mostly pro-Trump, anti-Clinton
Consistent with the intelligence community’s assessment of Russia’s intentions during the 2016 U.S. election, among the candidates, posts slamming Hillary Clinton seemed to prevail. Pro-Trump ads were fairly common, though other ads stirred up anti-Trump sentiment too. Few ads seemed to oppose Bernie Sanders, and some rallied support for Sanders even after Clinton had won the nomination. One ad in August 2016 from the account Williams&Kalvin denounced both presidential candidates, potentially in an effort to discourage turnout among black voters. In this case and others, posts called for voters to ignore the election outright.
While efforts like the Honest Ads Act are mounting to combat foreign-paid social media influence in U.S. politics, the scope and variety of today’s House Intel release make it clear that Americans would be well served to pause before engaging with provocative, partisan ideological content on social platforms — at least when it comes from unknown sources.
The music industry is finally seeing some daylight after years of sales declines and revenue attrition. As industry organizations announce year-on-year growth, songwriters are turning to royalty management organizations like Songtrust in increasing numbers. In just under a year, Songtrust added 50,000 songwriters and 5,000 publishers, and it now holds 1 million copyrights. The company said that one in five new professional songwriters are using Songtrust’s platform.
More good news is coming to songwriters and rights holders in the form of the Music Modernization Act that’s now making its way through Congress.
Technology is something the music industry’s back end has sorely needed. Performers, producers and songwriters avail themselves of the latest technologies in studios and on stages around the world, and are then reduced to Excel spreadsheets and outmoded tracking systems to follow their songs through various distribution channels. Digital technologies like sampling, and distribution platforms like Spotify, have complicated the process even further.
There’s a whole range of tools that are coming to market to help professionalize the back end of the industry, so that artists can get paid their fair share.
Songtrust was born out of Downtown Music Publishing, a publishing and rights management firm that manages rights for artists, such as Frank Sinatra, One Direction and Santigold.
Spotify has a new policy that covers not just “hate content” but also “hateful conduct” outside the music itself. And at least two artists have already been culled from playlists as a result.
To be clear, Spotify is making a distinction between hate content, which it says it will “remove … whenever we find it,” and music by artists who may have done morally or legally questionable things. Here’s how the company describes its approach in these situations:
We don’t censor content because of an artist’s or creator’s behavior, but we want our editorial decisions — what we choose to program — to reflect our values. When an artist or creator does something that is especially harmful or hateful (for example, violence against children and sexual violence), it may affect the ways we work with or support that artist or creator.
So Billboard has confirmed that starting today, listeners will no longer find songs by R. Kelly on Spotify’s playlists, whether they’re editorially curated or created algorithmically. (A number of women have accused Kelly of sexual abuse, though he has denied the allegations.) The publication also confirmed that rapper XXXTentacion had been removed from the high-profile Rap Caviar playlist.
In theory, this seems like a reasonable balance between not wanting to remove artists from the platform entirely and not wanting the appearance of tacitly endorsing reprehensible behavior. (Putting someone on a high-profile Spotify playlist really is a big deal, with hit-making power.)
But as others have pointed out, this could also put Spotify in the position of making a lot of tricky calls, since there are plenty of other musicians who have been accused of (or convicted of, or admitted to) some pretty bad stuff.
Spotify may find itself in a similar situation to YouTube, which also tried to crack down on objectionable content (and become more advertiser-friendly) by setting a higher bar for creator monetization. In theory, it was the right decision, but it also led to plenty of creator complaints and a bit of course correction.
Note: This is the final article in a three-part series on valuation thoughts for common sectors of venture-capital investment. The first article, which attempts to make sense of the SaaS revenue multiple, can be found here; the second, on public marketplaces, can be found here.
Over the past year, the VC-backed hardware category got a big boost — Roku was the best-performing tech IPO of 2017 and Ring was acquired by Amazon for a price rumored to exceed $1 billion. In addition to selling into large, strategic markets, both companies have excellent business models. Ring sells a high-margin subscription across a high percentage of its customer base and Roku successfully monetizes its 19 million users through ads and licensing fees.
In the context of these splashy exits, it is interesting to consider the key factors that have made for valuable hardware companies against a backdrop of an investment sector that has often been maligned through the years, as I’m sure we’ve all heard the trope that “hardware is hard.” Despite this perception, hardware investment has grown much faster than the overall VC market since 2010, as shown below.
A large part of this investment growth has to do with the fact that we’ve seen larger exits in hardware over the past few years than ever before. Starting with Dropcam’s* $555 million acquisition in 2014, we’ve seen a number of impressive outcomes in the category, from large acquisitions like Oculus ($2 billion), Beats ($3 billion) and Nest ($3.2 billion) to IPOs like GoPro ($1.2 billion), Fitbit ($3 billion) and Roku* ($1.3 billion)**. Unfortunately for the sector, a few of these companies have underperformed since exit; notably, GoPro and Fitbit have both cratered in the public markets.
As of April 3, 2018, both stocks traded at less than 1x trailing revenue, a far cry from the multiples of forward revenue given to other tech companies. Roku, on the other hand, continues to perform as a stock market darling, trading at approximately 6x trailing revenue and a market cap of $3.1 billion. What sets them so far apart?
The simple answer is their business model — Roku generates a significant amount of high gross margin platform revenue, while GoPro and Fitbit are reliant on continued hardware sales to drive future business, a revenue stream that has been stagnant to declining. However, Roku’s platform is just one successful hardware business model; in this article I’ll explore four others — Attach, Replacement, Razor and Blades, and Chunk.
Attach
“Attaching” a high gross margin annuity stream from a subscription to a hardware sale is a goal for many hardware startups. However, this is often easier said than done — as it’s critical to nail the alignment of the subscription service to the core value proposition of the hardware.
For example, Fitbit rolled out coaching, but people buy Fitbit to track activity and sleep — and this mismatch resulted in a low attach rate. On the other hand, Ring’s subscription allows users to view past doorbell activity, which aligns perfectly with customers looking to improve home security. Similarly, Dropcam sold a subscription for video storage, and at an approximate 40 percent attach rate created a strong economic model. Generally, we’ve found that the attach rate necessary to create a viable business should be at least in the 15-20 percent range.
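To make that threshold concrete, here is a back-of-the-envelope sketch of attach-style unit economics in TypeScript. Every number in it is an invented, illustrative assumption rather than a figure from Ring, Dropcam or Fitbit; the point is simply how quickly even a modest attach rate compounds per-unit gross profit.

```typescript
// Back-of-envelope sketch of "Attach" unit economics.
// All inputs are illustrative assumptions, not figures from the article.
interface AttachModel {
  hardwareGrossProfit: number;  // gross profit per device sold ($)
  attachRate: number;           // share of buyers who take the subscription (0-1)
  monthlySubPrice: number;      // subscription price ($/month)
  subGrossMargin: number;       // gross margin on the subscription (0-1)
  avgSubLifetimeMonths: number; // how long the average subscriber stays
}

// Blended gross profit per device = hardware margin + expected subscription margin.
function grossProfitPerDevice(m: AttachModel): number {
  const subLifetimeGrossProfit =
    m.monthlySubPrice * m.subGrossMargin * m.avgSubLifetimeMonths;
  return m.hardwareGrossProfit + m.attachRate * subLifetimeGrossProfit;
}

// Hypothetical doorbell-camera example: a 20% attach rate adds roughly $17 of
// subscription gross profit on top of $40 of hardware gross profit per unit.
console.log(grossProfitPerDevice({
  hardwareGrossProfit: 40,
  attachRate: 0.2,
  monthlySubPrice: 3,
  subGrossMargin: 0.8,
  avgSubLifetimeMonths: 36,
})); // ≈ 57.28
```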
Platform
Unlike the “Attach” business model that sells services directly related to improving the core functionality of the hardware device, “Platform” business models create ancillary revenue streams that materialize when users regularly engage with their hardware. I consider Roku or Apple to be in this category; by having us glued to our smartphones or TV screens, these companies earn the privilege of monetizing an app store or serving us targeted advertisements. Here, the revenue stream is not tied directly to the initial sale, and can conceivably scale well beyond the hardware margin that is generated.
In fact, AWS is one of the more successful recent examples of a hardware platform — by originally farming out the capacity from existing servers in use by the company, Amazon has generated an enormously profitable business, with more than $5 billion in quarterly revenue.
Replacement
Despite the amazing economics of Apple’s App Store, as of the company’s latest quarterly earnings report, less than 10 percent of their nearly $80 billion in quarterly revenue came from the “Services” category, which includes their digital content and services such as the App Store.
What really drives value to Apple is the replacement rate of their core money-maker — the iPhone. With the average consumer upgrading their iPhone every two to three years, Apple creates a massive recurring revenue stream that continues to compound with growth in the install base. Contrast this with GoPro, where part of the reason for its poor market performance has been its inability to get customers to buy a new camera — once you have a camera that works “well enough” there is little incentive to come back for more.
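That compounding effect can be approximated with one line of arithmetic: annual replacement revenue is roughly the install base divided by the replacement cycle, multiplied by the average selling price. A hedged sketch, using invented round numbers rather than Apple’s actual figures:

```typescript
// Steady-state replacement revenue: each year, roughly installBase / cycleYears
// owners replace their device and pay the average selling price again.
// The example inputs are illustrative assumptions, not Apple's reported numbers.
function annualReplacementRevenue(
  installBase: number,           // devices currently in use
  replacementCycleYears: number, // average years between upgrades
  averageSellingPrice: number,   // dollars per device
): number {
  return (installBase / replacementCycleYears) * averageSellingPrice;
}

// e.g. 700M devices on a ~2.5-year cycle at a $700 ASP ≈ $196B a year,
// and the figure grows as the install base grows.
console.log(annualReplacementRevenue(700_000_000, 2.5, 700));
```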
Razor and Blades
The best example of this is Dollar Shave Club, which quite literally sold razors and blades on its way to a $1 billion acquisition by Unilever. This business model usually involves a low or zero gross margin sale on the initial “Razor” followed by a long-term recurring subscription of “Blades,” without which the original hardware product wouldn’t work. Recent venture examples include categories like 3D printers, but this model isn’t anything new — think of your coffee machine!
Chunk
Is it still possible to build a large hardware business if you don’t have any of the recurring revenue models mentioned above? Yes — just try to make thousands of dollars in gross profit every time you sell something — like Tesla does. At 23 percent gross margin and an average selling price in the $100,000 range, you’d need more than a lifetime of iPhones to even approach one car’s worth of margin!
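The arithmetic behind that quip is worth spelling out. Using the figures above and an assumed per-iPhone gross profit (an illustrative guess, not Apple’s disclosed number), each car sold throws off roughly as much gross profit as ninety-odd phones:

```typescript
// "Chunk" math: the car figures come from the article; the per-iPhone gross
// profit is an illustrative assumption for comparison only.
const carPrice = 100_000;    // average selling price ($)
const carGrossMargin = 0.23; // gross margin
const carGrossProfit = carPrice * carGrossMargin; // $23,000 per car

const assumedIphoneGrossProfit = 250; // rough per-phone assumption ($)
const phonesPerCar = carGrossProfit / assumedIphoneGrossProfit; // ≈ 92 phones

// At one upgrade every two to three years, 92 phones is well over a lifetime
// of purchases, which is the comparison being made above.
console.log({ carGrossProfit, phonesPerCar });
```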
So, while I don’t think anyone would disagree that building a successful hardware business involves, quite literally, many more moving parts than software, it’s interesting to consider the nuances of different hardware business models.
While it’s clear that in most cases recurring revenue is king, it’s difficult to say that any of these models is intrinsically superior, as large businesses have been built in each of the five categories covered above. However, if forced to choose, a “Platform” model seems to offer the most unbounded upside, as it’s indicative of a higher-engagement product and isn’t indexed to the original value of the product (some people certainly spend more on the App Store than on the iPhone purchase).
While it’s easy to take a narrow view of VC-hardware investing based on the outcome of a few splashy tech gadgets, broadening our aperture just a bit shows us that large hardware businesses have been built across a variety of industries and business models, and many more successes are yet to come.
Google Pay got a big upgrade at Google I/O this week. At a breakout session, Google announced a series of changes to its payments platform, recently rebranded from Android Pay, including support for peer-to-peer payments in the main Google Pay app; online payments support in all browsers; the ability to see all payments in a single place, instead of just those in-store; and support for tickets and boarding passes in Google Pay’s APIs, among several other things.
Some of Google Pay’s expansions were previously announced, like its planned support for more browsers and devices, for example.
However, the company detailed a host of other features at I/O that are now rolling out across the Google Pay platform.
One notable addition is support for peer-to-peer payments, which is being added to the Google Pay app in the U.S. and the U.K.
And that transaction history, along with users’ other payments, will all be consolidated into one place.
“In an upcoming update of the Google Pay app, we’re going to allow you to manage all the payment methods in your Google account – not just the payment methods that you used to pay in-store,” said Gerardo Capiel, Product Management lead at Google Pay, during the session at I/O. “And even better, we’re going to provide you with a holistic view of all your transactions – whether they be on Google apps and services, such as Play and YouTube, whether they be with third-party merchants, such as Walgreens and Uber, or whether they’re transactions you’ve made to friends and families via our peer-to-peer service,” he said.
The company also said it would allow users to send and request money, manage payment info linked to their Google accounts, and see their transaction history on the web with the Google Pay iOS app, too.
And because I/O is a developer conference, many of the new additions came in the form of new and updated APIs.
For starters, Google launched a new API for incorporating Google Pay into other third-party apps.
“Via our APIs, we’re going to enable these ready-to-pay users [who already have payment information stored with Google Pay] to also checkout quickly and easily in your own apps and websites,” Capiel said.
The benefit to those developers who add Google Pay support is an increase in conversion rates and faster monetization, he noted.
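For a sense of what that integration looks like in practice, here is a minimal sketch of a web checkout flow built on the publicly documented Google Pay web API. The request fields follow the current public documentation and may differ from exactly what shipped at I/O; the gateway, merchant and price values are placeholders, not real credentials.

```typescript
// Minimal Google Pay web checkout sketch (illustrative; gateway and merchant
// values are placeholders). The `google` global is provided by the script at
// https://pay.google.com/gp/p/js/pay.js.
declare const google: any;

const baseRequest = { apiVersion: 2, apiVersionMinor: 0 };

const cardPaymentMethod = {
  type: 'CARD',
  parameters: {
    allowedAuthMethods: ['PAN_ONLY', 'CRYPTOGRAM_3DS'],
    allowedCardNetworks: ['VISA', 'MASTERCARD'],
  },
  tokenizationSpecification: {
    type: 'PAYMENT_GATEWAY',
    parameters: { gateway: 'example', gatewayMerchantId: 'exampleMerchantId' },
  },
};

const paymentsClient = new google.payments.api.PaymentsClient({ environment: 'TEST' });

async function checkout(): Promise<void> {
  // 1. Check whether the user already has a saved payment method ("ready to pay").
  const readiness = await paymentsClient.isReadyToPay({
    ...baseRequest,
    allowedPaymentMethods: [cardPaymentMethod],
  });
  if (!readiness.result) return; // fall back to a regular checkout form

  // 2. Request tokenized payment data for this transaction.
  const paymentData = await paymentsClient.loadPaymentData({
    ...baseRequest,
    allowedPaymentMethods: [cardPaymentMethod],
    merchantInfo: { merchantName: 'Example Merchant' },
    transactionInfo: {
      totalPriceStatus: 'FINAL',
      totalPrice: '12.00',
      currencyCode: 'USD',
    },
  });

  // 3. Pass the returned payment token to the payment processor server-side.
  console.log(paymentData.paymentMethodData.tokenizationData.token);
}
```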
Plus, Google added support for tickets and boarding passes to the Google Pay APIs, where they joined the existing support for offers and loyalty cards.
This allows companies such as Urban Airship or DotDashPay to help business clients distribute and update their passes and tickets to Google Pay users.
“It shows an even stronger commitment on Google Pay’s part to make the digital wallet a priority,” Sean Arietta, founder and CEO of DotDashPay, told TechCrunch, following the presentation. “It also reinforces their focus on partners like DotDashPay to help build connections between consumers and brands. The fact that they are specifically highlighting a complete experience that starts with payments and ends with an NFC tap-to-identify, is really powerful. It makes the Google Pay story now complete,” he added.
“We help businesses reinvent the customer experience by delivering the right information at the right time on any digital channel, and mobile wallets fill an increasingly critical role in that vision,” Brett Caine, CEO and president of Urban Airship, said in a statement. “Google Pay’s new support for tickets and boarding passes means customers will always have up-to-date information when they need it most – on the go.”
Some of Google’s early access partners on ticketing include Singapore Airlines, Eventbrite, Southwest, and FortressGB, which handles major soccer league tickets in the U.K. and elsewhere.
In terms of transit-related announcements, Google added a few more partners who will soon adopt Google Pay integration, including Vancouver, Canada and the U.K. bus system, following recent launches in Las Vegas and Portland.
The company also offered an update on Google Pay’s traction, noting the Google Pay app just passed 100 million downloads in the Google Play store, where it’s available to users in 18 markets worldwide.
Soon, Google said, it will bring many of these core features and the Google Pay app to billions of Google users worldwide.
The so-called ‘Duplex’ feature of the Google Assistant was shown calling a hair salon to book a woman’s hair cut, and ringing a restaurant to try to book a table — only to be told it did not accept bookings for less than five people.
At which point the AI changed tack and asked about wait times, earning its owner and controller, Google, the reassuring intel that there wouldn’t be a long wait at the elected time. Job done.
The voice system deployed human-sounding vocal cues, such as ‘ums’ and ‘ahs’ — to make the “conversational experience more comfortable,” as Google couches it in a blog about its intentions for the tech.
The voices Google used for the AI in the demos were not synthesized robotic tones but distinctly human-sounding, in both the female and male flavors it showcased.
Indeed, the AI pantomime was apparently realistic enough to convince some of the genuine humans on the other end of the line that they were speaking to people.
At one point the bot’s ‘mm-hmm’ response even drew appreciative laughs from a techie audience that clearly felt in on the ‘joke’.
But while the home crowd cheered enthusiastically at how capable Google had seemingly made its prototype robot caller — with Pichai going on to sketch a grand vision of the AI saving people and businesses time — the episode is worryingly suggestive of a company that views ethics as an after-the-fact consideration.
One it does not allow to trouble the trajectory of its engineering ingenuity.
A consideration which only seems to get a look in years into the AI dev process, at the cusp of a real-world rollout — which Pichai said would be coming shortly.
Deception by design
“Google’s experiments do appear to have been designed to deceive,” agreed Dr Thomas King, a researcher at the Oxford Internet Institute’s Digital Ethics Lab, discussing the Duplex demo. “Because their main hypothesis was ‘can you distinguish this from a real person?’. In this case it’s unclear why their hypothesis was about deception and not the user experience… You don’t necessarily need to deceive someone to give them a better user experience by sounding naturally. And if they had instead tested the hypothesis ‘is this technology better than preceding versions or just as good as a human caller’ they would not have had to deceive people in the experiment.
“As for whether the technology itself is deceptive, I can’t really say what their intention is — but… even if they don’t intend it to deceive you can say they’ve been negligent in not making sure it doesn’t deceive… So I can’t say it’s definitely deceptive, but there should be some kind of mechanism there to let people know what it is they are speaking to.”
“I’m at a university and if you’re going to do something which involves deception you have to really demonstrate there’s a scientific value in doing this,” he added, agreeing that, as a general principle, humans should always be able to know that an AI they’re interacting with is not a person.
Because who — or what — you’re interacting with “shapes how we interact”, as he put it. “And if you start blurring the lines… then this can sow mistrust into all kinds of interactions — where we would become more suspicious as well as needlessly replacing people with meaningless agents.”
No such ethical conversations troubled the I/O stage, however.
Yet Pichai said Google had been working on the Duplex technology for “many years”, and went so far as to claim the AI can “understand the nuances of conversation” — albeit still evidently in very narrow scenarios, such as booking an appointment or reserving a table or asking a business for its opening hours on a specific date.
“It brings together all our investments over the years in natural language understanding, deep learning, text to speech,” he said.
What was yawningly absent from that list, and seemingly also lacking from the design of the tricksy Duplex experiment, was any sense that Google has a deep and nuanced appreciation of the ethical concerns at play around AI technologies that are powerful and capable enough of passing off as human — thereby playing lots of real people in the process.
The Duplex demos were pre-recorded, rather than live phone calls, but Pichai described the calls as “real” — suggesting Google representatives had not in fact called the businesses ahead of time to warn them its robots might be calling in.
“We have many of these examples where the calls quite don’t go as expected but our assistant understands the context, the nuance… and handled the interaction gracefully,” he added after airing the restaurant unable-to-book example.
So Google appears to have trained Duplex to be robustly deceptive — i.e. to be able to reroute around derailed conversational expectations and still pass itself off as human — a feature Pichai lauded as ‘graceful’.
And even if the AI’s performance was patchier in the wild than Google’s demo suggested, it’s clearly the CEO’s goal for the tech.
While trickster AIs might bring to mind the iconic Turing Test — where chatbot developers compete to develop conversational software capable of convincing human judges it’s not artificial — it should not.
Because the application of the Duplex technology does not sit within the context of a high profile and well understood competition. Nor was there a set of rules that everyone was shown and agreed to beforehand (at least so far as we know — if there were any rules Google wasn’t publicizing them). Rather it seems to have unleashed the AI onto unsuspecting business staff who were just going about their day jobs. Can you see the ethical disconnect?
“The Turing Test has come to be a bellwether of testing whether your AI software is good or not, based on whether you can tell it apart from a human being,” is King’s suggestion on why Google might have chosen a similar trick as an experimental showcase for Duplex.
“It’s very easy to say look how great our software is, people cannot tell it apart from a real human being — and perhaps that’s a much stronger selling point than if you say 90% of users preferred this software to the previous software,” he posits. “Facebook does A/B testing but that’s probably less exciting — it’s not going to wow anyone to say well consumers prefer this slightly deeper shade of blue to a lighter shade of blue.”
Had Duplex been deployed within Turing Test conditions, King also makes the point that it’s rather less likely it would have taken in so many people — because, well, those slightly jarringly timed ums and ahs would soon have been spotted, uncanny valley style.
Ergo, Google’s PR flavored ‘AI test’ for Duplex is also rigged in its favor — to further supercharge a one-way promotional marketing message around artificial intelligence. So, in other words, say hello to yet another layer of fakery.
How could Google introduce Duplex in a way that would be ethical? King reckons it would need to state up front that it’s a robot and/or use an appropriately synthetic voice so it’s immediately clear to anyone picking up the phone the caller is not human.
“If you were to use a robotic voice there would also be less of a risk that all of your voices that you’re synthesizing only represent a small minority of the population speaking in ‘BBC English’ and so, perhaps in a sense, using a robotic voice would even be less biased as well,” he adds.
And of course, not being up front that Duplex is artificial embeds all sorts of other knock-on risks, as King explained.
“If it’s not obvious that it’s a robot voice there’s a risk that people come to expect that most of these phone calls are not genuine. Now experiments have shown that many people do interact with AI software that is conversational just as they would another person but at the same time there is also evidence showing that some people do the exact opposite — and they become a lot ruder. Sometimes even abusive towards conversational software. So if you’re constantly interacting with these bots you’re not going to be as polite, maybe, as you normally would, and that could potentially have effects for when you get a genuine caller that you do not know is real or not. Or even if you know they’re real perhaps the way you interact with people has changed a bit.”
Safe to say, as autonomous systems get more powerful and capable of performing tasks that we would normally expect a human to be doing, the ethical considerations around those systems scale as exponentially large as the potential applications. We’re really just getting started.
But if the world’s biggest and most powerful AI developers believe it’s totally fine to put ethics on the backburner then risks are going to spiral up and out and things could go very badly indeed.
We’ve seen, for example, how microtargeted advertising platforms have been hijacked at scale by would-be election fiddlers. But the overarching risk where AI and automation technologies are concerned is that humans become second class citizens vs the tools that are being claimed to be here to help us.
Pichai said the first — and still, as he put it, experimental — use of Duplex will be to supplement Google’s search services by filling in information about businesses’ opening times during periods when hours might inconveniently vary, such as public holidays.
Though for a company on a general mission to ‘organize the world’s information and make it universally accessible and useful’, what’s to stop Google from — down the line — deploying a vast phalanx of phone bots to ring and ask humans (and their associated businesses and institutions) for all sorts of expertise which the company can then liberally extract and inject into its multitude of connected services — monetizing the freebie human-augmented intel via our extra-engaged attention and the ads it serves alongside?
During the course of writing this article we reached out to Google’s press line several times to ask to discuss the ethics of Duplex with a relevant company spokesperson. But ironically — or perhaps fittingly enough — our hand-typed emails received only automated responses.
Pichai did emphasize that the technology is still in development, and said Google wants to “work hard to get this right, get the user experience and the expectation right for both businesses and users”.
But that’s still ethics as a tacked on afterthought — not where it should be: Locked in place as the keystone of AI system design.
And this at a time when platform-fueled AI problems, such as algorithmically fenced fake news, have snowballed into huge and ugly global scandals with very far reaching societal implications indeed — be it election interference or ethnic violence.
You really have to wonder what it would take to shake the ‘first break it, later fix it’ ethos of some of the tech industry’s major players…
Google Assistant making calls pretending to be human not only without disclosing that it's a bot, but adding "ummm" and "aaah" to deceive the human on the other end with the room cheering it… horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing.
Ethical guidance relating to what Google is doing here with the Duplex AI is actually pretty clear if you bother to read it — to the point where even politicians are agreed on foundational basics, such as that AI needs to operate on “principles of intelligibility and fairness”, to borrow phrasing from just one of several political reports that have been published on the topic in recent years.
In short, deception is not cool. Not in humans. And absolutely not in the AIs that are supposed to be helping us.
Transparency as AI standard
The IEEE technical professional association put out a first draft of a framework to guide ethically designed AI systems at the back end of 2016 — which included general principles such as the need to ensure AI respects human rights, operates transparently and that automated decisions are accountable.
In the same year the UK’s BSI standards body developed a specific standard — BS 8611, a guide to the ethical design and application of robots — which explicitly names identity deception (intentional or unintentional) as a societal risk, and warns that such an approach will eventually erode trust in the technology.
“Avoid deception due to the behaviour and/or appearance of the robot and ensure transparency of robotic nature,” the BSI’s standard advises.
It also warns against anthropomorphization due to the associated risk of misinterpretation — so Duplex’s ums and ahs don’t just suck because they’re fake; they’re misleading, and so deceptive, and they therefore carry the knock-on risk of undermining people’s trust not only in your service but, more widely still, in other people generally.
“Avoid unnecessary anthropomorphization,” is the standard’s general guidance, with the further steer that the technique be reserved “only for well-defined, limited and socially-accepted purposes”. (Tricking workers into remotely conversing with robots probably wasn’t what they were thinking of.)
The standard also urges “clarification of intent to simulate human or not, or intended or expected behaviour”. So, yet again, don’t try and pass your bot off as human; you need to make it really clear it’s a robot.
For Duplex the transparency that Pichai said Google now intends to think about, at this late stage in the AI development process, would have been trivially easy to achieve: It could just have programmed the assistant to say up front: ‘Hi, I’m a robot calling on behalf of Google — are you happy to talk to me?’
Instead, Google chose to prioritize a demo ‘wow’ factor — of showing Duplex pulling the wool over busy and trusting humans’ eyes — and by doing so showed itself tone-deaf on the topic of ethical AI design.
Not a good look for Google. Nor indeed a good outlook for the rest of us who are subject to the algorithmic whims of tech giants as they flick the control switches on their society-sized platforms.
“As the development of AI systems grows and more research is carried out, it is important that ethical hazards associated with their use are highlighted and considered as part of the design,” Dan Palmer, head of manufacturing at BSI, told us. “BS 8611 was developed… alongside scientists, academics, ethicists, philosophers and users. It explains that any autonomous system or robot should be accountable, truthful and unprejudiced.
“The standard raises a number of potential ethical hazards that are relevant to the Google Duplex; one of these is the risk of AI machines becoming sexist or racist due to a biased data feed. This surfaced prominently when Twitter users influenced Microsoft’s AI chatbot, Tay, to spew out offensive messages.
“Another contentious subject is whether forming an emotional bond with a robot is desirable, especially if the voice assistant interacts with the elderly or children. Other guidelines on new hazards that should be considered include: robot deception, robot addiction and the potential for a learning system to exceed its remit.
“Ultimately, it must always be transparent who is responsible for the behavior of any voice assistant or robot, even if it behaves autonomously.”
Yet despite all the thoughtful ethical guidance and research that’s already been produced, and is out there for the reading, here we are again being shown the same tired tech industry playbook applauding engineering capabilities in a shiny bubble, stripped of human context and societal consideration, and dangled in front of an uncritical audience to see how loud they’ll cheer.
Leaving important questions — over the ethics of Google’s AI experiments and also, more broadly, over the mainstream vision of AI assistance it’s so keenly trying to sell us — to hang and hang.
Questions like how much genuine utility there might be for the sorts of AI applications it’s telling us we’ll all want to use, even as it prepares to push these apps on us, because it can — as a consequence of its great platform power and reach.
A core ‘uncanny valley-ish’ paradox may explain Google’s choice of deception for its Duplex demo: Humans don’t necessarily like speaking to machines. Indeed, oftentimes they prefer to speak to other humans. It’s just more meaningful to have your existence registered by a fellow pulse-carrier. So if an AI reveals itself to be a robot the human who picked up the phone might well just put it straight back down again.
“Going back to the deception, it’s fine if it’s replacing meaningless interactions but not if it’s intending to replace meaningful interactions,” King told us. “So if it’s clear that it’s synthetic and you can’t necessarily use it in a context where people really want a human to do that job. I think that’s the right approach to take.
“It matters not just that your hairdresser appears to be listening to you but that they are actually listening to you and that they are mirroring some of your emotions. And to replace that kind of work with something synthetic — I don’t think it makes much sense.
“But at the same time if you reveal it’s synthetic it’s not likely to replace that kind of work.”
So really Google’s Duplex sleight of hand may be trying to conceal the fact AIs won’t be able to replace as many human tasks as technologists like to think they will. Not unless lots of currently meaningful interactions are rendered meaningless. Which would be a massive human cost that societies would have to — at very least — debate long and hard.
Trying to avoid such a debate from taking place by pretending there’s nothing ethical to see here is, hopefully, not Google’s designed intention.
King also makes the point that the Duplex system is (at least for now) computationally costly. “Which means that Google cannot and should not just release this as software that anyone can run on their home computers.
“Which means they can also control how it is used, and in what contexts — and they can also guarantee it will only be used with certain safeguards built in. So I think the experiments are maybe not the best of signs but the real test will be how they release it — and will they build the safeguards that people demand into the software,” he adds.
As well as a lack of visible safeguards in the Duplex demo, there’s also — I would argue — a curious lack of imagination on display.
Had Google been bold enough to reveal its robot interlocutor it might have thought more about how it could have designed that experience to be both clearly not human but also fun or even funny. Think of how much life can be injected into animated cartoon characters, for example, which are very clearly not human yet are hugely popular because people find them entertaining and feel they come alive in their own way.
It really makes you wonder whether, at some foundational level, Google lacks trust in both what AI technology can do and in its own creative abilities to breathe new life into these emergent synthetic experiences.
Dropbox made its debut as a public company earlier this year and today passed through its first milestone of reporting its results to public investors, more or less beating the expectations Wall Street set on the top and bottom line.
The company beat Wall Street’s expectations on revenue and earnings, bringing in $316.3 million in revenue and appearing to pick up momentum among its paying user base. It also said it had 11.5 million paying users, a jump from last year. However, the stock was largely flat in extended trading. One small negative signal — and it definitely appears to be a small one — was that its GAAP gross margin slipped slightly to 61.9% from 62.3% a year earlier. Dropbox is a software company that’s supposed to have great margins even as it ramps up its own hardware infrastructure, so that slipping margin may end up being something investors zero in on going forward. Still, as the company continues to ramp up the enterprise component of its business, the calculus of its business may change over time.
This is a pretty important moment for the company. Dropbox was a darling in Silicon Valley that rocketed to a $10 billion valuation in the early phases of the Web 2.0 era, but it began to face a ton of criticism as to whether it could be a robust business once larger companies started to offer cloud storage as a perk rather than a business. Dropbox then found itself going up against companies like Box and Microsoft as it worked to create an enterprise business, but all of this happened behind closed doors — and it wasn’t clear whether it was successfully maneuvering its way into a second big business. Now the company is beholden to public shareholders and has to show all this in the open, and it serves as a good barometer not just of storage and collaboration businesses, but also of companies looking to drastically simplify workflow processes and convert that into a real business (like Slack, for example).
Here’s the final scorecard for the company:
Q1 revenue: $316.3 million, compared to Wall Street estimates of $308.7 million (up 28% year over year).
Q1 earnings: 8 cents per share adjusted, compared to Wall Street estimates of 5 cents per share adjusted.
Paying users: 11.5 million, up from 9.3 million in the same period last year.
GAAP gross margin: 61.9%, down from 62.3% in the same period last year.
Non-GAAP gross margin: 74.2%, up from 63.5% in the same period last year.
Free cash flow: $51.9 million, down from $56.5 million in the same period last year.
(The GAAP and non-GAAP comparison is typically related to share-based compensation, which is a key component of employee compensation and retention.)
Dropbox was largely considered to be a successful IPO, rising more than 40% in its trading debut. That does mean it may have left some money on the table, but its operating losses have been largely stable, even as it looks to woo larger enterprise customers — a bit of a taller order than its typical consumer growth, which is heavily driven by organic adoption. Those larger enterprise customers offer more stable, and larger, revenue streams than a consumer base that faces a variety of options as many companies start to offer free storage. The company is now worth well over that original $10 billion valuation as a public company. Dropbox says it has more than 500 million users.
Since going public, the stock has had its ups and downs, but for the most part hasn’t dipped below that significant jump it saw from day one. Keeping that number propped up — and growing — is an important part of growing a business as a public company as it waves off more intense scrutiny and pressure for change from public shareholders, as well as offering competitive compensation packages for incoming employees in order to attract the best talent. It’s also good for morale as it offers a kind of grade for how the company is doing in the eyes of the public, though CEOs of companies often say they are committed toward long-term goals. The company’s shares are up around 11% since going public.
While there has been a wave of enterprise IPOs this year, including Zscaler and Pluralsight’s upcoming IPO, Dropbox was largely considered a potential gauge of whether the IPO window was still open this year because of its hybrid nature. Dropbox started off as a consumer company built around a dead-simple approach to hosting and sharing files online, and used that to build a massive user base even as the cost of cloud storage was rapidly commoditized. But it is also building a robust enterprise-focused business, and continues to roll out a variety of tools to woo those businesses with consistent updates to products like its document tool Paper. Last month, the company started rolling out templates, as it looked to make traditional workflow processes easier for companies in order to capture their interest much in the same way it captured the interest of consumers at large.
Earlier this year, Verizon quietly launched a new startup called Visible, offering unlimited data, minutes, and messaging services for the low, low price of $40 a month.
To subscribe to the service, users simply download the Visible app (currently available only on iOS) and register. Right now, subscriptions are invitation-only, and would-be subscribers have to get an invitation from someone who’s already a Visible member.
Once registration is complete, Visible will send a SIM card the next day, and, once it’s installed, a user can access Verizon’s 4G LTE network to stream videos, send texts and make calls as much as their heart desires.
Visible says there’s no throttling at the end of the month and subscribers can pay using internet-based payment services like PayPal and Venmo (which is owned by PayPal).
The service is only available on unlocked devices — and right now, pretty much only to iPhone users.
“This is something that’s been the seed of an idea for a year or so,” says Minjae Ormes, head of marketing at Visible. “There’s a core group of people from the strategy side. There’s a core group of five or ten people who came up with the idea.”
The company wouldn’t say how much Verizon gave the business to get it off the ground, but the leadership team is made up mostly of former Verizon employees, like Miguel Quiroga, the company’s chief executive.
“The way I would think about it… we are a phone service in the platform that enables everything that you do. The way we launched and the app messaging piece of it. You do everything else on your phone and a lot of time if you ask people your phone is your life,” said Ormes. The thinking was, “let’s give you a phone that you can activate right from your phone and get ready to go and see how it resonates.”
It’s an interesting move from our corporate overlord (Verizon owns Oath, which owns TechCrunch), which is already the top dog in wireless services, with some 150 million subscribers compared with AT&T’s 141.6 million and a soon-to-be-combined Sprint and T-Mobile subscriber base of 126.2 million.
For Verizon, the new company is likely about holding off attrition. The company shed 24,000 postpaid phone connections in the last quarter, according to The Wall Street Journal, which put some pressure on its customer base (but not really all that much).
Mobile telecommunications remain at the core of Verizon’s business plans for the future, even as other carriers like AT&T look to dive deeper into content (while Go90 has been a flop, Verizon hasn’t given up on content plans entirely). The acquisition of Oath added about $1.2 billion in brand revenue (?) to Verizon for the last quarter, but it’s not anywhere near the kind of media juggernaut that AT&T would get through the Time Warner acquisition.
Verizon seems to be looking to its other mobile services, through connected devices, industrial equipment, autonomous vehicles, and the development of its 5G network for future growth.
Every wireless carrier is pushing hard to develop 5G technologies, which should see nationwide rollout by the end of this year. Verizon recently completed its 11-city trial run and is banking on expansion of the network’s capabilities to drive new services.
As the Motley Fool noted, all of this comes as Verizon adds new networking capabilities for industrial and commercial applications through its Verizon Connect division — formed in part from Verizon’s $2.4 billion acquisition of Fleetmatics in 2016, along with Telogis, Sensity Systems and LQD WiFi, to beef up its mobile device connectivity services.
Founded by the team that created the media site Elite Daily, Wing uses Sprint cell-phone towers to deliver its service.
David Arabov and co-founder Jonathan Francis didn’t wait long after taking a $26 million payout for their previous business before getting right back into the startup fray. Unlike Visible, Wing isn’t a one-size-fits-all plan; it’s a much more traditional MVNO. The company has a range of plans starting at $17 for a flip phone and increasing to an unlimited plan at $27 per month, according to the company’s website.
As carriers continue to face complaints over service fees, locked-in contracts and terrible options, new options are bound to emerge. In this instance, it looks like Verizon is trying to make itself one of those options.
Lithium operates a range of social media services, including products that handle social media marketing campaigns and engagement with customers, and now it has decided that Klout is no longer part of its vision.
“The Klout acquisition provided Lithium with valuable artificial intelligence (AI) and machine learning capabilities but Klout as a standalone service is not aligned with our long-term strategy,” CEO Pete Hess wrote in a short note.
Hess said those apparent AI and ML smarts will be put to work in the company’s other product lines.
He did tease a potential Klout replacement in the form of “a new social impact scoring methodology based on Twitter” that Lithium is apparently planning to release soon. I’m pretty sure someone out there is already pledging to bring Klout back on the blockchain and is frantically writing up an ICO whitepaper as we speak because that’s how it is these days.
Canadian Prime Minister Justin Trudeau and Quebec Premier Philippe Couillard joined key execs from Apple and industrial manufacturers Alcoa and Rio Tinto to announce a new process for smelting aluminum that removes greenhouse gases from the equation.
Alcoa and Rio Tinto are creating a joint venture based in Montreal, called Elysis, to help mainstream the process, with plans to make it commercially available by 2024. Along with swapping carbon for oxygen as a byproduct of the production process, the technology is also expected to reduce costs by 15 percent.
It’s easy to see why Apple jumped at the chance to invest in the tech here, putting $13 million CAD ($10 million USD) into the process. The company has been making a big push over the past couple of years to reduce its carbon footprint across the board. This time last month, Apple announced that it had moved to 100 percent clean energy for its global facilities.
“Apple is committed to advancing technologies that are good for the planet and help protect it for generations to come,” Tim Cook said in a release tied to today’s news. “We are proud to be part of this ambitious new project, and look forward to one day being able to use aluminum produced without direct greenhouse gas emissions in the manufacturing of our products.”
Those companies, along with the governments of Canada and Quebec, have combined to invest a full $188 million CAD in the forward-looking tech. While the new business will be headquartered in Montreal, U.S. manufacturing will also get a piece of the pie. Alcoa has been smelting metal through the process at a smaller scale at a plant outside of Pittsburgh since 2009.