Year: 2018

07 Jun 2018

Apple introduces the AI phone

At Apple’s WWDC 2018 — an event some said would be boring this year with its software-only focus and lack of new MacBooks and iPads — the company announced what may be its most important operating system update to date with the introduction of iOS 12. Through a series of Siri enhancements and features, Apple is turning its iPhone into a highly personalized device, powered by its Siri AI.

This “new AI iPhone” — which, to be clear, is your same ol’ iPhone running a new mobile OS — will understand where you are, what you’re doing and what you need to know right then and there.

The question now is whether users will embrace the usefulness of Siri’s forthcoming smarts, or find its sudden insights creepy and invasive.

Siri Suggestions

After the installation of iOS 12, Siri’s Suggestions will be everywhere.

In the same place on the iPhone Search screen where you see Siri’s suggested apps to launch today, you’ll begin to see other things Siri thinks you may need to know, too.

For example, Siri may suggest that you:

  • Call your grandma for her birthday.
  • Tell someone you’re running late to the meeting via a text.
  • Start your workout playlist because you’re at the gym.
  • Turn your phone to Do Not Disturb at the movies.

And so on.

These will be useful in some cases, and perhaps annoying in others. (It would be great if you could swipe on the suggestions to further train the system to not show certain ones again. After all, not all your contacts deserve a birthday phone call.)

Siri Suggestions will also appear on the Lock Screen when it thinks it can help you perform an action of some kind. For example, placing your morning coffee order — something you regularly do around a particular time of day — or launching your preferred workout app, because you’ve arrived at the gym.

These suggestions even show up on Apple Watch’s Siri watch face screen.

Apple says the relevance of its suggestions will improve over time, based on how you engage.

If you don’t take an action by tapping on these items, they’ll move down on the watch face’s list of suggestions, for instance.
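Apple hasn’t said how that ranking works under the hood, but the engage-or-demote behavior can be sketched in a few lines of purely illustrative Python (the scoring rule and engagement counts are invented for the example):

```python
# Hypothetical sketch of engagement-based suggestion ranking.
# Apple has not published how Siri Suggestions are scored, so the
# demote-on-ignore logic here is purely illustrative.

def rerank(suggestions, engagement):
    """Order suggestions by a score that sinks each time one is shown
    but not tapped, and rises when the user engages."""
    def score(s):
        taps, ignores = engagement.get(s, (0, 0))
        return (1 + taps) / (1 + taps + ignores)
    return sorted(suggestions, key=score, reverse=True)

engagement = {
    "Call Grandma": (3, 1),            # (times tapped, times ignored)
    "Start workout playlist": (0, 5),  # shown five times, never tapped
}
print(rerank(["Start workout playlist", "Call Grandma"], engagement))
# → ['Call Grandma', 'Start workout playlist']
```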

AI-powered workflows

These improvements to Siri would have been enough for iOS 12, but Apple went even further.

The company also showed off a new app called Siri Shortcuts.

The app is based on technology Apple acquired from Workflow, a clever — if somewhat advanced — task automation app that allows iOS users to combine actions into routines that can be launched with just a tap. Now, thanks to the Siri Shortcuts app, those routines can be launched by voice.

Onstage at the developer event, the app was demoed by Kim Beverett from the Siri Shortcuts team, who showed off a “heading home” shortcut she had built.

When she told Siri she was “heading home,” her iPhone simultaneously launched directions for her commute in Apple Maps, set her home thermostat to 70 degrees, turned on her fan, messaged an ETA to her roommate and launched her favorite NPR station.

That’s arguably very cool — and it got a big cheer from the technically minded developer crowd — but it’s most certainly a power user feature. Launching an app to build custom workflows is not something everyday iPhone users will do right off the bat — or in some cases, ever.

Developers to push users to Siri

But even if users hide away this new app in their Apple “junk” folder, or toggle off all the Siri Suggestions in Settings, they won’t be able to entirely escape Siri’s presence in iOS 12 and going forward.

That’s because Apple also launched new developer tools that will allow app creators to build Siri integrations directly into their own apps.

Developers will update their apps’ code so that every time a user takes a particular action — for example, placing their coffee order, streaming a favorite podcast, starting their evening jog with a running app or anything else — the app will let Siri know. Over time, Siri will learn users’ routines — like, on many weekday mornings, around 8 to 8:30 AM, the user places a particular coffee order through a coffee shop app’s order ahead system.
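Siri’s actual learning model is private, but the routine-spotting idea reduces to clustering an app’s donated actions by time of day. A rough sketch, with invented action names and timestamps:

```python
# Illustrative only: Siri's real learning pipeline is not public. This
# sketch flags an action as a "routine" when its donations cluster in a
# narrow time-of-day window across enough weekdays.
from collections import defaultdict
from datetime import datetime

def find_routines(donations, min_count=3, window_minutes=45):
    """donations: list of (action_name, iso_timestamp) donated by apps.
    Returns {action: typical minute-of-day} for detected routines."""
    by_action = defaultdict(list)
    for action, ts in donations:
        t = datetime.fromisoformat(ts)
        if t.weekday() < 5:  # weekday mornings etc.; skip weekends
            by_action[action].append(t.hour * 60 + t.minute)
    routines = {}
    for action, minutes in by_action.items():
        if len(minutes) >= min_count and max(minutes) - min(minutes) <= window_minutes:
            routines[action] = sum(minutes) // len(minutes)
    return routines

donations = [
    ("order_latte", "2018-06-04T08:05:00"),  # Mon
    ("order_latte", "2018-06-05T08:20:00"),  # Tue
    ("order_latte", "2018-06-06T08:10:00"),  # Wed
    ("start_run",   "2018-06-09T18:00:00"),  # Sat: ignored
]
print(find_routines(donations))  # → {'order_latte': 491}  (~8:11 AM)
```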

These will inform those Siri Suggestions that appear all over your iPhone, but developers will also be able to just directly prod the user to add this routine to Siri right in their own apps.

In your favorite apps, you’ll start seeing an “Add to Siri” link or button in various places — like when you perform a particular action — such as looking for your keys in Tile’s app, viewing travel plans in Kayak, ordering groceries with Instacart and so on.

Many people will probably tap this button out of curiosity — after all, most don’t watch and rewatch the WWDC keynote like the tech crowd does.

The “Add to Siri” screen will then pop up, offering a suggested voice prompt that can be used as your personalized phrase for talking to Siri about this task.

In the coffee ordering example, you might be prompted to try the phrase “coffee time.” In the Kayak example, it could be “travel plans.”

You record this phrase with the big, red record button at the bottom of the screen. When finished, you have a custom Siri shortcut.

You don’t have to use the suggested phrase the developer has written. The screen explains you can make up your own phrase instead.

In addition to being able to “use” apps via Siri voice commands, Siri can also talk back after the initial request.

It can confirm your request has been acted upon — for example, Siri may respond, “OK. Ordering. Your coffee will be ready in 5 minutes,” after you said “Coffee time” or whatever your trigger phrase was.

Or it can tell you if something didn’t work — maybe the restaurant is out of a food item on the order you placed — and help you figure out what to do next (like continue your order in the iOS app).

It can even introduce some personality as it responds. In the demo, Tile’s app jokes back that it hopes your missing keys aren’t “under a couch cushion.”

There are a number of things you could do beyond these limited examples — the App Store has more than 2 million apps whose developers can hook into Siri.

And you don’t have to ask Siri only on your phone — you can talk to Siri on your Apple Watch and HomePod, too.

Yes, this will all rely on developer adoption, but it seems Apple has figured out how to give developers a nudge.

Siri Suggestions are the new Notifications

You see, as Siri’s smart suggestions spin up, traditional notifications will wind down.

In iOS 12, Siri will take note of your behavior around notifications, and then push you to turn off those with which you don’t engage, or move them into a new silent mode Apple calls “Delivered Quietly.” This middle ground for notifications will allow apps to send their updates to the Notification Center, but not the Lock Screen. They also can’t buzz your phone or wrist.

At the same time, iOS 12’s new set of digital well-being features will hide notifications from users at particular times — like when you’ve enabled Do Not Disturb at Bedtime, for example. This mode will not allow notifications to display when you check your phone at night or first thing upon waking.

Combined, these changes will encourage more developers to adopt the Siri integrations, because they’ll be losing a touchpoint with their users as their ability to grab attention through notifications fades.

Machine learning in photos

AI will further infiltrate other parts of the iPhone, too, in iOS 12.

A new “For You” tab in the Photos app will prompt users to share photos taken with other people, thanks to facial recognition and machine learning. And those people, upon receiving your photos, will then be prompted to share their own back with you.

The tab will also pull out your best photos and feature them, and prompt you to try different lighting and photo effects. A smart search feature will make suggestions and allow you to pull up photos from specific places or events.

Smart or creepy?

Overall, iOS 12’s AI-powered features will make Apple’s devices more personalized to you, but they could also rub some people the wrong way.

Maybe people won’t want their habits noticed by their iPhone, and will find Siri prompts annoying — or, at worst, creepy, because they don’t understand how Siri knows these things about them.

Apple is banking hard on the fact that it’s earned users’ trust through its stance on data privacy over the years.

And while not everyone knows that Siri does a lot of its processing on your device, not in the cloud, many do seem to understand that Apple doesn’t sell user data to advertisers to make money.

That could help sell this new “AI phone” concept to consumers, and pave the way for more advancements later on.

But on the flip side, if Siri Suggestions become overbearing or get things wrong too often, users may simply switch them off entirely through iOS Settings. And with that would go Apple’s big chance to dominate the AI-powered device market, too.

07 Jun 2018

Bloomberg Media Group’s chief product officer sees big opportunities in audio

Julia Beizer joined Bloomberg Media Group as its first chief product officer in January — and since then, she said, “Audio has been a big part of my world.”

Specifically, Beizer’s team has been releasing products for different smart speakers including Apple’s HomePod, Amazon’s Echo Show and most recently Google Home, with the launch of the First Word news briefing for both Google Home and the Google Assistant app. Bloomberg has also turned its video news show TicToc (initially created for Twitter) into an audio podcast. And by leveraging Amazon Polly for text-to-audio conversion, the company now offers audio versions of every article on the Bloomberg website and app.

Beizer joined Bloomberg from The Huffington Post (which, like TechCrunch, is owned by Verizon’s digital subsidiary Oath). She pointed out that these new initiatives represent a range of different approaches to audio news, from the “beautiful, bespoke, handcrafted audio projects” that you can create via podcasts, to an automated solution like text-to-speech that allows Bloomberg to offer audio in a more scalable way.

“What that really represents is utility,” said Beizer, “We want to fit into our consumers’ lives in different ways.”

She added that since text-to-speech launched at the beginning of May, her team has found that “the people who use it, use it a lot,” listening to two to three articles per session on average.

And beyond the success of individual products, Beizer suggested that these audio initiatives represent a new “culture of experimentation.”

“Newsrooms historically thought a lot about what we have to offer to the world,” Beizer said. “That’s a mindset that’s really built for the world when people had morning newspaper habits or watched the 6pm newscast every night. For us to be relevant in consumers’ lives, we have to adapt to how they are consuming media.”

That means trying out new things, and it also means shutting them down if they’re not working.

“I often say: Launching things is my favorite thing to do, and killing things is my second favorite thing to do,” she said. So it’s possible that some of these audio products won’t exist in a year, though she also argued, “Audio writ large — specific initiatives aside — is something I believe is a trend that isn’t going away.”

Not that Beizer is spending all her time on audio. She acknowledged that the “pivot to video” has become a punchline in digital media, but she said that as she looks ahead, she still wants to find new ways to repackage and promote Bloomberg’s TV content for an online audience. She also said that the site’s new paywall represents “a huge opportunity.”

“We’re completely rethinking how we deliver our content — we want it to be essential to users’ lives,” she said. “That ties directly into subscription. I’ve worked in subscription before, and it gives you real clarity about your user and your audience.”

07 Jun 2018

Essential is releasing a wired headphone jack accessory

Essential has been nearly radio silent since reports surfaced late last month that Andy Rubin was looking for a buyer for his hardware startup. The company didn’t really confirm or deny the rumors — or say much else for that matter. A few weeks later, the company does have some news — but it’s most likely not what you were expecting.

The startup just dropped the second modular accessory for its first smartphone. The Audio Adapter HD features a built-in amp and the ability to play back MQA (Master Quality Authenticated), a hi-res streaming audio technology. Oh, and there’s a 3.5mm audio jack, because everything that’s old is new again.

The new add-on is set to drop at some point this summer. The company has also teamed up with Tidal. The streaming service has also reportedly had some issues of late, though, for its part, the company did offer a more outright denial. The partnership gives new and existing Essential customers a three-month trial subscription of Tidal’s HiFi service — a taste of what they’ve been missing with their low bit rates.

The company has declined to provide pricing for the add-on. Its first mod, the 360-degree camera, retailed for $200 at launch, though that price has since dropped considerably on retailers like Amazon — much like the Essential phone itself. As with the camera, the Audio Adapter HD feels like a niche product compared to mods from Motorola, which include things like battery packs and speakers.

At the very least, however, it does show that there’s still some life left in the Essential line.

07 Jun 2018

GDPR panic may spur data and AI innovation

If AI innovation runs on data, the European Union’s new General Data Protection Regulation (GDPR) seems poised to freeze AI advancement. The regulation prescribes a utopian data future where consumers can refuse companies access to their personally identifiable information (PII). Although the enforcement deadline has passed, the technical infrastructure and manpower needed to meet these requirements still do not exist in most companies today.

Coincidentally, the barriers to GDPR compliance are also bottlenecks of widespread AI adoption. Despite the hype, enterprise AI is still nascent: Companies may own petabytes of data that can be used for AI, but fully digitizing that data, knowing what the data tables actually contain and understanding who, where and how to access that data remains a herculean coordination effort for even the most empowered internal champion. It’s no wonder that many scrappy AI startups find themselves bogged down by customer data cleanup and custom integrations.

As multinationals and Big Tech overhaul their data management processes and tech stack to comply with GDPR, here’s how AI and data innovation counterintuitively also stand to benefit.

How GDPR impacts AI

GDPR covers the collection, processing and movement of data that can be used to identify a person, such as a name, email address, bank account information, social media posts, health information and more, all of which are currently used to power AI algorithms ranging from targeting ads to identifying terrorist cells.

The penalty for noncompliance is 4 percent of global revenue, or €20 million, whichever is higher. To put that in perspective: 4 percent of Amazon’s 2017 revenue is $7.2 billion, Google’s is $4.4 billion and Facebook’s is $1.6 billion. These regulations apply to any citizen of the EU, no matter their current residence, as well as vendors upstream and downstream of the companies that collect PII.

Article 22 of the GDPR, titled “Automated Individual Decision-making, including Profiling,” prescribes that AI cannot be used as the sole decision-maker in choices that have legal or similarly significant effects on users. In practice, this means an AI model cannot be the only step for deciding whether a borrower can receive a loan; the customer must be able to request that a human review the application.
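Article 22’s human-in-the-loop requirement can be made concrete with a toy decision flow. The scoring formula, field names and threshold below are invented for illustration; a real lender’s model would be far more involved:

```python
# Article 22 in miniature: an automated score can recommend, but for
# decisions with legal effect the applicant can demand a human review.
# The scoring formula, fields and threshold are invented.

def decide_loan(application, human_review_requested=False):
    """Score an application, but defer to a person when asked."""
    score = (0.5 * application["credit_score"] / 850
             + 0.5 * min(application["income"] / 100_000, 1))
    automated = "approve" if score >= 0.6 else "deny"
    if human_review_requested:
        return {"decision": "pending_human_review",
                "automated_recommendation": automated}
    return {"decision": automated, "automated_recommendation": automated}

print(decide_loan({"credit_score": 700, "income": 40_000},
                  human_review_requested=True))
# → {'decision': 'pending_human_review', 'automated_recommendation': 'approve'}
```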

One way to avoid the cost of compliance, which includes hiring a data protection officer and building access controls, is to stop collecting data on EU residents altogether. This would bring PII-dependent AI innovation in the EU to a grinding halt. With the EU representing about 16 percent of global GDP, 11 percent of global online advertising spend and 9 percent of the global population in 2017, however, Big Tech will more likely invest heavily in solutions that will allow them to continue operating in this market.

Transparency mandates force better data accessibility

GDPR mandates that companies collecting consumer data must enable individuals to know what data is being collected about them, understand how it is being used, revoke permission to use specific data, correct or update data and obtain proof that the data has been erased if the customer requests it. To meet these potential requests, companies must shift from indiscriminately collecting data in a piecemeal and decentralized manner to establishing an organized process with a clear chain of control.

Any data that companies collect must be immediately classified as either PII or de-identified and assigned the correct level of protection. Its location in the company’s databases must be traceable with an auditable trail: GDPR mandates that organizations handling PII must be able to find all copies of regulated data, regardless of how and where it is stored. These organizations will need to assign someone to manage their data infrastructure and fulfill these user privacy requests.
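In code terms, that bookkeeping amounts to classifying each record on ingest, tracking every copy, and producing an auditable proof of erasure on request. A minimal sketch, with invented field names and store labels:

```python
# A minimal sketch of the bookkeeping GDPR pushes toward: classify every
# record on ingest, track every copy, and produce an auditable proof of
# erasure on request. Field names and store labels are invented.
from datetime import datetime, timezone

PII_FIELDS = {"name", "email", "location"}

class DataRegistry:
    def __init__(self):
        self.records = {}    # record_id -> {"classification", "copies"}
        self.audit_log = []  # append-only trail of ingest/erase events

    def ingest(self, record_id, fields, store):
        classification = "PII" if PII_FIELDS & set(fields) else "de-identified"
        entry = self.records.setdefault(
            record_id, {"classification": classification, "copies": set()})
        entry["copies"].add(store)
        self._log("ingest", record_id, store)

    def erase(self, record_id):
        """Delete every stored copy and return the erase log entries as proof."""
        for store in self.records.pop(record_id)["copies"]:
            self._log("erase", record_id, store)
        return [e for e in self.audit_log
                if e["record"] == record_id and e["event"] == "erase"]

    def _log(self, event, record_id, store):
        self.audit_log.append({"event": event, "record": record_id,
                               "store": store,
                               "at": datetime.now(timezone.utc).isoformat()})

registry = DataRegistry()
registry.ingest("user-42", {"email", "purchase_total"}, "warehouse")
registry.ingest("user-42", {"email"}, "cache")
print(len(registry.erase("user-42")))  # → 2 (one proof entry per copy)
```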

Having these data infrastructure and management processes in place will greatly lower the company’s barriers to deploying AI. By fully understanding their data assets, the company can plan strategically about where they can deploy AI in the near-term using their existing data assets. Moreover, once they build an AI road map, the company can determine where they need to obtain additional data to build more complex and valuable AI algorithms. With the data streams simplified, storage mapped out and a chain of ownership established, the company can more effectively engage with AI vendors to deploy their solutions enterprise-wide.

More importantly, GDPR will force many companies dragging their feet on digitization to finally bite the bullet. The mandates require that data be portable: Companies must provide a way for users to download all of the data collected about them in a standard format. Currently, only 10 percent of all data is collected in a format for easing analysis and sharing, and more than 80 percent of enterprise data today is unstructured, according to Gartner estimates.

Much of this structuring and information extraction will initially have to be done manually, but Big Tech companies and many startups are developing tools to accelerate this process. According to PwC, the sectors most behind on digitization are healthcare, government and hospitality, all of which handle large amounts of unstructured data containing PII — we could expect to see a flood of AI innovation in these categories as the data become easier to access and use.

Consumer opt-outs require more granular AI model management

Under GDPR guidelines, companies must let users prevent the company from storing certain information about them. If the user requests that the company permanently and completely delete all the data about them, the company must comply and show proof of deletion. How this mandate might apply to an AI algorithm trained on data that a user wants to delete is not specifically prescribed and awaits its first test case.

Today, data is pooled together to train an AI algorithm. It is unclear how an AI engineer would attribute the impact of a particular data point to the overall performance of the algorithm. If the enforcers of GDPR decide that the company must erase the effect of a unit of data on the AI model in addition to deleting the data, companies using AI must find ways to granularly explain how a model works and fine-tune the model to “forget” the data in question. Many AI models are black boxes today, and leading AI researchers are working to enable model explainability and tunability. The GDPR deletion mandate could accelerate progress in these areas.
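How that would work in practice awaits a test case, but the baseline approach is concrete: to provably erase a point’s effect, retrain without it. A toy illustration, using an intentionally trivial “model” that just predicts the mean of its training data (numbers invented):

```python
# How the erasure mandate might touch a trained model is untested in
# court; the one approach guaranteed to "forget" a data point today is
# retraining without it, shown here for a trivially simple "model".

def train_mean_model(data):
    """A 'model' that just predicts the mean of its training data."""
    return sum(data) / len(data)

data = [2.0, 4.0, 6.0, 100.0]    # one user later requests deletion
model = train_mean_model(data)   # heavily influenced by that point

data_after_erasure = [x for x in data if x != 100.0]
retrained = train_mean_model(data_after_erasure)
print(model, retrained)  # → 28.0 4.0
```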

In the nearer term, these GDPR mandates could shape best practices for UX and AI model design. Today, GDPR-compliant companies offer users the binary choice of allowing full, effectively unrestricted use of their data or no access at all. In the future, product designers may want to build more granular data access permissions.

For example, before choosing to delete Facebook altogether, a user can refuse companies access to specific sets of information, such as their network of friends or their location data. AI engineers anticipating the need to trace the effect of specific data on a model may choose to build a series of simple models optimizing on single dimensions, instead of one monolithic and very complex model. This approach may have performance trade-offs, but would make model management more tractable.

Building trust for more data tomorrow

The new regulations require companies to protect PII with a level of security previously limited to patient health and consumer finance data. Nearly half of all companies recently surveyed by Experian about GDPR are adopting technology to detect and report data breaches as soon as they occur. As companies adopt more sophisticated data infrastructure, they will be able to determine who has and should have access to each data stream and manage permissions accordingly. Moreover, the company may also choose to build tools that immediately notify users if their information was accessed by an unauthorized party; Facebook offers a similar service to its employees, called a “Sauron alert.”

Although the restrictions may appear to reduce tech companies’ ability to access data in the short-term, 61 percent of companies see additional benefits of GDPR-readiness beyond penalty avoidance, according to a recent Deloitte report. Taking these precautions to earn customer trust may eventually lower the cost of acquiring high-quality, highly dimensional data.

In this post-GDPR future, companies no longer have to infer intent from expensive schemes to sneakily capture customer information. Improved data infrastructure will have enabled early AI applications to demonstrate their value, encouraging more customers to voluntarily share even more information about themselves to trustworthy companies.

Unproven upside alone has always been insufficient to motivate cross-functional modernization, but the threat of a multi-billion-dollar penalty may finally spur these companies to action. More importantly, GDPR is but the first of much more data privacy regulation to come, and many countries across the world look to it as a model for their own upcoming policies. As companies worldwide lay the groundwork for compliance and transparency, they’re also paving the way to an even more vibrant AI future to come.

07 Jun 2018

How Facebook’s new 3D photos work

In May, Facebook teased a new feature called 3D photos, and it’s just what it sounds like. However, beyond a short video and the name, little was said about it. But the company’s computational photography team has just published the research behind how the feature works and, having tried it myself, I can attest that the results are really quite compelling.

In case you missed the teaser, 3D photos will live in your news feed just like any other photos, except when you scroll by them, touch or click them, or tilt your phone, they respond as if the photo is actually a window into a tiny diorama, with corresponding changes in perspective. It will work for both ordinary pictures of people and dogs, but also landscapes and panoramas.

It sounds a little hokey, and I’m about as skeptical as they come, but the effect won me over quite quickly. The illusion of depth is very convincing, and it does feel like a little magic window looking into a time and place rather than some 3D model — which, of course, it is. Here’s what it looks like in action:

I talked about the method of creating these little experiences with Johannes Kopf, a research scientist at Facebook’s Seattle office, where its Camera and computational photography departments are based. Kopf is co-author (with University College London’s Peter Hedman) of the paper describing the methods by which the depth-enhanced imagery is created; they will present it at SIGGRAPH in August.

Interestingly, the origin of 3D photos wasn’t an idea for how to enhance snapshots, but rather how to democratize the creation of VR content. It’s all synthetic, Kopf pointed out. And no casual Facebook user has the tools or inclination to build 3D models and populate a virtual space.

One exception to that is panoramic and 360 imagery, which is usually wide enough that it can be effectively explored via VR. But the experience is little better than looking at the picture printed on butcher paper floating a few feet away. Not exactly transformative. What’s lacking is any sense of depth — so Kopf decided to add it.

The first version I saw had users moving their ordinary cameras in a pattern capturing a whole scene; by careful analysis of parallax (essentially how objects at different distances shift different amounts when the camera moves) and phone motion, that scene could be reconstructed very nicely in 3D (complete with normal maps, if you know what those are).

But inferring depth data from a single camera’s rapid-fire images is a CPU-hungry process and, though effective in a way, also rather dated as a technique. Especially when many modern phones actually have two cameras, like a tiny pair of eyes. And it is dual-camera phones that will be able to create 3D photos (though there are plans to bring the feature downmarket).

By capturing images with both cameras at the same time, parallax differences can be observed even for objects in motion. And because the device is in the exact same position for both shots, the depth data is far less noisy, involving less number-crunching to get into usable shape.
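The geometry behind that depth map is the classic stereo relationship: depth equals focal length times baseline (the distance between the two lenses) divided by disparity (the pixel shift between the two views). A toy calculation, with made-up numbers:

```python
# The classic stereo relationship behind any dual-camera depth map:
# depth = focal_length * baseline / disparity. All numbers below are
# made up; real pipelines add calibration, rectification and filtering.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth in meters for one pixel, given its disparity (the shift
    between the left and right images, in pixels)."""
    if disparity_px <= 0:
        return float("inf")  # no measurable shift: effectively at infinity
    return focal_px * baseline_m / disparity_px

# A nearby object shifts a lot between the two lenses; a far one barely moves.
print(depth_from_disparity(80, focal_px=2000, baseline_m=0.01))  # → 0.25 (m)
print(depth_from_disparity(4, focal_px=2000, baseline_m=0.01))   # → 5.0 (m)
```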

Here’s how it works. The phone’s two cameras take a pair of images, and immediately the device does its own work to calculate a “depth map” from them, an image encoding the calculated distance of everything in the frame. The result looks something like this:

Apple, Samsung, Huawei, Google — they all have their own methods for doing this baked into their phones, though so far it’s mainly been used to create artificial background blur.

The problem is that a depth map created this way doesn’t have an absolute scale — for example, light yellow doesn’t mean 10 feet, while dark red means 100 feet. An image taken a few feet to the left with a person in it might have yellow indicating 1 foot and red meaning 10. The scale is different for every photo, which means if you take more than one, let alone dozens or a hundred, there’s little consistent indication of how far away a given object actually is, which makes stitching them together realistically a pain.

That’s the problem Kopf and Hedman and their colleagues took on. In their system, the user takes multiple images of their surroundings by moving their phone around; it captures an image (technically two images and a resulting depth map) every second and starts adding it to its collection.

In the background, an algorithm looks at both the depth maps and the tiny movements of the camera captured by the phone’s motion detection systems. Then the depth maps are essentially massaged into the correct shape to line up with their neighbors. This part is impossible for me to explain because it’s the secret mathematical sauce that the researchers cooked up. If you’re curious and like Greek, click here.
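The paper’s alignment method is far more sophisticated than anything that fits here, but the simplest version of the idea is to fit one scale and offset per image so that depths agree wherever two maps overlap, via ordinary least squares. A sketch on a handful of overlapping pixel values (invented):

```python
# Simplest version of depth-map alignment (the paper's method is far
# more sophisticated): fit a per-image scale s and offset t so that
# s * depth_a + t matches depth_b over the overlapping pixels.

def fit_scale_offset(depths_a, depths_b):
    """Least-squares fit of (s, t) minimizing sum((s*a + t - b)**2)."""
    n = len(depths_a)
    mean_a = sum(depths_a) / n
    mean_b = sum(depths_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(depths_a, depths_b))
    var = sum((a - mean_a) ** 2 for a in depths_a)
    s = cov / var
    t = mean_b - s * mean_a
    return s, t

# Map A measured the same pixels in arbitrary units; B is the reference.
a = [1.0, 2.0, 3.0]
b = [10.0, 20.0, 30.0]   # B's scale is 10x A's, with no offset
print(fit_scale_offset(a, b))  # → (10.0, 0.0)
```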

Not only does this create a smooth and accurate depth map across multiple exposures, but it does so really quickly: about a second per image, which is why the tool they created shoots at that rate, and why they call the paper “Instant 3D Photography.”

Next, the actual images are stitched together, the way a panorama normally would be. But by utilizing the new and improved depth map, this process can be expedited and reduced in difficulty by, they claim, around an order of magnitude.

Because different images captured depth differently, aligning them can be difficult, as the left and center examples show — many parts will be excluded or produce incorrect depth data. The one on the right is Facebook’s method.

Then the depth maps are turned into 3D meshes (a sort of two-dimensional model or shell) — think of it like a papier-mache version of the landscape. But then the mesh is examined for obvious edges, such as a railing in the foreground occluding the landscape in the background, and “torn” along these edges. This spaces out the various objects so they appear to be at their various depths, and move with changes in perspective as if they are.
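The “tearing” step boils down to finding neighboring pixels whose depths disagree by more than a threshold and cutting the mesh edge between them. A one-scanline sketch (a real implementation works on the full 2D grid, and the threshold here is arbitrary):

```python
# Illustrative: tear the mesh wherever adjacent pixels' depths jump by
# more than a threshold, e.g. a railing in front of a distant landscape.

def tear_edges(depth_row, threshold=1.0):
    """Return indices i where the mesh edge between pixel i and pixel
    i+1 should be torn because the depth gap exceeds the threshold."""
    return [i for i in range(len(depth_row) - 1)
            if abs(depth_row[i + 1] - depth_row[i]) > threshold]

# A railing at ~1m occluding a landscape at ~20m: one clean tear.
row = [1.0, 1.1, 1.05, 20.0, 20.2, 19.9]
print(tear_edges(row))  # → [2]
```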

Although this effectively creates the diorama effect I described at first, you may have guessed that the foreground would appear to be little more than a paper cutout, since, if it were a person’s face captured from straight on, there would be no information about the sides or back of their head.

This is where the final step comes in: “hallucinating” the remainder of the image via a convolutional neural network. It’s a bit like a content-aware fill, guessing what goes where based on what’s nearby. If there’s hair, well, that hair probably continues along. And if it’s a skin tone, it probably continues too. So it convincingly recreates those textures along an estimation of how the object might be shaped, closing the gap so that when you change perspective slightly, it appears that you’re really looking “around” the object.

The end result is an image that responds realistically to changes in perspective, making it viewable in VR or as a diorama-type 3D photo in the news feed.

In practice it doesn’t require anyone to do anything different, like download a plug-in or learn a new gesture. Scrolling past these photos changes the perspective slightly, alerting people to their presence, and from there all the interactions feel natural. It isn’t perfect — there are artifacts and weirdness in the stitched images if you look closely, and of course mileage varies on the hallucinated content — but it is fun and engaging, which is much more important.

The plan is to roll out the feature mid-summer. For now, the creation of 3D photos will be limited to devices with two cameras — that’s a limitation of the technique — but anyone will be able to view them.

But the paper does also address the possibility of single-camera creation by way of another convolutional neural network. The results, only briefly touched on, are not as good as the dual-camera systems, but still respectable and better and faster than some other methods currently in use. So those of us still living in the dark age of single cameras have something to hope for.

07 Jun 2018

Google’s new ‘AI principles’ forbid its use in weapons and human rights violations

Google has published a set of fuzzy but otherwise admirable “AI principles” explaining the ways it will and won’t deploy its considerable clout in the domain. “These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions,” wrote CEO Sundar Pichai.

The principles follow several months of low-level controversy surrounding Project Maven, a contract with the U.S. military that involved image analysis on drone footage. Some employees had opposed the work and even quit in protest, but really the issue was a microcosm for anxiety regarding AI at large and how it can and should be employed.

Consistent with Pichai’s assertion that the principles are binding, Google Cloud CEO Diane Greene confirmed today in another post what was rumored last week, namely that the contract in question will not be renewed or followed with others. Left unaddressed are reports that Google was using Project Maven as a means to achieve the security clearance required for more lucrative and sensitive government contracts.

The principles themselves are as follows, with relevant portions quoted from their descriptions:

  1. Be socially beneficial: Take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides…while continuing to respect cultural, social, and legal norms in the countries where we operate.
  2. Avoid creating or reinforcing unfair bias: Avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief
  3. Be built and tested for safety: Apply strong safety and security practices to avoid unintended results that create risks of harm.
  4. Be accountable to people: Provide appropriate opportunities for feedback, relevant explanations, and appeal.
  5. Incorporate privacy design principles: Give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.
  6. Uphold high standards of scientific excellence: Work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches…responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.
  7. Be made available for uses that accord with these principles: limit potentially harmful or abusive applications. (Scale, uniqueness, primary purpose, and Google’s role to be factors in evaluating this.)

In addition to stating what the company will do, Pichai also outlines what it won’t do. Specifically, Google will not pursue or deploy AI in the following areas:

  • Technologies that cause or are likely to cause overall harm. (Subject to risk/benefit analysis.)
  • Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  • Technologies that gather or use information for surveillance violating internationally accepted norms.
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.

(No mention of being evil.)

In the seven principles and their descriptions, Google leaves itself considerable leeway with the liberal application of words like “appropriate.” When is an “appropriate” opportunity for feedback? What is “appropriate” human direction and control? How about “appropriate” safety constraints?

It’s arguable that it is too much to expect hard rules along these lines on such short notice, but I would argue that it is not in fact short notice; Google has been a leader in AI for years and has had a great deal of time to establish more than principles.

For instance, its promise to “respect cultural, social, and legal norms” has surely been tested in many ways. Where can we see when practices have been applied in spite of those norms, or where Google policy has bent to accommodate the demands of a government or religious authority?

And in the promise to avoid creating bias and be accountable to people, surely (based on Google’s existing work here) there is something specific to say? For instance, if any Google-involved system has outcomes based on sensitive data or categories, the system will be fully auditable and available for public attention?

The ideas here are praiseworthy but AI’s applications are not abstract; these systems are being used today to determine deployments of police forces, or choose a rate for home loans, or analyze medical data. Real rules are needed, and if Google really intends to keep its place as a leader in the field, it must establish them or, if they are already established, publish them prominently.

In the end it may be the shorter list of things Google won’t do that prove more restrictive. Although use of “appropriate” in the principles allows the company space for interpretation, the opposite case is true for its definitions of forbidden pursuits. The definitions are highly indeterminate, and broad interpretations by watchdogs of phrases like “likely to cause overall harm” or “internationally accepted norms” may result in Google’s own rules being unexpectedly prohibitive.

“We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time,” wrote Pichai. We will soon see the extent of that willingness.

07 Jun 2018

Stitch Fix blows out Wall Street’s expectations and announces the launch of Stitch Fix Kids

Stitch Fix, one of last year’s high-profile IPOs, has had a bumpy ride for the past few quarters — but it blew out expectations this afternoon for its most recent quarter, and the stock went absolutely nuts.

There’s also a ton of news coming out of the company today, including the hire of a new chief marketing officer and the launch of Stitch Fix Kids. The timing is good: the company packed everything into a single announcement that is serving as a very pleasant surprise to Wall Street, which is looking for every signal it can get that the subscription e-commerce company will end up as one of the more successful IPO stories. Shares of the company jumped more than 14% after the release came out, in which Stitch Fix beat the expectations Wall Street set across the board. The stock price, while not the best barometer of a business, is a public one, and it helps determine whether the company can lock up the best talent.

However, following the announcement, Stitch Fix’s stock came back down to Earth and is up around 4%.

Here’s the final line for the company:

  • Q3 Revenue: $316.7 million, compared to Wall Street estimates of $306.4 million and up 29% year-over-year.
  • Q3 Earnings: 9 cents per share, compared to Wall Street estimates of 3 cents per share.
  • Q4 Revenue Guidance: $310 million to $320 million, compared to Wall Street estimates of around $314 million.
  • Cost of goods sold: $178.5 million, up from $139.7 million in Q3 last year.
  • Gross margin: 43.6%, up from 43% in Q3 last year.
  • Advertising spend: $25.2 million, up from $21.3 million in Q3 last year.
  • Active clients: 2.7 million, up 30% year-over-year (2.5 million last quarter).
  • Q3 Net income: $9.5 million ($12.4 million in adjusted EBITDA).

Stitch Fix Kids will carry sizes 2T to 14 across a diverse range of aesthetics “to give kids the freedom to express themselves in clothing that they feel great wearing,” the company said. Each Fix will include 8 to 12 items spanning both market and exclusive brands. Stitch Fix launched Stitch Fix Plus in February last year.

“Our new Stitch Fix Kids offering is a testament to the scalability of our platform,” CEO and founder Katrina Lake said in a statement accompanying the release. “We’re excited for Stitch Fix to style everyone in the family and to create an effortless way for parents to shop for themselves and their children. Our goal is to provide unique, affordable kids clothing in a wide range of styles, giving our littlest clients the freedom to express themselves in clothing that they love and feel great wearing.”

Stitch Fix was widely considered a successful IPO last year, though it faced some challenges over the early part of this year. But as it has expanded into new lines of subscriptions, its customer base clearly continues to grow, and the company is still finding new areas to expand into, including the launch of Kids that it announced today. As with many recent IPOs, Wall Street is likely going to look for continued growth in the core business (meaning subscribers), but Stitch Fix is showing that it’s able to avoid setting cash on fire the way fresh IPOs sometimes do.

Stitch Fix’s new CMO, Deirdre Findlay, comes to the company from Google, where she oversaw marketing for the Google Home hardware products, including Home and Chromecast. Prior to that, Findlay built a pretty extensive marketing history across a wide variety of verticals beyond tech, including work with Whirlpool Brands, Allstate Insurance, MillerCoors and Kaiser Permanente, the company said. While Stitch Fix is a digitally native company, it’s not exactly an explicit tech company, and it requires expertise outside the realm of typical tech marketing talent, so getting someone with a robust background like that will be important as it continues to expand into new areas of growth.

07 Jun 2018

Facebook launches Fb.gg gaming video hub to compete with Twitch

Facebook wants a cut of the 3+ hours per week that young adult gamers spend watching other people play. So today it’s launching Fb.gg (as in the post-competition courtesy of saying “good game”), a destination where viewers can find all the video game streaming happening on Facebook. Fb.gg will show viewers video based on the games and streaming celebrities they follow and their Liked Pages and Groups, plus it will display featured creators, esports competitions and gaming conference events.

Aggregating gaming content could make sure it doesn’t get lost in the fast-moving News Feed. It could be especially useful for people whose Facebook friends aren’t into the gaming niche. The personalized recommendations based on Facebook activity could help the social network out-curate video-only sites like YouTube and Twitch. And if game streamers feel like they can build a big audience on Facebook, they’ll share there. Still, Facebook is getting a late start here.

Facebook Stars tipping currency

Meanwhile, Facebook is opening up its new monetization option to more gaming broadcasters by launching the Level Up Program for emerging gaming content creators. Rolling out in the next few months, the program will let members take monetary tips from their stream viewers in the form of a virtual currency.

Facebook first announced its monetization program for streamers in January, but now the virtual currency is called Facebook Stars. For each Star a streamer receives, Facebook will pay them $0.01. We’ve reached out to see if Facebook will be taking a cut of these tips. Stream viewers on desktop can now give Stars to any creator in the Level Up program. Facebook is also rolling out its Patreon-style monthly subscription fan patronage feature test to more gamers in the coming weeks.

Those admitted to Level Up will also get special custom support, HD 1080p 60fps transcoding, and a special badge on their profile. Plus, they’ll receive early access to new Facebook livestreaming features and tips on how to build their fan base. Gamers can check out the eligibility requirements for these programs here. Those include having a Gaming Video Creator Facebook Page with at least 100 followers, and broadcasting at least 4 hours with sessions on at least 2 days in the past 2 weeks.

Gamers have plenty of options to earn money from YouTube ad revenue shares and Twitch’s tipping options. Facebook needs to ramp these monetization efforts up quickly to capitalize on the sudden surge in game streaming. If Facebook can convince streamers it’s not just a place for Pong-aged people, it could turn the video ads on game broadcasts into a nice little revenue generator.

07 Jun 2018

Lime brings electric scooters to LA

While electric scooter startups are at a standstill in San Francisco, Lime is taking its scooter service to Santa Monica, Calif. — competitor Bird’s home turf. Although Lime was planning to launch its new model of scooter that it built in partnership with Segway in San Francisco last month, it’s now debuting them in the Los Angeles area first.

These Segway-powered Lime scooters are designed to be safer, longer-lasting on a battery charge and more durable for what the sharing economy requires, Lime CEO Toby Sun told TechCrunch in May. Now, instead of a maximum distance of 23 miles or so, Lime scooters can go up to 35 miles.

“A lot of the features in the past on scooters were made for the consumer market,” Sun said. “Not for the shared, heavy-duty markets.”

On the safety side, Lime enhanced its night-light on both the front and back of the scooter, and has added a light to flash below the deck. Lime has also added an additional brake, to have one on both the front and rear wheels.

Lime, which also has its pedal-assist electric bikes out and about in the LA area, says this is the first multimodal transportation service in LA. This news comes following reports of Lime raising a $250 million round led by GV.

07 Jun 2018

Facebook alerts 14M to privacy bug that changed status composer to public

Facebook has another privacy screwup on its hands. A bug in May accidentally changed the suggested privacy setting for status updates to public, overriding whatever audience users had set last, potentially causing them to post sensitive friends-only content to the whole world. Facebook is now notifying the 14 million people potentially impacted by the bug, asking them to review their status updates and lock them down tighter if need be.

Facebook’s Chief Privacy Officer Erin Egan wrote to TechCrunch in a statement:

We recently found a bug that automatically suggested posting publicly when some people were creating their Facebook posts. We have fixed this issue and starting today we are letting everyone affected know and asking them to review any posts they made during that time. To be clear, this bug did not impact anything people had posted before – and they could still choose their audience just as they always have. We’d like to apologize for this mistake.

The bug was active from May 18th to May 27th, though Facebook was able to start rolling out a fix on May 22nd. It happened because Facebook was building a “featured items” option on your profile that highlights photos and other content. These featured items are publicly visible, but Facebook inadvertently extended that setting to all new posts from those users.

The issue has now been fixed, and everyone’s status composer has been changed back to default to the privacy setting they had before the bug. The notification about the bug leads to a page of info about the issue, with a link to review affected posts.

Facebook tells TechCrunch that it hears loud and clear that it must be more transparent about its product and privacy settings, especially when it messes up. And it plans to show more alerts like this one to be forthcoming about any other privacy issues it discovers in the future.

Facebook depends on trust in its privacy features to keep people sharing. If users are worried their personal photos, sensitive status updates, or other content could leak out to the public and embarrass them or damage their reputation, they’ll stay silent. And with all the other issues swirling after the Cambridge Analytica scandal, this bug shows that Facebook’s privacy issues span both poorly thought-out policies and technical oversights. It moved too fast, and it broke something.