Year: 2018

07 May 2018

Toward transitive data privacy and securing the data you don’t share

We are spending a lot of time discussing what happens to data when you explicitly or implicitly share it. But what about data that you have never ever shared?

Your Cousin’s DNA

We all share DNA — after all, it seems we are all descendants of a few tribes. But the more closely related you are, the closer the DNA match. While we all know we share about 50% of our DNA with siblings, and about 12.5% with first cousins — there is still some meaningful match even between distant relatives, shrinking with the distance in the family tree.
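That falloff follows the standard coefficient of relationship. Here is a minimal sketch — the function name is made up, and it deliberately ignores complications like inbreeding and X-linked inheritance:

```python
def relatedness(up_a, up_b, common_ancestors=2):
    """Coefficient of relationship: each genealogical path through a
    common ancestor contributes (1/2)^(steps from A up to that ancestor
    plus steps from the ancestor down to B); sum over shared ancestors."""
    return common_ancestors * 0.5 ** (up_a + up_b)

relatedness(1, 1)  # siblings share two parents: 0.5
relatedness(2, 2)  # first cousins share two grandparents: 0.125
relatedness(3, 3)  # second cousins: 0.03125, still enough to match
```

Even at second-cousin distance, a few percent of shared DNA is well within what consumer databases can detect.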

In short, if you have never taken a DNA test but one or more of your blood relatives has, and shared that data — some of your DNA is effectively now available for a match.

While this may have seemed like theory a few weeks ago, the police caught the Golden State Killer suspect using exactly this method.

Cambridge Analytica

A similar thing happened when data was misused by Cambridge Analytica. Even if you never used the quiz app on the Facebook platform but your friends did, they essentially revealed private information about you without your consent or knowledge.

The number of users that took the quiz was shockingly small — only 300,000 users participated. And yet, upwards of 50 million (as many as 87 million) people eventually had their data collected by Cambridge Analytica.

And all of this was done legally and while complying with the platform requirements at that time.

Transitive Data Privacy

The word transitive simply means that if A is related to B in a certain way, and B to C — then A is related to C. Ancestry is a transitive relation, for example: if Alice is an ancestor of Bob, and Bob is an ancestor of Chamath, then Alice is an ancestor of Chamath. (Cousinhood, strictly speaking, is not transitive — Bob's maternal and paternal cousins need not be related to each other — but data exposure along these chains behaves transitively all the same.)

As private citizens and as corporations, we now have to think about transitive data privacy loss.

The simplest version of this is a boyfriend or girlfriend forwarding your private photo or a conversation screenshot to someone else.
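That kind of leakage is just graph reachability: once your data is shared with one node, anyone reachable from that node can end up with it. A minimal sketch, where the names and the shares dictionary are invented for illustration:

```python
def exposed(shares, start):
    """Transitive closure of a 'shares data with' graph: the start
    node plus everyone transitively reachable from it, i.e. everyone
    who can eventually end up with data given to `start`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(shares.get(node, []))
    return seen

shares = {"you": ["partner"], "partner": ["friend"], "friend": ["stranger"]}
exposed(shares, "you")  # reaches partner, friend, and stranger
```

Note that you control only the first edge; every hop after that happens without your consent.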

Transitive Sharing Upside

While we have discussed a couple of clearly negative examples, there are many ways transitive data relationships help us.

Every time you ask a friend to connect you to someone on LinkedIn for a job or a fundraise, you are leveraging the transitive relationship graph.

The DNA databases being created are primarily for social good — to help us connect with our roots and family, detect disease early, and support medical research.

In fact, you could argue that a lot of challenges we face today require more data sharing not less. If your hospital cannot share data with your primary care doctor at the right time, or your clinical trial data cannot be accessed to monitor downstream effects, we cannot take care of our citizens’ health as we should. Organizations like NIH and the VA and CMS (Medicare) are working hard to encourage appropriate easier sharing by healthcare providers.

Further, the good news is that there have been significant advances in security, encryption and hashing that enable companies to protect against these unintended side effects. More research is definitely called for. We can anonymize data, we can perturb data, and we can apply these techniques for protection while still being able to derive value and help customers.
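As one illustration of perturbation, a differential-privacy-style aggregate adds noise calibrated to how much any single record can move the answer. This is only a sketch — the function name, epsilon, and value bounds are chosen arbitrarily for the example:

```python
import random

def private_mean(values, epsilon=1.0, lo=0.0, hi=1.0):
    """Mean with Laplace noise scaled to the query's sensitivity:
    one record can shift the mean by at most (hi - lo) / n, so noise
    of that scale divided by epsilon hides each individual while
    keeping the aggregate usable."""
    n = len(values)
    scale = (hi - lo) / n / epsilon
    # Laplace(0, scale) drawn as a sign-flipped exponential
    noise = random.choice((-1, 1)) * random.expovariate(1 / scale)
    return sum(values) / n + noise
```

Lower epsilon means stronger privacy and noisier answers; the value you derive per record shrinks, but no single person's data is exposed.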

07 May 2018

Backstage Capital launches $36m fund to boost black female founders

Conceiving a product and turning it into a successful high-growth startup is challenging work, but it can be absolutely punishing for underrepresented founders. While a strong product and a great go-to-market is critical for early success, that’s not enough to ensure a successful venture capital fundraise. Instead, what often matters is building relationships with investors, and the evidence is clear that people love to work with people who look just like them.

In an industry dominated by white men, it can be hard for founders to find investors who look like them. That’s why Arlan Hamilton built Backstage Capital. Her firm’s mission is to provide early seed financing exclusively to founders who are women, people of color, and LGBT — groups that are massively underrepresented among Silicon Valley founders compared to the general population.

“I think the figures speak for themselves: less than .2% of all early stage venture funding goes to Black women, while we make up approx 8% of the U.S. population and are one of the fastest growing entrepreneur segments in the country,” she wrote in an email exchange with TechCrunch.

My colleague Megan Rose Dickey interviewed Hamilton early last year about raising her firm’s second fund. Dickey wrote at the time that “Hamilton wasn’t able to say the exact size of the second fund but says they’re the size of the catering budget for the Christmas party at Andreessen Horowitz.”

Well, that second fund has now launched and is targeting $36 million in commitments by its final close (Hamilton is continuing to fundraise). And unless A16Z’s Christmas party has gotten even more ritzy than I remember it, it appears that Hamilton has far exceeded her own expectations on what size fund she would be able to ultimately aggregate. The fund’s mission is to specifically focus on black female founders.

The burgeoning size of the fund can also be seen in the size of checks the fund is targeting. Originally, Hamilton said she wanted to target $25-100k seed checks. Now, she foresees the fund investing $1 million checks into 15-20 companies over the next three years, with the remainder of the fund reserved for follow-on investments.

While Backstage certainly has a mission, Hamilton sees a huge potential for outsized returns. “It is my firm belief that because Black women have had to make do with far less for centuries, equipping them with early stage capital that is on par with their white male counterparts has the potential to lead to outsized returns,“ she wrote.

Hamilton pointed to last month’s Vanity Fair spread featuring black women who had raised $1 million in venture capital. “While this was a proud moment for us all, I want to do my part to make sure that number reaches heights such that the next photo shoot isn’t contained in one single room,” she wrote.

The firm has invested in more than 80 portfolio companies, and Hamilton expects to make two or three seed investments out of this new fund by the end of the year.

07 May 2018

Tech watchdogs call on Facebook and Google for transparency around censored content

If a company like Facebook can’t even understand why its moderation tools work the way they do, then its users certainly don’t have a fighting shot. Anyway, that’s the idea behind what a coalition of digital rights groups is calling the Santa Clara Principles, “a set of minimum standards” aimed at Facebook, Google, Twitter and other tech companies that moderate the content published on their platforms.

The suggested guidelines grew out of a set of events addressing “Content Moderation and Removal at Scale,” the second of which is taking place today in Washington D.C. The group participating in these conversations shared the goal of coming up with a suggested ruleset for how major tech companies should disclose what content is being censored, why it is being censored and how much speech is censored overall.

“Users deserve more transparency and greater accountability from platforms that play an outsized role—in Myanmar, Australia, Europe, and China, as well as in marginalized communities in the U.S. and elsewhere—in deciding what can be said on the Internet,” Electronic Frontier Foundation (EFF) Director for International Freedom of Expression Jillian C. York said.

As the Center for Democracy and Technology explains, the Santa Clara principles (PDF) ask tech companies to disclose three categories of information:

  • Numbers (of posts removed, accounts suspended);
  • Notice (to users about content removals and account suspensions); and
  • Appeals (for users impacted by content removals or account suspensions).

“The Santa Clara Principles are the product of years of effort by privacy advocates to push tech companies to provide users with more disclosure and a better understanding of how content policing works,” EFF Senior Staff Attorney Nate Cardozo added.

“Facebook and Google have taken some steps recently to improve transparency, and we applaud that. But it’s not enough. We hope to see the companies embrace the Santa Clara Principles and move the bar on transparency and accountability even higher.”

Participants in drafting the Santa Clara Principles include the ACLU Foundation of Northern California, Center for Democracy and Technology, Electronic Frontier Foundation, New America’s Open Technology Institute, and a handful of scholars from departments studying ethics and communications.

07 May 2018

Microsoft Pay comes to Outlook, integrating Stripe, Braintree, Sage, Wave and more

Microsoft Pay — Microsoft’s answer to Android Pay and Apple Pay that was originally launched in 2016 as Microsoft Wallet — is getting a little more useful today. At Build, Microsoft announced that it will be integrating its digital wallet service into Outlook. This means that, for the first time, when a company sends you an invoice in an email and you are using Outlook to read it, you can pay that bill directly, without needing to leave Outlook and open a different app or service. Instead, a panel opens to the right of the main one, rendered by way of Microsoft’s Adaptive Cards.

As it launches — Microsoft says it will come first to a limited number of Outlook.com users over the next few weeks, and then more broadly over the next few months — it said that Stripe (using Stripe Connect) and Braintree will be among the payment processors powering the service, and Zuora, FreshBooks, Intuit, Invoice2Go, Sage, Wave, and Xero will be among the billing and invoicing services that will initially be using the feature. In other words, businesses using a combination of these will be able to offer Outlook-using customers the ability to use the feature.

The integration of Microsoft Pay into Outlook is part of a bigger shift that Microsoft is making to try to reduce some of the friction in its services by way of Adaptive Cards and other integration-friendly developer mechanics. The company effectively has capabilities covering many different aspects of computing and what the average user might want to do on a screen or in an app, and so it is building (and promoting to developers) more connective bridges to use Microsoft services rather than someone else’s.

Payments in Outlook is a prime example of that: Microsoft is not a bill payment service and is not a bill payment agent — so its partners are the ones making transaction commissions when the invoice is paid there — but offering this convenience to users makes Outlook itself more sticky and more useful overall to people. Down the line, it will help lock users into the Microsoft ecosystem more tightly — just as Android Pay or Apple Pay do the same for those respective platforms. But in that regard, this is also table stakes: conveniences like these have quickly moved from “nice to have” to “why didn’t Outlook have this before?”

For businesses, the sweetener is that they might just get paid that much faster, simply by making the process of paying easier.

“Stripe’s goal is to increase the GDP of the internet, which we do by providing the tools and infrastructure that make it easier to transact online from anywhere in the world,” said Richard Alfonsi, head of global revenue and growth, Stripe, in a statement. “We’re excited to work closely with Microsoft to power payments in Outlook, allowing anyone receiving an email invoice or bill in Outlook to immediately take action and pay that invoice with a few simple clicks. By removing the friction and time needed to complete a payment, Stripe and Microsoft can help businesses around the world reduce missed or late payments, ultimately increasing their revenue.”

Alongside the Outlook news, Stripe announced that it is also now supporting Microsoft Pay so that businesses that use Stripe in other apps can now offer this as an option to users who are using Microsoft Pay, to avoid inputting card details multiple times. (Stripe already offered support for Apple Pay, Android Pay and Alipay, among others.)

“Our partnership with Stripe opens up new opportunities for developers to monetize on Microsoft platforms,” said Peggy Johnson, executive vice president of business development, Microsoft, in a statement. “Starting with payments in Outlook, anyone using Stripe on our platforms can now accept payments with minimal effort, creating a more powerful experience for both our partners and our customers.”

07 May 2018

Drive.ai is launching an autonomous ride-hailing network in Texas

Drive.ai, the self-driving car startup with roots in Stanford’s Artificial Intelligence Lab, has partnered with the city of Frisco, Texas, and the Hall Group to deploy the first autonomous ride-hailing platform in the state of Texas.

Initially, the platform will be available to more than 10,000 members of Hall Group’s commercial and residential communities. Through the service, people will be able to hail free autonomous rides from fixed pickup locations to fixed drop-off locations.

“Self-driving cars are here, and can improve the way we live right now,” Drive.ai co-founder and CEO Sameep Tandon said in a press release. “Our technology is safe, smart, and adaptive, and we are ready to work with governments and businesses to solve their transportation needs. Working with the City of Frisco and Frisco TMA, this pilot program will take people to the places they want to go and transform the way they experience transportation.”

Before the July launch, Drive.ai will be collecting data along the routes and working with the city to educate people about self-driving technology. During this trial period, which starts in July and will run for six months, the service will be limited to employees, residents and patrons of Hall properties. Down the road, the goal is to open up the program to all residents of Frisco.

“Today definitely marks a mobility milestone for our entire region,” Frisco Mayor Jeff Cheney said in a press release. “It also gets us closer to achieving one of our council’s ‘Top Ten’ goals, which is to improve traffic throughout Frisco, one of the fastest growing cities in the country.”

In September, Drive.ai announced a partnership with Lyft to launch an autonomous ride-hailing program in the San Francisco Bay Area. That program has yet to launch.

07 May 2018

Uber vehicle reportedly saw but ignored woman it struck

The cause of the fatal crash of an Uber self-driving car appears to have been at the software level, specifically a function that determines which objects to ignore and which to attend to, The Information reported. This puts the fault squarely on Uber’s doorstep, though there was never much reason to think it belonged anywhere else.

Given the multiplicity of vision systems and backups on board any given autonomous vehicle, it seemed impossible that any one of them failing could have prevented the car’s systems from perceiving Elaine Herzberg, who was crossing the street directly in front of the lidar and front-facing cameras. Yet the car didn’t even touch the brakes or sound an alarm. Combined with an inattentive safety driver, this failure resulted in Herzberg’s death.

The only possibilities that made sense were:

  • A: Fault in the object recognition system, which may have failed to classify Herzberg and her bike as a pedestrian. This seems unlikely since bikes and people are among the things the system should be most competent at identifying.
  • B: Fault in the car’s higher logic, which makes decisions like which objects to pay attention to and what to do about them. No need to slow down for a parked bike at the side of the road, for instance, but one swerving into the lane in front of the car is cause for immediate action. This mimics human attention and decision making and prevents the car from panicking at every new object detected.

The sources cited by The Information say that Uber has determined B was the problem. Specifically, it was that the system was set up to ignore objects that it should have attended to; Herzberg seems to have been detected but considered a false positive.
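To make scenario B concrete, here is a hypothetical sketch of such a relevance filter — the `Detection` type, labels, and threshold are invented for illustration and are not Uber's actual stack:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def objects_to_act_on(detections, threshold=0.8):
    """Pass only detections the planner should react to; anything
    below the confidence threshold is treated as a false positive
    and silently dropped."""
    return [d for d in detections if d.confidence >= threshold]

scene = [
    Detection("parked bike", 0.95),
    Detection("pedestrian with bike", 0.60),  # uncertain classification
]
# Tuning trade-off: a high threshold avoids phantom braking on clutter,
# but a genuinely present pedestrian below it never reaches the planner.
objects_to_act_on(scene)
```

The danger of this design is that a filtered object is indistinguishable, downstream, from one that was never seen at all — no brake, no alarm.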

This is not good.

Autonomous vehicles have superhuman senses: lidar that stretches out hundreds of feet in pitch darkness, object recognition that tracks dozens of cars and pedestrians at once, radar and other systems to watch the road around it unblinkingly.

But all these senses are subordinate, like our own, to a “brain” — a central processing unit that takes the information from the cameras and other sensors and combines it into a meaningful picture of the world around it, then makes decisions based on that picture in real time. This is by far the hardest part of the car to create, as Uber has shown.

It doesn’t matter how good your eyes are if your brain doesn’t know what it’s looking at or how to respond properly.

Update: Uber issued the following statement, but did not comment on the claims above:

We’re actively cooperating with the NTSB in their investigation. Out of respect for that process and the trust we’ve built with NTSB, we can’t comment on the specifics of the incident. In the meantime, we have initiated a top-to-bottom safety review of our self-driving vehicles program, and we have brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture. Our review is looking at everything from the safety of our system to our training processes for vehicle operators, and we hope to have more to say soon.

As this is a situation without precedent, the NTSB and other reports may be particularly difficult to create and slow to issue, and it’s not abnormal for a company or individual to hold off from revealing too much information ahead of publication.

07 May 2018

Live Share in Visual Studio lets you code and debug together

At its Build developer conference, Microsoft today announced that Live Share, its previously announced collaborative development feature for Visual Studio and Visual Studio Code, is now available to developers who want to give it a try. Until now, this feature, which allows developers to work together more effectively, was only available in a private preview. It is now free to all developers, even those who use the free Visual Studio Code editor.

In a way, Live Share is a bit like using Google Docs for collaboration. Developers can see where everybody’s cursor is and when their colleagues are typing, no matter which platform they are on. All developers in a Live Share session can stay within their preferred (and customized) environment. That gives developers quite a bit more flexibility compared to a traditional screen share.

One feature Microsoft heavily emphasized at Build is the ability to share debugging sessions, too. That means everybody can set breakpoints and get the full logs. While writing code together is one thing, being able to share debugging sessions may actually be even more important to many developers.

Live Share supports all major languages, including C#, Python, Java, Go and C++.

07 May 2018

Microsoft shows off Alexa-Cortana integration, launches sign-up website for news

Microsoft still isn’t giving a timeline as to when its virtual assistant, Cortana, will support integration with Amazon Alexa – something the companies had announced last year. But the company at its Build developer conference today did show off how that integration will work, in an on-stage demo with support from Amazon, and it launched a new website for developers interested in receiving Alexa-Cortana integration news and information going forward.

When Microsoft and Amazon first discussed integrating their virtual assistants, it was described as a two-way street – that is, Cortana could pass requests back to Alexa, and vice versa. For example, Alexa customers would be able to access Cortana’s productivity features, like booking meetings, accessing work calendars, or reading work emails. Meanwhile, Cortana users could ask Alexa to control smart home devices, shop Amazon, or use any of Alexa’s roughly 40,000 skills.

But there were some concerns those commands would be awkward, and that integrations like this could prove unnecessary altogether.

At Build, Microsoft CEO Satya Nadella stressed the values of a more open system, saying “We want to make it possible for our customers to get the most out of their personal digital assistants – not be bound to some walled garden.”

Perhaps, though, what Microsoft really wants is to benefit from Alexa’s momentum.

In a brief demo, Microsoft Cortana GM Megan Saunders along with Amazon Alexa SVP Tom Taylor showed how Alexa and Cortana would work together. It didn’t look quite as unwieldy as you may have imagined.

Saunders directed her Echo speaker to “open Cortana,” which saw the digital assistant responding with a different voice, “Cortana here, how can I help?”

The experience seemed more like launching and using a third-party skill, rather than a series of tricky verbal commands.

She was then able to ask Cortana for information on her calendar, without having to say “Cortana,” or “Alexa” again – just “how’s my day?”

And she told Cortana to “send an email to Tom Taylor saying ‘I’ll see you tonight'” – again, without having to command the assistant by name.

After the Alexa-to-Cortana demo, Taylor showed off the reverse situation – calling up Alexa from Cortana.

While using Cortana on his PC, he said to Microsoft’s Assistant, “Hey Cortana, open Alexa.” Alexa responded in her own voice: “hi there, this is Alexa. How can I help?”

Taylor used Alexa to order an Uber using the third-party Uber skill and told her to turn off the lights.

He also asked Alexa what she thought of Cortana, to which Amazon’s assistant replied, with her typical cheesy humor, “I like Cortana. We both have experience with rings, although hers is more of a Halo.” Oh, hardy-har-har. 

Of course, what people really wanted to hear about is when the Cortana-Alexa integration would go live, and unfortunately there was no news on that front.

Saunders referred to the experience as still being in a “limited beta” for the time being, but did note the launch of a new website for developers.

Developers who are building skills for Cortana and Alexa can go to this new site in order to sign up to be notified when the integrations go live.

“For all of you developers out there building skills, Cortana and Alexa is going to enable access to more people across more devices,” said Saunders. “And we can’t wait to see what you build.”

07 May 2018

Microsoft overhauls its conversational AI chatbot tools

Microsoft CEO Satya Nadella talked briefly about a major update to the company’s conversational AI tools. You can now more easily create, test and improve bots that run on Azure or your own servers and work across multiple platforms.

“At this conference, we’re launching a hundred plus features for the bot framework so that you can continue to build these conversational interfaces and give them more customizations,” Nadella said.

There are now 30,000 bots active each month built with Microsoft’s conversational AI tools. They handle 30 million messages per day for a thousand companies, including Macy’s, Asiana Airlines, Stack Overflow, KPMG, Telefonica, HP and UPS.

And chances are you’ve already talked with one of those bots without realizing it because you don’t need to be using a Microsoft product to interact with a bot that leverages Microsoft’s technologies.

Microsoft’s Conversational AI tools let you deploy bots on a website, Slack, Facebook Messenger, Kik, SMS using Twilio, Telegram, Cortana, Skype for Business, Microsoft Teams, GroupMe and email.

It doesn’t work with anything from Amazon, Google or Apple though.

Microsoft’s Bot Builder SDK has been updated. It lets you pick a bot design and then create your own bot from this model. Starting today, QnAMaker is now also available as a final release. It lets you turn a good old FAQ into a set of questions and answers for your bot.

The bot emulator has been updated too to make it easier to debug your bots. There’s a new component called Dispatch that lets you dispatch queries to the right model or knowledge base. You can also manage authentication with third-party apps to interact with other apps. This is important for e-commerce websites and on-demand services.

Microsoft’s language understanding service, LUIS, lets you convert natural-language requests into intents. You can now more easily recognize addresses, people and organizations in LUIS.

Finally, there are two new projects called Project Conversation Learner and Project Personality Chat. With the first project, you can feed conversations into the platform and let Microsoft use machine learning to learn new dialogue sequences. And the second project lets you create some small talk interactions to create the illusion that you’re talking with a real person.

07 May 2018

China closing in on massive new chip fund in bid to dominate US semiconductor industry

China’s government has made technological independence from the United States one of its highest priorities. And now, it appears to be putting its money where its messaging has been.

According to the Wall Street Journal, China is close to finalizing a $47 billion investment fund that would finance semiconductor research and chip startup development. The fund, formally the China Integrated Circuit Industry Investment Fund Co., appears to be underwritten predominantly by government capital sources.

Such a fund has been rumored for months, with estimates of its size ranging widely. Just two weeks ago, Reuters reported that the fund would be $19 billion, while Bloomberg reported $31.5 billion two months ago. The exact number appears to be under intense negotiation among the Chinese leadership, and is also responsive to the increasingly tense trade negotiations with the United States.

If the $47 billion number pans out, it would be identical in size to a $47 billion fund that was financed by Tsinghua University, China’s leading engineering university, to spur the development of an indigenous semiconductor industry back in 2015.

China is highly dependent on foreign tech in its semiconductor industry, importing 90% of its chips in order to power its fast-growing economy. The Chinese government has always been wary of that dependency, but its fears were heightened in recent weeks after the United States banned American companies from selling components to ZTE, a prominent Chinese telecom equipment manufacturer.

Chinese President Xi Jinping has gone on something of an indigenous innovation tour in recent weeks, visiting factories across the country and encouraging further investment in the country’s technology industry. From the Communist Party of China’s official newspaper the People’s Daily two weeks ago, “National rejuvenation relies on the ‘hard work’ of the Chinese people, and the country’s innovation capacity must be raised through independent efforts, President Xi Jinping said on Tuesday.”

While the numbers discussed are eye-popping, so are the costs of developing leading-edge semiconductor technology. As semiconductors have grown more complex, costs have skyrocketed to maintain Moore’s Law. Intel spent more than $13 billion on R&D expenses alone in 2017, according to IC Insights, with Qualcomm, Broadcom, and Samsung each spending more than $3 billion.

While China may try to play catchup in the broad category of semiconductors, it is strategically placing its money on new areas like 5G wireless and artificial intelligence-focused chips where it might become a leading provider of technology. Concerns over 5G in particular have galvanized American attention on Qualcomm and its ability to compete in what is rare virgin territory in the telecom equipment space.

For American companies like Intel and Qualcomm, who are used to holding de facto monopolies on entire swaths of the semiconductor market, the renewed competition from China is going to pressure them to push their tech forward faster.