Author: azeeadmin

29 Sep 2018

Bots replacing office workers drive big valuations

A lot of people still get paid to sit in offices and do repetitive tasks. In recent years, however, employers have been pushing harder to find ways to outsource that work to machines.

Venture and growth investors are doing a lot to speed up the rise of these worker-bots. So far this year, they’ve poured hundreds of millions into developers of robotic process automation technology, the term for software that performs sequences of tasks previously carried out by humans.

Process automation funding activity spiked last week with a $225 million Series C round for one of the category leaders, New York-based UiPath. Sequoia Capital and Alphabet’s CapitalG led the financing, which brings total capital raised by the 13-year-old company to more than $400 million, with a most recent valuation of $3 billion.

A Crunchbase News analysis of funding for startups and growth companies involved in robotic process automation indicates this has been a busy year overall for the space, with more than $600 million in aggregate investment across at least seven sizable deals.

Below, we spotlight some of the largest 2018 rounds in the space:¹

UiPath, for its part, has a grand vision and an impressive growth rate. Its broad goal, laid out to incoming employees, involves “liberating the human workforce from tedious, repetitive tasks.”

And employers are willing to pay handsomely to liberate their employees. UiPath said that in one 21-month period, it went from $1 million to $100 million in annual recurring revenue, an absolutely astounding growth rate for an enterprise software company.
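Taken at face value, that claim implies a remarkable compounding rate. A quick back-of-the-envelope check (the smooth month-over-month compounding model is an illustrative assumption, not how ARR actually accrues):

```python
# Sanity-check the claimed growth: $1M -> $100M ARR in 21 months.
# Assumes smooth monthly compounding, purely for illustration.

start_arr = 1_000_000
end_arr = 100_000_000
months = 21

growth_factor = end_arr / start_arr          # 100x overall
monthly = growth_factor ** (1 / months)      # implied month-over-month multiplier
annualized = growth_factor ** (12 / months)  # implied year-over-year multiplier

print(f"~{(monthly - 1) * 100:.0f}% month-over-month")  # ~25%
print(f"~{annualized:.1f}x year-over-year")             # ~13.9x
```

In other words, UiPath’s claim works out to roughly 25 percent month-over-month growth sustained for nearly two years.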

The other big unicorn in the process automation space, Automation Anywhere, is also in rapid expansion mode. The company said customers have been using its tools across a broad range of industries for tasks including integrating data in electronic medical records, streamlining mortgage applications and completing complex purchase orders.

One might ask: What are employees to do all day now that the bots have freed them of their tiresome tasks? The general refrain from UiPath and others in the process automation space is that their software doesn’t eliminate jobs so much as it gives workers time to focus on higher-value projects.

That may be broadly true, but there is a significant body of employment trend forecasting that predicts widespread job losses stemming from this kind of automation. It could take the form of layoffs, or it might not. Companies may indeed transition bot-displaced existing employees to other, higher-value roles. Even if they do that, however, process automation could enable reduced hiring for future jobs.

That said, there’s plenty of funding and hiring happening at the handful of high-growth companies that could determine whether the rest of us have a job in our futures.

  1. Providing comprehensive funding numbers for robotic process automation proved challenging because many startups list automation as part of a broader suite of offerings, rather than a core focus area. 
29 Sep 2018

Facebook is weaponizing security to erode privacy

At a Senate hearing this week in which US lawmakers quizzed tech giants on how they should go about drawing up comprehensive Federal consumer privacy protection legislation, Apple’s VP of software technology described privacy as a “core value” for the company.

“We want your device to know everything about you but we don’t think we should,” Bud Tribble told them in his opening remarks.

Facebook was not at the commerce committee hearing which, as well as Apple, included reps from Amazon, AT&T, Charter Communications, Google and Twitter.

But the company could hardly have made such a claim had it been in the room, given that its business is based on trying to know everything about you in order to dart you with ads.

You could say Facebook has ‘hostility to privacy’ as a core value.

Earlier this year one US senator wondered of Mark Zuckerberg how Facebook could run its service given it doesn’t charge users for access. “Senator we run ads,” was the almost startled response, as if the Facebook founder couldn’t believe his luck at the not-even-surface-level political probing his platform was getting.

But there have been tougher moments of scrutiny for Zuckerberg and his company in 2018, as public awareness about how people’s data is being ceaselessly sucked out of platforms and passed around in the background, as fuel for a certain slice of the digital economy, has grown and grown — fuelled by a steady parade of data breaches and privacy scandals which provide a glimpse behind the curtain.

On the data scandal front Facebook has reigned supreme, whether it’s as an ‘oops, we just didn’t think of that’ spreader of socially divisive ads paid for by Kremlin agents (sometimes with roubles!); or as a carefree host for third-party apps to party at its users’ expense by silently hoovering up info on their friends, in the multi-millions.

Facebook’s response to the Cambridge Analytica debacle was to loudly claim it was ‘locking the platform down‘. And try to paint everyone else as the rogue data sucker — to avoid the obvious and awkward fact that its own business functions in much the same way.

All this scandalabra has kept Facebook execs very busy this year, with policy staffers and execs being grilled by lawmakers on an increasing number of fronts and issues — from election interference and data misuse, to ad transparency, hate speech and abuse, and also directly, and at times closely, on consumer privacy and control.

Facebook shielded its founder from one sought-after grilling on data misuse, as UK MPs investigated online disinformation versus democracy, as well as examining wider issues around consumer control and privacy. (They’ve since recommended a social media levy to safeguard society from platform power.)

The DCMS committee wanted Zuckerberg to testify to unpick how Facebook’s platform contributes to the spread of disinformation online. The company sent various reps to face questions (including its CTO) — but never the founder (not even via video link). And committee chair Damian Collins was withering and public in his criticism of Facebook sidestepping close questioning — saying the company had displayed a “pattern” of uncooperative behaviour, and “an unwillingness to engage, and a desire to hold onto information and not disclose it.”

As a result, Zuckerberg’s tally of public appearances before lawmakers this year stands at just two domestic hearings, in the US Senate and Congress, and one at a meeting of the EU parliament’s conference of presidents (which switched from a behind closed doors format to being streamed online after a revolt by parliamentarians) — and where he was heckled by MEPs for avoiding their questions.

But three sessions in a handful of months is still a lot more political grillings than Zuckerberg has ever faced before.

He’s going to need to get used to awkward questions now that lawmakers have woken up to the power and risk of his platform.

Security, weaponized 

What has become increasingly clear from the growing sound and fury over privacy and Facebook (and Facebook and privacy), is that a key plank of the company’s strategy to fight against the rise of consumer privacy as a mainstream concern is misdirection and cynical exploitation of valid security concerns.

Simply put, Facebook is weaponizing security to shield its erosion of privacy.

Privacy legislation is perhaps the only thing that could pose an existential threat to a business that’s entirely powered by watching and recording what people do at vast scale. And relying on that scale (and its own dark pattern design) to manipulate consent flows to acquire the private data it needs to profit.

Only robust privacy laws could bring Facebook’s self-serving house of cards tumbling down. User growth on its main service isn’t what it was but the company has shown itself very adept at picking up (and picking off) potential competitors — applying its surveillance practices to crushing competition too.

In Europe lawmakers have already tightened privacy oversight on digital businesses and massively beefed up penalties for data misuse. Under the region’s new GDPR framework compliance violations can attract fines as high as 4% of a company’s global annual turnover.

Which would mean billions of dollars in Facebook’s case — vs the pinprick penalties it has been dealing with for data abuse up to now.
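As a rough illustration of the stakes (the revenue figure below is an assumption, roughly in line with Facebook’s reported 2017 annual revenue; actual GDPR fines are set case by case and 4% is only the ceiling):

```python
# Illustrative only: GDPR caps administrative fines at up to 4% of a
# company's global annual turnover. The revenue figure is an assumption
# (~Facebook's reported 2017 revenue), not an official regulatory number.

annual_turnover = 40.65e9  # assumed global annual revenue, USD
max_fine_rate = 0.04       # GDPR ceiling: 4% of turnover

max_fine = annual_turnover * max_fine_rate
print(f"Maximum possible fine: ${max_fine / 1e9:.2f}B")  # ~$1.63B
```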

Though fines aren’t the real point; if Facebook is forced to change its processes, i.e. how it harvests and mines people’s data, that could knock a major, major hole right through its profit center.

Hence the existential nature of the threat.

The GDPR came into force in May and multiple investigations are already underway. This summer the EU’s data protection supervisor, Giovanni Buttarelli, told the Washington Post to expect the first results by the end of the year.

Which means 2018 could result in some very well known tech giants being hit with major fines. And — more interestingly — being forced to change how they approach privacy.

One target for GDPR complainants is so-called ‘forced consent‘ — where consumers are told by platforms leveraging powerful network effects that they must accept giving up their privacy as the ‘take it or leave it’ price of accessing the service. Which doesn’t exactly smell like the ‘free choice’ EU law actually requires.

It’s not just Europe, either. Regulators across the globe are paying greater attention than ever to the use and abuse of people’s data. And also, therefore, to Facebook’s business — which profits, so very handsomely, by exploiting privacy to build profiles on literally billions of people in order to dart them with ads.

US lawmakers are now directly asking tech firms whether they should implement GDPR style legislation at home.

Unsurprisingly, tech giants are not at all keen — arguing, as they did at this week’s hearing, for the need to “balance” individual privacy rights against “freedom to innovate”.

So a joint lobbying front to try to water down any US privacy clampdown is in full effect. (Though when asked this week whether they would leave Europe or California as a result of tougher-than-they’d-like privacy laws, none of the tech giants said they would.)

The state of California passed its own robust privacy law, the California Consumer Privacy Act, this summer, which is due to come into force in 2020. And the tech industry is not a fan. So its engagement with federal lawmakers now is a clear attempt to secure a weaker federal framework to ride over any more stringent state laws.

Europe and its GDPR obviously can’t be rolled over like that, though. Even as tech giants like Facebook have certainly been seeing how much they can get away with — forcing an expensive and time-consuming legal fight.

While ‘innovation’ is one oft-trotted angle tech firms use to argue against consumer privacy protections, Facebook included, the company has another tactic too: Deploying the ‘S’ word — security — both to fend off increasingly tricky questions from lawmakers, as they finally get up to speed and start to grapple with what it’s actually doing; and — more broadly — to keep its people-mining, ad-targeting business steamrollering on by greasing the pipe that keeps the personal data flowing in.

In recent years multiple major data misuse scandals have undoubtedly raised consumer awareness about privacy, and put greater emphasis on the value of robustly securing personal data. Scandals that even seem to have begun to impact how some people use Facebook. So the risks for its business are clear.

Part of its strategic response, then, looks like an attempt to collapse the distinction between security and privacy — by using security concerns to shield privacy hostile practices from critical scrutiny, specifically by chain-linking its data-harvesting activities to some vaguely invoked “security purposes”, whether that’s security for all Facebook users against malicious non-users trying to hack them; or, wider still, for every engaged citizen who wants democracy to be protected from fake accounts spreading malicious propaganda.

So the game Facebook is here playing is to use security as a very broad brush to try to defang legislation that could radically shrink its access to people’s data.

Here, for example, is Zuckerberg responding to a question from an MEP in the EU parliament asking for answers on so-called ‘shadow profiles’ (aka the personal data the company collects on non-users) — emphasis mine:

It’s very important that we don’t have people who aren’t Facebook users that are coming to our service and trying to scrape the public data that’s available. And one of the ways that we do that is people use our service and even if they’re not signed in we need to understand how they’re using the service to prevent bad activity.

At this point in the meeting Zuckerberg also suggestively referenced MEPs’ concerns about election interference — to better play on a security fear that’s inexorably close to their hearts. (With the spectre of re-election looming next spring.) So he’s making good use of his psychology major.

“On the security side we think it’s important to keep it to protect people in our community,” he also said when pressed by MEPs to answer how a person who isn’t a Facebook user could delete its shadow profile of them.

He was also questioned about shadow profiles by the House Energy and Commerce Committee in April. And used the same security justification for harvesting data on people who aren’t Facebook users.

“Congressman, in general we collect data on people who have not signed up for Facebook for security purposes to prevent the kind of scraping you were just referring to [reverse searches based on public info like phone numbers],” he said. “In order to prevent people from scraping public information… we need to know when someone is repeatedly trying to access our services.”

He claimed not to know “off the top of my head” how many data points Facebook holds on non-users (nor even on users, which the congressman had also asked for, for comparative purposes).

These sorts of exchanges are very telling because for years Facebook has relied upon people not knowing or really understanding how its platform works to keep what are clearly ethically questionable practices from closer scrutiny.

But, as political attention has dialled up around privacy, and it’s become harder for the company to simply deny or fog what it’s actually doing, Facebook appears to be evolving its defence strategy — by defiantly arguing it simply must profile everyone, including non-users, for user security.

Never mind that this is the same company which, despite maintaining all those shadow profiles on its servers, famously failed to spot Kremlin election interference going on at massive scale in its own back yard — and thus failed to protect its users from malicious propaganda.


Nor was Facebook capable of preventing its platform from being repurposed as a conduit for accelerating ethnic hate in a country such as Myanmar — with some truly tragic consequences. It must, presumably, hold shadow profiles on non-users there too, yet it was seemingly unable (or unwilling) to use that intelligence to help protect actual lives…

So when Zuckerberg invokes overarching “security purposes” as a justification for violating people’s privacy en masse it pays to ask critical questions about what kind of security it’s actually purporting to be able to deliver. Beyond, y’know, continued security for its own business model as it comes under increasing attack.

What Facebook indisputably does do with ‘shadow contact information’, acquired about people via other means than the person themselves handing it over, is to use it to target people with ads. So it uses intelligence harvested without consent to make money.

Facebook confirmed as much this week, when Gizmodo asked it to respond to a study by some US academics that showed how a piece of personal data that had never been knowingly provided to Facebook by its owner could still be used to target an ad at that person.

Responding to the study, Facebook admitted it was “likely” the academic had been shown the ad “because someone else uploaded his contact information via contact importer”.

“People own their address books. We understand that in some cases this may mean that another person may not be able to control the contact information someone else uploads about them,” it told Gizmodo.

So essentially Facebook has finally admitted that consentless scraped contact information is a core part of its ad targeting apparatus.
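Contact-importer matching of this kind generally works by normalizing and hashing identifiers, so that uploaded address books can be checked against records a platform already holds. Here is a minimal, purely illustrative sketch of that general technique; all names and numbers are hypothetical, and this is not Facebook’s actual implementation:

```python
import hashlib

def normalize_phone(raw: str) -> str:
    """Strip formatting so '+1 (555) 010-0199' and '15550100199' match."""
    return "".join(ch for ch in raw if ch.isdigit())

def hash_contact(phone: str) -> str:
    """Hash the normalized number; matching happens on hashes."""
    return hashlib.sha256(normalize_phone(phone).encode()).hexdigest()

# A user uploads their address book (hypothetical data)...
uploaded = {hash_contact(p) for p in ["+1 (555) 010-0199", "555-010-0342"]}

# ...and the platform checks it against identifiers it already holds,
# whether or not the person behind each number ever provided it directly.
known = {hash_contact("15550100199"): "ad-targetable profile"}

matches = uploaded & set(known)
print(len(matches))  # 1 -- the formatted and unformatted numbers match
```

The key point is that the person whose number is matched never has to be party to the upload: someone else’s address book suffices.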

Safe to say, that’s not going to play at all well in Europe.

Basically Facebook is saying you own and control your personal data until it can acquire it from someone else — and then, er, nope!

Yet given the reach of its network, the chances of your data not sitting on its servers somewhere seems very, very slim. So Facebook is essentially invading the privacy of pretty much everyone in the world who has ever used a mobile phone. (Something like two-thirds of the global population then.)

In other contexts this would be called spying — or, well, ‘mass surveillance’.

It’s also how Facebook makes money.

And yet when called in front of lawmakers to answer questions about the ethics of spying on the majority of the people on the planet, the company seeks to justify this supermassive privacy intrusion by suggesting that gathering data about every phone user without their consent is necessary for some fuzzily-defined “security purposes” — even as its own record on security really isn’t looking so shiny these days.

Mark Zuckerberg prepares to testify before the House Energy and Commerce Committee on Capitol Hill, April 11, 2018, after it was reported that 87 million Facebook users had their personal information harvested by Cambridge Analytica. (Photo by Chip Somodevilla/Getty Images)

It’s as if Facebook is trying to lift a page out of national intelligence agency playbooks — when governments claim ‘mass surveillance’ of populations is necessary for security purposes like counterterrorism.

Except Facebook is a commercial company, not the NSA.

So it’s only fighting to keep being able to carpet-bomb the planet with ads.

Profiting from shadow profiles

Another example of Facebook weaponizing security to erode privacy was also confirmed via Gizmodo’s reportage. The same academics found that phone numbers users provide for the specific (security) purpose of enabling two-factor authentication — a technique intended to make it harder for a hacker to take over an account — are also used to target them with ads.

In a nutshell, Facebook is exploiting its users’ valid security fears about being hacked in order to make itself more money.

Any security expert worth their salt will have spent long years encouraging web users to turn on two factor authentication for as many of their accounts as possible in order to reduce the risk of being hacked. So Facebook exploiting that security vector to boost its profits is truly awful. Because it works against those valiant infosec efforts — so risks eroding users’ security as well as trampling all over their privacy.

It’s just a double whammy of awful, awful behavior.

And of course, there’s more.

A third example of how Facebook seeks to play on people’s security fears to enable deeper privacy intrusion comes by way of the recent rollout of its facial recognition technology in Europe.

In this region the company had previously been forced to pull the plug on facial recognition after being leaned on by privacy conscious regulators. But after having to redesign its consent flows to come up with its version of ‘GDPR compliance’ in time for May 25, Facebook used this opportunity to revisit a rollout of the technology on Europeans — by asking users there to consent to switching it on.

Now you might think that asking for consent sounds okay on the surface. But it pays to remember that Facebook is a master of dark pattern design.

Which means it’s expert at extracting outcomes from people by applying these manipulative dark arts. (Don’t forget, it has even directly experimented in manipulating users’ emotions.)

So can it be a free consent if ‘individual choice’ is set against a powerful technology platform that’s both in charge of the consent wording, button placement and button design, and which can also data-mine the behavior of its 2BN+ users to further inform and tweak (via A/B testing) the design of the aforementioned ‘consent flow’? (Or, to put it another way, is it still ‘yes’ if the tiny greyscale ‘no’ button fades away when your cursor approaches while the big ‘YES’ button pops and blinks suggestively?)

In the case of facial recognition, Facebook used a manipulative consent flow that included a couple of self-serving ‘examples’ — selling the ‘benefits’ of the technology to users before they landed on the screen where they could choose either yes switch it on, or no leave it off.

One of which explicitly played on people’s security fears — by suggesting that without the technology enabled users were at risk of being impersonated by strangers. Whereas, by agreeing to do what Facebook wanted you to do, Facebook said it would help “protect you from a stranger using your photo to impersonate you”…

That example shows the company is not above actively jerking on the chain of people’s security fears, as well as passively exploiting similar security worries when it jerkily repurposes 2FA digits for ad targeting.

There’s even more too; Facebook has been positioning itself to pull off what is arguably the greatest (in the ‘largest’ sense of the word) appropriation of security concerns yet to shield its behind-the-scenes trampling of user privacy — when, from next year, it will begin injecting ads into the WhatsApp messaging platform.

These will be targeted ads, because Facebook has already changed the WhatsApp T&Cs to link Facebook and WhatsApp accounts — via phone number matching and other technical means that enable it to connect distinct accounts across two otherwise entirely separate social services.

Thing is, WhatsApp got fat on its founders’ promise of 100% ad-free messaging. The founders were also privacy and security champions, pushing to roll out e2e encryption right across the platform — even after selling their app to the adtech giant in 2014.

WhatsApp’s robust e2e encryption means Facebook literally cannot read the messages users are sending each other. But that does not mean Facebook is respecting WhatsApp users’ privacy.

On the contrary: the company has given itself broader rights to user data by changing the WhatsApp T&Cs and by matching accounts.

So, really, it’s all just one big Facebook profile now — whichever of its products you do (or don’t) use.

This means that even without literally reading your WhatsApps, Facebook can still know plenty about a WhatsApp user, thanks to any other profiles they have ever had across Facebook’s family of apps and any shadow profiles it maintains in parallel. WhatsApp users will soon become 1.5BN+ bullseyes for yet more creepily intrusive Facebook ads to seek out.

No private spaces, then, in Facebook’s empire as the company capitalizes on people’s fears to shift the debate away from personal privacy and onto the self-serving notion of ‘secured by Facebook spaces’ — in order that it can keep sucking up people’s personal data.

This is a very dangerous strategy, though.

Because if Facebook can’t even deliver security for its users, thereby undermining those “security purposes” it keeps banging on about, it might find it difficult to sell the world on going naked just so Facebook Inc can keep turning a profit.

What’s the best security practice of all? That’s super simple: Not holding data in the first place.

29 Sep 2018

Two weeks with a $16,000 Hasselblad kit

For hobbyist photographers like myself, Hasselblad has always been the untouchable luxury brand reserved for high-end professionals.

To fill the gap between casual and intentional photography, they released the X1D — a compact, mirrorless medium-format camera. Last summer, when Stefan Etienne reviewed the newly released camera, I asked to take a picture with it.

After importing the raw file into Lightroom and flipping through a dozen presets, I joked that I would eat ramen packets for the next year so I could buy this camera. It was that impressive.

XCD 3.5/30mm lens

Last month Hasselblad sent us the XCD 4/21mm (their latest ultra wide-angle lens) for a two-week review, along with the X1D body and XCD 3,2/90mm portrait lens for comparison. I wanted to see what I could do with the kit and had planned the following:

  • Swipe right on everyone with an unflattering Tinder profile picture and offer to retake it for them
  • Travel somewhere with spectacular landscapes

My schedule didn’t offer much time for either, so a weekend trip to the cabin would have to suffice.

[gallery type="slideshow" link="none" columns="1" size="full" ids="1722181,1722182,1722183,1722184,1722185,1722186,1722187,1722188,1722201"]

As an everyday camera

The weekend upstate was rather quiet and uneventful, but it served as the perfect setting to test out the camera kit, because the X1D is slow AF.

It takes approximately eight seconds to turn on, with an additional 2-3 seconds of processing time after each shutter click — top that off with slow autofocus, a slow shutter release and short battery life (I went through a battery within a day, at approximately 90 shots fired). Rather than reiterating Stefan’s review, I would recommend reading it here for full specifications.

Coming from a Canon 5D Mark IV, I’m used to immediacy and a decent hit rate. The first day with the Hasselblad was filled with constant frustration from missed moments, missed opportunities. It felt impractical as an everyday camera until I shifted toward a more deliberate approach — reverting back to high school SLR days when a roll of film held a limited 24 exposures.

When I took pause, I began to appreciate the camera’s details: a quiet shutter, a compact but sturdy body and an intuitive interface, including a touchscreen LCD display/viewfinder.

[gallery type="slideshow" link="none" columns="1" size="full" ids="1722796,1722784,1722775"]

Nothing looks or feels cheap about the Swedish-designed, aluminum construction of both the body and lenses. It’s heavy for a mirrorless camera, but it feels damn good to hold.

XCD 4/21mm lens

[gallery type="slideshow" link="none" columns="1" size="full" ids="1722190,1722191,1722489,1722490"]

Dramatic landscapes and cityscapes without an overly exaggerated perspective — this is where the XCD 4/21mm outperforms other super wide-angle lenses.

With a 105° angle of view and a 17mm field-of-view equivalent on a full-frame DSLR, I was expecting a lot more distortion and vignetting, but the image automatically corrected itself and flattened out when imported into Lightroom. The latest deployment of Creative Cloud has the Hasselblad camera and lens profiles integrated into Lightroom, so there’s no need to download and import profiles.
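Those quoted figures are consistent with the sensor geometry. As a quick sanity check, using standard diagonal angle-of-view math (a back-of-the-envelope sketch, not Hasselblad’s published derivation):

```python
import math

def diagonal(w: float, h: float) -> float:
    """Sensor diagonal in mm."""
    return math.hypot(w, h)

medium_format = diagonal(44, 33)  # X1D sensor: exactly 55.0mm
full_frame = diagonal(36, 24)     # 35mm full frame: ~43.3mm

crop_factor = full_frame / medium_format  # ~0.79
equivalent = 21 * crop_factor             # ~16.5mm, i.e. the quoted ~17mm

# Diagonal angle of view for a 21mm lens on the 44 x 33mm sensor
aov = 2 * math.degrees(math.atan(medium_format / (2 * 21)))  # ~105 degrees

print(f"{equivalent:.1f}mm equivalent, {aov:.0f} degree angle of view")
```

Both the ~17mm equivalence and the 105° spec fall straight out of the 44 x 33mm sensor’s 55mm diagonal.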

Oily NYC real estate brokers should really consider using this lens to shoot their dinky 250 sq. ft. studio apartments so they feel grand without looking comically fish-eyed.

XCD 3,2/90mm lens

The gallery below was shot using only the mirror’s vanity lights as practicals. It was also shot underexposed to see how much detail I could pull in post. Here are the downsized, unedited versions, so you don’t have to wait for each 110MB file to load.

[gallery type="slideshow" link="none" columns="1" size="full" ids="1722193,1722194,1722195,1722196"]

I’d like to think that if I had time and was feeling philanthropic, I could fix a lot of love lives on Tinder with this lens.

Where it shines

Normally, images posted in reviews are unedited, but I believe the true test of raw images lies in post-production. This is where the X1D’s slow processing time and quick battery drainage pay off. With the camera’s giant 50MP 44 x 33mm CMOS sensor, each raw file was approximately 110MB (compared to my Mark IV’s 20-30MB) — that’s a substantial amount of information packed into 8272 x 6200 pixels.
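Those numbers hang together; a rough check (ballpark arithmetic only, since actual raw file sizes vary with scene content and compression):

```python
width, height = 8272, 6200
pixels = width * height
print(f"{pixels / 1e6:.1f} MP")  # ~51.3 MP, marketed as 50 MP

# A ~110MB raw file works out to roughly 2 bytes per pixel, which is
# consistent with a ~16-bit-per-pixel raw format.
file_size_mb = 110
bytes_per_pixel = file_size_mb * 1024 * 1024 / pixels
print(f"~{bytes_per_pixel:.1f} bytes per pixel")
```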

Resized and cropped to 2000 x 1500 pixels

While other camera manufacturers tend to favor certain colors and skin tones, Dan Wang, a Hasselblad rep, told me, “We believe in seeing a very natural or even palette with very little influence. We’re not here to gatekeep what color should be. We’re here to give you as much data as possible, providing as much raw detail, raw color information that allows you to interpret it to your extent.”

As someone who enjoys countless hours tweaking colors, shifting pixels and making things pretty, I’m appreciative of this. It allows for less fixing, more creative freedom.

Who is this camera for?

My friend Peter, a fashion photographer (he’s done editorial features for Harper’s Bazaar, Cosmopolitan and the likes), is the only person I know who shoots on Hasselblad, so it felt appropriate to ask his opinion. “It’s for pretentious rich assholes with money to burn,” he snarked. I disagree. The X1D is a solid step for Hasselblad to get off heavy-duty tripods and out of the studio.

At this price point, though, one might expect the camera to do everything; instead, it’s aimed at a narrow demographic: photographers willing to trade speed for quality and compactness.

With smartphone companies like Apple and Samsung stepping up their camera game over the past few years, the photography world feels inundated with inconsequential, throw-away images (self-indulgent selfies, “look what I had for lunch,” OOTD…).

My two weeks with the Hasselblad were a kind reminder of photography as a methodical art form, rather than a spray-and-pray hobby.

The reviewed kit runs $15,940, pre-tax:

29 Sep 2018

What each cloud company could bring to the Pentagon’s $10B JEDI cloud contract

The Pentagon is going to make one cloud vendor exceedingly happy when it chooses the winner of the $10 billion, ten-year enterprise cloud project dubbed the Joint Enterprise Defense Infrastructure (or JEDI for short). The contract is designed to establish the cloud technology strategy for the military over the next 10 years as it begins to take advantage of current trends like Internet of Things, artificial intelligence and big data.

Ten billion dollars spread out over ten years may not entirely alter a market that’s expected to reach $100 billion a year very soon, but it is substantial enough to give a lesser vendor much greater visibility, and possibly a deeper entrée into other government and private-sector business. The cloud companies certainly recognize that.


That could explain why they are tripping over themselves to change the contract dynamics, insisting, maybe rightly, that a multi-vendor approach would make more sense.

One look at the Request for Proposal (RFP) itself, which has dozens of documents outlining various criteria from security to training to the specification of the single award itself, shows the sheer complexity of this proposal. At the heart of it is a package of classified and unclassified infrastructure, platform and support services with other components around portability. Each of the main cloud vendors we’ll explore here offers these services. They are not unusual in themselves, but they do each bring a different set of skills and experiences to bear on a project like this.

It’s worth noting that the DOD isn’t just interested in technical chops; it is also looking closely at pricing and has explicitly asked for specific discounts to be applied to each component. The RFP process closes on October 12th, and the winner is expected to be chosen next April.

Amazon

What can you say about Amazon? They are by far the dominant cloud infrastructure vendor, and they have the advantage of having scored a large government contract in the past, building the CIA’s private cloud in 2013 and earning $600 million for their troubles. They offer GovCloud, the product that came out of that project, designed to host sensitive data.

Jeff Bezos, Chairman and founder of Amazon.com. Photo: Drew Angerer/Getty Images

Many of the other vendors worry that gives Amazon a leg up on this deal. While five years is a long time, especially in technology terms, if anything Amazon has tightened its control of the market. Heck, most of the other players were just beginning to establish their cloud businesses in 2013. Amazon, which launched its cloud in 2006, has a maturity the others lack, and it is still innovating, introducing dozens of new features every year. That makes it difficult to compete with, but even the biggest player can be taken down with the right game plan.

Microsoft

If anyone can take Amazon on, it’s Microsoft. While they were somewhat late to the cloud, they have more than made up for it over the last several years. They are growing fast, yet are still far behind Amazon in terms of pure market share. Still, they have a lot to offer the Pentagon, including a combination of Azure, their cloud platform, and Office 365, the popular business suite that includes Word, PowerPoint, Excel and Outlook. What’s more, they have a fat contract with the DOD for $900 million, signed in 2016, for Windows and related hardware.

Microsoft CEO, Satya Nadella Photo: David Paul Morris/Bloomberg via Getty Images

Azure Stack is particularly well suited to a military scenario: it’s a private cloud you can stand up to run a mini version of the Azure public cloud, fully compatible with the public cloud’s APIs and tools. The company also has Azure Government Cloud, which is certified for use by many branches of the U.S. government, including at DOD Level 5. Microsoft brings years of experience working inside large enterprises and government clients, meaning it knows how to manage a contract of this size.

Google

When we talk about the cloud, we tend to think of the Big Three. The third member of that group is Google. They have been working hard to establish their enterprise cloud business since 2015 when they brought in Diane Greene to reorganize the cloud unit and give them some enterprise cred. They still have a relatively small share of the market, but they are taking the long view, knowing that there is plenty of market left to conquer.

Head of Google Cloud, Diane Greene Photo: TechCrunch

They have taken the approach of open sourcing a lot of the tools they use in-house, then offering cloud versions of those same services, arguing that nobody knows how to manage large-scale operations better than they do. They have a point, and that could play well in a bid for this contract, but they also stepped away from an artificial intelligence contract with the DOD called Project Maven when a group of their employees objected. It’s not clear whether that would be held against them in the bidding process here.

IBM

IBM has been using its checkbook to build a broad platform of cloud services since 2013, when it bought SoftLayer to gain infrastructure services, adding software and development tools over the years and emphasizing AI, big data, security, blockchain and other services. All the while, it has been trying to take full advantage of its artificial intelligence engine, Watson.

IBM Chairman, President and CEO Ginni Rometty Photo: Ethan Miller/Getty Images

As one of the primary technology brands of the 20th century, the company has vast experience working with contracts of this scope and with large enterprise clients and governments. It’s not clear if this translates to its more recently developed cloud services, or if it has the cloud maturity of the others, especially Microsoft and Amazon. In that light, it would have its work cut out for it to win a contract like this.

Oracle

Oracle has been complaining since last spring to anyone who will listen, reportedly including the president, that the JEDI RFP is unfairly written to favor Amazon, a charge the DOD firmly denies. The company has even filed a formal protest against the process itself.

That could be a smoke screen because the company was late to the cloud, took years to take it seriously as a concept, and barely registers today in terms of market share. What it does bring to the table is broad enterprise experience over decades and one of the most popular enterprise databases in the last 40 years.

Larry Ellison, chairman of Oracle. Photo: David Paul Morris/Bloomberg via Getty Images

It recently began offering a self-repairing database in the cloud that could prove attractive to the DOD, but whether its other offerings are enough to help it win this contract remains to be seen.

28 Sep 2018

What Instagram users need to know about Facebook’s security breach

Even if you never log into Facebook itself these days, the other apps and services you use might be impacted by Facebook’s latest big, bad news.

In a follow-up call on Friday’s revelation that Facebook has suffered a security breach affecting at least 50 million accounts, the company clarified that Instagram users were not out of the woods, nor were any other third-party services that utilized Facebook Login. Facebook Login is the tool that allows users to sign in with a Facebook account instead of traditional login credentials, and many users choose it as a convenient way to sign into a variety of apps and services.

Third-party apps and sites affected too

Due to the nature of the hack, Facebook cannot rule out that attackers also accessed Instagram accounts linked to affected Facebook accounts through Facebook Login. Still, it’s worth remembering that while Facebook can’t rule it out, the company has no evidence (yet) of this kind of activity.

“So the vulnerability was on Facebook, but these access tokens enable someone to use [a connected account] as if they were the account holder themselves — this does mean they could have access other third party apps that were using Facebook login,” Facebook Vice President of Product Management Guy Rosen explained on the call.

“Now that we have reset all of those access tokens as part of protecting the security of people’s accounts, developers who use Facebook login will be able to detect that those access tokens has been reset, identify those users and as a user, you will simply have to log in again into those third party apps.”

Rosen reiterated that there is plenty Facebook does not know about the hack, including the extent to which attackers manipulated the three security bugs in question to obtain access to external accounts through Facebook Login.

“The vulnerability was on Facebook itself and we’ve yet to determine, given the investigation is really early, [what was] the exact nature of misuse and whether there was any access to Instagram accounts, for example,” Rosen said.

Anyone with a Facebook account affected by the breach — you should have been automatically logged out and will receive a notification — will need to unlink and relink their Instagram account to Facebook in order to continue cross-posting content to Facebook.

How to relink your Facebook account and do a security check

To relink your Instagram account to Facebook, if you choose to, open Instagram Settings > Linked Accounts and select the checkbox next to Facebook. Click Unlink and confirm your selection. If you’d like to reconnect Instagram with Facebook, you’ll need to select Facebook in the Linked Accounts menu and log in with your credentials like normal.

If you know your Facebook account was affected by the breach, it’s wise to check for suspicious activity on your account. You can do this on Facebook through the Security and Login menu.

There, you’ll want to browse the activity listed to make sure you don’t see anything that doesn’t look like you — logins from other countries, for example. If you’re concerned or just want to play it safe, you can always find the link to “Log Out Of All Sessions” by scrolling toward the bottom of the page.

While we know a little bit more now about Facebook’s biggest security breach to date, there’s still a lot that we don’t. Expect plenty of additional information in the coming days and weeks as Facebook surveys the damage and passes that information along to its users. We’ll do the same.

28 Sep 2018

Facebook is blocking users from posting some stories about its security breach

Some users are reporting that they are unable to post today’s big story about a security breach affecting 50 million Facebook users. The issue appears to affect only particular stories from certain outlets: at this time, one story from The Guardian and one from the Associated Press, both reputable press outlets.

When going to share the story to their news feeds, some users, including members of the staff here at TechCrunch who were able to replicate the bug, were met with an error message that prevented them from sharing the story.

According to the message, Facebook is flagging the stories as spam due to how widely they are being shared or as the message puts it, the system’s observation that “a lot of people are posting the same content.”

To be clear, this isn’t one Facebook content moderator sitting behind a screen rejecting the link somewhere or the company conspiring against users spreading damning news. The situation is another example of Facebook’s automated content flagging tools marking legitimate content as illegitimate, in this case calling it spam. Still, it’s strange and difficult to understand why such a bug wouldn’t affect many other stories that regularly go viral on the social platform.
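Facebook hasn’t published how its spam classifier works, but the behavior described above, a link flagged purely because many people share it at once, is roughly what a naive share-count threshold would produce. Here is a toy sketch of that failure mode, with all names and numbers hypothetical:

```python
def flag_as_spam(share_counts, url, threshold=100_000):
    """Toy heuristic: flag a link once 'a lot of people are posting
    the same content', with no regard for what that content is."""
    return share_counts.get(url, 0) > threshold

# A viral, legitimate news story trips the same wire that junk would.
shares = {
    "theguardian.com/facebook-breach": 2_500_000,  # breaking news
    "example.com/one-weird-trick": 150_000,        # actual spam
    "example.com/cat-photo": 12,
}
print(flag_as_spam(shares, "theguardian.com/facebook-breach"))  # True
print(flag_as_spam(shares, "example.com/cat-photo"))            # False
```

The point of the sketch is the failure mode: any signal based only on posting volume cannot distinguish breaking news from coordinated spam, which is consistent with the error users saw.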

This instance is by no means a first for Facebook. The platform’s automated tools — which operate at unprecedented scale for a social network — are well known for at times censoring legitimate posts and flagging benign content while failing to detect harassment and hate speech. We’ve reached out to Facebook for details about how this kind of thing happens but the company appears to have its hands full with the bigger news of the day.

While the incident is nothing particularly new, it’s an odd quirk — and in this instance quite a bad look given that the bad news affects Facebook itself.

28 Sep 2018

Betterment keeps growing as fintech competitors rise

Betterment, which Barron’s recently declared the largest independent online financial adviser, is betting that the future of online investing includes a blend of robot and human advisers. And the plan is working, according to chief executive Jon Stein.

However, incumbents like Vanguard have leveraged existing strengths to move into the market, and other startups like Robinhood have carved out swathes of the fast-growing space.

In response, Betterment has launched a series of new high-touch features on the platform, including “advice packages” that its users can buy to receive one-time advice from professional human experts.

In the interview below, Stein shares new details on the company’s growth, its plans to fend off the rise of commission-free trading, how it is preparing for an eventual bear market and the many other challenges in the space, and its path to eventually going public.

Gregg Schoenberg: Things have changed a lot for Betterment and the entire sector since we first sat down in early 2017. What are Betterment’s assets under management these days?

Jon Stein: We now have $15.5 billion under management and we’ve crossed 400,000 customers.

GS: Congratulations. Is each billion getting easier to accumulate or harder?

JS: We’ve seen acceleration every year we’ve been in the business. Back in the day, I like to say that it took us a year to get to our first $10 million under management. And then six months to get to $20 million, and three months to get to $30 million. Today, $10 million is a bad day. So the scale is far greater today because assets beget assets.

GS: That’s impressive, but when you look at the competitive environment, there are clearly some other online peers that have managed to build traction, and perhaps the incumbents watch you more closely now. What’s your core reason for optimism that when the dust settles, Betterment emerges better off?

JS: Part of it is our customer obsession and commitment to innovate around what the customer wants in financial services, and part of it is that it’s still very early in the journey for us. Just as Jeff Bezos always talks about his Day One, that’s how I feel about our space. We’ve got a long list of projects that we are working on and there’s so much more for us to do.

It’s about trust, and it’s about who you want to manage your money. Is it somebody whose sole focus is to help you make the most of it? Or somebody who is trying to gamify it or trying to make money off you in ways they’re not telling you about?

GS: But at the same time, you’re aware of Acorns and Robinhood and some others that are also building traction. Robinhood, for example, talks about becoming a full-service financial institution.

JS: I think some of these firms have different philosophies than what we do. I started this company because people were coming to ask me, “What should I do with my money?” It’s a really hard question, but we sought to excel on the three pillars that we think are most important in answering it: performance, convenience and peace of mind. I think that none of the companies you’ve mentioned do a better job than we do.

GS: Okay, but you have to acknowledge that dangling free commissions before a younger investor starting out is enticing, right? I mean, free works. Look at how Google and Facebook have trained a generation of people to expect free.

JS: Free isn’t new, right? There have been free offers for millions of years. I’ll agree with you that it’s powerful, but people are wise to the fact that companies are making money. And if the product that you’re being sold is free, well, you know, you’re the product.

GS: Right.

JS: And probably in ways that are less well aligned with your interests as a customer. We’ve always been transparent about our fee. It’s always been up front. That’s one of the ways we establish peace of mind. Because the only way we make money is that 25 basis point fee that we charge. That’s it. These other companies are selling you data, they’re trading against you—

GS: —You’re referring to selling the order flow?

JS: I’m saying order flow. I’m saying they’re actually selling trade data to other firms who can trade against you. They’re not there principally to make the most of your money. Betterment is. Betterment is a mission-driven company that’s going to make the most of our customer’s money, which is an increasingly unique position.

GS: So you’re really speaking to the issue of trust.

JS: It’s about trust, and it’s about who you want to manage your money. Is it somebody whose sole focus is to help you make the most of it? Or somebody who is trying to gamify it or trying to make money off you in ways they’re not telling you about?

GS: Let’s turn to the incumbents. Recently, your new board member, Donna Wells, said this: “Betterment is directly causing people to ask better informed and pretty uncomfortable questions of the incumbents.” Doesn’t that serve the ends of the Schwabs and Vanguards who have massive marketing and tech budgets? Haven’t you just motivated the behemoths?

JS: [Charles] Schwab is still trying to put everyone into cash and paying nothing on that cash. They’re trying to put all of their customers into their own funds and they make a lot of money off those funds — even though those funds probably aren’t what’s best for the customer. So they’re not acting in their customers’ best interests with the products they’re selling. Vanguard is a great company. We’ve learned a lot from them, but the only funds they’ll put you in on their platform are Vanguard funds. They refuse to look at other funds.

GS: But you use Vanguard funds.

JS: Yes, we use a lot of Vanguard funds, but they’re not right for everything. Vanguard is a mutual fund sales company. That’s all they’re doing … selling you mutual funds. So these companies are not thinking about the customer. And none of these incumbents can do that because they have so much to lose from the way that they are doing business today. Also, it’s a big market. There are lots of companies out there. You named a couple of big ones. But if we think more broadly about financial services competition, there are other big firms out there. There’s Raymond James and Edward Jones and there’s Financial Engines, J.P. Morgan Chase, Goldman Sachs, Bank of America, etc.

GS: Right, and we’ll get back to J.P. and Goldman in a moment, but the competition–

JS: –All of these firms see what we’re doing. And I think our vision probably isn’t as unique as it was eight years ago because we’ve moved the industry forward. We’ve set a standard of what customers should expect. And lots of people are trying to run at that now. But we keep moving the standard down the field. And I think it’s going to get harder and harder for these firms to catch up. Will one or two get there? I wouldn’t be surprised. There are a lot of smart people running these firms. Will all of them get there? No. But it doesn’t worry me that we’ll have competition. There’s always been competition in this space.

GS: Fair enough, but when you talk specifically about Vanguard, whose robo has crossed $100 billion in assets under management, and Schwab’s, which has over $30 billion, what you’re saying is that you’re not fazed because your near-$16 billion is unconflicted.

JS: That’s a big piece. I could also expand on why we’re better than them from a customer perspective. Our mobile and web apps are better than what they produce. We also have higher-performance services; the tax management that we do is better than what anybody else offers. The kinds of reporting and tools that you get are better than anybody else’s. The behavioral guard rails that we have are better, too. So we give you more performance, more convenience, and I believe better peace of mind.

GS: I think that JP and Goldman are especially interesting to touch on. JP’s You Invest, as you know, is dangling free trading out there and Goldman has embraced retail customers through Marcus, buying Clarity Money, etc.

JS: I think it’s great that more and more folks are going after the zero commission model. Because I’ve always thought that commissions should be zero. And that’s going to compete things away, to where there’s no longer a real competitive advantage in having zero commissions. Right? It should just be the way it is. But ultimately, trading stocks is not a productive activity for most Americans.

GS: Some people like to be self-directed.

JS: There’s a segment that wants to do that because it’s like a hobby. But it’s not actually the way to make the most of your money. I compare the financial system that we’ve built to the healthcare system. Imagine if you had all the drugs on the shelf, and anyone could take as much as they want of anything. It’s all cheap, but there are no doctors. You would never design a healthcare system that way because everyone would basically have to become an expert in managing their own situation. And that’s really expensive for people who are engaged in other careers and have busy lives.

Something like 40% of the 2,000 people that we surveyed thought that the market hadn’t gone up since 2008.

GS: Despite Betterment’s customer-centric attributes, it’s not immune to the competitive realities out there. In fact, Betterment, by virtue of the teaser rates that it offers, is playing the game, too.

JS: Yes, we do have a deal where people get three months free if you refer a customer. That’s always been the No. 1 way that we’ve attracted people. And that’s kept our cost of customer acquisition low, and kept us growing faster and faster, while spending less money each year. And so I think we’ve got a model that continues to generate return. By the way, with all the competition that you’re talking about, we’re still growing more customers at a lower cost than we have in any year ever.

GS: Is there any color you can give me on your customer acquisition costs?

JS: We don’t reveal our customer acquisition costs publicly, but they are a fraction of the numbers that I see quoted publicly. They’re also a fraction of what I see in the financials of the big competitors out there.

GS: If you put Betterment’s name on a stadium, I’m going to call you out on that, Jon. I want to turn to the topic of individual stock trading and specifically, to this recent commercial you’re airing featuring the actress, Maggie Siff.

JS: Yes, they’re filming “Billions” near me.

GS: The commercial, as you know, features your tagline, Outsmart Average.

JS: Yes.

GS: As you also know, her character on “Billions,” Wendy Rhoades, isn’t helping Bobby Axelrod pick a diversified portfolio of low-cost ETFs. So while I understand your view that most people shouldn’t be in individual stocks, aren’t you using the Wendy Rhoades character to send your target market another message?

JS: Well, Maggie is a strong spokesperson, because across a number of different characters, she’s played someone who’s wise, a coach and a leader. This campaign came out of a place of shifting the conversation away from Betterment versus the old way of investing, which conjures up images of boiler-room brokers and all those bad practices that traditional finance is peddling. But the problem with talking about all of the negative things in the industry is that people often don’t want to hear that.

GS: We’ve heard it ad nauseam.

JS: Yes, most people don’t want to hear that they’ve been doing the wrong thing with their money for a long time. But what we discovered is that we can shift away from talking about the industry, and shift the focus on our customers. There are people who are okay with the way things are, and there are people who are constantly striving for more. For example, I’ve got the right credit card for going to restaurants because it gives me 4 percent back. I’ve also got the right one for buying other stuff.

GS: You get 4 percent cash back at restaurants?

JS: Yes, the Uber card gives you 4 percent back on certain restaurants. So I’m an optimizer. When I go on a vacation, I’ll look at a number of sites and figure out exactly what’s the best place to go, and then I’ll book an Airbnb in the best neighborhood. It’s the same when it comes to my money. I want it managed really well and I demand more than whatever the status quo provides. That’s who we serve and that’s what we’re saying in the commercial. It’s about people who demand better than the status quo.

GS: But no individual stocks?

JS: Individual stocks are fine. There’s nothing wrong with managing your money that way. It’s just not the way most people want to manage their money.

GS: Let’s talk about the bear market, which I’m absolutely certain will happen in our lifetimes. As you know, many of Betterment’s customers have never lived through a bear market as an investor. The standard thing to say is that when it comes, the right thing to do is to stay the course, think long-term, etc.

JS: Yes.

I’ve always said, we’re building an institution and building to go public. It’s something that we want to ultimately do. My view is we’ll probably be at least twice as big as we are today before we go out.

GS: What happens when the market headlines get really ugly and people start seeing a sea of red in their Betterment account?

JS: A bear market is bad for everyone in this industry, not just Betterment. And we’ve been preparing for that in a number of ways. One, we have messaging that we’ve tested and have shown can help make those customers stay the course. We’ll also do things like suggest that instead of just pulling all your money out, maybe you want to think about changing your allocation. Take 2008, for example. Betterment wasn’t yet in business, but we saw a lot of people blow themselves up by getting out of the market.

GS: It was very tempting to run for cover.

JS: Actually, we ran a survey of customer attitudes since then and it was shocking to me that something like 40 percent of the 2,000 people that we surveyed thought that the market hadn’t gone up since 2008.

GS: Wow.

JS: Yes, it’s sad. And I think back to our mission, which is to help people make the most of their money and keep them invested. So it’s important to us that we do that throughout the cycle. We’re also preparing for it by thinking about our strategic options.

GS: Can you elaborate?

JS: Just this month, we launched a smart saver account, which gets you a higher yield on your cash. It’s currently paying 1.83 percent net of all of our fees, and it’s actually higher than that if you consider that it’s a tax advantaged account.

GS: So that’s not FDIC-insured then.

JS: It’s not FDIC-insured, but it’s SIPC-insured. Another area that we think is an interesting countercyclical play is our B2B business. Throughout the market cycle, people are contributing to their retirement, which makes our 401k business an attractive place for us to be. Similarly, our Betterment for Advisors business is a good place for us to be investing.

GS: How do you feel about adding life insurance and college savings products?

JS: Actually, many people already are saving for college with Betterment through things like IRAs, which can be used for college. As far as life insurance is concerned, we’re talking to a lot of financial partners about it because we think it’s interesting.

GS: I agree. So last topic: When does Betterment go public?

JS: I’ve always said, we’re building an institution and building to go public. It’s something that we want to ultimately do. My view is we’ll probably be at least twice as big as we are today before we go out. Is that going to take two years or five years? I can’t tell you exactly when it’s going to be because It will depend not just on our scale, but also on the capital markets, and a lot of other factors. But we continue to drive towards it, and I believe we’re in a great position. We’re audited, we have an amazing finance team, we’ve got great risk management, security processes … all of those things that companies that are preparing to IPO ought to be doing.

GS: Well, you appear to be big enough, and you have a great customer base and everybody knows who Betterment is. But as you said, timing matters.

JS: Yes, and there probably aren’t enough public companies out there today. But there’s innovation happening around how companies go public, which is needed. I’m also really encouraged by what some of our peers are doing out in the market, and I want us to continue to innovate in financial services, even around our IPO.

GS: On that note, Jon, I wish you and the team great luck.

JS: Thanks very much, Gregg.

This interview has been edited for content, length and clarity.

28 Sep 2018

5 takeaways on the state of AI from Disrupt SF

The promise of artificial intelligence is immense, but the roadmap to achieving those goals still remains unclear. Onstage at TechCrunch Disrupt SF, some of AI’s leading minds shared their thoughts on current competition in the market, how to ensure algorithms don’t perpetuate racism and the future of human-machine interaction.

Here are five takeaways on the state of AI from Disrupt SF 2018:

1. U.S. companies will face many obstacles if they look to China for AI expansion

Sinnovation CEO Kai-Fu Lee (Photo: TechCrunch/Devin Coldewey)

The meteoric rise in China’s focus on AI has been well-documented and has become impossible to ignore these days. With mega companies like Alibaba and Tencent pouring hundreds of millions of dollars into home-grown businesses, American companies are finding less and less room to navigate and expand in China. AI investor and Sinnovation CEO Kai-Fu Lee described China as living in a “parallel universe” to the U.S. when it comes to AI development.

“We should think of it as electricity,” explained Lee, who led Google’s entrance into China. “Thomas Edison and the AI deep learning inventors – who were American – they invented this stuff and then they generously shared it. Now, China, as the largest marketplace with the largest amount of data, is really using AI to find every way to add value to traditional businesses, to internet, to all kinds of spaces.”

“The Chinese entrepreneurial ecosystem is huge so today the most valuable AI companies in computer vision, speech recognition, drones are all Chinese companies.”

2. Bias in AI is a new face on an old problem

UC Berkeley Professor Ken Goldberg, Google AI Research Scientist Timnit Gebru, UCOT Founder and CEO Chris Ategeka and moderator Devin Coldewey onstage at TechCrunch Disrupt SF 2018. (Photo: Kimberly White/Getty Images for TechCrunch)

AI promises to increase human productivity and efficiency by taking the grunt work out of many processes. But the data used to train many AI systems often falls victim to the same biases of humans and, if unchecked, can further marginalize communities caught up in systemic issues like income disparity and racism.

“People in lower socio-economic statuses are under more surveillance and go through algorithms more,” said Google AI’s Timnit Gebru. “So if they apply for a job that’s lower status they are likely to go through automated tools. We’re right now in a stage where these algorithms are being used in different places and we’re not even checking if they’re breaking existing laws like the Equal Opportunity Act.”

A potential solution to prevent the spread of toxic algorithms was outlined by UC Berkeley’s Ken Goldberg, who cited the concept of ensemble theory, in which multiple algorithms with various classifiers work together to produce a single result.
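The ensemble idea is straightforward to sketch: run several independently built classifiers and keep the answer most of them agree on, so no single model’s bias decides the outcome alone. Below is a minimal, hypothetical majority-vote combiner in Python, purely illustrative and not code from any system discussed onstage:

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Return the label that the most classifiers predict for input x."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Three toy classifiers with different decision thresholds.
def make_classifier(threshold):
    return lambda x: "approve" if x > threshold else "deny"

ensemble = [make_classifier(0.3), make_classifier(0.5), make_classifier(0.7)]

print(majority_vote(ensemble, 0.6))  # "approve": two of the three agree
print(majority_vote(ensemble, 0.2))  # "deny": all three agree
```

In a real system the members would be trained on different data or features, which is what gives the ensemble a chance to outvote any one model’s skewed decision boundary.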

We’re right now in a stage where these algorithms are being used in different places and we’re not even checking if they’re breaking existing laws.

But how do we know if the solution to inadequate tech is more tech? Goldberg says this is where having individuals from multiple backgrounds, both in and outside the world of AI, is vital to developing just algorithms. “It’s very relevant to think about both machine intelligence and human intelligence,” explained Goldberg. “Having people with different viewpoints is extremely valuable and I think that’s starting to be recognized by people in business… it’s not because of PR, it’s actually because it will give you better decisions if you get people with different cognitive, diverse viewpoints.”

3. The future of autonomous travel will rely on humans and machines working together

Uber CEO Dara Khosrowshahi (Photo: TechCrunch/Devin Coldewey)

Transportation companies often paint a flowery picture of the near future where mobility will become so automated that human intervention will be detrimental to the process.

That’s not the case, according to Uber CEO Dara Khosrowshahi. In an era that’s racing to put humans on the sidelines, Khosrowshahi says humans and machines working hand-in-hand is the real thing.

“People and computers actually work better than each of them work on a stand-alone basis and we are having the capability of bringing in autonomous technology, third-party technology, Lime, our own product all together to create a hybrid,” said Khosrowshahi.

Khosrowshahi ultimately envisions the future of Uber being made up of engineers monitoring routes that present the least amount of danger for riders and selecting optimal autonomous routes for passengers. The combination of these two systems will be vital in the maturation of autonomous travel, while also keeping passengers safe in the process.

4. There’s no agreed definition of what makes an algorithm “fair”

SAN FRANCISCO, CA – SEPTEMBER 07: Human Rights Data Analysis Group Lead Statistician Kristian Lum speaks onstage during Day 3 of TechCrunch Disrupt SF 2018 at Moscone Center on September 7, 2018 in San Francisco, California. (Photo by Kimberly White/Getty Images for TechCrunch)

Last July ProPublica released a report highlighting how machine learning can falsely develop its own biases. The investigation examined an AI system used in Fort Lauderdale, Fla., that falsely flagged black defendants as future criminals at a rate twice that of white defendants. These landmark findings set off a wave of conversation on the ingredients needed to build a fair algorithms.

One year later AI experts still don’t have the recipe fully developed, but many agree a contextual approach that combines mathematics and an understanding of human subjects in an algorithm is the best path forward.

“Unfortunately there is not a universally agreed upon definition of what fairness looks like,” said Kristian Lum, lead statistician at the Human Rights Data Analysis Group. “How you slice and dice the data can determine whether you ultimately decide the algorithm is unfair.”

Lum goes on to explain that research in the past few years has revolved around exploring the mathematical definition of fairness, but this approach is often incompatible to the moral outlook on AI.

“What makes an algorithm fair is highly contextually dependent, and it’s going to depend so much on the training data that’s going into it,” said Lum. “You’re going to have to understand a lot about the problem, you’re going to have to understand a lot about the data, and even when that happens there will still be disagreements on the mathematical definitions of fairness.”

5. AI and Zero Trust are a “marriage made in heaven” and will be key in the evolution of cybersecurity

SAN FRANCISCO, CA – SEPTEMBER 06: (l-R) Duo VP of Security Mike Hanley, Okta Executive Director of Cybersecurity Marc Rogers, and moderator Mike Butcher speak onstage during Day 2 of TechCrunch Disrupt SF 2018 at Moscone Center on September 6, 2018 in San Francisco, California. (Photo by Kimberly White/Getty Images for TechCrunch)

If previous elections have taught us anything it’s that security systems are in dire need of improvement to protect personal data, financial assets and the foundation of democracy itself. Facebook’s ex-chief security officer Alex Stamos shared a grim outlook on the current state of politics and cybersecurity at Disrupt SF, stating the security infrastructure for the upcoming Midterm elections isn’t much better than it was in 2016.

So how effective will AI be in improving these systems? Marc Rodgers of Okta and Mike Hanley of Duo Security believe the combination of AI and a security model called Zero Trust, which cuts off all users from accessing a system until they can prove themselves, are the key to developing security systems that actively fight off breaches without the assistance of humans.

“AI and Zero Trust are a marriage made in heaven because the whole idea behind Zero Trust is you design policies that sit inside your network,” said Rodgers. “AI is great at doing human decisions much faster than a human ever can and I have great hope that as Zero Trust evolves, we’re going to see AI baked into the new Zero Trust platforms.”

By handing much of the heavy lifting to machines, cybersecurity professionals will also have the opportunity to solve another pressing issue: being able to staff qualified security experts to manage these systems.

“There’s also a substantial labor shortage of qualified security professionals that can actually do the work needed to be done,” said Hanley. “That creates a tremendous opportunity for security vendors to figure out what are those jobs that need to be done, and there are many unsolved challenges in that space. Policy engines are one of the more interesting ones.”

28 Sep 2018

5 takeaways on the state of AI from Disrupt SF

The promise of artificial intelligence is immense, but the roadmap to achieving those goals still remains unclear. Onstage at TechCrunch Disrupt SF, some of AI’s leading minds shared their thoughts on current competition in the market, how to ensure algorithms don’t perpetuate racism and the future of human-machine interaction.

Here are five takeaways on the state of AI from Disrupt SF 2018:

1. U.S. companies will face many obstacles if they look to China for AI expansion

Sinnovation CEO Kai-Fu Lee (Photo: TechCrunch/Devin Coldewey)

The meteoric rise in China’s focus on AI has been well-documented and has become impossible to ignore these days. With mega companies like Alibaba and Tencent pouring hundreds of millions of dollars into home-grown businesses, American companies are finding less and less room to navigate and expand in China. AI investor and Sinnovation CEO Kai-Fu Lee described China as living in a “parallel universe” to the U.S. when it comes to AI development.

“We should think of it as electricity,” explained Lee, who led Google’s entrance into China. “Thomas Edison and the AI deep learning inventors – who were American – they invented this stuff and then they generously shared it. Now, China, as the largest marketplace with the largest amount of data, is really using AI to find every way to add value to traditional businesses, to internet, to all kinds of spaces.”

“The Chinese entrepreneurial ecosystem is huge so today the most valuable AI companies in computer vision, speech recognition, drones are all Chinese companies.”

2. Bias in AI is a new face on an old problem

SAN FRANCISCO, CA – SEPTEMBER 07: (L-R) UC Berkeley Professor Ken Goldberg, Google AI Research Scientist Timnit Gebru, UCOT Founder and CEO Chris Ategeka, and moderator Devin Coldewey speak onstage during Day 3 of TechCrunch Disrupt SF 2018 at Moscone Center on September 7, 2018 in San Francisco, California. (Photo by Kimberly White/Getty Images for TechCrunch)

AI promises to increase human productivity and efficiency by taking the grunt work out of many processes. But the data used to train many AI systems often falls victim to the same biases of humans and, if unchecked, can further marginalize communities caught up in systemic issues like income disparity and racism.

“People in lower socio-economic statuses are under more surveillance and go through algorithms more,” said Google AI’s Timnit Gebru. “So if they apply for a job that’s lower status they are likely to go through automated tools. We’re right now in a stage where these algorithms are being used in different places and we’re not even checking if they’re breaking existing laws like the Equal Opportunity Act.”

A potential solution to prevent the spread of toxic algorithms was outlined by UC Berkeley’s Ken Goldberg who cited the concept of ensemble theory, which involves multiple algorithms with various classifiers working together to produce a single result.
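The ensemble idea Goldberg describes, several classifiers voting so that no single model's systematic error decides the outcome alone, can be sketched in a few lines. The classifiers and data below are invented for illustration, not drawn from any real system:

```python
from collections import Counter

def ensemble_predict(classifiers, sample):
    """Return the majority vote across several independent classifiers.

    A single biased model can dominate a decision; combining models
    built differently makes one model's systematic error less likely
    to determine the result on its own.
    """
    votes = [clf(sample) for clf in classifiers]
    winner, _ = Counter(votes).most_common(1)[0]
    return winner

# Three toy "classifiers" with different decision thresholds.
clf_a = lambda x: "approve" if x["score"] > 50 else "deny"
clf_b = lambda x: "approve" if x["score"] > 40 else "deny"
clf_c = lambda x: "approve" if x["score"] > 70 else "deny"

print(ensemble_predict([clf_a, clf_b, clf_c], {"score": 60}))  # approve (2 of 3 agree)
```

In practice the component models would be trained on different data or architectures; the voting step is the part the ensemble concept adds.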

But how do we know if the solution to inadequate tech is more tech? Goldberg says this is where having individuals from multiple backgrounds, both in and outside the world of AI, is vital to developing just algorithms. “It’s very relevant to think about both machine intelligence and human intelligence,” explained Goldberg. “Having people with different viewpoints is extremely valuable and I think that’s starting to be recognized by people in business… it’s not because of PR, it’s actually because it will give you better decisions if you get people with different cognitive, diverse viewpoints.”

3. The future of autonomous travel will rely on humans and machines working together

Uber CEO Dara Khosrowshahi (Photo: TechCrunch/Devin Coldewey)

Transportation companies often paint a flowery picture of the near future where mobility will become so automated that human intervention will be detrimental to the process.

That’s not the case, according to Uber CEO Dara Khosrowshahi. In an era that’s racing to put humans on the sidelines, Khosrowshahi says the real breakthrough is humans and machines working hand in hand.

“People and computers actually work better than each of them work on a stand-alone basis and we are having the capability of bringing in autonomous technology, third-party technology, Lime, our own product all together to create a hybrid,” said Khosrowshahi.

Khosrowshahi ultimately envisions the future of Uber being made up of engineers monitoring routes that present the least amount of danger for riders and selecting optimal autonomous routes for passengers. The combination of these two systems will be vital in the maturation of autonomous travel, while also keeping passengers safe in the process.

4. There’s no agreed definition of what makes an algorithm “fair”

SAN FRANCISCO, CA – SEPTEMBER 07: Human Rights Data Analysis Group Lead Statistician Kristian Lum speaks onstage during Day 3 of TechCrunch Disrupt SF 2018 at Moscone Center on September 7, 2018 in San Francisco, California. (Photo by Kimberly White/Getty Images for TechCrunch)

Last July ProPublica released a report highlighting how machine learning can falsely develop its own biases. The investigation examined an AI system used in Fort Lauderdale, Fla., that falsely flagged black defendants as future criminals at a rate twice that of white defendants. These landmark findings set off a wave of conversation on the ingredients needed to build a fair algorithm.

One year later AI experts still don’t have the recipe fully developed, but many agree a contextual approach that combines mathematics and an understanding of human subjects in an algorithm is the best path forward.

“Unfortunately there is not a universally agreed upon definition of what fairness looks like,” said Kristian Lum, lead statistician at the Human Rights Data Analysis Group. “How you slice and dice the data can determine whether you ultimately decide the algorithm is unfair.”

Lum goes on to explain that research in the past few years has revolved around exploring mathematical definitions of fairness, but this approach is often incompatible with the moral outlook on AI.

“What makes an algorithm fair is highly contextually dependent, and it’s going to depend so much on the training data that’s going into it,” said Lum. “You’re going to have to understand a lot about the problem, you’re going to have to understand a lot about the data, and even when that happens there will still be disagreements on the mathematical definitions of fairness.”
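Lum's point that "how you slice and dice the data" changes the verdict can be made concrete. The sketch below, using invented numbers, computes one common fairness measure, the positive-decision rate per group (often called demographic parity); other definitions, such as equal error rates, can disagree with it on the very same data:

```python
def positive_rate(outcomes):
    """Fraction of cases that received the positive (e.g. flagged) decision."""
    return sum(outcomes) / len(outcomes)

# Hypothetical decisions (1 = flagged) for two groups of eight cases each.
group_a = [1, 0, 0, 1, 0, 0, 0, 0]   # flagged 25% of the time
group_b = [1, 1, 0, 1, 0, 1, 0, 0]   # flagged 50% of the time

rate_a, rate_b = positive_rate(group_a), positive_rate(group_b)
print(f"group A flagged: {rate_a:.0%}, group B flagged: {rate_b:.0%}")

# By the demographic-parity measure this gap looks unfair; whether it
# actually is depends on the underlying base rates and the training data,
# which is exactly the contextual dependence Lum describes.
```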

5. AI and Zero Trust are a “marriage made in heaven” and will be key in the evolution of cybersecurity

SAN FRANCISCO, CA – SEPTEMBER 06: (L-R) Duo VP of Security Mike Hanley, Okta Executive Director of Cybersecurity Marc Rogers, and moderator Mike Butcher speak onstage during Day 2 of TechCrunch Disrupt SF 2018 at Moscone Center on September 6, 2018 in San Francisco, California. (Photo by Kimberly White/Getty Images for TechCrunch)

If previous elections have taught us anything, it’s that security systems are in dire need of improvement to protect personal data, financial assets and the foundation of democracy itself. Facebook’s ex-chief security officer Alex Stamos shared a grim outlook on the current state of politics and cybersecurity at Disrupt SF, stating the security infrastructure for the upcoming midterm elections isn’t much better than it was in 2016.

So how effective will AI be in improving these systems? Marc Rogers of Okta and Mike Hanley of Duo Security believe the combination of AI and a security model called Zero Trust, which cuts off all users from accessing a system until they can prove their identity, is the key to developing security systems that actively fight off breaches without human assistance.

“AI and Zero Trust are a marriage made in heaven because the whole idea behind Zero Trust is you design policies that sit inside your network,” said Rogers. “AI is great at doing human decisions much faster than a human ever can and I have great hope that as Zero Trust evolves, we’re going to see AI baked into the new Zero Trust platforms.”
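The Zero Trust posture described above, deny everything unless a request explicitly satisfies a policy, can be illustrated with a minimal sketch. The policy fields and checks here are hypothetical simplifications of what real platforms evaluate:

```python
def authorize(request, policies):
    """Zero Trust: deny by default; allow only when a policy explicitly matches."""
    for policy in policies:
        if (request["user"] in policy["allowed_users"]
                and request["resource"] == policy["resource"]
                and request["device_verified"]):
            return True
    return False  # no matching policy: access denied, even inside the network

# One hypothetical policy: only alice may reach the payroll database.
policies = [{"allowed_users": {"alice"}, "resource": "payroll-db"}]

print(authorize({"user": "alice", "resource": "payroll-db", "device_verified": True}, policies))   # True
print(authorize({"user": "bob", "resource": "payroll-db", "device_verified": True}, policies))     # False
```

The AI angle Rogers describes would sit on top of a loop like this, tuning and evaluating policies faster than a human operator could.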

By handing much of the heavy lifting to machines, cybersecurity professionals will also have the opportunity to address another pressing issue: the shortage of qualified security experts available to manage these systems.

“There’s also a substantial labor shortage of qualified security professionals that can actually do the work needed to be done,” said Hanley. “That creates a tremendous opportunity for security vendors to figure out what are those jobs that need to be done, and there are many unsolved challenges in that space. Policy engines are one of the more interesting ones.”

28 Sep 2018

Everything you need to know about Facebook’s data breach affecting 50M users

Facebook is cleaning up after a major security incident exposed the account data of millions of users. In what has already been a rocky year following the Cambridge Analytica scandal, the company is scrambling to regain its users’ trust after another security incident exposed user data.

Here’s everything you need to know so far.

What happened?

Facebook says the data of at least 50 million users was confirmed at risk after attackers exploited a vulnerability that allowed them access to personal data. The company also preventively secured 40 million additional accounts out of an abundance of caution.

What data were the hackers after?

Facebook CEO Mark Zuckerberg said that the company has not seen any accounts compromised and improperly accessed — although it’s early days and that may change. But Zuckerberg said that the attackers were using Facebook developer APIs to obtain some information, like “name, gender, and hometowns” that’s linked to a user’s profile page.

What data wasn’t taken?

Facebook said that it looks unlikely that private messages were accessed. No credit card information was taken in the breach, Facebook said. Again, that may change as the company’s investigation continues.

What’s an access token? Do I need to change my password?

When you enter your username and password on most sites and apps, including Facebook, your browser or device is issued an access token. This keeps you logged in without you having to enter your credentials every time you visit. But the token doesn’t store your password — so there’s no need to change your password.
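At a high level, the mechanism works like the hypothetical sketch below: the server mints a random token at login and later accepts the token in place of the password. Real implementations add expiry, scopes and cryptographic signing; the names here are invented for illustration:

```python
import secrets

SESSIONS = {}  # token -> user id, held server-side

def log_in(user_id, password_ok):
    """On successful login, issue a random token; the password is never stored in it."""
    if not password_ok:
        return None
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = user_id
    return token

def whoami(token):
    """Later requests present the token instead of credentials."""
    return SESSIONS.get(token)

def revoke_all():
    """What Facebook did: invalidating tokens logs everyone out; passwords are untouched."""
    SESSIONS.clear()

token = log_in("user42", password_ok=True)
assert whoami(token) == "user42"   # token now stands in for credentials
revoke_all()
assert whoami(token) is None       # logged out, but no password ever needed changing
```

This is why resetting tokens (rather than forcing password changes) was the appropriate remediation here.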

Is this why Facebook logged me out of my account?

Yes, Facebook says it reset the access tokens of all users affected. That means some 90 million users will have been logged out of their account — either on their phone or computer — in the past day. This also includes users on Facebook Messenger.

When did this attack happen?

The vulnerability was introduced on the site in July 2017, but Facebook didn’t discover it until September 16, 2018, when it spotted a spike in unusual activity. That means the hackers could have had access to user data for a long time, as Facebook is not sure right now when the attack began.

Who would do this?

Facebook doesn’t know who attacked the site, but the FBI is investigating, it says.

However, Facebook has in the past found evidence of Russia’s attempts to meddle in American democracy and influence our elections — but that’s not to say that Russia is behind this new attack. Attribution is incredibly difficult and takes a lot of time and effort. It took the FBI years to formally confirm that North Korea was behind the 2014 Sony hack — so we might be in for a long wait.

How did the attackers get in? 

Not one, but three bugs led to the data exposure.

In July 2017, Facebook inadvertently introduced three vulnerabilities in its video uploader, said Guy Rosen, Facebook’s vice president of product management, in a call with reporters. When using the “View As” feature to view your profile as someone else, the video uploader would occasionally appear when it shouldn’t display at all. When it appeared, it generated an access token for the person the profile was being viewed as. If an attacker obtained that token, they could log into that person’s account.
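A simplified, hypothetical sketch of this class of bug: a component embedded in an impersonation preview mints a token for the identity being previewed rather than for the real, authenticated viewer. The function names and token format below are invented; real tokens are opaque and signed:

```python
def issue_token(user):
    """Hypothetical token minting; stands in for the real, opaque token service."""
    return f"token-for-{user}"

def uploader_token_buggy(viewer, viewed_as):
    # The bug class described: inside "View As", the embedded uploader
    # requested a token for the profile being previewed.
    return issue_token(viewed_as)

def uploader_token_fixed(viewer, viewed_as):
    # Correct behavior: tokens are always tied to the actual authenticated viewer,
    # regardless of which identity the page is rendering as.
    return issue_token(viewer)

print(uploader_token_buggy("attacker", "victim"))  # token-for-victim: account takeover
print(uploader_token_fixed("attacker", "victim"))  # token-for-attacker: harmless
```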

Is the problem fixed? 

Facebook says it fixed the vulnerability on September 27, and then began resetting the access tokens of people to protect the security of their accounts.

Did this affect WhatsApp and Instagram accounts?

Facebook said that it’s not yet sure whether Instagram accounts were affected, but said they were automatically secured once the Facebook access tokens were revoked. Affected Instagram users will have to unlink and relink their Facebook accounts in Instagram in order to cross-post to Facebook.

On a call with reporters, Facebook said there is no impact on WhatsApp users at all.

Will Facebook be fined or punished?

If Facebook is found to have breached European data protection rules — the newly implemented General Data Protection Regulation (GDPR) — the company can face fines of up to four percent of its global revenue.

However, that fine can’t be levied until Facebook knows more about the nature of the breach and the risk to users.

Another data breach of this scale – especially coming in the wake of the Cambridge Analytica scandal and other data leaks – has some in Congress calling for the social network to be regulated. Sen. Mark Warner (D-VA) issued a stern reprimand to Facebook over today’s news, and again pushed his proposal to regulate companies holding large data sets as “information fiduciaries,” with additional consequences for improper security.

FTC Commissioner Rohit Chopra also tweeted that “I want answers” regarding the Facebook hack. It’s reasonable to assume that there could be investigations in both the U.S. and Europe to figure out what happened.

Can I check to see if my account was improperly accessed?

You can. Once you log back into your Facebook account, you can go to your account’s security and login page, which lets you see where you’ve logged in. If you had your access tokens revoked and had to log in again, you should see only the devices that you logged back in with.

Should I delete my Facebook account?

That’s up to you! But you may want to take some precautions like changing your password and turning on two-factor authentication, if you haven’t done so already. Even if you weren’t impacted by this, you may want to take the time to delete some of the personal information you’ve shared on Facebook to reduce your risk of exposure in any future attacks.