09 Aug 2019

Why AI needs more social workers, with Columbia University’s Desmond Patton

Sometimes it does seem the entire tech industry could use someone to talk to, like a good therapist or social worker. That might sound like an insult, but I mean it mostly earnestly: I am a chaplain who has spent 15 years talking with students, faculty, and other leaders at Harvard (and more recently MIT as well), mostly nonreligious and skeptical people like me, about their struggles to figure out what it means to build a meaningful career and a satisfying life, in a world full of insecurity, instability, and divisiveness of every kind.

In related news, I recently took a year-long paid sabbatical from my work at Harvard and MIT, to spend 2019-20 investigating the ethics of technology and business (including by writing this column at TechCrunch). I doubt it will shock you to hear I’ve encountered a lot of amoral behavior in tech, thus far.

A less expected and perhaps more profound finding, however, has been what the introspective founder Priyag Narula of LeadGenius tweeted at me recently: that behind the hubris and Machiavellianism one can find in tech companies is a constant struggle with anxiety and an abiding feeling of inadequacy among tech leaders.

In tech, just like at places like Harvard and MIT, people are stressed. They’re hurting, whether or not they even realize it.

So when Harvard’s Berkman Klein Center for Internet and Society recently posted an article whose headline began, “Why AI Needs Social Workers…,” it caught my eye.

The article, it turns out, was written by Columbia University Professor Desmond Patton. Patton is a public interest technologist and a pioneer in the use of social media and artificial intelligence to study gun violence. He is the founding director of Columbia’s SAFElab and Associate Professor of Social Work, Sociology and Data Science at Columbia University.

Desmond Patton. Image via Desmond Patton / Stern Strategy Group

A trained social worker and decorated social work scholar, Patton has also become a big name in AI circles in recent years. If Big Tech ever decided to hire a Chief Social Work Officer, he’d be a sought-after candidate.

It further turns out that Patton’s expertise — in online violence & its relationship to violent acts in the real world — has been all too “hot” a topic this past week, with mass murderers in both El Paso, Texas and Dayton, Ohio having been deeply immersed in online worlds of hatred which seemingly helped lead to their violent acts.

Fortunately, we have Patton to help us understand all of these issues. Here is my conversation with him: on violence and trauma in tech on and offline, and how social workers could help; on deadly hip-hop beefs and “Internet Banging” (a term Patton coined); hiring formerly gang-involved youth as “domain experts” to improve AI; how to think about the likely growing phenomenon of white supremacists live-streaming barbaric acts; and on the economics of inclusion across tech.

Greg Epstein: How did you end up working in both social work and tech?

Desmond Patton: At the heart of my work is an interest in root causes of community-based violence, so I’ve always identified as a social worker who does violence-based research. [At the University of Chicago] my dissertation focused on how young African American men navigated violence in their community on the west side of the city while remaining active in their school environment.

[From that work] I learned more about the role of social media in their lives. This was around 2011, 2012, and one of the things that kept coming through in interviews with these young men was how social media was an important tool for navigating both safe and unsafe locations, but also an environment that allowed them to project a multitude of selves. To be a school self, to be a community self, to be who they really wanted to be, to try out new identities.

09 Aug 2019

Proterra, the Tesla of electric buses, closes in on $1 billion valuation

Proterra has authorized shares to raise $75 million, a new round of funding that would push the electric bus maker’s valuation past $1 billion, TechCrunch has learned.

The company authorized the sale of 10,857,762 shares at a price of $6.91 in a Series 8 round, according to a securities filing that was obtained by the Prime Unicorn Index, a company that tracks the performance of private U.S. companies, and reviewed by TechCrunch. If all of the shares are issued, the company’s total valuation would be $1.04 billion, pushing it into “unicorn” territory, according to Prime Unicorn Index.

Proterra declined to comment.

Efforts to raise capital come as the company explores an IPO, according to a report last month by Reuters that said Proterra had hired underwriters from Deutsche Bank, JPMorgan Chase and Morgan Stanley.

Prior to this August 2 filing, Proterra had raised a total of $551.77 million in funding from investors that include G2VP, Kleiner Perkins Caufield & Byers, Constellation Ventures, Mitsui & Co. as well as BMW i Ventures, Edison Energy, the Federal Transportation Administration, General Motors’ venture arm and Tao Capital Partners.

Proterra’s roots are in producing electric buses for municipal, federal and commercial transit agencies; it has a line of electric buses, hundreds of which have been sold, that can travel 350 miles on a single charge. The Burlingame, Calif. company, which has a number of former Tesla employees in leadership positions, including CEO Ryan Popple, has since diversified its business.

In April, Proterra rolled out a $200 million credit facility backed by Japanese investment giant Mitsui & Co. to scale up a battery leasing program aimed at lowering the barrier to entry for buying an electric bus.

And just this month, the company announced a new business line called Proterra Powered that will sell its vehicle battery systems, powertrain technology and charging infrastructure to commercial truck makers and manufacturers of heavy-duty vehicles like garbage trucks.

This new business line stems from its previous relationships with companies like Van Hool and Daimler. Proterra announced last year it was working with Daimler to electrify the company’s Thomas Built Buses division, which makes a line of school buses. That relationship comes with some financial backing and an agreement to share technologies.

Daimler co-led a $155 million funding round along with Tao Capital Partners. Proterra is lending its battery and drivetrain expertise; Daimler will show Proterra how to scale its manufacturing business even further.

The partnership has already been fruitful. Thomas Built Buses received certifications from the California Air Resources Board and the Hybrid and Zero-Emission Truck and Bus Voucher Incentive Project for an electric bus, known as the Saf-T-Liner C2 Jouley, that uses Proterra technology. Electric school bus production for demonstration and innovation vehicles begins in 2019 and commercial production begins in 2020.

09 Aug 2019

Last chance for early-bird tickets to TC Sessions: Enterprise 2019

It’s down to the wire folks. Today’s the last day you can save $100 on your ticket to TC Sessions: Enterprise 2019, which takes place on September 5 at the Yerba Buena Center in San Francisco. The deadline expires in mere hours — at 11:59 p.m. (PT). Get the best possible price and buy your early-bird ticket right now.

We expect more than 1,000 attendees representing the enterprise software community’s best and brightest. We’re talking founders of companies in every stage and CIOs and systems architects from some of the biggest multinationals. And, of course, managing partners from the most influential venture and corporate investment firms.

Take a look at just some of the companies joining us for TC Sessions: Enterprise: Bain & Company, Box, Dell Technologies Capital, Google, Oracle, SAP and SoftBank. Let the networking begin!

You can expect a full day of main-stage interviews and panel discussions, plus break-out sessions and speaker Q&As. TechCrunch editors will dig into the big issues enterprise software companies face today along with emerging trends and technologies.

Data, for example, is a mighty hot topic, and you’ll hear a lot more about it during a session entitled, Innovation Break: Data – Who Owns It?: Enterprises have historically competed by being closed entities, keeping a closed architecture and innovating internally. When applying this closed approach to the hottest new commodity, data, it simply does not work anymore. But as enterprises, startups and public institutions open themselves up, how open is too open? Hear from leaders who explore data ownership and the questions that need to be answered before the data floodgates are opened. Sponsored by SAP.

If investment is on your mind, don’t miss the Investor Q&A. Some of the greatest investors in enterprise will be on hand to answer your burning questions. Want to know more? Check out the full agenda.

Maximize your last day of early-bird buying power and take advantage of the group discount. Buy four or more tickets at once and save 20%. Here’s a bonus. Every ticket you buy to TC Sessions: Enterprise includes a free Expo Only pass to TechCrunch Disrupt SF on October 2-4.

It’s now o’clock, startuppers. Your opportunity to save $100 on tickets to TC Sessions: Enterprise ends tonight at precisely 11:59 p.m. (PT). Buy your early-bird tickets now and join us in September!

Is your company interested in sponsoring or exhibiting at TC Sessions: Enterprise? Contact our sponsorship sales team by filling out this form.

09 Aug 2019

Preclusio uses machine learning to comply with GDPR, other privacy regulations

As privacy regulations like GDPR and the California Consumer Privacy Act proliferate, more startups are looking to help companies comply. Enter Preclusio, a member of the Y Combinator Summer 2019 class, which has developed a machine learning-fueled solution to help companies adhere to these privacy regulations.

“We have a platform that is deployed on prem in our customer’s environment, and helps them identify what data they’re collecting, how they’re using it, where it’s being stored and how it should be protected. We help companies put together this broad view of their data, and then we continuously monitor their data infrastructure to ensure that this data continues to be protected,” company co-founder and CEO Heather Wade told TechCrunch.

She says that the company made a deliberate decision to keep the solution on-prem. “We really believe in giving our clients control over their data. We don’t want to be just another third-party SaaS vendor that you have to ship your data to,” Wade explained.

That said, customers can run it wherever they wish, whether that’s on prem or in the cloud in Azure or AWS. Regardless of where it’s stored, the idea is to give customers direct control over their own data. “We are really trying to alert our customers to threats or to potential privacy exceptions that are occurring in their environment in real time, and being in their environment is really the best way to facilitate this,” she said.

The product works by getting read-only access to the data, then begins to identify sensitive data in an automated fashion using machine learning. “Our product automatically looks at the schema and samples of the data, and uses machine learning to identify common protected data,” she said. Once that process is completed, a privacy compliance team can review the findings and adjust these classifications as needed.
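The workflow Wade describes, sampling column values and flagging likely protected data for a human compliance team to confirm, can be illustrated with a minimal sketch. Note that this uses simple regex heuristics as a stand-in for Preclusio's actual machine learning models, and all names here are hypothetical:

```python
import re

# Illustrative patterns for a few common categories of protected data.
# A real system would use trained classifiers over schemas and samples;
# this regex-based heuristic is only a simplified stand-in.
PATTERNS = {
    "email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
}

def classify_column(name, samples, threshold=0.8):
    """Guess whether a column holds protected data.

    First checks the column name for obvious hints, then tests sampled
    values against each pattern; a match rate above `threshold` flags
    the column for review by the privacy compliance team.
    """
    for label in PATTERNS:
        if label in name.lower():
            return label
    for label, pattern in PATTERNS.items():
        matches = sum(1 for v in samples if pattern.match(str(v)))
        if samples and matches / len(samples) >= threshold:
            return label
    return None  # nothing sensitive detected; still subject to human review

# Example: an unlabeled column whose sampled values look like emails
print(classify_column("contact", ["ann@example.com", "bo@example.org"]))
```

The human-in-the-loop step matters: automated classification produces candidates, and the compliance team adjusts the labels before anything is enforced.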

Wade, who started the company in March, says the idea formed at previous positions where she was responsible for implementing privacy policies and found there weren’t adequate solutions on the market to help. “I had to face the challenges first-hand of dealing with privacy and compliance and seeing how resources were really taken away from our engineering teams and having to allocate these resources to solving these problems internally, especially early on when GDPR was first passed, and there really were not that many tools available in the market,” she said.

Interestingly Wade’s co-founder is her husband, John. She says they deal with the intensity of being married and startup founders by sticking to their areas of expertise. He’s the marketing person and she’s the technical one.

She says they applied to Y Combinator because they wanted to grow quickly, and that timing is important with more privacy laws coming online soon. She has been impressed with the generosity of the community in helping them reach their goals. “It’s almost indescribable how generous and helpful other folks who’ve been through the YC program are to the incoming batches, and they really do have that spirit of paying it forward,” she said.

09 Aug 2019

Robocall blocking apps caught sending your private data without permission

Robocall-blocking apps promise to rid your life of spoofed and spam phone calls. But are they as trustworthy as they claim to be?

One security researcher said many of these apps can violate your privacy as soon as they are opened.

Dan Hastings, a senior security consultant at cybersecurity firm NCC Group, analyzed some of the most popular robocall-blocking apps — including TrapCall, Truecaller, and Hiya — and found egregious privacy violations.

Robocalls are getting worse, with some people getting dozens of calls a day. These automated calls demand you “pay the IRS” a fine you don’t owe or pretend to be tech support. They often try to trick you into picking up the phone by spoofing their number to look like a local caller. But as much as the cell networks are trying to cut down on spam, many users are turning to third-party apps to filter their incoming calls.

But many of these apps, said Hastings, send user or device data to third-party data analytics companies — often to monetize your information — without your explicit consent, instead burying the details in their privacy policies.

One app, TrapCall, sent users’ phone numbers to a third-party analytics firm, AppsFlyer, without telling users — neither in the app nor in the privacy policy.

He also found Truecaller and Hiya uploaded device data — device type, model and software version, among other things — before a user could accept their privacy policies. Those apps, said Hastings, violate Apple’s app guidelines on data use and sharing, which mandate that app makers first obtain permission before using or sending data to third parties.

Many of the other apps aren’t much better. Several apps that Hastings tested immediately sent some data to Facebook as soon as they loaded.

“Without having a technical background, most end users aren’t able to evaluate what data is actually being collected and sent to third parties,” said Hastings. “Privacy policies are the only way that a non-technical user can evaluate what data is collected about them while using an app.”

None of the companies acted on emails from Hastings warning about the privacy issues, he said. Only after he contacted Apple did TrapCall update its privacy policy.

But he reserved some criticism for Apple, noting that app privacy policies “don’t appear to be monitored” as he discovered with Truecaller and Hiya.

“Privacy policies are great, but apps need to get better about abiding by them,” said Hastings.

“If most people took the time to read and try to understand privacy policies for all the apps they use (and are able to understand them!), they might be surprised to see how much these apps collect,” he said. “Until that day, end-users will have to rely on security researchers performing manual deep dives into how apps handle their private information in practice.”

Spokespeople for TrapCall, Truecaller, and Hiya did not comment when reached prior to publication.

09 Aug 2019

Skip scooters are returning to Washington, D.C. after battery fires

Following battery issues and a single-alarm fire caused by improperly discarded batteries in Washington, D.C., Skip has been given the green light to resume operations in the city and the surrounding areas of Alexandria and Arlington. The plan is to redeploy the scooters in the coming weeks.

In June, a battery on one of Skip’s scooters caught fire in D.C., prompting the company to ground its scooters in both D.C. and San Francisco. The scooter in question was found with its external battery on fire, which caused “minor damage” to a wall nearby. In light of that incident, Skip identified other potential at-risk batteries and quarantined them in its warehouse.

“In DC, they weren’t disposed of properly, which helped create the right conditions for a single-alarm fire,” Skip wrote in a blog post. “After the incident, DDOT asked us to suspend operations. Frankly, that was the right call. We didn’t just let our cities and riders down, we let ourselves down.”

Since then, Skip says it has consulted with battery experts and OSHA compliance firms to put in place new procedures and operations around handling and disposing of damaged equipment. Now, Skip has real-time monitoring and alerting for battery and vehicle issues to ensure batteries are disposed of before exhibiting any safety issues. Among other steps, Skip is now reporting its handling of batteries and employee injuries to the District Department Of Transportation.

Skip is not the only micromobility company that has experienced issues with battery fires. Last month, a couple of Lyft’s electric bike batteries caught on fire in San Francisco, prompting the company to pull its bikes from the streets. Late last year, Lime recalled some of its Ninebot scooters due to fire concerns.

And battery fires do not only affect electric bikes and scooters. You may remember the year of the exploding hoverboards, as well as exploding smartphones and laptops. What all of those have in common are lithium-ion batteries, which are very commonly used for portable electronics and now, personal electric vehicles. The downside to these types of batteries is potential overheating, which can lead to a failure mode called “thermal runaway” and result in a battery fire.

Other potential issues that can lead to battery failure include bad design and the mere fact that scooters get banged around by users. In Skip’s case, the issue seemed to be the latter.

“The investigation found the main cause to be physical damage, but it was not able to determine whether the damage was intentional or unintentional,” a Skip spokesperson told TechCrunch.

Given the amount of scrutiny all of these companies are under, coupled with their reliance on approval from cities, the likes of Skip, Lyft and Lime need to make sure their respective safety procedures are buttoned up if they want to thrive in this space.

09 Aug 2019

Amazon’s lead EU data regulator is asking questions about Alexa privacy

Amazon’s lead data regulator in Europe, Luxembourg’s National Commission for Data Protection, has raised privacy concerns about the company’s use of manual human reviews of recordings made by its Alexa AI voice assistant.

A spokesman for the regulator confirmed in an email to TechCrunch it is discussing the matter with Amazon, adding: “At this stage, we cannot comment further about this case as we are bound by the obligation of professional secrecy.” The development was reported earlier by Reuters.

We’ve reached out to Amazon for comment.

Amazon’s Alexa voice AI, which is embedded in a wide array of hardware — from the company’s own brand Echo smart speaker line to an assortment of third party devices (such as this talkative refrigerator or this oddball table lamp) — listens pervasively for a trigger word which activates a recording function, enabling it to stream audio data to the cloud for processing and storage.

However, trigger-word activated voice AIs have been shown to be prone to accidental activation, and a device may be used in a multi-person household. So there’s always a risk of these devices recording any audio in their vicinity, not just intentional voice queries.

In a nutshell, the AIs’ inability to distinguish between intentional interactions and stuff they overhear means they are natively prone to eavesdropping — hence the major privacy concerns.

These concerns have been dialled up by recent revelations that tech giants — including Amazon, Apple and Google — use human workers to manually review a proportion of audio snippets captured by their voice AIs, typically for quality purposes, such as trying to improve the performance of voice recognition across different accents or environments. But that means actual humans are listening to what might be highly sensitive personal data.

Earlier this week Amazon quietly added an option to the settings of the Alexa smartphone app to allow users to opt out of their audio snippets being added to a pool that may be manually reviewed by people doing quality control work for Amazon — having not previously informed Alexa users of its human review program.

The policy shift followed rising attention on the privacy of voice AI users — especially in Europe.

Last month thousands of recordings of users of Google’s AI assistant were leaked to the Belgian media, which was able to identify some of the people in the clips.

A data protection watchdog in Germany subsequently ordered Google to halt manual reviews of audio snippets.

Google responded by suspending human reviews across Europe, while its lead data watchdog in the region, the Irish DPC, told us it’s “examining” the issue.

Separately, in recent days, Apple has also suspended human reviews of Siri snippets — doing so globally, in its case — after a contractor raised privacy concerns in the UK press over what Apple contractors are privy to when reviewing Siri audio.

The Hamburg data protection agency which intervened to halt human reviews of Google Assistant snippets urged its fellow EU privacy watchdogs to prioritize checks on other providers of language assistance systems — and “implement appropriate measures” — naming both Apple and Amazon.

In the case of Amazon, scrutiny from European watchdogs looks to be dialling up fast.

At the time of writing it is the only one of the three tech giants not to have suspended human reviews of voice AI snippets, either regionally or globally.

In a statement provided to the press at the time it changed Alexa settings to offer users an opt-out from the chance of their audio being manually reviewed, Amazon said:

We take customer privacy seriously and continuously review our practices and procedures. For Alexa, we already offer customers the ability to opt-out of having their voice recordings used to help develop new Alexa features. The voice recordings from customers who use this opt-out are also excluded from our supervised learning workflows that involve manual review of an extremely small sample of Alexa requests. We’ll also be updating information we provide to customers to make our practices more clear.

09 Aug 2019

The smartwatch category is growing, as Apple remains dominant

Samsung and Fossil kicked off last week by announcing new smartwatches. On the same day. At the same time. For a brief moment, it felt like 2015 all over again, when the world of smartwatches felt exciting and new.

Midway through 2019, the good news for smartwatches is that the category continues to grow. Numbers from Strategy Analytics show some truly impressive movement on that front, with shipments up 44 percent year over year in Q2, from 8.6 million to 12.3 million.

Lots of reasons to celebrate there if you’re a smartwatch maker — or, rather, if you’re one very specific smartwatch maker. The very important caveat to the rosy numbers is that they start to look considerably less rosy when you take Apple out of the equation. The Apple Watch accounted for 5.7 million of those Q2 shipments. That’s 46 percent of the category, up slightly from 44 percent the year prior.
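As a quick sanity check, the headline percentages follow directly from the shipment figures Strategy Analytics reported (the growth rate rounds to 43 percent on the rounded shipment numbers, in line with the roughly 44 percent cited):

```python
# Shipment figures in millions, as reported by Strategy Analytics.
q2_2018, q2_2019 = 8.6, 12.3
apple_q2_2019 = 5.7

# Year-over-year category growth and Apple's share of Q2 2019 shipments.
growth = (q2_2019 - q2_2018) / q2_2018 * 100
apple_share = apple_q2_2019 / q2_2019 * 100

print(f"YoY growth: {growth:.0f}%")        # ~43% on these rounded figures
print(f"Apple share: {apple_share:.0f}%")  # ~46% of the category
```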

The numbers were reflected in Apple’s last earnings. The wearables category (which, notably, also includes AirPods) was a bright spot in the company’s otherwise disappointing hardware numbers. Compare that to the company with the second-largest numbers for the quarter: Samsung, which shipped two million smartwatches in that time period.