Author: azeeadmin

10 Aug 2021

The art of startup storytelling with Julian Shapiro

Although he comes from a numbers-driven background, growth expert Julian Shapiro focuses on the emotional power of storytelling these days.

“I like to think of successful brand-building as creating a company that customers would be upset to separate from their identity,” he says in an interview below. “For example, they’d cease to be the man with Slack stickers all over his laptop. Or the woman who no longer wears Nike shoes every day. And that bugs them.”

A prolific Twitter user, writer and now podcaster, he advises startups to “just blow your own mind” to best explain the value of what you are offering. Don’t overthink it. Your own excitement will take your audience on a journey with you in ways that paid acquisition can’t.

He’s informed by years working with hundreds of startups as the cofounder of Demand Curve (growth training courses) and Bell Curve (a growth agency), but also as a repeat startup founder, angel investor and open-source web developer (Velocity.js).

In the discussion below, he tells us more about the path he’s taken through the startup world, how companies are changing their public communication and what he’s most excited about.

An illustration of Julian Shapiro

Image Credits: Julian Shapiro

 

What has led you from web development and startups to growth marketing in recent years, and most recently to your own personal writing for the public on Twitter, etc.? Many people in your position would be more comfortable just founding new tech companies, or investing in crypto or what-have-you.

I try to avoid being contained by momentum. If something’s going well in one field (engineering) but I’ve found something more fulfilling (like growth strategy), then I’ll switch. I don’t switch for the sake of switching, but I will keep switching until I find something I love. That eventually brought me to writing, which I will never stop doing.

Trying a little bit of a lot of things gives you exposure to learn what else you could (and should) be doing. To break out of a local maximum, you need to always remain curious: What else don’t I know about?

What I ultimately stick with is whatever process I enjoy (not just enjoy the outcome). This usually means the process is fun, educational, adventurous and helps me meet like-minded people. Writing is a bat signal for like-minded people.

You’re focused on the art of storytelling right now — what are the key things that the startups you work with continue to get wrong here?

The story of a startup is essentially its (1) investor pitch and (2) customer-facing brand.

I like to think of successful brand-building as creating a company that customers would be upset to separate from their identity. For example, they’d cease to be the man with Slack stickers all over his laptop. Or the woman who no longer wears Nike shoes every day. And that bugs them.

To get to that point, you need a mix of goodwill, what-we-stand-for ideology, social prestige and customer delight—among other affinity-building ingredients.

How do you see companies changing the way they talk about themselves to the public in the future, given larger societal trends? Fewer press releases full of canned legalese, more public engagement on social media from the CEO?

Employees with audiences who broadcast corporate messages on a human-to-human level. Plus company social media accounts gaining personality and acting more like their employees.

It’s really hard to grab attention otherwise, especially with the explosion of content creators who are very good at hogging attention and optimizing content for the platforms. Corporate blogs haven’t been competing with them. I’m not sure they could. So much of this is personality-driven, to my point in the previous paragraph.

I’m also hoping, though not expecting, that companies will dial back the frequency of content production and raise its quality. Signal is more important than frequency in an era where we’re overloaded with content.

Given how many startups you work with across YC, etc., and since you are also an angel investor, what are key industry trends that have you most excited?

I’m interested in businesses with product-led growth and brand-affinity moats, businesses that get harder to compete with the larger they get. In other words, customers do the selling, customers fall in love and defensibility is in their design.

This is in part a reaction to not wanting startups to be so reliant on paid acquisition. And it’s also a reflection of how I want startups to be thinking more about not just providing transactional value to customers but also making customers truly delighted.

10 Aug 2021

Amazon says it will now compensate consumers for defective products sold on its marketplace

Amazon today is making a significant change to its returns policy, known as the A-to-Z guarantee, to address issues with defective products sold through Amazon’s marketplace of third-party sellers. In the past, Amazon directed consumers to the sellers in the case where a defective product caused property damage or personal injury. Now, Amazon says it will directly pay customers for their claims under $1,000, which would cover more than 80% of cases, at no cost to sellers.

It also says it may step in to pay claims for higher amounts if the seller rejects a claim or is unresponsive on a claim Amazon understands to be valid.

For years, Amazon has attempted to skirt responsibility for the products sold through its marketplace, saying it was only the platform that enabled these transactions to take place — not the liable party in the event of defective product claims. Some U.S. courts over the years have agreed, but others have not, complicating matters. Most recently, a California appellate court ruled that Amazon could be sued when consumers were injured by third-party products it sold on its website. The case at hand was a lawsuit over a defective hoverboard a mother bought for her son in 2015, which burned the customer’s hands and started a fire.

Meanwhile, as Amazon’s marketplace has grown, how defective products and consumer complaints are handled has become even more of a problem. Today, Amazon’s marketplace has 6.3 million total sellers, 1.5 million of which are currently active, according to estimates from Marketplace Pulse.

This situation recently came to a head, when last month Amazon was sued by the U.S. Consumer Product Safety Commission, which aims to force Amazon to accept responsibility for recalling potentially hazardous products sold on Amazon.com. The named products in the complaint included “24,000 faulty carbon monoxide detectors that fail to alarm, numerous children’s sleepwear garments that are in violation of the flammable fabric safety standard risking burn injuries to children, and nearly 400,000 hair dryers sold without the required immersion protection devices that protect consumers against shock and electrocution,” the federal agency said.

As a part of that action, the CPSC also wanted Amazon to step in and issue refunds, naming it as a distributor of these products by way of its FBA (Fulfilled by Amazon) program. It pointed out that Amazon stores products at its warehouse, inventories them, and sorts and ships them — and earns fees for doing so. The agency also argued that consumers who then buy these products may “reasonably believe” they are purchasing from Amazon.

Today, Amazon says it will step in to handle these types of consumer complaints. Instead of telling customers to reach out to the seller, it will allow customers to begin their claims process through Amazon Customer Service.

Starting September 1st, Amazon will take the claim information and notify the seller to help them address the claim. If the seller doesn’t respond, Amazon will step in to address the customer concern at its own cost while it separately tries to pursue the seller. And if the seller rejects a claim that Amazon believes is valid, it will compensate the customer.

The retailer says it will use its existing fraud detection and abuse systems and work with external, independent insurance fraud experts to analyze customers’ claims for validity. This will provide an initial layer of seller protection, as Amazon will stop sellers from having to deal with “unsubstantiated, frivolous, or abusive claims,” Amazon explains. It will also offer product liability insurance to sellers through a new service, Amazon Insurance Accelerator, which will offer a selection of trusted providers to shop from.

Amazon likely believes this new policy will help to head off new regulations that could impact how it runs its marketplace business. In announcing the news, Amazon stated that it’s “going far beyond our legal obligations and what any other marketplace service provider is doing today to protect customers” — a message clearly meant to dissuade further regulation.

The changes will roll out initially in the U.S., Amazon says.

10 Aug 2021

Interview: Apple’s Head of Privacy details child abuse detection and Messages safety features

Last week, Apple announced a series of new features targeted at child safety on its devices. Though not live yet, the features will arrive later this year for users. Though the goals of these features, protecting minors and limiting the spread of Child Sexual Abuse Material (CSAM), are universally accepted to be good ones, there have been some questions about the methods Apple is using.

I spoke to Erik Neuenschwander, Head of Privacy at Apple, about the new features launching for its devices. He shared detailed answers to many of the concerns that people have about the features and talked at length about some of the tactical and strategic issues that could come up once this system rolls out.

I also asked about the rollout of the features, which come closely intertwined but are really completely separate systems that have similar goals. To be specific, Apple is announcing three different things here, some of which are being confused with one another in coverage and in the minds of the public. 

CSAM detection in iCloud Photos – A detection system called NeuralHash creates identifiers it can compare with IDs from the National Center for Missing and Exploited Children and other entities to detect known CSAM content in iCloud Photo libraries. Most cloud providers already scan user libraries for this information — Apple’s system is different in that it does the matching on device rather than in the cloud.

Communication Safety in Messages – A feature that a parent opts to turn on for a minor on their iCloud Family account. It will alert children when an image they are going to view has been detected to be explicit and it tells them that it will also alert the parent.

Interventions in Siri and search – A feature that will intervene when a user tries to search for CSAM-related terms through Siri and search and will inform the user of the intervention and offer resources.

For more on all of these features you can read our articles linked above or Apple’s new FAQ that it posted this weekend.

From personal experience, I know that there are people who don’t understand the difference between those first two systems, or assume that there will be some possibility that they may come under scrutiny for innocent pictures of their own children that may trigger some filter. It’s led to confusion in what is already a complex rollout of announcements. These two systems are completely separate, of course, with CSAM detection looking for precise matches with content that is already known to organizations to be abuse imagery. Communication Safety in Messages takes place entirely on the device and reports nothing externally — it’s just there to flag to a child that they are viewing, or may be about to view, explicit images. This feature is opt-in by the parent and transparent to both parent and child that it is enabled.

Apple’s Communication Safety in Messages feature. Image Credits: Apple

There have also been questions about the on-device hashing of photos to create identifiers that can be compared with the database. Though NeuralHash is a technology that can be used for other kinds of features like faster search in photos, it’s not currently used for anything else on iPhone aside from CSAM detection. When iCloud Photos is disabled, the feature stops working completely. This offers an opt-out for people but at an admittedly steep cost given the convenience and integration of iCloud Photos with Apple’s operating systems.
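For readers trying to picture what “creating identifiers and comparing them against a database” involves, here is a deliberately simplified Python sketch. It is not Apple’s NeuralHash or its private set intersection protocol (in Apple’s design, each comparison result is sealed inside an encrypted safety voucher, so neither the device nor Apple learns individual outcomes), and every hash value, path and function name below is hypothetical.

```python
# Deliberately simplified, hypothetical sketch of matching image
# identifiers against a database of known hashes. This is NOT Apple's
# NeuralHash pipeline; real deployments never expose raw match results.
import hashlib
from pathlib import Path

# Hypothetical set of known identifiers (hex digests) shipped with the OS.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def image_identifier(path: Path) -> str:
    """Derive an identifier for an image file.

    A perceptual hash such as NeuralHash is robust to resizing and
    re-encoding; a plain cryptographic hash of the file bytes is used
    here only to keep the sketch self-contained.
    """
    return hashlib.sha256(path.read_bytes()).hexdigest()

def matches_known_database(path: Path) -> bool:
    """Return True if the image's identifier appears in the known set."""
    return image_identifier(path) in KNOWN_HASHES

if __name__ == "__main__":
    for photo in Path("photos").glob("*.jpg"):  # hypothetical local library
        if matches_known_database(photo):
            print(f"{photo} matches a known identifier")
```

The interesting part of Apple’s approach is everything this sketch leaves out: the match result stays encrypted on the device, and only an accumulation of matches past a threshold lets Apple learn anything at all.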

Though this interview won’t answer every possible question related to these new features, this is the most extensive on-the-record discussion by Apple’s senior privacy member. It seems clear from Apple’s willingness to provide access and its ongoing FAQs and press briefings (there have been at least three so far and likely many more to come) that it feels it has a good solution here.

Despite the concerns and resistance, it seems as if it is willing to take as much time as is necessary to convince everyone of that. 

This interview has been lightly edited for clarity.

TC: Most other cloud providers have been scanning for CSAM for some time now. Apple has not. Obviously there are no current regulations that say that you must seek it out on your servers, but there is some roiling regulation in the EU and other countries. Is that the impetus for this? Basically, why now?

Erik Neuenschwander: Why now comes down to the fact that we’ve now got the technology that can balance strong child safety and user privacy. This is an area we’ve been looking at for some time, including current state-of-the-art techniques, which mostly involve scanning through the entire contents of users’ libraries on cloud services. That, as you point out, isn’t something that we’ve ever done: to look through users’ iCloud Photos. This system doesn’t change that either; it neither looks through data on the device, nor does it look through all photos in iCloud Photos. Instead what it does is give us a new ability to identify accounts which are starting collections of known CSAM.

So the development of this new CSAM detection technology is the watershed that makes now the time to launch this. And Apple feels that it can do it in a way that it feels comfortable with and that is ‘good’ for your users?

That’s exactly right. We have two co-equal goals here. One is to improve child safety on the platform and the second is to preserve user privacy. And what we’ve been able to do across all three of the features is bring together technologies that let us deliver on both of those goals.

Announcing the Communication Safety in Messages feature and the CSAM detection in iCloud Photos system at the same time seems to have created confusion about their capabilities and goals. Was it a good idea to announce them concurrently? And why were they announced concurrently, if they are separate systems?

Well, while they are [two] systems they are also of a piece along with our increased interventions that will be coming in Siri and search. As important as it is to identify collections of known CSAM where they are stored in Apple’s iCloud Photos service, it’s also important to try to get upstream of that already horrible situation. So CSAM detection means that there’s already known CSAM that has been through the reporting process, and is being shared widely, re-victimizing children on top of the abuse that had to happen to create that material in the first place. And so to do that, I think, is an important step, but it is also important to do things to intervene earlier on when people are beginning to enter into this problematic and harmful area, or if there are already abusers trying to groom or to bring children into situations where abuse can take place, and Communication Safety in Messages and our interventions in Siri and search actually strike at those parts of the process. So we’re really trying to disrupt the cycles that lead to CSAM that then ultimately might get detected by our system.

The process of Apple’s CSAM detection in iCloud Photos system. Image Credits: Apple

Governments and agencies worldwide are constantly pressuring all large organizations that have any sort of end-to-end or even partial encryption enabled for their users. They often lean on CSAM and possible terrorism activities as rationale to argue for backdoors or encryption defeat measures. Is launching the feature and this capability with on-device hash matching an effort to stave off those requests and say, look, we can provide you with the information that you require to track down and prevent CSAM activity — but without compromising a user’s privacy?

So, first, you talked about the device matching so I just want to underscore that the system as designed doesn’t reveal — in the way that people might traditionally think of a match — the result of the match to the device or, even if you consider the vouchers that the device creates, to Apple. Apple is unable to process individual vouchers; instead, all the properties of our system mean that it’s only once an account has accumulated a collection of vouchers associated with illegal, known CSAM images that we are able to learn anything about the user’s account. 

Now, as to why we’re doing it: because, as you said, this is something that will provide that detection capability while preserving user privacy. We’re motivated by the need to do more for child safety across the digital ecosystem, and all three of our features, I think, take very positive steps in that direction. At the same time we’re going to leave privacy undisturbed for everyone not engaged in the illegal activity.

Does this, creating a framework to allow scanning and matching of on-device content, create a framework for outside law enforcement to counter with, ‘we can give you a list, we don’t want to look at all of the user’s data but we can give you a list of content that we’d like you to match’. And if you can match it with this content you can match it with other content we want to search for. How does it not undermine Apple’s current position of ‘hey, we can’t decrypt the user’s device, it’s encrypted, we don’t hold the key?’

It doesn’t change that one iota. The device is still encrypted, we still don’t hold the key, and the system is designed to function on on-device data. What we’ve designed has a device-side component — and it has the device-side component, by the way, for privacy improvements. The alternative of just processing by going through and trying to evaluate users’ data on a server is actually more amenable to changes [without user knowledge], and less protective of user privacy.

Our system involves both an on-device component where the voucher is created, but nothing is learned, and a server-side component, which is where that voucher is sent along with data coming to Apple’s service and processed across the account to learn if there are collections of illegal CSAM. That means that it is a service feature. I understand that it’s a complex attribute that a feature of the service has a portion where the voucher is generated on the device, but again, nothing’s learned about the content on the device. The voucher generation is actually exactly what enables us not to have to begin processing all users’ content on our servers, which we’ve never done for iCloud Photos. It’s those sorts of systems that I think are more troubling when it comes to the privacy properties — or how they could be changed without any user insight or knowledge to do things other than what they were designed to do.

One of the bigger queries about this system is that Apple has said that it will just refuse action if it is asked by a government or other agency to compromise by adding things that are not CSAM to the database to check for them on-device. There are some examples where Apple has had to comply with local law at the highest levels if it wants to operate there, China being an example. So how do we trust that Apple is going to hew to this rejection of interference if pressured or asked by a government to compromise the system?

Well, first, that is launching only for US iCloud accounts, and so the hypotheticals seem to bring up generic countries or other countries that aren’t the US when they speak in that way, and therefore it seems to be the case that people agree US law doesn’t offer these kinds of capabilities to our government.

But even in the case where we’re talking about some attempt to change the system, it has a number of protections built in that make it not very useful for trying to identify individuals holding specifically objectionable images. The hash list is built into the operating system; we have one global operating system and don’t have the ability to target updates to individual users, and so hash lists will be shared by all users when the system is enabled. And secondly, the system requires the threshold of images to be exceeded, so trying to seek out even a single image from a person’s device or set of people’s devices won’t work, because the system simply does not provide any knowledge to Apple for single photos stored in our service. And then, thirdly, the system has built into it a stage of manual review where, if an account is flagged with a collection of illegal CSAM material, an Apple team will review that to make sure that it is a correct match of illegal CSAM material prior to making any referral to any external entity. And so the hypothetical requires jumping through a lot of hoops, including having Apple change its internal process to refer material that is not illegal, like known CSAM, and we don’t believe that there’s a basis on which people will be able to make that request in the US. And the last point that I would just add is that it does still preserve user choice: if a user does not like this kind of functionality, they can choose not to use iCloud Photos, and if iCloud Photos is not enabled, no part of the system is functional.

So if iCloud Photos is disabled, the system does not work, which is the public language in the FAQ. I just wanted to ask specifically, when you disable iCloud Photos, does this system continue to create hashes of your photos on device, or is it completely inactive at that point?

If users are not using iCloud Photos, NeuralHash will not run and will not generate any vouchers. CSAM detection is a neural hash being compared against a database of the known CSAM hashes that are part of the operating system image. None of that piece, nor any of the additional parts including the creation of the safety vouchers or the uploading of vouchers to iCloud Photos is functioning if you’re not using iCloud Photos. 

In recent years, Apple has often leaned into the fact that on-device processing preserves user privacy. And in nearly every previous case that I can think of, that’s true. Scanning photos to identify their content and allow me to search them, for instance. I’d rather that be done locally and never sent to a server. However, in this case, it seems like there may actually be a sort of anti-effect in that you’re scanning locally, but for external use cases, rather than scanning for personal use — creating a ‘less trust’ scenario in the minds of some users. Add to this that every other cloud provider scans on its servers, and the question becomes: why should this implementation, by being different from most others, engender more trust in the user rather than less?

I think we’re raising the bar, compared to the industry-standard way to do this. Any sort of server-side algorithm that’s processing all users’ photos is putting that data at more risk of disclosure and is, by definition, less transparent in terms of what it’s doing on top of the user’s library. So, by building this into our operating system, we gain the same properties that the integrity of the operating system already provides across so many other features: the one global operating system that’s the same for all users who download it and install it. In that one property alone, it is much more challenging even to target the system at an individual user. On the server side that’s actually quite easy — trivial. Being able to have those properties, building it into the device and ensuring it’s the same for all users with the feature enabled, gives a strong privacy property.

Secondly, you point out how the use of on-device technology is privacy preserving, and in this case, that’s a representation that I would make to you, again: it’s really the alternative, where users’ libraries have to be processed on a server, that is less private.

What we can say with this system is that it leaves privacy completely undisturbed for every other user who’s not into this illegal behavior: Apple gains no additional knowledge about any user’s cloud library. No user’s cloud library has to be processed as a result of this feature. Instead, what we’re able to do is create these cryptographic safety vouchers. They have mathematical properties that say Apple will only be able to decrypt the contents, or learn anything about the images and users, specifically for those that collect photos that match illegal, known CSAM hashes — and that’s just not something anyone can say about a cloud-processing scanning service, where every single image has to be processed in a clear, decrypted form and run by a routine to determine who knows what. At that point it’s very easy to determine anything you want [about a user’s images], versus our system, where it’s only what is determined to be those images that match a set of known CSAM hashes that came directly from NCMEC and other child safety organizations.

Can this CSAM detection feature stay holistic when the device is physically compromised? Sometimes cryptography gets bypassed locally, somebody has the device in hand — are there any additional layers there?

I think it’s important to underscore how very challenging and expensive and rare this is. It’s not a practical concern for most users, though it’s one we take very seriously, because the protection of data on the device is paramount for us. And so if we engage in the hypothetical where we say that there has been an attack on someone’s device: that is such a powerful attack that there are many things that that attacker could attempt to do to that user. There’s a lot of a user’s data that they could potentially get access to. And the idea that the most valuable thing an attacker — who’s undergone such an extremely difficult action as breaching someone’s device — would want is to trigger a manual review of an account doesn’t make much sense.

Because, let’s remember, even if the threshold is met and we have some vouchers that are decrypted by Apple, the next stage is a manual review to determine if that account should be referred to NCMEC or not, and that is something that we want to only occur in cases where it’s a legitimate high-value report. We’ve designed the system in that way, but if we consider the attack scenario you brought up, I think that’s not a very compelling outcome to an attacker.

Why is there a threshold of images for reporting, isn’t one piece of CSAM content too many?

We want to ensure that the reports that we make to NCMEC are high value and actionable, and one of the notions of all systems is that there’s some uncertainty built in to whether or not that image matched. And so the threshold allows us to reach that point where we expect a false reporting rate for review of one in 1 trillion accounts per year. So, working against the idea that we do not have any interest in looking through users’ photo libraries outside those that are holding collections of known CSAM, the threshold allows us to have high confidence that those accounts that we review are ones that, when we refer to NCMEC, law enforcement will be able to take up and effectively investigate, prosecute and convict.
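To make the threshold logic concrete, here is a back-of-the-envelope Python sketch of how requiring many matches drives down the chance of a purely accidental flag. The per-image false-match probability, library size and threshold values below are hypothetical placeholders, not Apple’s published parameters; the point is only that the account-level error rate falls off extremely fast as the threshold rises.

```python
# Back-of-the-envelope: probability that an account crosses a match
# threshold purely by chance. All parameters here are hypothetical
# placeholders, not Apple's actual figures.
from math import comb

def prob_at_least(n_images: int, p_false: float, threshold: int, terms: int = 200) -> float:
    """Upper-tail binomial probability of at least `threshold` false matches
    among `n_images` photos, assuming independent per-image errors.
    Only the first `terms` tail terms are summed; later terms are negligible
    for the small probabilities used here."""
    upper = min(n_images, threshold + terms)
    return sum(
        comb(n_images, k) * p_false**k * (1 - p_false)**(n_images - k)
        for k in range(threshold, upper + 1)
    )

if __name__ == "__main__":
    n, p = 10_000, 1e-6  # hypothetical library size and per-image false-match rate
    for t in (1, 10, 30):
        print(f"threshold = {t:>2}: chance of an accidental flag ~ {prob_at_least(n, p, t):.3e}")
```

With these made-up numbers, a threshold of one would wrongly flag roughly one account in a hundred, while a threshold of 30 pushes the figure far below one in a trillion, which is the kind of margin Apple says it is targeting.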

10 Aug 2021

Jerry raises $75M at a $450M valuation to build a car ownership ‘super app’

Just months after raising $28 million, Jerry announced today that it has raised $75 million in a Series C round that values the company at $450 million.

Existing backer Goodwater Capital doubled down on its investment in Jerry, leading the “oversubscribed” round. Bow Capital, Kamerra, Highland Capital Partners and Park West Asset Management also participated in the financing, which brings Jerry’s total raised to $132 million since its 2017 inception. Goodwater Capital also led the startup’s Series B earlier this year. Jerry’s new valuation is about “4x” that of the company at its Series B round, according to co-founder and CEO Art Agrawal.

“What factored into the current valuation is our annual recurring revenue, growing customer base and total addressable market,” he told TechCrunch, declining to be more specific about ARR other than to say it is growing “at a very fast rate.” He also said the company “continues to meet and exceed growth and revenue targets” with its first product, a service for comparing and buying car insurance. At the time of the company’s last raise, Agrawal said Jerry saw its revenue surge by “10x” in 2020 compared to 2019.

Jerry, which says it has evolved its model to a mobile-first car ownership “super app,” aims to save its customers time and money on car expenses. The Palo Alto-based startup launched its car insurance comparison service using artificial intelligence and machine learning in January 2019. Since then, it has quietly amassed nearly 1 million customers across the United States as a licensed insurance broker.

“Today as a consumer, you have to go to multiple different places to deal with different things,” Agrawal said at the time of the company’s last raise. “Jerry is out to change that.”

The new funding round fuels the launch of the company’s “compare-and-buy” marketplaces in new verticals, including financing, repair, warranties, parking, maintenance and “additional money-saving services.” Although Jerry also offers a similar product for home insurance, its focus is on car ownership.

Agrawal told TechCrunch that the company is on track to triple last year’s policy sales, and that its policy sales volume makes Jerry the number one broker for a few of the top 10 insurance carriers.

“The U.S. auto insurance industry is an at least $250 billion market,” he added. “The market opportunity for our first auto financing service is $260 billion. As we enter more car expense categories, our total addressable market continues to grow.”

Image Credits: Jerry

“Access to reliable and affordable transportation is critical to economic empowerment,” said Rafi Syed, Jerry board member and general partner at Bow Capital, which also doubled down on its investment in the company. “Jerry is helping car owners make the most of every dollar they earn. While we see Jerry as an excellent technology investment showcasing the power of data in financial services, it’s also a high-performing investment in terms of the financial inclusion it supports.” 

Goodwater Capital Partner Chi-Hua Chien said Jerry’s recurring revenue model makes the company stand out from lead generation-based car insurance comparison sites.

CEO Agrawal agrees, noting that Jerry’s high-performing annual recurring revenue model has made the company “attractive to investors” in addition to the fact that the startup “straddles” the auto, e-commerce, fintech and insurtech industries.

“We recognized those investment opportunities could drive our business faster and led to raising the round earlier than expected,” he told TechCrunch. “We’re eager to launch new categories to save customers time and money on auto expenses and the new investment shortens our time to market.”

Agrawal also believes Jerry is different from other auto-related marketplaces out there in that it aims to help consumers with various aspects of car ownership (from repair to maintenance to insurance to warranties), rather than just one. The company also believes it is set apart from competitors in that it doesn’t refer a consumer to an insurance carrier’s site, where they would still have to do the work of signing up separately. Rather, Jerry uses automation to give consumers customized quotes from more than 45 insurance carriers “in 45 seconds.” Consumers can then sign on to the new carrier via Jerry, which can also cancel their former policies on their behalf.

Jerry makes recurring revenue from earning a percentage of the premium when a consumer purchases a policy on its site from carriers such as Progressive.

10 Aug 2021

Motional to begin testing autonomous vehicles in LA as part of California expansion plan

Motional, the autonomous vehicle company born out of a $4 billion joint venture between Aptiv and Hyundai, is expanding its presence in California by opening a new operations facility in Los Angeles to support testing on public roads, hiring more engineers and adding an office in Silicon Valley.

The investment into the area follows a hiring spree that has pushed Motional’s total headcount to more than 1,000 people, an expansion into Seoul and its announcement last December to launch fully driverless robotaxi services in major U.S. cities in 2023 using the Lyft ride-hailing network.

While Motional declined to disclose its investment into the California expansion, the company is clearly putting its capital to work with plans to hire dozens of people and scale up operations in Los Angeles and the San Francisco Bay Area.

Motional has had an office in Los Angeles since 2016. The LA office is where some of the company’s machine learning and hardware engineers are based. As part of its expansion plan, Motional has moved the team into a larger location in Santa Monica near the Santa Monica Pier.

Motional is also opening a new operations facility located a few miles away and plans to more than double the number of employees based in Los Angeles to more than 100 people. The operations facility will support Motional’s plans to begin mapping roads and eventually testing its autonomous vehicles on public roads. Testing routes will initially be centered in and around the Santa Monica area, near its office and operations facility.

Motional said it will use the all-electric Hyundai IONIQ 5, the vehicle that will be the cornerstone of its eventual commercial robotaxi service, in its testing there. The Hyundai IONIQ 5, which was revealed in February 2021 with a consumer release date expected later this year, will be fully integrated with Motional’s driverless system. The vehicles will be equipped with the hardware and software needed for Level 4 autonomous driving capabilities, such as lidar, radar and cameras. Level 4 is an SAE designation meaning the vehicle can handle all driving operations under certain conditions and environments.

For now, the testing will involve autonomous vehicles with a safety driver behind the wheel. The company does not yet have a permit in the state to test its AVs without a human operator behind the wheel; that permit is issued by the California Department of Motor Vehicles.

This is the first time the company has tested on public roads in Los Angeles. Motional already tests its AVs in Boston, Las Vegas, Pittsburgh and Singapore.

Motional’s President and CEO Karl Iagnemma described this as a “doubling down” of its West Coast footprint. “This expansion is the latest in our growth trajectory and will position Motional with the talent, testing capabilities, and R&D resources we need to deliver on our commercialization roadmap,” Iagnemma said, adding that Los Angeles has long been an important part of its global operations.

Motional has also opened an office in Milpitas, a Silicon Valley town located in the southern section of the San Francisco Bay. The company’s compute design team will be based out of this office, Motional said in its announcement.

10 Aug 2021

Oh hey, Xiaomi has its own creepy robot dog now

Xiaomi has today announced the CyberDog, an open-source quadruped robot intended for developers to “build upon” and create applications for. The machine, which resembles a beefier version of Boston Dynamics’ Spot, is a showcase for Xiaomi’s engineering know-how, including its proprietary servo motors.

Xiaomi CyberDog. Image Credits: Xiaomi

Running the show is a version of NVIDIA’s Jetson Xavier NX, which has been dubbed the world’s smallest AI supercomputer. In terms of being able to experience the world, CyberDog has 11 sensors over its body, including touch and ultrasonic sensors, cameras and GPS to help it “interact with its environment.”

Xiaomi says that this technology is good enough to enable CyberDog to follow its owner and navigate around obstacles. It is also capable of identifying posture and tracking human faces, enabling it to pick out and track individuals in a group.

Render of the Xiaomi CyberDog in all of its terrifying glory. Image Credits: Xiaomi

Rather than selling this as a general-sale product, the company is for now going to release 1,000 of these things to “Xiaomi fans, engineers and robotics enthusiasts to jointly explore the immense possibility of CyberDog.” This will be facilitated by an open-source community, hosted by Xiaomi, which may be followed by the construction of a robotics laboratory to lay a pathway for “future innovations.”

Of course, this thing isn’t cheap, and those folks willing to get involved will need to shell out 9,999 yuan or around $1,540 to get one of these for themselves.

Editor’s note: This post first appeared on Engadget.

10 Aug 2021

With liberty and privacy for some: Widening inequality on the digital frontier

Privacy is emotional — we often value privacy the most when we feel vulnerable or powerless when confronted with creepy data practices. But in the eyes of the court, emotions don’t always constitute harm or a reason for structural change in how privacy is legally codified.

It might take a material perspective on widening privacy disparities — and their implication in broader social inequality — to catalyze the privacy improvements the U.S. desperately needs.

Apple’s leaders announced their plans for the App Tracking Transparency (ATT) update in 2020. In short, iOS users can refuse an app’s ability to track their activity on other apps and websites. The ATT update has led to a sweeping three-quarters of iOS users opting out of cross-app tracking.


With less data available to advertisers looking to develop individual profiles for targeted advertising, targeted ads for iOS users look less effective and appealing to ad agencies. As a result, new findings show that advertisers have cut their spending on iOS devices by one-third.

They are redirecting that capital into advertising on Android systems, which account for 42.06% of the mobile OS market share, compared to iOS at 57.62%.

Beyond a vague sense of creepiness, privacy disparities increasingly pose risks of material harm: emotional, reputational, economic and otherwise. If privacy belongs to all of us, as many tech companies say, then why does it cost so much? Whenever one user base gears up with privacy protections, companies simply redirect their data practices along the path of least resistance, toward the populations with fewer resources, legal or technical, to control their data.

More than just ads

As more money goes into Android ads, we could expect advertising techniques to become more sophisticated, or at least more aggressive. It is not illegal for companies to engage in targeted advertising, so long as it is done in compliance with users’ legal rights to opt out under relevant laws like CCPA in California.

This raises two immediate issues. First, residents of every state except California currently lack such opt-out rights. Second, granting some users the right to opt out of targeted advertising strongly implies that there are harms, or at least risks, to targeted advertising. And indeed, there can be.

Targeted advertising involves third parties building and maintaining behind-the-scenes profiles of users based on their behavior. Gathering data on app activity, such as fitness habits or shopping patterns, could lead to further inferences about sensitive aspects of a user’s life.

At this point, a representation of a user exists in an under-regulated data system containing — whether correctly or incorrectly inferred — data that the user did not consent to sharing. (Unless the user lives in California, but let’s suppose they live anywhere else in the U.S.)

Further, research finds that targeted advertising, in building detailed profiles of users, can enact discrimination in housing and employment opportunities, sometimes in violation of federal law. And targeted advertising can impede individuals’ autonomy, preemptively narrowing their window of purchasing options whether they want that or not. On the other hand, targeted advertising can support niche or grassroots organizations in connecting them directly with interested audiences. Regardless of one’s stance on targeted advertising, the underlying problem is that users have no say in whether they are subject to it.

Targeted advertising is a massive and booming practice, but it is only one practice within a broader web of business activities that do not prioritize respect for users’ data. And these practices are not illegal in much of the U.S. Instead of the law, your pocketbook can keep you clear of data disrespect.

Privacy as a luxury

Prominent tech companies, particularly Apple, declare privacy a human right, which makes complete sense from a business standpoint. In the absence of the U.S. federal government codifying privacy rights for all consumers, a bold privacy commitment from a private company sounds pretty appealing.

If the government isn’t going to set a privacy standard, at least my phone manufacturer will. Even though only 6% of Americans claim to understand how companies use their data, it is companies that are making the broad privacy moves.

But if those declaring privacy as a human right only make products affordable to some, what does that say about our human rights? Apple products skew toward wealthier, more educated consumers compared to competitors’ products. This projects a troubling future of increasingly exacerbated privacy disparities between the haves and the have-nots, where a feedback loop is established: Those with fewer resources to acquire privacy protections may have fewer resources to navigate the technical and legal challenges that come with a practice as convoluted as targeted advertising.

Don’t take this as me siding with Facebook in its feud with Apple about privacy versus affordability (see: systemic access control issues recently coming to light). In my view, neither side of that battle is winning.

We deserve meaningful privacy protections that everyone can afford. In fact, to turn the phrase on its head, we deserve meaningful privacy protections that no company can afford to omit from their products. We deserve a both/and approach: privacy that is both meaningful and widely available.

Our next steps forward

Looking ahead, there are two key areas for privacy progress: privacy legislation and privacy tooling for developers. I again invoke the both/and approach. We need lawmakers, rather than tech companies, setting reliable privacy standards for consumers. And we need widely available developer tools that give developers no reason — financial, logistical or otherwise — not to implement privacy at the product level.

On privacy legislation, I believe that policy professionals are already raising some excellent points, so I’ll direct you to some of my favorite recent writing from them.

Stacey Gray and her team at the Future of Privacy Forum have begun an excellent blog series on how a federal privacy law could interact with the emerging patchwork of state laws.

Joe Jerome published an outstanding recap of the 2021 state-level privacy landscape and the routes toward widespread privacy protections for all Americans. A key takeaway: The effectiveness of privacy regulation hinges on how well it harmonizes among individuals and businesses. That’s not to say that regulation should be business-friendly, but rather that businesses should be able to reference clear privacy standards so they can confidently and respectfully handle everyday folks’ data.

On privacy tooling, if we make privacy tools readily accessible and affordable for all developers, we really leave tech with zero excuses to meet privacy standards. Take the issue of access control, for instance. Engineers attempt to build manual controls over which personnel and end users can access various data in a complex data ecosystem already populated with sensitive personal information.

The challenge is twofold. First, the horse has already bolted. Technical debt accumulates rapidly, while privacy has remained outside of software development. Engineers need tools that enable them to build privacy features like nuanced access control prior to production.

This leads into the second aspect of the challenge: Even if the engineers overcame all of the technical debt and could make structural privacy improvements at the code level, what standards and widely available tools are there for them to use?

As a June 2021 report from the Future of Privacy Forum makes clear, privacy technology is in dire need of consistent definitions, which are required for widespread adoption of trustworthy privacy tools. With more consistent definitions and widely available developer tools for privacy, these technical transformations translate into material improvements in how tech at large — not just tech of Brand XYZ — gives users control over their data.
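Returning to the access-control example above: as a rough illustration of the kind of nuanced, field-level control engineers need to build before production, here is a minimal Python sketch of a purpose-based check. The roles, purposes and policy structure are hypothetical, and a real privacy tool would centralize, enforce and audit these decisions rather than leaving them to each application.

```python
# Hypothetical sketch of field-level, purpose-based access control over
# personal data. A production privacy tool would centralize, enforce and
# audit these decisions; this only shows the shape of the check.
from dataclasses import dataclass

# Which personal-data fields each (role, purpose) pair may read.
POLICY = {
    ("support_agent", "resolve_ticket"): {"email", "order_history"},
    ("analyst", "aggregate_reporting"): {"zip_code"},
}

@dataclass
class AccessRequest:
    role: str      # who is asking
    purpose: str   # why they are asking
    field: str     # which piece of personal data they want

def is_allowed(req: AccessRequest) -> bool:
    """Allow access only if the role/purpose pair explicitly covers the field."""
    return req.field in POLICY.get((req.role, req.purpose), set())

if __name__ == "__main__":
    print(is_allowed(AccessRequest("support_agent", "resolve_ticket", "email")))   # True
    print(is_allowed(AccessRequest("analyst", "aggregate_reporting", "email")))    # False
```

Retrofitting even a check this simple onto years of accumulated data flows is exactly the technical-debt problem described above, which is why such controls need to exist before code ships.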

We need privacy rules set by an institution that is not itself playing the game. Regulation alone cannot save us from modern privacy perils, but it is a vital ingredient in any viable solution.

Alongside regulation, every software engineering team should have privacy tools immediately available. When civil engineers are building a bridge, they cannot make it safe for a subset of the population; it must work for all who cross it. The same must hold for our data infrastructure, lest we exacerbate disparities within and beyond the digital realm.

10 Aug 2021

Robinhood buys Say Technologies for $140M to improve shareholder-company relations

U.S. consumer investing and trading service Robinhood announced this morning that it will acquire Say Technologies in a $140 million cash deal.

Say Technologies is a venture-backed startup, having raised $8 million in 2018, per Crunchbase data. PitchBook data indicates that the company was worth $28 million on a post-money basis following the investment, implying that the company’s backers managed a roughly 5x return on their investment.

Say was backed by Point72 Ventures, among other investors.

The deal is notable because it is Robinhood’s first major purchase since going public in late July, and because it illustrates where Robinhood may look to invest some of its newly liquid equity wealth; when a company goes public, it can more easily purchase other companies thanks to recharged cash balances and a floating stock.

In a blog post, Robinhood wrote that “Say was built on the belief that everyone should have the same access to the financial markets as Wall Street insiders.” What does that mean? In practice, Say has built a communications platform that allows even smaller shareholders to pose questions to the companies in which they invest. Sure, some companies are including retail questions in their earnings calls, but what Say has in mind is broader.

You can see how Say and Robinhood might fit together. Robinhood has a huge user pool of retail investors who like to trade and invest. Say has the technology to connect retail investors to the companies that they own. With Robinhood’s database of which retail investor owns what, and Say’s communications tech, the trading platform may be able to offer a better shareholder experience than what rival platforms can offer.

By offering a service like what Say has built to its user base, Robinhood can offer a unique twist on retail investing. This feels somewhat analogous to Spotify spending heavily to procure exclusive rights to certain podcasts; such efforts differentiate Spotify from rivals despite having a commoditized core offering. Trading is now free in many places, so Robinhood layering specialized services on top of its investing service makes good sense, perhaps helping drive user loyalty and net new-user adds.

Shares of Robinhood are off around 1.2% today, despite generally higher markets. We might say that investors were selling lightly in the wake of the news, but that would be a somewhat bold read of the day’s trading.

10 Aug 2021

Venmo to allow credit cardholders to automatically buy cryptocurrency with their cash back

PayPal-owned Venmo is expanding its support for cryptocurrency with today’s launch of a new feature that will allow users to automatically buy cryptocurrency using the cash back they earned from their Venmo credit card purchases. Unlike when buying cryptocurrency directly, these automated purchases will have no transaction fees associated with them — a feature Venmo says is not a promotion, but how the system will work long term. Instead, a cryptocurrency conversion spread is built into each monthly transaction.

Cardholders will be able to purchase Bitcoin, Ethereum, Litecoin and Bitcoin Cash through the new “Cash Back to Crypto” option, rolling out now to the Venmo app.

Venmo had first introduced the ability for customers to buy, hold and sell cryptocurrency in April of this year, as part of a larger investment in cryptocurrency led by parent company, PayPal. In partnership with Paxos Trust Company, a regulated provider of cryptocurrency products and services, Venmo’s over 70 million users are now able to access cryptocurrency from within the Venmo app. 

The cash back feature, meanwhile, could help drive sign-ups for the Venmo Credit Card, by interlinking it with the cryptocurrency functionality. Currently, Venmo cardholders can earn monthly cash back across eight different spending categories, with up to 3% back on their top eligible spending category, then 2% and 1% back on the second highest and all other purchases, respectively. The top two categories are adjusted monthly, based on where consumers are spending the most.

To enable Cash Back to Crypto, Venmo customers will navigate to the Venmo Credit Card home screen in the app, select the Rewards tab, then “Get Started.” From here, they’ll agree to the terms, select the crypto of their choice, and confirm their selection. Once enabled, when the cash back funds hit the customer’s Venmo balance, the money is immediately used to make a crypto purchase — no interaction on the user’s part is required.

The feature will not include any transaction fees, as a cryptocurrency conversion spread is built into each monthly transaction. This is similar to how PayPal is handling Checkout with Crypto, which allows online shoppers to make purchases using their cryptocurrency. The cryptocurrency is converted to fiat, but there are no transaction fees.
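For a rough sense of how a conversion spread compares with an explicit transaction fee, here is a small hypothetical calculation in Python. The spread and fee percentages, cash-back amount and Bitcoin price below are invented for the example, since neither Venmo nor PayPal publishes the exact spread applied here.

```python
# Hypothetical comparison of an explicit transaction fee versus a
# conversion spread baked into the exchange rate. All numbers are
# examples, not Venmo's actual pricing.
def crypto_bought_with_fee(cash_back: float, price: float, fee_pct: float) -> float:
    """Explicit fee: a percentage is deducted, the rest converts at the market price."""
    return cash_back * (1 - fee_pct) / price

def crypto_bought_with_spread(cash_back: float, price: float, spread_pct: float) -> float:
    """Spread: the full amount converts, but at a slightly worse price."""
    return cash_back / (price * (1 + spread_pct))

if __name__ == "__main__":
    cash_back, btc_price = 30.00, 45_000.00  # hypothetical monthly cash back and BTC price
    print(f"with 1.5% fee:    {crypto_bought_with_fee(cash_back, btc_price, 0.015):.8f} BTC")
    print(f"with 1.5% spread: {crypto_bought_with_spread(cash_back, btc_price, 0.015):.8f} BTC")
```

Either way the customer ends up with slightly less crypto than a fee-free conversion at the market rate would deliver; the difference is whether the cost shows up as a separate line item or is folded into the exchange rate.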

The feature can also be turned on or off at any time, Venmo notes.

The company views Cash Back to Crypto as a way for newcomers to cryptocurrency to enter the market, without having to worry about the process of making crypto purchases. It’s more of a set-it-and-forget-it type of feature. However, unless users make regular and frequent transactions with their Venmo Credit Card, these cash back-enabled crypto purchases will likely be fairly small.

The company has yet to offer details on how many Venmo credit cardholders are active in the market. So far, PayPal CEO Dan Schulman has only said, during Q1 earnings, that the card “is outpacing our expectations for both new accounts and transactions.” This past quarter, the exec noted that the company was also seeing “strong adoption and trading of crypto on Venmo.”

“The introduction of the Cash Back to Crypto feature for the Venmo Credit Card offers customers a new way to start exploring the world of crypto, using their cash back earned each month to automatically and seamlessly purchase one of four cryptocurrencies on Venmo,” noted Darrell Esch, SVP and GM at Venmo, in a statement. “We’re excited to bring this new level of feature interconnectivity on the Venmo platform, linking our Venmo Credit Card and crypto experiences to provide another way for our customers to spend and manage their money with Venmo,” he added.

The new option will begin rolling out starting today to Venmo Credit Cardholders.

10 Aug 2021

Surfside, a marketing technology for the cannabis space, inhales $4 million

Surfside, a data and marketing platform aimed directly at the cannabis industry, has today announced the close of a $4 million seed round led by Casa Verde. The firm’s managing partner, Karan Wadhera, will join the Surfside board.

The startup is relatively new, but helmed by two founders experienced in the marketing and data space. Jon Lowen and Michael Blanche both hail from SITO Mobile, where they worked on location-based advertising and marketing via mobile devices.

The duo has built a platform that uses many of the same principles to bring better marketing acumen and strategies to businesses in the cannabis space.

One of the major marketing challenges for dispensaries and cannabis brands is that the usual channels through which small brands advertise — Facebook and Google — are not available to them. Not only are those powerful marketing channels, but they also provide valuable analytics about potential and existing customers.

Surfside looks to fill that gap in the cannabis space, using a combination of existing data, publicly available data, loyalty program data and more to help these brands better understand their customers.

The startup creates profiles of customers, looking to understand their location, their other interests, and other important details that can help brands and businesses market and advertise to both existing and new customers.

But Surfside actually goes a step further and develops campaigns for these businesses, allowing them to work one-on-one with a marketing expert to plan and activate them.

Originally, Surfside generated revenue via the campaign itself, taking a cut of the spend.

“We’re almost an extension of the marketing departments at a dispensary,” said Lowen. “Small dispensaries don’t necessarily have the bandwidth to run these campaigns or log into extra software. Our team can help plan, execute and get the most value out of the data for them. Now, we want to start empowering brands, retailers, dispensaries as they progress to be able to have ownership of the data, with these consumer insights and research at their fingertips, and give them the ability to decide if they want to activate on their own or continue using the experts we have here.”

With this new financing, the company is adding another revenue stream through its data platform, allowing cannabis businesses to purchase a SaaS license. This way, businesses who’d like to use the data but develop their own campaigns and marketing strategies have the option to do so.

Surfside believes that its technology can expand well beyond cannabis and into other verticals, but sees an opportunity in the cannabis space to grow and build alongside the burgeoning industry.

Moreover, the complicated regulatory landscape in the cannabis space allows Surfside to become a compliance monitor, as well, which may end up being yet another revenue stream for the company.