Month: October 2018

01 Oct 2018

Facebook breach hit up to 5M EU users, and it faces up to $1.63B in fines

Less than 10 percent of the 50 million users attacked in Facebook’s recent breach lived in the European Union, tweeted the Irish Data Protection Commission, which oversees privacy in the region. However, Facebook could still be liable for up to $1.63 billion in fines, or 4 percent of its $40.7 billion in global revenue for the prior financial year, if the EU determines it didn’t do enough to protect the security of its users.

Facebook wrote in response to the IDPC’s tweet that “We’re working with regulators including the Irish Data Protection Commission to share preliminary data about Friday’s security issue. As we work to confirm the location of those potentially affected, we plan to release further info soon.”

Facebook alerted regulators and the public to the breach Friday morning after discovering it Tuesday afternoon. That timing matters because it came in under GDPR’s 72-hour deadline for disclosing hacks; missing that deadline can trigger an additional fine of up to 2 percent of a company’s global revenue.

That hack saw sophisticated attackers combine three bugs in Facebook’s profile, privacy, and video-uploading features to steal the access tokens of 50 million users. These access tokens could allow the attackers to take over user accounts and act as those users on Facebook, Instagram, Oculus, and other sites that rely on Facebook’s login system. The EU’s GDPR threatens heavy fines for improper security practices and is seen as stricter than US law, so the EU’s findings during this investigation carry weight.

The big question remains what data was stolen and how it could be misused. Unless investigators or journalists discover a nefarious application for that data, as when Cambridge Analytica’s ill-gotten data was used to inform Donald Trump’s campaign strategy, the public is unlikely to see this as more than just another of Facebook’s constant privacy scandals. It could still trigger regulation, or push partners away from using Facebook’s login system, but the world seems to be growing numb to the daily cybersecurity breaches that plague the internet.

01 Oct 2018

Why Blissfully decided to go all in on serverless

Serverless has become a big buzzword of late, and with good reason. It has the potential to completely alter how developers write code. They simply write a series of event triggers, while letting the cloud vendor worry about providing whatever compute resources are required to complete the job. It represents a huge shift in how programs are developed, but because the approach is fairly new, it has been difficult to find companies that were built from the ground up on it.

Blissfully, a startup that helps customers manage their Software as a Service usage inside their companies, is one company that decided to do just that. Aaron White, co-founder and CTO, says that when he was building early versions of Blissfully, he found he needed quick bursts of compute power to deliver a list of all the SaaS products an organization is using.

He figured he could set aside a bunch of servers to provide that burst of power as needed, but that would have required a ton of overhead on his part to manage. At this point, he was a lone programmer trying to prove his SaaS management idea was even possible. As he looked at the pros and cons of serverless versus traditional virtual machines, he began to see serverless as a viable approach.

What he learned along the way was that serverless offers many advantages to a company with a bursty workload like Blissfully’s, scaling up and down as needed. But it isn’t perfect: there are issues around management, tooling, and handling the double-edged nature of that scaling ability that he had to learn about on the fly, especially coming to the approach as early as he did.

Serverless makes sense

Blissfully is a service where serverless made a lot of sense. It wouldn’t have to manage or pay for servers it wasn’t using. Nor would it have to worry about the underlying infrastructure at all. That would be up to the cloud provider, and it would only pay for the bursts as they happened.

Serverless is actually a misnomer, in that it doesn’t mean there are no servers. It means you don’t have to set up servers yourself in order to run your program, which is a pretty mind-blowing transformation. In traditional programming, you have to write your code and set up all the underlying hardware ahead of time, whether in your own data center or in the cloud. With serverless, you just write the code and the cloud provider handles all of that for you.

The way it works in practice is that programmers set up a series of event triggers, so when a certain thing happens, the cloud provider sees it and provides the necessary resources on demand. Most of the major cloud vendors offer this type of service, whether AWS Lambda, Azure Functions or Google Cloud Functions.
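As a rough illustration, a minimal event-triggered function on AWS Lambda might look like the following sketch in TypeScript (the event shape and handler logic here are hypothetical placeholders, not Blissfully’s actual code):

    // A minimal AWS Lambda-style handler. The cloud provider invokes it in
    // response to an event (an HTTP request, a queue message, a scheduled
    // timer) and provisions the compute on demand; the developer never
    // sets up a server.
    interface ScanEvent {
      organizationId?: string; // hypothetical field identifying a customer
    }

    export const handler = async (event: ScanEvent) => {
      // Hypothetical work: kick off a scan of an organization's SaaS usage.
      console.log(`Scanning SaaS usage for org ${event.organizationId ?? "unknown"}`);
      return { statusCode: 200, body: "scan started" };
    };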

At this point, White began to think about serverless as a way of freeing himself from managing and maintaining infrastructure and all that entailed. “I started thinking, let’s see how far we can take this. Can we really do absolutely everything serverless? And if so, that reduces a ton of traditional DevOps-style work you have to do in practice. There’s still plenty, but that was the thinking at the beginning,” he said.

Overcoming obstacles

But there were issues, especially getting into serverless as early as he did. For starters, White needed to find developers who could work in this fashion, and in 2016, when Blissfully launched, there weren’t many people out there with serverless skills. White said he wasn’t looking for direct experience so much as people who were curious to learn and flexible enough to deal with new technology, however Blissfully ended up implementing it.

Once he figured out the basics, he needed to think about how this would work structurally. “Part of the challenge is figuring out where do you draw the boundaries between different serverless functions? How do you think about how much you want to overload the capability of one function versus another? How do you want to split it up? You could go way too specific, and you can of course, go way too broad. So there’s a lot of judgement calls to be made in terms of how you want to split your code base to work in this way,” he said.

The other challenge he faced going serverless so early was a dearth of tooling around it. White found Serverless, Inc. right away, which helped him with a basic framework for developing, but he lacked good logging tools, and he says the company still struggles with this even now. “DevOps doesn’t go away. This is still running on a server somewhere (even if you don’t control that) and you will run into issues.” One such issue is what he calls the “cold start issue.”

Getting the resources right

Blissfully uses AWS Lambda, and as its customers require resources, it isn’t as though Amazon has a set of dedicated resources sitting idle, waiting for such an event. If Lambda has to start servers cold, that can mean latency. To compensate, Blissfully runs a job that pings Lambda continually, so that it’s always ready to run the actual application and there’s no lag from starting from scratch.
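A common version of that trick (shown here as a hedged sketch, not Blissfully’s actual code) is to schedule a recurring “warmer” event, say from a CloudWatch scheduled rule, and have the handler return early when it sees one, so a container stays provisioned:

    // Keep-warm pattern: a scheduled ping arrives every few minutes with a
    // marker flag; the handler short-circuits on it, which keeps a Lambda
    // container alive without doing real work. The flag name is made up.
    interface AppEvent {
      warmer?: boolean;  // set only by the scheduled ping
      payload?: string;  // real requests arrive with a payload
    }

    export const handler = async (event: AppEvent) => {
      if (event.warmer) {
        return { warmed: true }; // nothing to do; container stays hot
      }
      // ...handle the actual request here...
      return { ok: true, payload: event.payload };
    };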

The other issue is the opposite problem: you can scale much faster than you’re ready to deal with, and that can be a problem for a small team. In that case, he says, you want to put a limiter on the speed of the calls so you don’t end up spending more than you can afford, and so the system doesn’t scale beyond your team’s ability to manage it. “I think, in some ways, this actually accelerates you running into problems where you would normally be larger scale before you really had to think about them,” White said.

The other piece is that once Lambda gets everything going, it can move data faster than external APIs can handle, which can require limiters to actually slow things down. “I never had that problem in the past, where I was provisioning so many computational resources that Google was yelling at me for being too fast. Being too fast for Google takes a lot of effort, but it doesn’t take a lot of effort with Lambda. When it does decide to spool up whatever resources, you can do some serious outbound damage to other APIs.” That meant he and his team had to think very early on about building sophisticated rate-limiting schemes.
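In practice, a rate-limiting scheme can start as simply as a limiter in front of every outbound call. The sketch below (a generic pattern with made-up numbers, not Blissfully’s implementation) spaces requests so a burst of Lambda work can’t hammer an external API:

    // Spaces outbound calls so they never exceed a configured rate.
    // Note: each Lambda instance gets its own limiter, so a truly global
    // cap across instances needs shared state (a queue, Redis, etc.).
    class RateLimiter {
      private nextSlot = Date.now();
      constructor(private minIntervalMs: number) {}

      async acquire(): Promise<void> {
        const now = Date.now();
        const wait = Math.max(0, this.nextSlot - now);
        this.nextSlot = Math.max(now, this.nextSlot) + this.minIntervalMs;
        if (wait > 0) {
          await new Promise((resolve) => setTimeout(resolve, wait));
        }
      }
    }

    // Usage: at most ~10 calls per second to a hypothetical external API.
    const limiter = new RateLimiter(100);
    async function fetchFromExternalApi(url: string) {
      await limiter.acquire();
      return fetch(url); // the actual outbound request
    }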

As for costs, White estimates they are far lower now that the service is built and in place. “Our costs are so low right now, and far lower than if we had server-based infrastructure. Our computational pattern is very bursty.” That’s because Blissfully re-parses the SaaS database once a day, or when a customer first signs up; in between, usage is fairly low beyond interacting with the data.

“So for us that was perfect for serverless because I don’t really need to keep capacity around that would be pure waste.”

01 Oct 2018

Uber’s JUMP bike fleet may soon double in size in SF

If you live in San Francisco, expect to see more of those bright orange electric bikes on the road in the coming weeks. JUMP, the Uber-owned electric bike-share service that is about halfway through its 18-month bike-share pilot in San Francisco, may deploy an additional 250 bikes in the city. The pilot initially enabled JUMP to deploy 250 bikes with the potential to deploy an additional 250, if the first nine months went well.

Next week, the San Francisco Municipal Transportation Agency’s director of transportation is expected to make a formal decision on the expansion. SFMTA staff, for its part, is recommending the city allow JUMP to deploy another 250 bikes.

Since deploying the bikes in January, JUMP has clocked more than 326,000 total trips across 38,000 unique riders, with about 2,250 trips taken each weekday. The average JUMP bike gets used eight to ten times a day, with an average trip length of 2.6 miles.

JUMP bikes operate in tandem with Motivate’s Ford GoBike system, which includes both regular pedal and pedal-assist bikes. Based on the SFMTA’s preliminary conclusions, there is high demand for shared electric bikes, and the two systems seem to serve different trip lengths, origins and destinations.

In San Francisco, there are 1,200 Ford GoBikes with about 5,500 active riders, good for about 6,000 trips per weekday. While each JUMP bike makes about eight to 10 trips a day, a single Ford GoBike makes about one or two, a measure of just how much more popular JUMP’s shared, dockless electric bikes are than Ford GoBike’s shared, station-based ones.

From the beginning, meaning prior to the Uber acquisition, JUMP has been focused on serving traditionally underserved communities. Although 55 percent of JUMP trips start or end in areas the SFMTA identifies as “the most disadvantaged communities in the city,” some communities have still reported a lack of service. Moving forward, the SFMTA says it will work with JUMP to “improve geographic equity and distribution.”

01 Oct 2018

What to expect from tomorrow’s Microsoft Surface event

It’s fall. That means every big to mid-sized tech company is holding an event to debut its latest offerings in time for the holidays. Even if said offering is just a laptop made out of leather. Not to be left out, Microsoft’s got an event planned for tomorrow afternoon in New York.

As we noted early last month, we know a few things for sure. First, it’s a hardware event. Second, it’s focused on Surface products. Third, there’s not going to be a Surface Phone this time.

The invite itself doesn’t offer a lot of information. It’s plain and white, bearing the words “a moment of your time.” Could there be a Surface Watch? I mean, I guess, but I wouldn’t bet on it. We have, however, seen enough credible rumors and leaks that we’ve got a pretty decent handle on what to expect tomorrow.

The Surface Pro 6 is the clear frontrunner here. It’s the product that’s been leaked the most ahead of the event — and honestly, it’s the member of the Surface family most overdue for a refresh. From what we’ve seen so far, I wouldn’t anticipate anything major on the design front. In fact, the product looks nearly identical to its predecessor.

Indeed, the company is apparently sticking with the full-size USB ports found on earlier units, rather than embracing USB-C. That seems like an odd choice for what’s traditionally been a forward-thinking line, though Microsoft appears to prize backward compatibility above all else here.

The internals fare a bit better. The processors are being upgraded to 8th-gen Intel Core chips, with between 128GB and 1TB of storage and 4, 8 or 16GB of RAM.

The same appears to go for the Surface Laptop. I liked the original quite a bit, so I wouldn’t be entirely disappointed if the company doesn’t tweak the design language, as expected, though the supposed lack of USB-C ports is an odd one. As with the Pro, there’s expected to be a black version for the models with higher-end specs.

The Laptop is said to ship in both Core i5 and i7 configurations, coupled with storage starting at 128GB (up to 1TB) and 8 or 16GB of RAM.

Other potential additions include a refreshed Surface Studio and updates to the HoloLens line. The event kicks off tomorrow at 4PM ET.

01 Oct 2018

Sales engagement startup Apollo says its massive contacts database was stolen in a data breach

Apollo, a sales engagement startup boasting a database of more than 200 million contact records, has been hacked.

The Y Combinator-backed company, formerly known as ZenProspect, helps salespeople connect with prospective customers. Using its massive prospect database of 200 million contacts at 10 million companies, Apollo matches sellers with potential buyers.

Apollo said that the bulk of the stolen data was from its prospect database.

Bjoern Zinssmeister, co-founder of Templarbit, which posts details of data breaches on its Breachroom page, obtained a copy of the email sent to affected customers and forwarded it to TechCrunch.

The email said the breach was discovered weeks after system upgrades in July.

“We have confirmed that the majority of exposed information came from our publicly gathered prospect database, which could include name, email address, company names, and other business contact information,” the email to customers said. “Some client-imported data was also accessed without authorization,” the company added, without saying what kind of data that included.

Apollo’s database contains publicly available data, including names, job titles, employers, social media handles, phone numbers and email addresses. It doesn’t include Social Security numbers, financial data or combinations of email addresses and passwords, Apollo said.

Although the company’s chief executive Tim Zheng said that the company had contacted customers in line with its “values of transparency,” Zheng declined to answer TechCrunch’s questions — including what data was taken and how many customers were affected.

“The investigation is still ongoing,” said Zheng in an email. He added that the “only statement that we’re making to press at this time is the customer communication” sent to affected users.

Zheng also refused to say if the company has informed state authorities of the breach. A spokesperson for the California attorney general did not immediately comment on whether Apollo has notified the state about the breach.

Apollo may also face action from European authorities under GDPR.

The data breach may not pose the kind of immediate security risk that stolen usernames and passwords would, but exposed contact information can have a long-term effect on user security, such as making it easier for attackers to send targeted phishing emails.

Even if the stolen data isn’t considered that sensitive, the breach adds to a growing pile of companies hoarding vast amounts of data but failing to keep it safe.

01 Oct 2018

Facebook can’t keep you safe

Another day, another announcement from Facebook that it has failed to protect your personal information. Were you one of the 50 million (and likely far more, given the company’s graduated disclosure style) users whose accounts were completely exposed by a coding error in play for more than a year? If not, don’t worry — you’ll get your turn being failed by Facebook. It’s incapable of keeping its users safe.

Facebook has proven over and over again that it prioritizes its own product agenda over the safety and privacy of its users. And even if it didn’t, the nature and scale of its operations make it nearly impossible to avoid major data breaches that expose highly personal data.

For one thing, the network has grown so large that its surface area is impossible to secure completely. That was certainly demonstrated Friday when it turned out that a feature rollout had let hackers essentially log in as millions of users and do who knows what. For more than a year.

This breach wasn’t a worst case scenario exactly, but it was close. To Facebook it would not have appeared that an account was behaving oddly — the hacker’s activity would have looked exactly like normal user activity. You wouldn’t have been notified via two-factor authentication, since it would be piggybacking on an existing login. Install some apps? Change some security settings? Export your personal data? All things a hacker could have done, and may very well have.

This happened because Facebook is so big and complicated that even the best software engineers in the world, many of whom do in fact work there, could not reasonably design and code well enough to avoid unforeseen consequences like the bugs in question.

I realize that sounds a bit hand-wavy, and I don’t mean simply that “tech is hard.” I mean that, realistically speaking, Facebook has too many moving parts for the mere humans who run it to do so infallibly. It’s a testament to their expertise that so few breaches have occurred; the big ones, like Cambridge Analytica, were failures of judgment, not code.

A failure here is not just inevitable; it’s highly incentivized in the hacking community. Facebook is by far the largest and most valuable collection of personal data in history. That makes it a natural target, and while it is far from an easy mark, its attackers aren’t script kiddies poking at sloppy scripts in their free time.

Facebook itself said that the bugs discovered Friday weren’t simple; it was a coordinated, sophisticated process to piece them together and produce the vulnerability. The people who did this were experts, and it seems likely that they have reaped enormous rewards for their work.

The consequences of failure are also huge. All your eggs are in the same basket. A single problem like this one could expose all the data you put on the platform, and potentially everything your friends make visible to you as well. Not only that, but even a tiny error, a highly specific combination of minor flaws in the code, will affect astronomical numbers of people.

Of course, a bit of social engineering or a badly configured website elsewhere could get someone your login and password as well. This wouldn’t be Facebook’s error, exactly, but it is a simple fact that because of the way Facebook has been designed — a centralized repository of all the personal data it can coax out of its users — a minor error could result in a total loss of privacy.

I’m not saying other social platforms could do much better. I’m saying this is just another situation in which Facebook has no way to keep you safe.

And if your data doesn’t get taken, Facebook will find a way to give it away. Because it’s the only thing of value the company has; the only thing anyone will pay for.

The Cambridge Analytica scandal, while it was the most visible, was only one of probably hundreds of operations that leveraged lax access controls into enormous data sets scraped with Facebook’s implicit permission. It was Facebook’s job to keep that data safe, and it gave that data to anyone who asked.

It’s worth noting here that not only does it only take one failure along the line to expose all your data, but failures beyond the first are in a way redundant. All that personal information you’ve put online can’t be magically sucked back in. In a situation where, for example, your credit card has been skimmed and duplicated, the risk of abuse is real, but it ends as soon as you get a new card. For personal data, once it’s out there, that’s it. Your privacy is irreversibly damaged. Facebook can’t change that.

Well, that’s not exactly right. It could, for example, sandbox all data older than three months and require verification to access it. That would limit breach damage considerably. It could also limit its advertising profiles to data from that period, so it isn’t building a sort of shadow profile of you based on analysis of years of data. It could even opt not to read everything you write and instead let you self-report categories for advertising. That would solve a lot of privacy issues right there. It won’t, though. No money in that.

One more thing Facebook can’t protect you from is the content on Facebook itself. The spam, bots, hate, echo chambers — all of that is baked in. The 20,000-strong moderation team it has put on the task is almost certainly inadequate, and of course the complexity of the global stage, with all its cultures and laws, ensures that there will always be conflict and unhappiness on this subject. At the very best, Facebook can remove the worst of it after it’s already been posted or streamed.

Again, it’s not really Facebook’s fault exactly that there are people abusing its platform. People are the worst, after all. But Facebook can’t save you from them. It can’t prevent the new category of harm that it has created.

What can you do about it? Nothing. It’s out of your hands. Even if you were to quit Facebook right now, your personal data may already have been leaked and no amount of quitting will stop it from propagating online forever. If it hasn’t already, it’s probably just a matter of time. There’s nothing you, or Facebook, can do about it. The sooner we, and Facebook, accept this as the new normal, the sooner we can get to work taking real measures toward our security and privacy.

01 Oct 2018

Meet Adam Mosseri, the new head of Instagram

Former Facebook VP of News Feed and recently appointed Instagram VP of Product Adam Mosseri has been named the new head of Instagram. “We are thrilled to hand over the reins to a product leader with a strong design background and a focus on craft and simplicity — as well as a deep understanding of the importance of community,” Instagram’s founders Kevin Systrom and Mike Krieger write. “These are the values and principles that have been essential to us at Instagram since the day we started, and we’re excited for Adam to carry them forward.”

Instagram’s founders announced last week that they were resigning after sources told TechCrunch the pair had dealt with dwindling autonomy from Facebook and rising tensions with its CEO Mark Zuckerberg. The smiling photo above seems meant to show peace has been restored to Instaland, and counter the increasing perception that Facebook breaks its promises to acquired founders.

Mosseri’s experience dealing with the unintended consequences of the News Feed, such as fake news in the wake of the 2016 election, could help him predict how Instagram’s growth will affect culture, politics and user well-being. Over years of interviews, Mosseri has always come across as sharp, serious and empathetic: a true believer that Facebook and its family of apps can make a positive impact on the world, but cognizant of the hard work and complex choices required to keep them from being misused.

Born and raised in New York, Mosseri started his own design consultancy while attending NYU’s Gallatin School of Individualized Study, where he focused on media and information design. He joined Facebook in 2008 after briefly working at a startup called TokBox. Tasked with helping Facebook embrace mobile as design director, he has since become part of Zuckerberg’s inner circle of friends and lieutenants. Mosseri later moved into product management and oversaw Facebook’s News Feed, turning it into the world’s most popular social technology and the driver of billions in advertising profit.

After going on parental leave this year, he returned to take over the Instagram VP of product role from Kevin Weil as Weil moved to Facebook’s blockchain team. A source tells TechCrunch he has been well-received and productive since joining Instagram, and has gotten along well with Systrom. Mosseri now lives in San Francisco, close enough to work from both Instagram’s city office and its South Bay headquarters.

“The impact of their work over the past eight years has been incredible. They built a product people love that brings joy and connection to so many lives,” Mosseri wrote of Instagram’s founders in an…Instagram post. “I’m humbled and excited about the opportunity to now lead the Instagram team. I want to thank them for trusting me to carry forward the values that they have established. I will do my best to make them, the team, and the Instagram community proud.”

Mosseri will be tasked with balancing the needs of Instagram, such as headcount, engineering resources and growth, with the priorities of its parent company, such as cross-promotion to Instagram’s younger audience and revenue to contribute to Facebook’s earnings reports. Some see Mosseri as more sympathetic to Facebook’s desires than Instagram’s founders were, given his long stint at the parent company and his close relationship with Zuckerberg.

The question will be whether users end up seeing more notifications and shortcuts linking back to Facebook, or more ads in Stories and the feed. Instagram hasn’t highlighted the ability to syndicate your Stories to Facebook, which could be a boon for that parallel product. Instagram Stories now has 400 million daily users, compared to a combined 150 million for Facebook Stories and Messenger Stories. Tying them more closely together could see more content flow into Facebook, but it might also make users second-guess whether what they’re sharing is appropriate for all of their Facebook friends, who might include family or professional colleagues.

Mosseri’s most pressing responsibility will be reassuring users that the culture of Instagram and its app won’t be assimilated into Facebook now that he’s running things instead of the founders. He’ll also need to snap into action to protect Instagram from being used as a pawn for election interference in the run-up to the 2018 US midterms.

01 Oct 2018

Google gets into game streaming with Project Stream and Assassin’s Creed Odyssey in Chrome

Earlier this year, we heard rumors that Google was working on a game-streaming service. It looks like those rumors were true. The company today unveiled “Project Stream,” and while Google calls this a “technical test” to see how well game streaming to Chrome works, it’s clear that this is the foundational technology for a game-streaming service.

To sweeten the pot, Google is launching this test in partnership with Ubisoft and giving a limited number of players free access to Assassin’s Creed Odyssey for the duration of the test. You can sign up for the test now; starting on October 5, Google will invite a limited number of participants to play the game for free in Chrome.

As Google notes, the team wanted to work with a AAA title because that’s obviously far more of a challenge than working with a less graphics-intensive game. And for any game-streaming service to be playable, the latency has to be minimal and the graphics can’t be worse than on a local machine. “When streaming TV or movies, consumers are comfortable with a few seconds of buffering at the start, but streaming high-quality games requires latency measured in milliseconds, with no graphics degradation,” the company notes in today’s announcement.

If you want to participate, though, you’ll have to be fast. Google is only taking a limited number of testers. Your internet connection has to be able to handle 25 megabits per second and you must live in the U.S. and be older than 17 to participate. You’ll also need both a Ubisoft and Google account. The service will support wired PlayStation and Xbox One and 360 controllers, though you can obviously also play with your mouse and keyboard.

While it remains to be seen if Google plans to expand this test and turn it into a full-blown paid service, it’s clear that it’s working on the technology to make this happen. And chances are Google wouldn’t pour resources into this if it didn’t have plans to commercialize its technology.

01 Oct 2018

Khosla GP Ben Ling is raising his own VC fund and it’s called Bling Capital

Ben Ling has filed to raise up to $60 million for a new fund called Bling Capital. Bling has long been Ling’s nickname both professionally and among friends.

The early Facebook executive is still a general partner at Khosla Ventures, the firm confirmed this morning. We’ve reached out to Ling for comment.

Ling was Facebook’s director of platform from 2007 to 2008. After stints at YouTube, Google and Badoo, where he was COO, Ling began his career in venture capital at Khosla in 2013.

Ling started out strong, leading a number of deals for the firm in 2013, including rounds for ThirdLove and Zenefits, but has since pulled back substantially, according to data from both Crunchbase and PitchBook. His most recent completed deal on record was a $5.5 million round for Bay Labs in December 2017. He joined the medical technology startup’s board as part of the deal.

Ling, who’s also on the boards of storytelling platform Wattpad, home security startup Canary and mobile commerce app Tapingo, hasn’t served as lead partner on any deals this year.

It’s worth noting that Khosla did participate in Plastiq’s $27 million Series C. No lead partner was disclosed, but because Ling has previously led the firm’s investments in the business expense platform and he serves on its board of directors, it’s likely he led the most recent deal as well.

In March, Khosla joined a growing list of firms targeting billion-dollar-plus funds to keep up with SoftBank’s enormous pool of capital when it filed to raise $1.4 billion across a pair of new VC funds.

01 Oct 2018

Google wants to make Chrome extensions safer

Google today announced a number of upcoming changes to how Chrome will handle extensions that request a lot of permissions, as well as new requirements for developers who want to publish their extensions in the Chrome Web Store.

It’s no secret that, no matter which browser you use, extensions are one of the main vectors malicious developers use to gain access to your data. Over the years, Google has improved its ability to automatically detect malicious extensions before they ever make it into the store. The company has also made quite a few changes to the browser itself to ensure that extensions can’t wreak havoc once they have been installed. Now, it’s taking this a bit further.

Starting with Chrome 70, users can restrict an extension’s host access to their own custom list of sites. That’s important because, by default, most extensions can see and manipulate any website you go to. Whitelists are hard to maintain, though, so users can also opt to grant an extension access to the current page only after a click.

“While host permissions have enabled thousands of powerful and creative extension use cases, they have also led to a broad range of misuse – both malicious and unintentional – because they allow extensions to automatically read and change data on websites,” Google explains in today’s announcement.

Any extensions that request what Google calls “powerful permissions” will now also be subject to a more extensive review process. In addition, Google will also take a closer look at extensions that use remotely hosted code (since that code could be changed at any time, after all).

As far as permissions go, Google also notes that in 2019 it’ll introduce new mechanisms and more narrowly scoped APIs that will reduce the need for broad permissions and give users more control over the access they grant to their extensions. Starting in 2019, Google will also require two-factor authentication for access to Chrome Web Store developer accounts, to make sure a malicious actor can’t take over a developer’s account and publish a hacked extension.

While that change is still a few months out, starting today developers are no longer allowed to publish extensions with obfuscated code. In itself, obfuscation isn’t a bad thing: developers often scramble their JavaScript source so that code that would otherwise ship in clear text is harder to steal. But obfuscation also makes it very hard to figure out what exactly the code does, and 70 percent of malicious and policy-circumventing extensions use obfuscated code. Google will remove all existing extensions with obfuscated code in 90 days.

It’s worth noting that developers will still be allowed to minify their code to remove whitespace, comments and newlines, for example.
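To illustrate the distinction with a hypothetical snippet (not drawn from any real extension): minification only strips whitespace and shortens names, while obfuscation deliberately disguises what the code does.

    // Original source: the intent is obvious to a reviewer.
    function getUserEmail(user: { email: string }): string {
      return user.email;
    }

    // Minified: whitespace and comments stripped, names shortened.
    // Still mechanically reviewable, so this remains allowed.
    //   function g(u){return u.email}

    // Obfuscated: strings and property access deliberately disguised,
    // e.g. via hex-escaped lookup tables. This is what the policy bans.
    //   var _0x1a=["\x65\x6d\x61\x69\x6c"];function g(u){return u[_0x1a[0]]}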