Year: 2020

07 Dec 2020

Sidewalk Infrastructure Partners looks to make CA power grids more reliable with a $100 million investment

Sidewalk Infrastructure Partners, the investment firm that spun out of Alphabet’s Sidewalk Labs to fund and develop the next generation of infrastructure, has unveiled its latest project — Resilia, which focuses on upgrading the efficiency and reliability of power grids.

Through a $20 million equity investment in the startup OhmConnect, and an $80 million commitment to develop a demand response program leveraging OhmConnect’s technology and services across the state of California, Sidewalk Infrastructure Partners intends to plant a flag for demand-response technologies as a key pathway to ensuring stable energy grids across the country.

“We’re creating a virtual power plant,” said Sidewalk Infrastructure Partners co-CEO Jonathan Winer. “With a typical power plant … it’s project finance, but for a virtual power plant… We’re basically going to subsidize the rollout of smart devices.”

The idea that people will respond to signals from the grid isn’t a new one, as Winer himself acknowledged in an interview. But the approach that Sidewalk Infrastructure Partners is taking, alongside OhmConnect, to roll out the incentives to residential customers through a combination of push notifications and payouts, is novel. “The first place people focused is on commercial and industrial buildings,” he said. 

What drew Sidewalk to the OhmConnect approach was the knowledge of the end consumer that OhmConnect’s management team brought to the table. The company’s chief technology officer was previously the chief technology officer of Zynga, Winer noted.

“What’s cool about the OhmConnect platform is that it empowers participation,” Winer said. “Anyone can enroll in these programs. If you’re an OhmConnect user and there’s a blackout coming, we’ll give you five bucks if you turn down your thermostat for the next two hours.”

Illustration of Sidewalk Infrastructure Partners Resilia Power Plant. Image Credit: Sidewalk Infrastructure Partners

The San Francisco-based demand-response company already has 150,000 users on its platform, and has paid out roughly $1 million to its customers during the brownouts and blackouts that have roiled California’s electricity grid over the past year.

The first collaboration between OhmConnect and Sidewalk Infrastructure Partners under the Resilia banner will be what the companies are calling a “Resi-Station” — a demand response program with 550 megawatts of capacity that will use smart devices to power targeted energy reductions.

At full scale, the companies said that the project will be the largest residential virtual power plant in the world. 

“OhmConnect has shown that by linking together the savings of many individual consumers, we can reduce stress on the grid and help prevent blackouts,” said OhmConnect CEO Cisco DeVries. “This investment by SIP will allow us to bring the rewards of energy savings to hundreds of thousands of additional Californians – and at the same time build the smart energy platform of the future.” 

California’s utilities need all the help they can get. Heat waves and rolling blackouts spread across the state as it confronted some of its hottest temperatures over the summer. California residents already pay among the highest residential power prices in the country at 21 cents per kilowatt-hour, versus a national average of 13 cents.

During times of peak stress earlier in the year, OhmConnect engaged its customers to reduce almost one gigawatt hour of total energy usage. That’s the equivalent of taking 600,000 homes off the grid for one hour.

If the Resilia project were rolled out at scale, the companies estimate they could provide 5 gigawatt hours of energy conservation — that’s the full amount of the energy shortfall from the year’s blackouts, and the equivalent of not burning 3.8 million pounds of coal.

Going forward, the Resilia energy efficiency and demand response platform will scale up other infrastructure innovations as energy grids shift from centralized power to distributed, decentralized generation sources, the company said. OhmConnect looks to be an integral part of that platform.

“The energy grid used to be uni-directional … we believe that in the near future the grid is going to become bi-directional and responsive,” said Winer. “With our approach, this won’t be one investment. We’ll likely make multiple investments. [Vehicle-to-grid], micro-grid platforms, and generative design are going to be important.”

07 Dec 2020

Eco-conscious car subscription platform Finn.auto raises $24.2M, with White Star and Zalando founders

finn.auto — which lets people subscribe to a car instead of owning it, while offsetting their CO2 emissions — has raised a $24.2M / €20M Series A funding round. White Star Capital (which has also invested in Tier Mobility) and the Zalando co-CEOs Rubin Ritter, David Schneider and Robert Gentz are new investors in this round. All previous investors participated.

The funding comes just under a year after the company launched, in which time it has sold 1,000 car subscriptions. It has also partnered with Deutsche Post AG and Deutsche Telekom AG.

A number of car manufacturers have launched similar subscription services powered by various providers, such as Drover, Leaseplan and Wagonex.

UK-based startup Drover has raised a total of $40M in funding over five rounds; its latest Series B round included Shell Ventures and Cherry Ventures. Plus, there are branded services, which include Audi on Demand, BMW, Citroën, DS, Jaguar Carpe, Land Rover Carpe, Mini, Volkswagen and Care by Volvo.

Digitally-led subscription services have the potential to disrupt the traditional car sales model, and new startups are entering the market all the time.

The finn.auto model is proving to appeal to environment-conscious millennials. For each car subscription, the company offsets the CO₂ emissions of its vehicles, meaning subscribers can drive their cars in a climate-neutral manner. It’s now expanding its range of fully electric vehicles and, in cooperation with ClimatePartner, is supporting selected regional climate protection and development projects.

Key to the Munich-based startup’s play is the automation of fleet management processes and customer interactions, which makes this kind of subscription operation much easier and cheaper to run.

Max-Josef Meier, CEO and founder of finn.auto said: “We are delighted to have been able to bring such high-caliber investors on board and that our existing investors are cementing their confidence with the current round. Mobility with your own car becomes as easy as buying shoes on the Internet. We already offer a large selection of different car brands, whose cars can be ordered online on our platform in just five minutes and at flexible runtimes. The delivery is then conveniently made to the front door.”

Nicholas Stocks, General Partner at White Star Capital added: “There is a huge opportunity globally to streamline outdated customer experiences in the automotive retail space and become the Amazon of the automotive industry. This is something finn.auto is excellently placed to capitalize on with its offering of convenience, flexibility, value and sustainability.”

07 Dec 2020

Twitter users complain of timelines being overrun with ‘Promoted Tweets’

Twitter’s timeline is currently overrun with ads for some users, in what appears to be a glitch involving the distribution of Promoted Tweets. Typically, a Promoted Tweet — which is just a regular tweet an advertiser has paid to promote more broadly — will appear just once at the top of a user’s timeline, then scroll through the timeline like any other tweet. Now, however, Promoted Tweets are popping up with increased frequency. Some users report seeing them as often as every 4 to 6 tweets, in fact. Others are reporting seeing the same Promoted Tweet more than once.

This indicates some sort of issue with Twitter’s ad system, as the company intends for Promoted Tweets to be targeted and relevant to the end user, without being an overly frequent part of users’ timelines.

As Twitter’s Business website explains, “we’re thoughtful in how we display Promoted Tweets, and are conservative about the number of Promoted Tweets that people see in a single day.”

That’s obviously not the case when it seems like nearly every other tweet is now an ad — and often, a repeated ad.

Twitter has not yet publicly addressed the bug through its @TwitterSupport account, or others that communicate with the public, like @Twitter, @TwitterComms, or @TwitterMktg, so it’s been unclear how many users are impacted, on what platforms or in which geographic regions. However, we’ve seen complaints coming from users both in the U.S. and abroad and on both the “Home” and “Recent Tweets” timelines.

Given the lack of updates and information, some Twitter users have been dealing with the influx of Promoted Tweets by muting or blocking the advertisers’ accounts. That could have lasting consequences, as blocked advertisers won’t be able to reach those users again.

Twitter, reached for comment, says it’s looking into the issue. We’ll update when the company has more to share.

07 Dec 2020

The mikme pocket is a fantastic mobile audio solution for podcasters, reporters and creators

Portable audio recording solutions abound, and many recently released devices have done a lot to improve the convenience and quality of sound recording devices you can carry in your pocket – spurred in part by smartphones and their constant improvement in video recording capabilities. A new device from Austria’s mikme, the mikme pocket (€369.00 or just under $450 USD), offers a tremendous amount of flexibility and quality in a very portable package, delivering what might just be the ultimate pocket sound solution for reporters, podcasters, video creators and more.

The basics

mikme pocket is small – about half the size of a smartphone, but square and probably twice as thick. It’s not as compact as something like the Rode Wireless GO, but it contains onboard memory and a Bluetooth antenna, making it possible to both record locally and transmit audio directly to a connected smartphone from up to three mikme pockets at once.

The mikme pocket features a single button for control, as well as dedicated volume buttons, a 3.5mm headphone jack for monitoring audio, a micro-USB port for charging and for offloading files via physical connection, and Bluetooth pairing and power buttons. It has an integrated belt clip, as well as a 3/8″ thread mount for mic stands, with an adapter included for mounting to 1/4″ standard camera tripod connections.

In the box, mikme has also included a lavalier microphone with a mini XLR connector (the interface the pocket uses), plus a clip and two windscreens for the mic. The company also offers a ‘pro’ lavalier mic as a separate add-on purchase (€149.00, or around $180 USD), which improves on the included lav in terms of audio quality and dynamic range.

Image Credits: mikme

The mikme pocket’s internal battery lasts for up to 3.5 hours of recording time, and it can hold a charge in standby mode for more than six months between recordings.

Design and performance

The mikme pocket is a pretty unadorned black block, but its unassuming design is one of its strengths. It has a textured matte finish that helps with grip, and it’s easy to hide in dark clothing; plus, the integrated belt clip works exactly as desired, ensuring the pack is easily secured to anyone you’re trying to wire for sound. It features a single large button for simplified control, which also shows its connectivity status via an LED backlight.

Controls for more advanced functions like Bluetooth connectivity, as well as the micro-USB port, are located on the bottom where they’re unlikely to be pressed accidentally by anyone during recording. The mini XLR interface for microphones means that once a mic is plugged in, it’s also securely locked in place and won’t be jostled out during sessions.

You can use the mikme pocket on its own, thanks to its 16GB of built-in local storage, but it really shines when used in tandem with the smartphone app. The app allows you to connect up to three pockets simultaneously, and provides a built-in video recorder so you can take full advantage of the recording capabilities of modern devices like the iPhone 12 to capture real-time synced audio while you film. The pocket and app also have a failsafe built in: the local recording backup fills in any gaps that arise from connection dropouts.

In terms of audio quality, the sound without adjusting any settings is excellent. Like all lavalier mics, you’ll get better results the closer you can place the actual mic capsule to a speaker’s mouth, but the mikme pocket produced exceptionally clean, high-quality audio right out of the box – in environments that weren’t particularly sound-isolated or devoid of background noise.

The included mini XLR lav mic is probably good enough for the needs of most amateur and enthusiast users, while the lavalier pro is a great upgrade option for anyone looking to make the absolute most of their recordings, especially with post-processing via desktop audio editing software. The mikme app has built-in audio tweaking controls with a great visual interface that allows you to hear the effects of processing tweaks in real-time, which is great for maximizing sound quality on the go before sharing clips and videos directly from your device to social networks or publishing platforms.

Bottom line

From on-phone shotgun mics to handheld recorders and much more, there are plenty of options out there for capturing audio on the go. But the mikme pocket offers the best balance: very high-quality sound that’s essentially ready to publish immediately, in a package that’s extremely easy to carry anywhere with you, and durable and user-friendly enough to suit newcomers and experts alike.

07 Dec 2020

SpaceX snags $885M from FCC to serve rural areas with Starlink

The FCC has just published the results of its Rural Digital Opportunity Fund Phase I auction, which sounds rather stiff but involves distributing billions to broadband providers that bring solid internet connections to under-served rural areas “on the wrong side of the digital divide.” $885 million is earmarked for SpaceX, whose Starlink satellite service could be a game-changer for places where laying fiber isn’t an option.

Only three other companies garnered more funds: Charter with $1.22 billion; Minnesota and Iowa provider LTD Broadband with $1.32 billion; and utility collective Rural Electric Cooperative Consortium with $1.1 billion. Those are all traditional wireline-based broadband providers, and a quick perusal of the list of grantees suggests no other satellite broadband provider made the cut. In total, 180 bidders were awarded support.

The $9.2B auction (the specifics of the process itself aren’t relevant here) essentially asks companies to bid on how cheaply they can provide service to a given area, ideally with 100 megabits downstream and 20 up. Local companies can collect the hundred grand necessary to fund a fiber line where there’s now copper, and big multi-state concerns may promise to undertake major development projects for hundreds of millions of dollars.

SpaceX’s Starlink has the advantage of not requiring any major construction projects to reach people out in the boonies. All that’s needed is a dish and for their home to be in the area currently covered by the rapidly expanding network of satellites in low-Earth orbit. That means the company can undercut many of its competitors — in theory anyway.

Starlink has not had any major rollout yet, only small test deployments, which according to SpaceX have gone extremely well. The first wave of beta testers for the service will be expected to pay $99 per month plus a one-time $500 installation fee, but what the cost of the commercial service would be is anyone’s guess (probably a bit lower).

In order to secure the $885M in the FCC’s auction, SpaceX would need to demonstrate that it can provide solid service to the areas it claims to, for a reasonable price, so we can expect the costs to be in line with terrestrial broadband offerings. No other satellite broadband provider operates in that price range (Swarm offers IoT connections for $5/month, but that’s a totally different category).

The FCC doesn’t just knock on Elon Musk’s door with a big check, though. The company must meet “periodic buildout requirements” at the locations it has promised, at which point funds will be disbursed. This continues for a period of several years, and should help the fledgling internet provider stay alive while undergoing the rigors and uncertainties of launch. By the time the FCC cash dries up, the company will ideally have several million subscribers propping it up.

This is “Phase I” of the auction, targeting the areas most in need of new internet service; Phase II will cover “partially-served” areas that perhaps have one good provider but no competition. Whether SpaceX will be able (or want) to make a push there is unclear, though the confidence with which the company has been approaching the market suggests it may make a limited play for these somewhat more hardened target markets.

The push for expanding rural broadband has been a particular focus for outgoing FCC Chairman Ajit Pai, who had this to say about its success so far:

We structured this innovative and groundbreaking auction to be technologically neutral and to prioritize bids for high-speed, low-latency offerings. We aimed for maximum leverage of taxpayer dollars and for networks that would meet consumers’ increasing broadband needs, and the results show that our strategy worked. This auction was the single largest step ever taken to bridge the digital divide and is another key success for the Commission in its ongoing commitment to universal service. I thank our staff for working so hard and so long to get this auction done on time, particularly during the pandemic.

You can read the full list of auction winners at the FCC’s press release here.

07 Dec 2020

3 questions to ask before adopting microservice architecture

As a product manager, I’m a true believer that you can solve any problem with the right product and process, even one as gnarly as the multiheaded hydra that is microservice overhead.

Working for Vertex Ventures US this summer was my chance to put this to the test. After interviewing 30+ industry experts from a diverse set of companies — Facebook, Fannie Mae, Confluent, Salesforce and more — and hosting a webinar with the co-founders of PagerDuty, LaunchDarkly and OpsLevel, we were able to answer three main questions:

  1. How do teams adopt microservices?
  2. What are the main challenges organizations face?
  3. Which strategies, processes and tools do companies use to overcome these challenges?

How do teams adopt microservices?

Out of dozens of companies we spoke with, only two had not yet started their journey to microservices, but both were actively considering it. Industry trends mirror this as well. In an O’Reilly survey of 1,500+ respondents, more than 75% had started to adopt microservices.

It’s rare for companies to start building with microservices from the ground up. Of the companies we spoke with, only one had done so. Some startups, such as LaunchDarkly, planned to build their infrastructure on microservices, but turned to a monolith once they realized the high cost of overhead.

“We were spending more time effectively building and operating a system for distributed systems versus actually building our own services so we pulled back hard,” said John Kodumal, CTO and co-founder of LaunchDarkly.

“As an example, the things we were trying to do in mesosphere, they were impossible,” he said. “We couldn’t do any logging. Zero downtime deploys were impossible. There were so many bugs in the infrastructure and we were spending so much time debugging the basic things that we weren’t building our own service.”

As a result, it’s more common for companies to start with a monolith and move to microservices to scale their infrastructure with their organization. Once companies reach ~30 developers, most begin decentralizing control by moving to a microservice architecture.

Large companies with established monoliths are keen to move to microservices, but costs are high and the transition can take years. Atlassian’s platform infrastructure is in microservices, but legacy monoliths in Jira and Confluence persist despite ongoing decomposition efforts. Large companies often get stuck in this transition. However, a strong top-down strategy combined with bottom-up dev team support can help companies such as Freddie Mac make substantial progress.

Some startups, like Instacart, first shifted to a modular monolith that allows the code to reside in a single repository while beginning the process of distributing ownership of discrete code functions to relevant teams. This enables them to mitigate the overhead associated with a microservice architecture by balancing the visibility of having a centralized repository and release pipeline with the flexibility of discrete ownership over portions of the codebase.
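
For illustration only, here’s a minimal TypeScript sketch of the boundary discipline a modular monolith relies on; the module layout and interface are hypothetical, not Instacart’s actual code:

```typescript
// orders/index.ts -- the one "public" entry point of a domain module.
// Everything lives in a single repo and ships as a single deployable,
// but other teams may only import from this file, never orders/internal/*.
export interface OrdersApi {
  placeOrder(userId: string, itemIds: string[]): Promise<{ orderId: string }>;
}

// An import-boundary lint rule in CI keeps ownership discrete by failing
// the build if one module reaches into another module's internals.
```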

What challenges do teams face?

Teams may take different routes to arrive at a microservice architecture, but they tend to face a common set of challenges once they get there. John Laban, CEO and co-founder of OpsLevel, which helps teams build and manage microservices, told us that “with a distributed or microservices based architecture your teams benefit from being able to move independently from each other, but there are some gotchas to look out for.”

Indeed, the linked O’Reilly chart shows that each of the top 10 challenges organizations face when adopting microservices is shared by 25%+ of respondents. While we discussed some of the adoption blockers above, feedback from our interviews highlighted issues around managing complexity.

The lack of a coherent definition for a service can cause teams to generate unnecessary overhead by creating too many similar services or spreading related services across different groups. One company we spoke with went down the path of decomposing their monolith and took it too far. Their service definitions were too narrow, and by the time decomposition was complete, they were left with 4,000+ microservices to manage. They then had to backtrack and consolidate down to a more manageable number.

Defining too many services creates unnecessary organizational and technical silos while increasing complexity and overhead. Logging and monitoring must be present on each service, but with ownership spread across different teams, a lack of standardized tooling can create observability headaches. It’s challenging for teams to get a single-pane-of-glass view with too many different interacting systems and services that span the entire architecture.
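
One common mitigation is agreeing on a single log shape that every service emits, whichever team owns it. Below is a hypothetical sketch of such a shared helper, not any specific vendor’s API:

```typescript
// logger.ts -- imported by every service so log fields stay uniform.
// One JSON shape across teams means one dashboard can query all services.
type Level = "info" | "warn" | "error";

export function log(
  service: string,
  level: Level,
  msg: string,
  extra: Record<string, unknown> = {}
): void {
  console.log(
    JSON.stringify({ ts: new Date().toISOString(), service, level, msg, ...extra })
  );
}

// Usage: log("checkout", "error", "payment gateway timeout", { orderId: "o_123" });
```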

07 Dec 2020

Cloudflare is testing a Netlify competitor to host Jamstack sites

Cloudflare is working on a new product called Cloudflare Pages. The new product could compete directly with Netlify and Vercel, two cloud hosting companies that let you build and deploy sites using Jamstack frameworks. Popular Jamstack frameworks include Gatsby, Jekyll, Hugo, Vue.js, Next.js, etc.

The new product was discovered on Saturday by reverse engineer Jane Manchun Wong, who found details by looking into Cloudflare’s code.

If you’re not familiar with Jamstack, it’s a popular way of developing and deploying websites at scale. It lets you take advantage of global edge networks with a focus on performance.

Let’s say you’re a very famous pop artist and you’re launching your new album on your website tomorrow. You expect to get a huge traffic spike and a ton of orders.

If you also happen to be a web developer, you could develop a Jamstack website to make sure that your website remains available and loads quickly. Instead of using a traditional content management system to host your content and deliver it to your users, a Jamstack framework lets you decouple the frontend from the backend.

You could write a few posts in your content management system and then deploy your website update. Your Jamstack application will prebuild static pages based on what you just wrote. Those pages will be cached on a global edge network and served in a few milliseconds across the world. It’s like photocopying a letter instead of writing a new copy every time somebody wants to read it.
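
As a rough sketch of what that prebuild step does, the script below pulls every post from a hypothetical CMS endpoint at build time and writes each one out as a static HTML file (it assumes a Node.js 18+ runtime for the global fetch):

```typescript
// build.ts -- runs once at deploy time, not on every visit; the CDN
// then serves the generated files from cache worldwide.
import { promises as fs } from "fs";

interface Post { slug: string; title: string; body: string; }

async function build(): Promise<void> {
  // Hypothetical CMS API; swap in your real content source.
  const res = await fetch("https://cms.example.com/api/posts");
  const posts: Post[] = await res.json();
  await fs.mkdir("dist", { recursive: true });
  for (const post of posts) {
    const html = `<html><body><h1>${post.title}</h1><p>${post.body}</p></body></html>`;
    await fs.writeFile(`dist/${post.slug}.html`, html);
  }
}

build().catch((err) => { console.error(err); process.exit(1); });
```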

But what if somebody wants to buy your new album on the buy page? On that page, there will be a checkout module, which is dynamic content that can’t be cached. You can leverage a payments API, such as Stripe, so that the user doesn’t load content from your server at all.
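
Concretely, the buy page itself can stay static and cacheable while checkout happens entirely client-side. This sketch uses Stripe’s client-only Checkout as an example; the publishable key, price ID and URLs are placeholders:

```typescript
import { loadStripe } from "@stripe/stripe-js";

// Wired to the static page's "Buy" button -- no origin server involved.
async function buyAlbum(): Promise<void> {
  const stripe = await loadStripe("pk_live_placeholder");
  if (!stripe) throw new Error("Stripe.js failed to load");
  // Stripe hosts the checkout flow; our page remains a cached static file.
  const { error } = await stripe.redirectToCheckout({
    lineItems: [{ price: "price_placeholder", quantity: 1 }],
    mode: "payment",
    successUrl: "https://example.com/thanks",
    cancelUrl: "https://example.com/album",
  });
  if (error) console.error(error.message);
}
```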

This is a simple example, but many companies have been going down this path. For static content, everything is prebuilt and cached. For dynamic content, companies build microservices that are loaded on demand and that can scale easily.

According to Jane Manchun Wong’s screenshots, Cloudflare Pages lets you deploy sites with a simple Git commit. It integrates directly with your GitHub repository if you’re hosting your source code on the platform.

You can configure a Node.js build command that will be executed every time you push a change to your code. Once the build process is done, your website is accessible to your end users.

There will be a free tier for Cloudflare Pages, which lets you generate 500 builds per month. Above that quota, Cloudflare will most likely let you pay for more builds and more features.

By default, Cloudflare gives you a subdomain name to try out the service — your-website-name.pages.dev. Of course, you can configure your own domain name and combine Cloudflare Pages with other Cloudflare products.

You can already read Cloudflare Pages’ documentation, as spotted by Jane Manchun Wong. So it sounds like Cloudflare Pages could launch sooner rather than later.

07 Dec 2020

The cloud can’t solve all your problems

The way a team functions and communicates dictates the operational efficiency of a startup and sets the scene for its culture. It’s way more important than what social events and perks are offered, so it’s the responsibility of a founder and/or CEO to provide their team with a technology approach that will empower them to achieve and succeed — now and in the future.

With that in mind, moving to the cloud might seem like a no-brainer because of its huge benefits around flexibility, accessibility and the potential to rapidly scale, while keeping budgets in check.

But there’s an important consideration here: Cloud providers won’t magically give you efficient teams.

It will get you going in the right direction, but you need to think even further ahead. Designing a startup for scale means investing in the right technology today to underpin growth for tomorrow and beyond. Let’s look at how the way you approach and manage your cloud infrastructure will impact the effectiveness of your teams and your ability to scale.

Hindsight is 20/20

Adopting cloud is easy, but adopting it properly with best practices and in a secure way? Not so much. You might think that when you move to cloud, the cloud providers will give you everything you need to succeed. But even though they’re there to provide a wide breadth of services, these services won’t necessarily have the depth that you will need to run efficiently and effectively.

Yes, your cloud infrastructure is working now, but think beyond the first prototype or alpha and toward production. Considering where you want to get to, and not just where you are, will help you avoid costly mistakes. You definitely don’t want to struggle through redefining processes and ways of working when you’re also managing time sensitivities and multiple teams.

If you don’t think ahead, you’ll have to retrofit new processes later. That will take a whole lot longer, cost more money and cause far more disruption to teams than doing it earlier.

For any founder, making strategic technology decisions right now should be a primary concern. It feels more natural to put off those decisions until you come face to face with the problem, but then you’ll end up needing to redo everything as you scale, causing your teams a world of hurt. If you don’t give this problem attention at the beginning, flaws get embedded within your infrastructure and scale along with your teams. And when these things are rushed, corners are cut, and you end up spending even more time and money on your infrastructure.

Build effective teams and reduce bottlenecks

When you’re making strategic decisions on how to approach your technology stack and cloud infrastructure, the biggest consideration should be what makes an effective team. Given that, keep these things top of mind:

  • Speed of delivery: Having developers able to self-serve cloud infrastructure with best practices built in will enable speed (see the sketch after this list). Development tools that factor in visibility and communication integrations will give teams transparency on how they are iterating and surface problems, bugs or integration failures.
  • Speed of testing: This is all about ensuring fast feedback loops as your team works on critical new iterations and features. Developers should be able to test as much as possible locally and through continuous integration systems before they are ready for code review.
  • Troubleshooting problems: Good logging, monitoring and observability services give teams awareness of issues and the ability to resolve problems quickly or reproduce customer complaints in order to develop fixes.
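
A minimal sketch of what “self-serve infrastructure with best practices built in” can look like, using Pulumi’s TypeScript SDK as one possible approach; the module and team names here are hypothetical:

```typescript
// Platform engineers publish a vetted component once; product teams
// instantiate it themselves instead of filing infrastructure tickets.
import * as aws from "@pulumi/aws";

// Best practices are baked in at the source: a private ACL and
// server-side encryption are not optional for callers.
export function teamBucket(team: string): aws.s3.Bucket {
  return new aws.s3.Bucket(`${team}-data`, {
    acl: "private",
    serverSideEncryptionConfiguration: {
      rule: {
        applyServerSideEncryptionByDefault: { sseAlgorithm: "aws:kms" },
      },
    },
    tags: { team, managedBy: "platform" },
  });
}

// A developer self-serves a compliant bucket with one line:
export const analyticsBucket = teamBucket("analytics");
```
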
07 Dec 2020

Microsoft announces its first Azure data center region in Denmark

Microsoft continues to expand its global Azure data center presence at a rapid clip. After announcing new regions in Austria and Taiwan in October, the company today revealed its plans to launch a new region in Denmark.

As with many of Microsoft’s recent announcements, the company is also attaching a commitment to provide digital skills to 200,000 people in the country (in this case, by 2024).

“With this investment, we’re taking the next step in our longstanding commitment to provide Danish society and businesses with the digital tools, skills and infrastructure needed to drive sustainable growth, innovation, and job creation. We’re investing in Denmark’s digital leap into the future – all in a way that supports the country’s ambitious climate goals and economic recovery,” said Nana Bule, General Manager, Microsoft Denmark.

Azure regions

Image Credits: Microsoft

The new data center region, which will be powered by 100% renewable energy and feature multiple availability zones, will support what has now become the standard set of Microsoft cloud products: Azure, Microsoft 365, Dynamics 365 and Power Platform.

As usual, the idea here is to provide low-latency access to Microsoft’s tools and services. It has long been Microsoft’s strategy to blanket the globe with local data centers. Europe is a prime example of this, with regions (both operational and announced) in about a dozen countries already. In the U.S., Azure currently offers 13 regions (including three exclusively for government agencies), with a new region on the West Coast coming soon.

“This is a proud day for Microsoft in Denmark,” said Brad Smith, President, Microsoft. “Building a hyper-scale datacenter in Denmark means we’ll store Danish data in Denmark, make computing more accessible at even faster speeds, secure data with our world-class security, protect data with Danish privacy laws, and do more to provide to the people of Denmark our best digital skills training. This investment reflects our deep appreciation of Denmark’s green and digital leadership globally and our commitment to its future.”

07 Dec 2020

Cloudflare lets you control where data is stored and accessible

Cloudflare has launched a new set of features today called the Data Localization Suite. Companies on the Enterprise plan can choose to enable the features through an add-on.

With the Data Localization Suite, Cloudflare is making it easier to control where your data is stored and whether it can be accessed depending on where the request comes from. It’s a feature set that lets you take advantage of Cloudflare’s products, such as its serverless infrastructure, while complying with local and industry-specific regulations.

For instance, the Data Localization Suite seems particularly relevant following this year’s EU ruling that ended the Privacy Shield. If you’re operating in a highly regulated industry, such as healthcare and legal, you may have some specific data requirements as well.

Let’s say you’re building an application that should store data in the European Union exclusively. You could choose to run your application in a single data center, or a single cloud region. But that doesn’t scale well if you expect to get customers all around the world. You could also suffer from outages.

With Cloudflare’s approach, everything is encrypted at rest and in transit (if you enforce mandatory TLS encryption). You can manage your private keys yourself, or set different rules governing where your private keys can be used.

For instance, a private key that lets you inspect traffic could be accessible from a European data center exclusively. Now that the Privacy Shield is invalid, this setup makes it easier to comply with European regulation.

Cloudflare inspects network requests in order to know what to do with them. For instance, the company tries to reject malicious bot requests automatically. You can choose to inspect those requests in a region in particular. If a malicious bot is running on a server in the U.S., the request will be sent to the closest Cloudflare data center in the U.S., routed to a data center in Europe and then inspected.

As for traffic logs and metadata, you can use Edge Log Delivery to send logs directly from Cloudflare’s edge network to a storage bucket or an on-premises data center. The logs don’t transit through Cloudflare’s core data centers at all.

Finally, if you’re using the recently announced Cloudflare Workers Durable Objects, you can configure jurisdiction restriction. If you run an app on Cloudflare’s serverless infrastructure, you can choose to avoid storing durable objects in some locations for regulation purposes.
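
Based on Cloudflare’s description, restricting where a Durable Object’s state lives looks roughly like this from inside a Worker; the binding name and routing logic here are illustrative:

```typescript
export default {
  async fetch(request: Request, env: { USERS: DurableObjectNamespace }) {
    // Tagging the ID with a jurisdiction pins this object's stored state
    // to data centers in that region -- here, the European Union.
    const id = env.USERS.newUniqueId({ jurisdiction: "eu" });
    const stub = env.USERS.get(id);
    return stub.fetch(request);
  },
};
```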

As you can see, there are several tools and services in the Data Localization Suite. Some of them are already live and others are brand new. But it’s interesting to see that Cloudflare is thinking about data locality even as it bets that serverless computing and edge data centers are the future.