07 Dec 2020

Twitter users complain of timelines being overrun with ‘Promoted Tweets’

Twitter’s timeline is currently overrun with ads for some users, in what appears to be a glitch involving the distribution of Promoted Tweets. Typically, a Promoted Tweet — which is just a regular tweet an advertiser has paid to promote more broadly — will appear just once at the top of a user’s timeline, then scroll through the timeline like any other tweet. Now, however, Promoted Tweets are popping up with increased frequency. Some users report seeing them as often as every 4 to 6 tweets, in fact. Others are reporting seeing the same Promoted Tweet more than once.

This indicates some sort of issue with Twitter’s ad system, as the company intends for Promoted Tweets to be targeted and relevant to the end user, without being an overly frequent part of users’ timelines.

As Twitter’s Business website explains, “we’re thoughtful in how we display Promoted Tweets, and are conservative about the number of Promoted Tweets that people see in a single day.”

That’s obviously not the case when it seems like nearly every other tweet is now an ad — and often, a repeated ad.

Twitter has not yet publicly addressed the bug through its @TwitterSupport account, or others that communicate with the public, like @Twitter, @TwitterComms, or @TwitterMktg, so it’s unclear how many users are impacted, on which platforms or in which geographic regions. However, we’ve seen complaints coming from users both in the U.S. and abroad, and on both the “Home” and “Recent Tweets” timelines.

Given the lack of updates and information, some Twitter users have been dealing with the influx of Promoted Tweets by muting or blocking the advertiser’s account. That could have lasting consequences for advertisers, who won’t be able to reach those users again once they’ve been blocked.

Twitter, reached for comment, says it’s looking into the issue. We’ll update when the company has more to share.

07 Dec 2020

The mikme pocket is a fantastic mobile audio solution for podcasters, reporters and creators

Portable audio recording solutions abound, and many recently released devices have done a lot to improve the convenience and quality of sound recording devices you can carry in your pocket – spurred in part by smartphones and their constant improvement in video recording capabilities. A new device from Austria’s mikme, the mikme pocket (€369.00 or just under $450 USD), offers a tremendous amount of flexibility and quality in a very portable package, delivering what might just be the ultimate pocket sound solution for reporters, podcasters, video creators and more.

The basics

mikme pocket is small – about half the size of a smartphone, but square and probably twice as thick. It’s not as compact as something like the Rode Wireless GO, but it contains onboard memory and a Bluetooth antenna, making it possible to both record locally and transmit audio directly to a connected smartphone from up to three mikme pockets at once.

The mikme pocket features a single button for control, as well as dedicated volume buttons, a 3.5mm headphone jack for monitoring audio, a micro-USB port for charging and for offloading files via physical connection, and Bluetooth pairing and power buttons. It has an integrated belt clip, as well as a 3/8″ thread mount for mic stands, with an adapter included for mounting to 1/4″ standard camera tripod connections.

In the box, mikme has also included a lavalier microphone with a mini XLR connector (which is the interface the pocket uses), along with a clip and two windscreens for the mic. The company also sells a ‘pro’ lavalier mic as a separate add-on purchase (€149.00 or around $180 USD), which improves on the included lav in terms of audio quality and dynamic range.

Image Credits: mikme

The mikme pocket’s internal battery lasts up to 3.5 hours of recording time, and it can hold a charge in standby mode for more than six months between recordings.

Design and performance

The mikme pocket is a pretty unadorned black block, but its unassuming design is one of its strengths. It has a textured matte finish that helps with grip, and it’s easy to hide in dark clothing; plus, the integrated belt clip works exactly as intended, ensuring the pack is easily secured to anyone you’re trying to wire for sound. It features a single large button for simplified control, which also shows its connectivity status via an LED backlight.

Controls for more advanced functions like Bluetooth connectivity, as well as the micro-USB port, are located on the bottom where they’re unlikely to be pressed accidentally during recording. The mini XLR interface for microphones means that once a mic is plugged in, it’s securely locked in place and won’t be jostled out during sessions.

You can use the mikme pocket on its own, thanks to its 16GB of built-in local storage, but it really shines when used in tandem with the smartphone app. The app allows you to connect up to three pockets simultaneously, and provides a built-in video recorder so you can take full advantage of modern devices like the iPhone 12 to effortlessly capture real-time synced audio while you film. The mikme pocket and app also have a failsafe built in: thanks to the local recording backup, any gaps caused by connection dropouts can be filled in after the fact.
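
To make that failsafe concrete, here is a rough sketch of how a gap-filling merge could work, treating the Bluetooth stream as the primary track and the device’s complete local recording as the patch source. This is a hypothetical illustration with assumed data structures, not mikme’s actual implementation.

```typescript
// Hypothetical sketch of dropout patching: the streamed track may have gaps,
// while the on-device recording is complete. All names and structures are assumed.

interface AudioChunk {
  startSample: number;   // offset into the session, in samples
  samples: Float32Array; // PCM audio data received over Bluetooth
}

// Merge streamed chunks with the local backup, preferring streamed audio
// and filling any uncovered stretches from the backup recording.
function patchDropouts(
  streamed: AudioChunk[],
  localBackup: Float32Array,
  totalSamples: number
): Float32Array {
  const out = new Float32Array(totalSamples);
  const covered = new Uint8Array(totalSamples);

  // Lay down whatever arrived over Bluetooth.
  for (const chunk of streamed) {
    out.set(chunk.samples, chunk.startSample);
    covered.fill(1, chunk.startSample, chunk.startSample + chunk.samples.length);
  }

  // Fill every sample the stream missed from the complete local recording.
  for (let i = 0; i < totalSamples; i++) {
    if (!covered[i]) out[i] = localBackup[i];
  }
  return out;
}
```

However the app handles this internally, the upshot is the same: the published file is only as lossy as the local recording, not the Bluetooth link.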

In terms of audio quality, the sound without adjusting any settings is excellent. Like all lavalier mics, you’ll get better results the closer you can place the actual mic capsule to a speaker’s mouth, but the mikme pocket produced exceptionally clean, high-quality audio right out of the box – in environments that weren’t particularly sound isolated or devoid of background noise.

The included mini XLR lav mic is probably good enough for the needs of most amateur and enthusiast users, while the lavalier pro is a great upgrade option for anyone looking to make the absolute most of their recordings, especially with post-processing via desktop audio editing software. The mikme app has built-in audio tweaking controls with a great visual interface that allows you to hear the effects of processing tweaks in real-time, which is great for maximizing sound quality on the go before sharing clips and videos directly from your device to social networks or publishing platforms.

Bottom line

From on-phone shotgun mics to handheld recorders and much more, there are plenty of options out there for capturing audio on the go, but the mikme pocket offers the best balance: very high-quality sound that’s essentially ready to publish, in a package that’s extremely easy to carry with you and durable and user-friendly enough to suit newcomers and experts alike.

07 Dec 2020

SpaceX snags $885M from FCC to serve rural areas with Starlink

The FCC has just published the results of its Rural Digital Opportunity Fund Phase I auction, which sounds rather stiff but involves distributing billions to broadband providers that bring solid internet connections to under-served rural areas “on the wrong side of the digital divide.” $885 million is earmarked for SpaceX, whose Starlink satellite service could be a game-changer for places where laying fiber isn’t an option.

Only three other companies garnered more funds: Charter with $1.22 billion; Minnesota and Iowa provider LTD Broadband with $1.32B; and utility collective Rural Electric Cooperative Consortium with $1.1B. Those are all traditional wireline-based broadband providers, and a quick perusal of the list of grantees suggests no other satellite broadband provider made the cut. In total, 180 bidders were awarded support.

The $9.2B auction (the specifics of the process itself aren’t important here) essentially asks companies to bid on how cheaply they can provide service to a given area, ideally at 100 megabits downstream and 20 up. A local company might collect the hundred grand necessary to fund a fiber line where there’s now copper, while big multi-state concerns may promise to undertake major development projects for hundreds of millions of dollars.

SpaceX’s Starlink has the advantage of not requiring any major construction projects to reach people out in the boonies. All that’s needed is a dish and for their home to be in the area currently covered by the rapidly expanding network of satellites in low-Earth orbit. That means the company can undercut many of its competitors — in theory anyway.

Starlink has not had any major rollout yet, only small test deployments, which according to SpaceX have gone extremely well. The first wave of beta testers for the service will be expected to pay $99 per month plus a one-time $500 installation fee, but what the cost of the commercial service would be is anyone’s guess (probably a bit lower).

In order to secure the $885M in the FCC’s auction, SpaceX will need to demonstrate that it can provide solid service to the areas it has claimed at a reasonable price, so we can expect the costs to be in line with terrestrial broadband offerings. No other satellite broadband provider operates in that price range (Swarm offers IoT connections for $5/month, but that’s a totally different category).

The FCC doesn’t just knock on Elon Musk’s door with a big check, though. The company must meet “periodic buildout requirements” at the locations it has promised to serve, at which point funds will be disbursed. This continues for a period of several years, and should help the fledgling internet provider stay alive while undergoing the rigors and uncertainties of launch. By the time the FCC cash dries up, the company will ideally have several million subscribers propping it up.

This is “Phase I” of the auction, targeting the areas most in need of new internet service; Phase II will cover “partially-served” areas that perhaps have one good provider but no competition. Whether SpaceX will be able (or want) to make a push there is unclear, though the confidence with which the company has been approaching the market suggests it may make a limited play for these somewhat more hardened target markets.

The push for expanding rural broadband has been a particular focus for outgoing FCC Chairman Ajit Pai, who had this to say about its success so far:

We structured this innovative and groundbreaking auction to be technologically neutral and to prioritize bids for high-speed, low-latency offerings.  We aimed for maximum leverage of taxpayer dollars and for networks that would meet consumers’ increasing broadband needs, and the results show that our strategy worked. This auction was the single largest step ever taken to bridge the digital divide and is another key success for the Commission in its ongoing commitment to universal service. I thank our staff for working so hard and so long to get this auction done on time, particularly during the pandemic.

You can read the full list of auction winners at the FCC’s press release here.

07 Dec 2020

3 questions to ask before adopting microservice architecture

As a product manager, I’m a true believer that you can solve any problem with the right product and process, even one as gnarly as the multiheaded hydra that is microservice overhead.

Working for Vertex Ventures US this summer was my chance to put this to the test. After interviewing 30+ industry experts from a diverse set of companies — Facebook, Fannie Mae, Confluent, Salesforce and more — and hosting a webinar with the co-founders of PagerDuty, LaunchDarkly and OpsLevel, we were able to answer three main questions:

  1. How do teams adopt microservices?
  2. What are the main challenges organizations face?
  3. Which strategies, processes and tools do companies use to overcome these challenges?

How do teams adopt microservices?

Out of dozens of companies we spoke with, only two had not yet started their journey to microservices, but both were actively considering it. Industry trends mirror this as well. In an O’Reilly survey of 1500+ respondents, more than 75% had started to adopt microservices.

It’s rare for companies to start building with microservices from the ground up. Of the companies we spoke with, only one had done so. Some startups, such as LaunchDarkly, planned to build their infrastructure on microservices but turned to a monolith once they realized the high cost of overhead.

“We were spending more time effectively building and operating a system for distributed systems versus actually building our own services so we pulled back hard,” said John Kodumal, CTO and co-founder of LaunchDarkly.

“As an example, the things we were trying to do in mesosphere, they were impossible,” he said. “We couldn’t do any logging. Zero downtime deploys were impossible. There were so many bugs in the infrastructure and we were spending so much time debugging the basic things that we weren’t building our own service.”

As a result, it’s more common for companies to start with a monolith and move to microservices to scale their infrastructure with their organization. Once a company reaches ~30 developers, most begin decentralizing control by moving to a microservice architecture.

Large companies with established monoliths are keen to move to microservices, but costs are high and the transition can take years. Atlassian’s platform infrastructure is in microservices, but legacy monoliths in Jira and Confluence persist despite ongoing decomposition efforts. Large companies often get stuck in this transition. However, a strong top-down strategy combined with bottom-up dev team support can help companies such as Freddie Mac make substantial progress.

Some startups, like Instacart, first shifted to a modular monolith that allows the code to reside in a single repository while beginning the process of distributing ownership of discrete code functions to relevant teams. This enables them to mitigate the overhead associated with a microservice architecture by balancing the visibility of having a centralized repository and release pipeline with the flexibility of discrete ownership over portions of the codebase.
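
As a rough illustration of that pattern (a hypothetical sketch, not Instacart’s actual codebase), the whole application lives in one repository while each team’s module exposes only a narrow public surface:

```typescript
// Hypothetical modular monolith in a single repository: each team owns a
// module, and other modules may only use its exported API, never its internals.
// (In a real repo these would be separate directories with boundaries enforced
// by lint rules or build tooling; namespaces are used here for brevity.)

namespace Orders {
  export interface Order { id: string; total: number; }

  // Internal helper: not exported, so other teams' code can't depend on it.
  function nextId(): string {
    return Math.random().toString(36).slice(2);
  }

  export function createOrder(userId: string, total: number): Order {
    console.log(`order created for ${userId}`);
    return { id: nextId(), total };
  }
}

namespace Billing {
  // Billing depends only on Orders' public API.
  export function checkout(userId: string, total: number): void {
    const order = Orders.createOrder(userId, total);
    console.log(`charging ${userId} for order ${order.id} (${order.total})`);
  }
}

Billing.checkout("user-123", 42);
```

The single repository keeps visibility and the release pipeline centralized, while the module boundaries give each team discrete ownership of its slice of the codebase.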

What challenges do teams face?

Teams may take different routes to arrive at a microservice architecture, but they tend to face a common set of challenges once they get there. John Laban, CEO and co-founder of OpsLevel, which helps teams build and manage microservices, told us that “with a distributed or microservices based architecture your teams benefit from being able to move independently from each other, but there are some gotchas to look out for.”

Indeed, the linked O’Reilly chart shows how the top 10 challenges organizations face when adopting microservices are shared by 25%+ of respondents. While we discussed some of the adoption blockers above, feedback from our interviews highlighted issues around managing complexity.

The lack of a coherent definition for a service can cause teams to generate unnecessary overhead by creating too many similar services or spreading related services across different groups. One company we spoke with went down the path of decomposing their monolith and took it too far. Their service definitions were too narrow, and by the time decomposition was complete, they were left with 4,000+ microservices to manage. They then had to backtrack and consolidate down to a more manageable number.

Defining too many services creates unnecessary organizational and technical silos while increasing complexity and overhead. Logging and monitoring must be present on each service, but with ownership spread across different teams, a lack of standardized tooling can create observability headaches. It’s challenging for teams to get a single-pane-of-glass view with too many different interacting systems and services that span the entire architecture.
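
A common way to tame that sprawl is a small shared library that every service uses to emit logs in one structured shape, so a central system can aggregate them no matter which team owns the service. Here is a minimal sketch of the idea, with assumed names rather than any particular company’s tooling:

```typescript
// Hypothetical shared logging helper that every microservice imports, so log
// lines carry the same JSON structure across teams and are easy to aggregate.

interface LogContext {
  service: string;    // which microservice emitted the log
  requestId?: string; // correlation ID propagated between services
}

export function createLogger(ctx: LogContext) {
  const emit = (level: "info" | "warn" | "error", msg: string, extra: object = {}) => {
    // One JSON shape for every service makes a single-pane-of-glass view feasible.
    console.log(
      JSON.stringify({ ts: new Date().toISOString(), level, ...ctx, msg, ...extra })
    );
  };
  return {
    info: (msg: string, extra?: object) => emit("info", msg, extra),
    warn: (msg: string, extra?: object) => emit("warn", msg, extra),
    error: (msg: string, extra?: object) => emit("error", msg, extra),
  };
}

// Usage in any service:
const log = createLogger({ service: "payments", requestId: "req-42" });
log.info("charge succeeded", { amountCents: 1299 });
```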

07 Dec 2020

Cloudflare is testing a Netlify competitor to host Jamstack sites

Cloudflare is working on a new product called Cloudflare Pages. The new product could compete directly with Netlify and Vercel, two cloud hosting companies that let you build and deploy sites using Jamstack frameworks. Popular Jamstack frameworks include Gatsby, Jekyll, Hugo, Vue.js, Next.js, etc.

The new product was discovered on Saturday by reverse engineer Jane Manchun Wong, who found details by looking into Cloudflare’s code.

If you’re not familiar with Jamstack, it’s a popular way of developing and deploying websites at scale. It lets you take advantage of global edge networks with a focus on performance.

Let’s say you’re a very famous pop artist and you’re launching your new album on your website tomorrow. You expect to get a huge traffic spike and a ton of orders.

If you also happen to be a web developer, you could develop a Jamstack website to make sure that your website remains available and loads quickly. Instead of using a traditional content management system to host your content and deliver it to your users, a Jamstack framework lets you decouple the frontend from the backend.

You could write a few posts in your content management system and then deploy your website update. Your Jamstack application will prebuild static pages based on what you just wrote. Those pages will be cached on a global edge network and served in a few milliseconds across the world. It’s like photocopying a letter instead of writing a new copy every time somebody wants to read it.
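
Conceptually, that prebuild step is just a script that runs at deploy time and turns every post into a finished HTML file for the CDN to cache. Here is a toy sketch of the idea; real frameworks like Gatsby or Hugo do far more, and the file layout shown is assumed:

```typescript
// Toy static-site prebuild: render each post to a finished HTML file at deploy
// time, so the edge network only ever serves ready-made pages.

import { promises as fs } from "fs";
import * as path from "path";

interface Post {
  slug: string;
  title: string;
  body: string;
}

function renderPage(post: Post): string {
  return `<!doctype html>
<html>
  <head><title>${post.title}</title></head>
  <body><h1>${post.title}</h1><p>${post.body}</p></body>
</html>`;
}

async function build(posts: Post[], outDir: string): Promise<void> {
  await fs.mkdir(outDir, { recursive: true });
  for (const post of posts) {
    // Each post becomes a static file the CDN can cache worldwide.
    await fs.writeFile(path.join(outDir, `${post.slug}.html`), renderPage(post));
  }
}

build(
  [{ slug: "new-album", title: "The New Album", body: "Out tomorrow." }],
  "dist"
).catch(console.error);
```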

But what if somebody wants to buy your new album on the buy page? On that page, there will be a checkout module, which is dynamic content that can’t be cached. You can leverage a payments API, such as Stripe, so that the user doesn’t load content from your server at all.
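
In practice, that usually means the static page only ships a small script that hands the purchase off to the payment provider’s hosted checkout. Here is a hedged sketch using Stripe.js; the keys, price ID and element ID are placeholders, and the exact options depend on how your Stripe account is configured:

```typescript
// Hedged sketch: a "buy" button on an otherwise static page that redirects
// straight to Stripe's hosted checkout, so no dynamic content has to be served
// from your own infrastructure. All IDs below are placeholders.
import { loadStripe } from "@stripe/stripe-js";

async function buyAlbum(): Promise<void> {
  const stripe = await loadStripe("pk_test_placeholder");
  if (!stripe) throw new Error("Stripe.js failed to load");

  // The static page only triggers a redirect; payment happens on Stripe's side.
  const { error } = await stripe.redirectToCheckout({
    lineItems: [{ price: "price_placeholder_album", quantity: 1 }],
    mode: "payment",
    successUrl: "https://example.com/thanks",
    cancelUrl: "https://example.com/store",
  });
  if (error) console.error(error.message);
}

document.getElementById("buy-button")?.addEventListener("click", () => {
  buyAlbum().catch(console.error);
});
```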

This is a simple example, but many companies have been going down this path. For static content, everything is prebuilt and cached. For dynamic content, companies build microservices that are loaded on demand and that can scale easily.

According to Jane Manchun Wong’s screenshots, Cloudflare Pages lets you deploy sites with a simple Git commit. It integrates directly with your GitHub repository if you’re hosting your source code on the platform.

You can configure a Node.js build command that will be executed every time you change something in your code. Once the build process is done, your website is accessible to your end users.

There will be a free tier for Cloudflare Pages, which lets you generate 500 builds per month. Above that quota, Cloudflare will most likely let you pay for more builds and more features.

By default, Cloudflare gives you a subdomain name to try out the service — your-website-name.pages.dev. Of course, you can configure your own domain name and combine Cloudflare Pages with other Cloudflare products.

You can already read Cloudflare Pages’ documentation, as spotted by Jane Manchun Wong. So it sounds like Cloudflare Pages could launch sooner rather than later.

07 Dec 2020

The cloud can’t solve all your problems

The way a team functions and communicates dictates the operational efficiency of a startup and sets the scene for its culture. It’s way more important than what social events and perks are offered, so it’s the responsibility of a founder and/or CEO to provide their team with a technology approach that will empower them to achieve and succeed — now and in the future.

With that in mind, moving to the cloud might seem like a no-brainer because of its huge benefits around flexibility, accessibility and the potential to rapidly scale, while keeping budgets in check.

But there’s an important consideration here: Cloud providers won’t magically give you efficient teams.

It will get you going in the right direction, but you need to think even farther ahead. Designing a startup for scale means investing in the right technology today to underpin growth for tomorrow and beyond. Let’s look at how the way you approach and manage your cloud infrastructure will impact the effectiveness of your teams and your ability to scale.

Hindsight is 20/20

Adopting cloud is easy, but adopting it properly with best practices and in a secure way? Not so much. You might think that when you move to cloud, the cloud providers will give you everything you need to succeed. But even though they’re there to provide a wide breadth of services, these services won’t necessarily have the depth that you will need to run efficiently and effectively.

Yes, your cloud infrastructure is working now, but think beyond the first prototype or alpha and toward production. Considering where you want to get to, and not just where you are, will help you avoid costly mistakes. You definitely don’t want to struggle through redefining processes and ways of working when you’re also managing time sensitivities and multiple teams.

If you don’t think ahead, you’ll have to put new processes in place later. It will take a whole lot longer, cost more money and cause a lot more disruption to teams than if you had done it earlier.

For any founder, making strategic technology decisions right now should be a primary concern. It feels more natural to put off those decisions until you come face to face with the problem, but you’ll just end up needing to redo everything as you scale and cause your teams a world of hurt. If you don’t give this problem attention at the beginning, you’re just scaling the problems with the team. Flaws are then embedded within your infrastructure, and they’ll continue to scale with the teams. When these things are rushed, corners are cut and you will end up spending even more time and money on your infrastructure.

Build effective teams and reduce bottlenecks

When you’re making strategic decisions on how to approach your technology stack and cloud infrastructure, the biggest consideration should be what makes an effective team. Given that, keep these things top of mind:

  • Speed of delivery: Having developers able to self-serve cloud infrastructure with best practices built in will enable speed. Development tools that factor in visibility and communication integrations will give teams transparency into how they are iterating and surface problems, bugs or integration failures.
  • Speed of testing: This is all about ensuring fast feedback loops as your team works on critical new iterations and features. Developers should be able to test as much as possible locally and through continuous integration systems before they are ready for code review.
  • Troubleshooting problems: Good logging, monitoring and observability services give teams awareness of issues and the ability to resolve problems quickly or reproduce customer complaints in order to develop fixes.

07 Dec 2020

Microsoft announces its first Azure data center region in Denmark

Microsoft continues to expand its global Azure data center presence at a rapid clip. After announcing new regions in Austria and Taiwan in October, the company today revealed its plans to launch a new region in Denmark.

As with many of Microsoft’s recent announcements, the company is also attaching a commitment to provide digital skills to 200,000 people in the country (in this case, by 2024).

“With this investment, we’re taking the next step in our longstanding commitment to provide Danish society and businesses with the digital tools, skills and infrastructure needed to drive sustainable growth, innovation, and job creation. We’re investing in Denmark’s digital leap into the future – all in a way that supports the country’s ambitious climate goals and economic recovery,” said Nana Bule, General Manager, Microsoft Denmark.

Azure regions (Image Credits: Microsoft)

The new data center region, which will be powered by 100% renewable energy and feature multiple availability zones, will support what has now become the standard set of Microsoft cloud products: Azure, Microsoft 365, Dynamics 365 and Power Platform.

As usual, the idea here is to provide low-latency access to Microsoft’s tools and services. It has long been Microsoft’s strategy to blanket the globe with local data centers. Europe is a prime example of this, with regions (both operational and announced) in about a dozen countries already. In the U.S., Azure currently offers 13 regions (including three exclusively for government agencies), with a new region on the West Coast coming soon.

“This is a proud day for Microsoft in Denmark,” said Brad Smith, President, Microsoft. “Building a hyper-scale datacenter in Denmark means we’ll store Danish data in Denmark, make computing more accessible at even faster speeds, secure data with our world-class security, protect data with Danish privacy laws, and do more to provide to the people of Denmark our best digital skills training. This investment reflects our deep appreciation of Denmark’s green and digital leadership globally and our commitment to its future.”

07 Dec 2020

Cloudflare lets you control where data is stored and accessible

Cloudflare has launched a new set of features today called the Data Localization Suite. Companies on the Enterprise plan can choose to enable the features through an add-on.

With the Data Localization Suite, Cloudflare is making it easier to control where your data is stored and whether it can be accessed depending on where the request is coming from. It’s a feature set that lets you take advantage of Cloudflare’s products, such as its serverless infrastructure, while complying with local and industry-specific regulations.

For instance, the Data Localization Suite seems particularly relevant following this year’s EU ruling that ended the Privacy Shield. If you’re operating in a highly regulated industry, such as healthcare and legal, you may have some specific data requirements as well.

Let’s say you’re building an application that should store data in the European Union exclusively. You could choose to run your application in a single data center, or a single cloud region. But that doesn’t scale well if you expect to get customers all around the world. You could also suffer from outages.

With Cloudflare’s approach, everything is encrypted at rest and in transit (if you enforce mandatory TLS encryption). You can manage your private keys yourself, or set rules governing where your private keys can be used.

For instance, a private key that lets you inspect traffic could be accessible from a European data center exclusively. Now that the Privacy Shield is invalid, this setup makes it easier to comply with European regulation.

Cloudflare inspects network requests in order to know what to do with them. For instance, the company tries to reject malicious bot requests automatically. You can choose to inspect those requests in a region in particular. If a malicious bot is running on a server in the U.S., the request will be sent to the closest Cloudflare data center in the U.S., routed to a data center in Europe and then inspected.

As for traffic logs and metadata, you can use Edge Log Delivery to send logs directly from Cloudflare’s edge network to a storage bucket or an on-premise data center. It doesn’t transit through Cloudflare’s core data centers at all.

Finally, if you’re using the recently announced Cloudflare Workers Durable Objects, you can configure jurisdiction restriction. If you run an app on Cloudflare’s serverless infrastructure, you can choose to avoid storing durable objects in some locations for regulation purposes.
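
Based on Cloudflare’s announcement, the restriction is expressed when a Worker creates the Durable Object’s ID. Here is a sketch of what that looks like; treat the exact option and binding names as assumptions rather than a definitive API reference:

```typescript
// Sketch of a jurisdiction-restricted Durable Object inside a Cloudflare Worker.
// Types come from @cloudflare/workers-types; the binding name is an assumption.

export interface Env {
  COUNTER: DurableObjectNamespace; // binding configured in wrangler.toml
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Ask for an object ID whose storage is restricted to EU data centers.
    const id = env.COUNTER.newUniqueId({ jurisdiction: "eu" });
    const stub = env.COUNTER.get(id);

    // Reads and writes against this object now stay within that jurisdiction.
    return stub.fetch(request);
  },
};
```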

As you can see, there are several tools and services in the Data Localization Suite. Some of them are already live and others are brand new. But it’s interesting to see that Cloudflare is thinking about data locality even as it bets that serverless computing and edge data centers are the future.

07 Dec 2020

The IPO market looks hot as Airbnb and C3.ai raise price targets

So much for a December slowdown — this morning, Airbnb and C3.ai raised their IPO price ranges and we got early pricing information from Upstart and Wish.

This gives us a good amount of ground to cover. So, we’ll dig into Airbnb’s new price range first, working to understand how richly investors are valuing the American home-sharing unicorn. We’ll repeat the experiment with C3.ai, a company we find utterly fascinating. And then we’ll calculate valuation ranges for both Upstart — a consumer lending fintech — and Wish — an e-commerce giant — to see where they stand.


There are other IPOs in the wings: we’re still waiting on early pricing information from Affirm and Roblox, and DoorDash raised its range last week.

The upcoming calendar is busy. C3.ai and DoorDash should price tomorrow and trade Wednesday. Airbnb should price Wednesday and trade Thursday. Upstart will price next Tuesday and trade the following day.

In normal times, we’d take each element of today’s IPO news fusillade and parse it in its own post. But we only have ten fingers, so let’s double-time through the numbers and get to what matters while you drink coffee. To work!

Airbnb and C3.ai

Public investors are bidding shares of both Airbnb and C3.ai up ahead of their debuts.

This morning, C3.ai, a company that sells enterprise AI technology, raised its IPO price range from $31 – $34 to $36 – $38 per share. It both raised and tightened its range, the latter often happening as a company gets a better handle on where demand lies as it ramps towards final pricing and eventual trading.

There are two ways to calculate the company’s new valuation range. The first uses the company’s non-diluted, expected post-IPO share count of 98,655,627, a figure that includes a little more than 2 million shares reserved for underwriters. At that share count, C3.ai would be worth between $3.57 billion and $3.77 billion.

07 Dec 2020

California’s CA Notify app to offer statewide exposure notification using Apple and Google’s framework

The state of California has now expanded access to its CA Notify app to everyone in the state, after originally deploying the app in a pilot program at UC Berkeley in November, which later expanded to other UC campuses. The statewide launch of the app, announced by California Governor Gavin Newsom on Monday, means that the tool based on Apple and Google’s exposure notification API will be available for download and opt-in use to anyone with a compatible iPhone or Android device as of this Thursday, December 10.

Apple and Google’s jointly developed exposure notification API uses Bluetooth to determine contact between confirmed COVID-positive individuals and others, alerting users to potential exposure without storing or transmitting any data related to their identity or location. The system uses a randomized, rolling identifier to communicate possible exposure to other devices, and individual state health authorities can customize specific details like how close, and for how long, individuals need to be in contact in order to qualify as an exposure risk.

In California’s case, the state has defined qualifying contact as being within 6 feet of a confirmed COVID-19-positive individual for a period of 15 minutes or more. Users who receive a positive COVID-19 test will get a text message from the state’s Department of Public Health containing a code they can input into the CA Notify app to trigger an alert broadcast to any phones that met the criteria above during the prior 14 days (the period during which the virus is transmissible).
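
Those thresholds are essentially parameters the state plugs into the framework’s matching step. As an illustration only (the real Exposure Notification system matches rolling identifiers and estimates proximity from Bluetooth signal attenuation rather than literal distance), the decision boils down to something like this:

```typescript
// Illustrative sketch of the exposure check CA Notify's thresholds describe.
// This is not Apple and Google's implementation; it only shows how the
// 6-foot / 15-minute / 14-day parameters combine.

interface ContactWindow {
  daysAgo: number;               // how many days before today the contact occurred
  estimatedDistanceFeet: number;
  durationMinutes: number;
}

const MAX_DISTANCE_FEET = 6;
const MIN_DURATION_MINUTES = 15;
const LOOKBACK_DAYS = 14; // roughly the window during which the virus is transmissible

function shouldNotify(contacts: ContactWindow[]): boolean {
  return contacts.some(
    (c) =>
      c.daysAgo <= LOOKBACK_DAYS &&
      c.estimatedDistanceFeet <= MAX_DISTANCE_FEET &&
      c.durationMinutes >= MIN_DURATION_MINUTES
  );
}

// Example: a 20-minute contact at 4 feet, 3 days ago, would trigger a notification.
console.log(shouldNotify([{ daysAgo: 3, estimatedDistanceFeet: 4, durationMinutes: 20 }]));
```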

As mentioned, there’s no personal information transmitted from a user’s device via the notification system, and it’s a fully opt-in arrangement. Other states have already deployed exposure notification apps based on the Apple/Google API, as have many other countries around the world. It’s not a replacement for a contact tracing system, in which healthcare professionals attempt to determine who a COVID-19 patient came in contact with to find out how they might have contracted the virus, and to whom they may spread it, but it is a valuable component of a comprehensive tracing program that can improve its efficacy and success.