Month: August 2018

16 Aug 2018

Crimson Hexagon regains Facebook data access

Analytics company Crimson Hexagon says Facebook has restored its access to Facebook and Instagram data.

That access was suspended last month, with Facebook saying it was investigating whether the company had violated any of its data use policies. (The social network, of course, has been dealing with the fallout from a separate controversy over user data.)

In this case, the issue appears to be related to some of Crimson Hexagon’s contracts with the U.S. government, with Facebook saying it wasn’t aware of those contracts when contacted by The Wall Street Journal.

What followed, according to a blog post by Crimson Hexagon’s Dan Shore, was “several weeks of constructive discussion and information exchange.” It seems that Facebook was satisfied with what it learned and ended Crimson Hexagon’s suspension.

Shore said that government customers make up less than 5 percent of the company’s business, adding, “To our knowledge, no government customer has used the Crimson Hexagon platform for surveillance of any individual or group.”

“Over time we have enhanced our vetting procedures for government customers,” he said. “Nevertheless, we recognize it is important to go beyond vetting by monitoring these government customers on an ongoing basis to ensure the public’s expectations of privacy are met. As governments and government-sponsored organizations change how they use data, we too must change.”

16 Aug 2018

Google Firebase adds in-app messaging, JIRA integration, new reports and more

Firebase is now Google’s default platform for app developers, and in the four years since its acquisition, the service has greatly expanded its feature set and its integrations with other Google services. Today, it’s rolling out yet another batch of updates that bring new features, deeper integrations and a few design updates to the service.

The highlight of this release is the launch of in-app messaging, which will allow developers to send targeted and contextual messages to users as they use the app. Developers can customize the look and feel of these in-app notifications, which are rolling out today. What’s maybe even more important, though, is that this feature is integrated with Firebase Predictions and Google Analytics for Firebase, so developers can react not just to current behavior but also to Firebase’s predictions of how likely a user is to spend additional money or stop using the app.

Developers who use Atlassian’s JIRA will also be happy to hear that Firebase is launching an integration with this tool. Firebase users can now create JIRA issues based on crash reports in Firebase. This integration will roll out in the next few weeks.

Another new integration is a deeper connection to Crashlytics, which Google acquired from Twitter in early 2017 (together with Fabric). Firebase will now let you export this data to BigQuery to analyze it — and then visualize it in Google’s Data Studio. And once it’s in BigQuery, it’s your data, so you’re not dependent on Firebase’s retention and deletion defaults.
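For developers who turn on that export, pulling the crash data back out of BigQuery takes only a few lines. Here’s a minimal sketch using the Node.js BigQuery client; the project, dataset, table and column names are assumptions for illustration, so check the schema of your own export before running anything like this.

```typescript
// Minimal sketch: query Crashlytics data exported to BigQuery and list the
// noisiest issues from the past week. The table and column names below are
// assumed for illustration; your export's schema may differ.
import {BigQuery} from '@google-cloud/bigquery';

const bigquery = new BigQuery(); // uses application default credentials

async function topCrashIssues(): Promise<void> {
  const query = `
    SELECT issue_title, COUNT(*) AS events
    FROM \`my-project.firebase_crashlytics.com_example_app_ANDROID\`
    WHERE event_timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
    GROUP BY issue_title
    ORDER BY events DESC
    LIMIT 10`;
  const [rows] = await bigquery.query({query});
  for (const row of rows) {
    console.log(`${row.issue_title}: ${row.events} events`);
  }
}

topCrashIssues().catch(console.error);
```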

Talking about reports, Firebase Cloud Messaging is getting a new reporting dashboard and the Firebase Console’s Project Overview page has received a full design overhaul that’ll allow you to see the health and status of your apps on a single page. The Latest Release section now also features live data. These features will start rolling out today and should become available to everybody in the next few weeks.

Firebase Hosting, the platform’s web hosting service, is also getting a small update and now allows you to host multiple websites within one project. And when you push an update, Firebase Hosting now only uploads the files that have changed between releases, which should speed up that process quite a bit.
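Firebase hasn’t detailed how the changed-files-only upload works under the hood, but the general technique is a content-hash diff against the previous release. A toy sketch of that idea follows; the manifest format and the uploadFile() helper are hypothetical stand-ins, not Firebase’s actual mechanism.

```typescript
// Toy sketch of a delta upload: hash every local file, compare against the
// hashes recorded for the previous release, and push only what changed.
// The manifest format and uploadFile() are hypothetical stand-ins.
import {createHash} from 'crypto';
import {readFileSync, readdirSync, statSync} from 'fs';
import {join} from 'path';

type Manifest = Record<string, string>; // relative path -> sha256 hex digest

function hashTree(dir: string, base = dir): Manifest {
  const manifest: Manifest = {};
  for (const name of readdirSync(dir)) {
    const full = join(dir, name);
    if (statSync(full).isDirectory()) {
      Object.assign(manifest, hashTree(full, base));
    } else {
      manifest[full.slice(base.length + 1)] =
        createHash('sha256').update(readFileSync(full)).digest('hex');
    }
  }
  return manifest;
}

function changedFiles(current: Manifest, previous: Manifest): string[] {
  return Object.keys(current).filter(path => previous[path] !== current[path]);
}

// const toUpload = changedFiles(hashTree('public'), lastReleaseManifest);
// toUpload.forEach(uploadFile); // hypothetical uploader for the new release
```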

16 Aug 2018

Tonal launches at-home digital strength training system

If you want a brutal workout from the comfort of your own home — and have about $3,000 to spend — look no further. Tonal, a strength-training system powered by electromagnetic resistance and machine learning, is launching today to let you get ripped and in shape without having to go to the gym.

There are two key features that make Tonal different from the weight lifting machines you’ll find in the gym. For one, there aren’t actual weights. Instead, Tonal uses electromagnetism to simulate and control weight.

So when you’re doing a bicep curl, for example, “the thing pulling back on you isn’t gravity — it’s an electromagnetic field controlled by a computer algorithm,” Tonal CEO and founder Aly Orady told me at the company’s San Francisco headquarters last week. “It’s digitally-controlled weight.”

The other key feature is the built-in personal trainer. For $49 a month, Tonal members get access to personal training sessions, recommended programs and workouts.

“It’s like having an entire gym and a personal trainer in your home,” Orady said. “That’s a pretty big claim but I’m going to show it to you and you’re going to love it.”

He was right. I loved it in a pure hate kind of way. I had a chance to try it out and I feel confident saying I had the worst day of my year — but, you know, in a good, yet sadistic way. It’s just that I’m horribly out of shape and this machine isn’t messing around.

Tonal works by first determining your baseline strength with a 10-minute test. The test entails completing four movements (seated lat pulldown, seated overhead press, bench press and neutral-grip deadlift) as fast and as powerfully as you can. From there, Tonal gives you a baseline score for your core, upper body and lower body.

As you can see from my results below, I’m very strong.

But seriously — my trainer told me I was very strong. From there, I completed my first workout. And that’s when I realized that while I may be strong, my endurance is non-existent.

As I made my way through my first workout, Tonal could automatically tell that I was on the struggle bus headed farther into struggle town. That’s because Tonal was constantly monitoring the quality of my reps and based on that, dynamically adjusted the weight.
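Tonal hasn’t published the algorithm behind that adjustment, but the feedback loop it describes can be sketched in a few lines. The rep metrics and thresholds below are invented purely for illustration and don’t reflect Tonal’s actual system.

```typescript
// Purely illustrative sketch of digitally controlled resistance: score each
// rep, then nudge the commanded load up or down. None of these numbers or
// thresholds come from Tonal.
interface Rep {
  rangeOfMotion: number; // 0..1, fraction of the full range achieved
  avgVelocity: number;   // meters per second through the lifting phase
}

function nextLoad(currentLbs: number, lastRep: Rep): number {
  const struggling = lastRep.rangeOfMotion < 0.9 || lastRep.avgVelocity < 0.15;
  const cruising = lastRep.rangeOfMotion >= 1.0 && lastRep.avgVelocity > 0.5;
  if (struggling) return Math.max(0, currentLbs - 5); // back off so form holds
  if (cruising) return currentLbs + 2.5;              // add a little challenge
  return currentLbs;                                  // hold steady
}

// Example: a grinding rep at 60 lbs drops the next rep to 55 lbs.
console.log(nextLoad(60, {rangeOfMotion: 0.8, avgVelocity: 0.1})); // 55
```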

Tonal, which mounts to your wall like a TV, is pretty pricey ($2,995), but it joins the ranks of connected fitness startups like Peloton and Mirror. Peloton is an internet-connected cycling bike that retails for $1,995 plus $39 a month for content, while Mirror is similarly an at-home device that lets you see video of a fitness instructor and classmates for exercises like barre, yoga and pilates. Mirror has raised $13 million from Spark Capital, Lerer Hippeau Ventures, First Round Capital and others, but it has yet to launch its product or announce pricing.

Tonal is not disclosing its amount of funding, but has raised money from Mayfield, Shasta, Bolt Capital, Next Play Capital, Upside Partnership and others.

16 Aug 2018

Credit Karma acquires mortgage platform Approved

Credit Karma, the service best known for providing free credit score monitoring and other financial advice (mostly to millennials), is getting into the mortgage business. The company today announced that it has acquired Approved, a mortgage platform that brings modern technology to a process that often still involves faxing documents back and forth. The companies did not disclose the financial details of the transaction.

At first glance, this may seem like a bit of an odd acquisition, given that Approved is mostly a service for banks and mortgage brokers. But it also makes perfect sense for Credit Karma to get into the mortgage business.

Indeed, Credit Karma Chief Product Officer Nikhyl Singhal told me that he sees this as the natural next step in the company’s evolution.

“As we’ve expanded, you’ve seen us move from credit cards as a way to help members with that part of their life to first personal loans to auto — meaning auto loans, auto insurance,” he said. “Today, we’re really talking more publicly about mortgage. Mortgage being for many of our members the most important financial decision they’ll make.”

It’s also no secret that Credit Karma’s largest user base is millennials. As they get older and start getting to the point where they consider buying a home (assuming they are in the financial position to do so), the company obviously wants to keep those users engaged on their platform and offer them more services.

Singhal also stressed that 80 percent of Credit Karma members are active on the service before they get a new mortgage — and Credit Karma obviously knows all of this because it is able to collect a lot of very detailed financial data about its users.

As Singhal noted, Credit Karma has been working on getting deeper into the mortgage business for about 18 months. “The acquisition is just the continuing effort of saying, ‘look, we’re serious about taking our scale and being that trusted destination for our members as it relates to helping them with their mortgage.'”

Credit Karma already offers some mortgage brokerage services, and today’s acquisition is meant to speed up this process with the help of Approved’s technology. “What Approved has spent a lot of time doing is working with lenders to help them automate and make them more efficient,” Singhal explained. A more efficient process, Singhal expects, means the lenders can reduce rates and save Credit Karma members money.

Approved CEO Andy Taylor and CTO Navtaj Sadhal are both Redfin alums, so they know this business well. Taylor told me that he believes Credit Karma will allow him to scale his service beyond what a stand-alone company could’ve done, and that he sees Approved’s mission as helping consumers navigate the often tedious and painful world of getting a mortgage. “Moving to Credit Karma is going to immediately give us the sort of resources and immediate scale to continue to drive that mission-driven work,” he said. “We can reach significantly more people than we could otherwise. We can spend less time focusing in on the minutiae of building the lender system and more time focusing on bringing transparency to the transaction and having a better loan application process.”

16 Aug 2018

Twitter company email addresses why it’s #BreakingMyTwitter

It’s hard to be a fan of Twitter right now. The company is sticking up for conspiracy theorist Alex Jones when nearly all other platforms have given him the boot; it’s overrun with bots; and now it’s breaking users’ favorite third-party Twitter clients like Tweetbot and Twitterrific by shutting off the APIs those apps relied on. Worse still, Twitter isn’t taking full responsibility for its decisions.

In a company email it shared today, Twitter cited “technical and business constraints” that it can no longer ignore as being the reason behind the APIs’ shutdown.

It said the clients relied on “legacy technology” that was still in a “beta state” after more than 9 years, and had to be killed “out of operational necessity.”

This reads like passing the buck. Big time.

It’s not as if there’s some other mysterious force that maintains Twitter’s API platform, and now poor ol’ Twitter is forced to shut down old technology because there’s simply no other recourse. No.

Twitter, in fact, is the one responsible for its User Streams and Site Streams APIs – the APIs that serve the core functions of these now deprecated third-party Twitter clients. Twitter is the reason these APIs have been stuck in a beta state for nearly a decade. Twitter is the one that decided not to invest in supporting those legacy APIs, or shift them over to its new API platform.

And Twitter is the one that decided to give up on some of its oldest and most avid fans – the power users and the developer community that met their needs – in hopes of shifting everyone over to its own first-party clients instead.

The company even refused to acknowledge how important these users and developers have been to its community over the years, by citing the fact that the APIs it’s terminating – the ones that power Tweetbot, Twitterrific, Tweetings and Talon – are only used by “less than 1%” of Twitter developers. Burn! 

Way to kick a guy when he’s already down, Twitter.

But just because a community is small in numbers, does not mean its voice is not powerful or its influence is not felt.

Hence, the #BreakingMyTwitter hashtag, which Twitter claims to be watching “quite often.”

The one where users are reminding Twitter CEO Jack Dorsey about that time he apologized to Twitter developers for not listening to them, and acknowledged the fact they made Twitter what it is today. The time when he promised to do better.

This is…not better.

The company’s email also says it hopes to eventually learn “why people hire 3rd party clients over our own apps.”

Its own apps?

Oh, you mean like TweetDeck, the app Twitter acquired then shut down on Android, iPhone and Windows? The one it generally acted like it forgot it owned? Or maybe you mean Twitter for Mac (previously Tweetie, before its acquisition), the app it shut down this year, telling Mac users to just use the web instead? Or maybe you mean the nearly full slate of TV apps that Twitter decided no longer needed to exist?

And Twitter wonders why users don’t want to use its own clients?

Or perhaps, users want a consistent experience – one that doesn’t involve a million inconsequential product changes like turning stars to hearts or changing the character counter to a circle. Maybe they appreciate the fact that the third parties seem to understand what Twitter is better than Twitter itself does: Twitter has always been about a real-time stream of information. It’s not meant to be another Facebook-style algorithmic News Feed. The third-party clients respect that. Twitter does not.

Yesterday, the makers of Twitterrific addressed the API changes, noting that their app would no longer be able to stream tweets, send native push notifications or update its Today view, and that new tweets and DMs would be delayed.

It recommended users download Twitter’s official mobile app for notifications going forward.

In other words, while Twitterrific will hang around in its broken state, its customers will now have to run two Twitter apps on their devices – the official one to get their notifications, and the other because they prefer the experience.

A guide to using Twitter’s app for notifications, from Iconfactory
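For context on why timelines and notifications now lag: with User Streams gone, a third-party client can no longer hold open a connection that pushes new tweets as they arrive, and instead has to poll the REST API on a timer. A rough sketch of that fallback is below; signedFetch() and renderTweet() are hypothetical helpers (the first would add the OAuth authorization header), and rate limits cap how often the loop can run.

```typescript
// Rough sketch of the polling fallback left to third-party clients now that
// the streaming APIs are gone. signedFetch() and renderTweet() are
// hypothetical helpers; error handling is omitted for brevity.
declare function signedFetch(url: string): Promise<Response>;
declare function renderTweet(tweet: {id_str: string; text: string}): void;

let sinceId: string | undefined;

async function pollHomeTimeline(): Promise<void> {
  const params = new URLSearchParams({count: '200'});
  if (sinceId) params.set('since_id', sinceId);
  const res = await signedFetch(
    `https://api.twitter.com/1.1/statuses/home_timeline.json?${params}`);
  const tweets: Array<{id_str: string; text: string}> = await res.json();
  if (tweets.length > 0) sinceId = tweets[0].id_str; // newest tweet comes first
  tweets.reverse().forEach(renderTweet);             // render oldest to newest
}

// Polling every 90 seconds or so stays within the endpoint's rate limits,
// which is exactly why new tweets and notifications now show up late.
setInterval(pollHomeTimeline, 90 * 1000);
```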

“We understand why Twitter feels the need to update its API endpoints,” explains Iconfactory co-founder Ged Maheux, whose company makes Twitterrific. “The spread of bots, spam and trolls by bad actors that exploit their systems is bad for the entire Twitterverse, we just wish they had offered an affordable way forward for the developers of smaller, third party apps like ours.”

“Apps like the Iconfactory’s Twitterrific helped build Twitter’s brand, feature sets and even its terminology into what it is today. Our contributions were small to be sure, but real nonetheless. To be priced out of the future of Twitter after all of our history together is a tough pill to swallow for all of us,” he added.

The question many users are now facing: What to do next?

Continue to use now-broken third-party apps? Move to an open platform like Mastodon? Switch to Twitter’s own clients, as it wants, where it plans to “experiment with showing alternative viewpoints” to pop people’s echo chambers…on a service that refuses to kick out people like Alex Jones?

Or maybe it’s time to admit the open forum for everything that Twitter – and social media, really – has promised is failing? Maybe it’s time to close the apps – third-party and otherwise. Maybe it’s time to go dark. Get off the feeds. Take a break. Move on.

The full email from Twitter is below:

Hi team,

Today, we’re publishing a blog post about our priorities for where we’re investing today in Twitter client experiences. I wanted to share some more with you about how we reached these decisions, and how we’re thinking about 3rd party clients specifically.

First, some history:

3rd party clients have had a notable impact on the Twitter service and the products we build. Independent developers built the first Twitter client for Mac and the first native app for iPhone. These clients pioneered product features we all know and love about Twitter, like mute, the pull-to-refresh gesture, and more.

We love that developers build experiences on our APIs to push our service, technology, and the public conversation forward. We deeply respect the time, energy, and passion they’ve put into building amazing things using Twitter.

But we haven’t always done a good job of being straightforward with developers about the decisions we make regarding 3rd party clients. In 2011, we told developers (in an email) not to build apps that mimic the core Twitter experience. In 2012, we announced changes to our developer policies intended to make these limitations clearer by capping the number of users allowed for a 3rd party client. And, in the years following those announcements, we’ve told developers repeatedly that our roadmap for our APIs does not prioritize client use cases — even as we’ve continued to maintain a couple specific APIs used heavily by these clients and quietly granted user cap exceptions to the clients that needed them.

It is now time to make the hard decision to end support for these legacy APIs — acknowledging that some aspects of these apps would be degraded as a result. Today, we are facing technical and business constraints we can’t ignore. The User Streams and Site Streams APIs that serve core functions of many of these clients have been in a “beta” state for more than 9 years, and are built on a technology stack we no longer support. We’re not changing our rules, or setting out to “kill” 3rd party clients; but we are killing, out of operational necessity, some of the legacy APIs that power some features of those clients. And it has not been a realistic option for us today to invest in building a totally new service to replace these APIs, which are used by less than 1% of Twitter developers.

We’ve heard the feedback from our customers about the pain this causes. We check out #BreakingMyTwitter quite often and have spoken with many of the developers of major 3rd party clients to understand their needs and concerns. We’re committed to understanding why people hire 3rd party clients over our own apps. And we’re going to try to do better with communicating these changes honestly and clearly to developers. We have a lot of work to do. This change is a hard, but important step, towards doing it. Thank you for working with us to get there.

Thanks,

Rob

16 Aug 2018

SNES.party lets you play Super Nintendo with your friends

Hot on the heels of the wonderful NES.party comes Haukur Rosinkranz’s SNES.party, a site that lets you play Super Nintendo with all your buds.

Rosinkranz is Icelandic but lives in Berlin now. He made NES.party a year ago while experimenting with WebRTC and WebSockets, and has since updated his software to support the SNES.

“The reason I made it was simply because I discovered how advanced the RTC implementation in Chrome had become and wanted to do something with it,” he said. “When I discovered that it’s possible to take a video element and stream it over the network I just knew I had to do something cool with this and I came up with the idea of streaming emulators.”
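In browser terms, the trick Rosinkranz describes comes down to two standard APIs: captureStream() on the element the emulator draws into, and an RTCPeerConnection to ship those frames to the other player. Here is a stripped-down sketch of the idea; signaling and the emulator itself are omitted, and captureStream() support varies by browser.

```typescript
// Stripped-down sketch of streaming an emulator's <canvas> output over WebRTC.
// Signaling (exchanging the offer/answer and ICE candidates, e.g. over a
// WebSocket) and the emulator itself are omitted.
const canvas = document.querySelector('canvas')!; // the emulator draws here
const stream = canvas.captureStream(60);          // capture at ~60 fps

const pc = new RTCPeerConnection({
  iceServers: [{urls: 'stun:stun.l.google.com:19302'}],
});
stream.getTracks().forEach(track => pc.addTrack(track, stream));

async function startHosting(sendToGuest: (sdp: string) => void): Promise<void> {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // Deliver the offer through whatever signaling channel the site uses.
  sendToGuest(JSON.stringify(pc.localDescription));
}

// Controller input from the guest could travel back over an RTCDataChannel,
// so both players end up driving the same emulator instance.
```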

He said it took him six months to build the app and a month to add SNES support.

“It’s hard to say how long it took because I basically created my own framework for web applications that need realtime communication between one or more participants,” he said. He is a freelance programmer.

It’s a clever hack that could add a little fun to your otherwise dismal day. Feel like a little Link to the Past? Pop over here and let’s play!

16 Aug 2018

Google One is more proof of commoditization of consumer cloud storage

We have long known that the prices of cloud storage services like Dropbox, Google Drive and Microsoft OneDrive have been falling over time. Yesterday’s launch of Google One in the U.S. dropped the price of Google storage even further, cutting the cost per terabyte per month in half and driving this point home even more clearly.

As Frederic Lardinois pointed out in his post, 2 terabytes of storage now costs $9.99 a month. Consider that before Google One, that same price bought just 1 terabyte of storage. By signing up for Google One, you could double your storage without paying one penny more, and, let’s face it, that was already a ton of storage before the change.

Let’s compare that with some of the other players out there. Each one is a little different, but the storage costs tell a story.

Google One’s shift to 2 TB for $9.99 a month puts it in line with Apple’s pricing, which surprisingly had given you the most storage bang for your buck out of these four companies before Google One came along. Who would have thought that Apple was giving its users the best price on anything? Of course, you get access to Office 365, including Word and PowerPoint, with your terabyte of Microsoft OneDrive storage, which is going to add a fair bit of value for many users over and above the pure storage being offered.

Regardless, if you consider Apple and Google’s pricing, the price of a terabyte of cloud storage has dropped to $5.00 a month. That’s pretty darn cheap and it shows just how commoditized online storage has become and how much scale you require to make money.
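That per-terabyte figure falls directly out of the prices cited in this post (Google One’s new 2 TB tier, the matching Apple tier and Dropbox’s $8.25-a-month terabyte, mentioned below); a quick check:

```typescript
// Quick check of monthly cost per terabyte, using only prices cited in this post.
const plans = [
  {name: 'Google One / Apple (2 TB)', monthlyUsd: 9.99, terabytes: 2},
  {name: 'Dropbox (1 TB)', monthlyUsd: 8.25, terabytes: 1},
];
for (const plan of plans) {
  const perTb = plan.monthlyUsd / plan.terabytes;
  console.log(`${plan.name}: ~$${perTb.toFixed(2)} per TB per month`);
}
// Prints roughly $5 per TB for the 2 TB tiers versus $8.25 per TB at Dropbox.
```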

Alan Pelz-Sharpe, principal analyst at Deep Analysis, who has been watching this space for years, says that consumer cloud storage pricing has always been a race to the bottom. “You can only make a margin with mass scale. That’s why firms who are not Microsoft, Amazon or Google are pushing hard for business and enterprise customers. Google One just brings that message home,” he said.

If you get enough scale, as Dropbox has with an estimated 500 million users, and can get even a percentage of those users to pay $8.25 a month for a terabyte of storage, it adds up to real money. When Dropbox filed its S-1 before it went public earlier this year, it reported more than $1 billion in consumer revenue. It would be difficult if not impossible for a startup launching today to compete with the existing players, but the ones out there continue to compete with one another, driving the cost down even further.

Today’s announcement is just another step in that downward price pressure on consumer cloud storage, and when you get double the storage from one day to the next for the exact same price, it shows just how true that is.

16 Aug 2018

Cisco’s $2.35 billion Duo acquisition front and center at earnings call

When Cisco bought Ann Arbor, Michigan-based security company Duo for a whopping $2.35 billion earlier this month, it showed the growing value of security and security startups in the view of traditional tech companies like Cisco.

In yesterday’s earnings report, even before the ink had dried on the Duo acquisition contract, Cisco was reporting that its security business grew 12 percent year over year to $627 million. Given those numbers, the acquisition was top of mind in CEO Chuck Robbins’ comments to analysts.

“We recently announced our intent to acquire Duo Security to extend our intent-based networking portfolio into multi-cloud environments. Duo’s SaaS-delivered solution will expand our cloud security capabilities to help enable any user on any device to securely connect to any application on any network,” he told analysts.

Indeed, security is going to continue to take center stage moving forward. “Security continues to be our customers’ number one concern and it is a top priority for us. Our strategy is to simplify and increase security efficacy through an architectural approach with products that work together and share analytics and actionable threat intelligence,” Robbins said.

That fits neatly with the Duo acquisition, whose guiding philosophy has been to simplify security. It is perhaps best known for its two-factor authentication tool. Often companies send a text with a code number to your phone after you change a password to prove it’s you, but even that method has proven vulnerable to attack.

What Duo does is send a message through its app to your phone asking if you are trying to sign on. You can approve if it’s you or deny if it’s not, and if you can’t get the message for some reason you can call instead to get approval. It can also verify the health of the app before granting access to a user. It’s a fairly painless and secure way to implement two-factor authentication, while making sure employees keep their software up-to-date.

Duo Approve/Deny tool in action on smartphone.
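As a purely generic illustration of that approve/deny flow (not Duo’s actual API or SDK), gating a login on a push approval looks something like the sketch below; the helpers are hypothetical stand-ins for whatever a real integration would call.

```typescript
// Generic sketch of gating a login on a push approval, in the spirit of the
// flow described above. sendPushChallenge() and waitForDecision() are
// hypothetical stand-ins; a real integration would use Duo's documented APIs.
declare function sendPushChallenge(username: string): Promise<{challengeId: string}>;
declare function waitForDecision(
  challengeId: string,
  timeoutMs: number,
): Promise<'approve' | 'deny' | 'timeout'>;

async function secondFactorCheck(username: string): Promise<boolean> {
  const {challengeId} = await sendPushChallenge(username); // phone shows Approve / Deny
  const decision = await waitForDecision(challengeId, 60_000);
  if (decision === 'approve') return true; // let the primary login proceed
  // On 'deny' or 'timeout', fall back to a phone-call verification or block the login.
  return false;
}
```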

While Cisco’s security revenue accounted for a fraction of the company’s overall $12.8 billion for the quarter, the company clearly sees security as an area that could continue to grow.

Cisco hasn’t been shy about using its substantial cash holdings to expand in areas like security beyond pure networking hardware to provide a more diverse recurring revenue stream. The company currently has over $54 billion in cash on hand, according to Y Charts.

Cisco spent a fair amount of money on Duo, which according to reports has $100 million in annual recurring revenue, a number that is expected to continue to grow substantially. Duo had raised over $121 million in venture investment since inception. In its last funding round in September 2017, the company raised $70 million on a valuation of $1.19 billion.

The acquisition price ended up more than doubling that valuation. That could be because it’s a security company with recurring revenue, and Cisco clearly wanted it badly as another piece in its security solutions portfolio, one it hopes can help keep pushing that security revenue needle ever higher.

16 Aug 2018

Arm wants to power your next laptop

Arm, the company that designs the chips that power virtually every smartphone and IoT device, published its roadmap for the next two years today. That’s a first for Arm, and the reason for the move, it seems, is that the company wants to highlight its ambitions to get its chips into laptops.

So far, Arm-based laptops are few and far between, though Microsoft recently made a major move in this direction with its push for always-connected Windows laptops. While that sounds great in theory, with laptops that need only a single charge to run all day, there’s still a performance penalty to pay compared to using an x86 chip. Arm says that gap is closing quickly, though, while its chips offer a better balance of performance and battery life.

Indeed, Ian Smythe, Arm’s senior director of its marketing programs, doesn’t shy away from comparing the next generations of its architectures with Intel’s mid-tier Core i5 processors.

“I think it’s clear that we’re on a transformative path,” Smythe said during a press conference ahead of today’s announcement. “It’s a compute journey that is changing the way that computers are able to be delivered in large screen form factor devices. And our vision is very much around how we’re going to be driving laptop performance from that mobile innovation base and how we’re going to be working with foundry partners to deliver that performance in today’s and tomorrow’s leading processes.”

So what does Arm’s roadmap look like? The details are still quite sparse, but this year, the company will deliver its Cortex-A76 architecture that its partners will likely build using both 10nm and 7nm processes. Next year, it’ll launch its Deimos CPUs, which will most likely all be built using 7nm processes. Come 2020, it’ll launch its Hercules processors for both 7nm and 5nm processes.

In terms of raw compute performance, the A76 can compete with some Intel i5 chips and Arm promises less than 5W TDP. Raw compute power based on a single benchmark isn’t all there is to making a computer run smoothly, of course, but this signals Arm’s ambitions (and those of its partners) in competing for the laptop market — while at the same time making smartphones far more powerful, too.

Can Arm-based processors really make a dent in the laptop market, though? Smythe surely thinks so. “I think with every disruption we created an opportunity,” he said. “That toehold [in the laptop market] can grow when you can demonstrate the benefits. I think that with the first Windows on Arm devices we’re sharing that capability and as we move towards Cortex-A76-based devices and beyond, I think that capability and disruption offers opportunity beyond where we are today.”