Month: July 2018

25 Jul 2018

Dodged questions from Facebook’s press call on misinformation

Facebook avoided some of the toughest inquiries from reporters yesterday during a conference call about its efforts to fight election interference and fake news. The company did provide additional transparency on important topics by subjecting itself to intense questioning from a gaggle of its most vocal critics, and a few bits of interesting news did emerge:

  • Facebook’s fact-checking partnerships now extend to 17 countries, up from 14 last month
  • Top searches in its new political ads archive include California, Clinton, Elizabeth Warren, Florida, Kavanaugh, North Carolina and Trump; and its API for researchers will open in August
  • To give political advertisers a quicker path through its new verification system, Facebook is considering a preliminary location check that would later expire unless they verify their physical mailing address

Yet deeper questions went unanswered. Will it be transparent about downranking accounts that spread false news? Does it know if the midterm elections are already being attacked? Are politically divisive ads cheaper?

Here’s a selection of the most important snippets from the call, followed by a discussion of how it evaded some critical topics.

Fresh facts and perspectives

On Facebook’s approach of downranking instead of deleting fake news

Tessa Lyons, product manager for the News Feed: “If you are who you say you are and you’re not violating our Community Standards, we don’t believe we should stop you from posting on Facebook. This approach means that there will be information posted on Facebook that is false and that many people, myself included, find offensive . . . Just because something is allowed to be on Facebook doesn’t mean it should get distribution . . . We know people don’t want to see false information at the top of their News Feed and we believe we have a responsibility to prevent false information from getting broad distribution. This is why our efforts to fight disinformation are focused on reducing its spread. 

When we take action to reduce the distribution of misinformation in News Feed, what we’re doing is changing the signals and predictions that inform the relevance score for each piece of content. Now, what that means is that content appears lower in the News Feed of everyone who might see it, and so fewer people will actually end up encountering it. 

Now, the reason that we strike that balance is because we believe we are working to strike the balance between expression and the safety of our community.

If a piece of content or an account violates our Community Standards, it’s removed; if a Page repeatedly violates those standards, the Page is removed. On the side of misinformation — not Community Standards — if an individual piece of content is rated false, its distribution is reduced; if a Page or domain repeatedly shares false information, the entire distribution of that Page or domain is reduced.”

On how Facebook disrupts misinformation operations targeting elections

Nathaniel Gleicher, head of Cybersecurity Policy: “For each investigation, we identify particular behaviors that are common across threat actors. And then we work with our product and engineering colleagues as well as everyone else on this call to automate detection of these behaviors and even modify our products to make those behaviors much more difficult. If manual investigations are like looking for a needle in a haystack, our automated work is like shrinking that haystack. It reduces the noise in the search environment which directly stops unsophisticated threats. And it also makes it easier for our manual investigators to corner the more sophisticated bad actors. 

In turn, those investigations keep turning up new behaviors, which fuels our automated detection and product innovation. Our goal is to create this virtuous circle where we use manual investigations to disrupt sophisticated threats and continually improve our automation and products based on the insights from those investigations. Look for the needle and shrink the haystack.”

On reactions to political ads labeling, improving the labeling process and the ads archive

Rob Leathern, product manager for Ads: “On the revenue question, the political ads aren’t a large part of our business from a revenue perspective, but we do think it’s very important to be giving people tools so they can understand how these ads are being used. 

I do think we have definitely seen some folks have some indigestion about the process of getting authorized. We obviously think it’s an important trade-off and it’s the right trade-off to make. We’re definitely exploring ways to reduce the time for them from starting the authorization process to being able to place an ad. We’re considering a preliminary location check that might expire after a certain amount of time, which would then become permanent once they verify their physical mailing address and receive the letter that we send to them.

We’re actively exploring ways to streamline the authorization process and are clarifying our policy by providing examples on what ad copy would require authorization and a label and what would not.

We also plan to add more information to the Info and Ads tab for Pages. Today you can see when the Page was created, previous Page names, but over time we hope to add more context for people there in addition to the ads that that Page may have run as well.”

Dodged questions

On transparency about downranking accounts

Facebook has been repeatedly asked to clarify the lines it draws around content moderation. It’s arrived at a controversial policy: content is allowed even if it spreads fake news, gets downranked in News Feed if fact checkers verify the information is false and gets deleted if it incites violence or harasses other users. Repeat offenders in the latter two categories can get their whole profile, Page or Group downranked or deleted.

But that surfaces secondary questions about how transparent it is about these decisions and their impacts on the reach of false news. Hannah Kuchler of The Financial Times and Sheera Frenkel of The New York Times pushed Facebook on this topic. Specifically, the latter asked, “I was wondering if you have any intention going forward to be transparent about who is going — who is down-ranked and are you keeping track of the effect that down-ranking a Page or a person in the News Feed has and do you have those kinds of internal metrics? And then is that also something that you’ll eventually make public?”

Facebook has said that if a post is fact-checked as false, it’s downranked and loses 80 percent of its future views through News Feed. But that ignores the fact that it can take three days for fact checkers to get to some fake news stories, which means those stories have likely already received the majority of their distribution. Facebook has yet to explain how a false rating from fact checkers reduces a story’s total views before and after the decision, or what the ongoing reach reduction is for accounts that are downranked as a whole for repeatedly sharing false-rated news.

Lyons only answered regarding what happens to individual posts rather than providing the requested information about the impact on downranked accounts:

Lyons: “If you’re asking specifically will we be transparent about the impact of fact-checking on demotions, we are already transparent about the rating that fact-checkers provide . . . In terms of how we notify Pages when they share information that’s false, any time any Page or individual shares a link that has been rated false by fact-checkers, if we already have a false rating we warn them before they share, and if we get a false rating after they share, we send them a notification. We are constantly transparent, particularly with Page admins, but also with anybody who shares information about the way in which fact-checkers have evaluated their content.”

On whether politically divisive ads are cheaper and more effective

A persistent question about Facebook’s ads auction is whether it gives preference to inflammatory political ads over neutral ones. The auction system is designed to prioritize more engaging ads, since they’re less likely than boring ads to push users off the social network and thereby reduce future ad views. The concern is that this may incentivize political candidates, and bad actors trying to interfere with elections, to polarize society, because divisive ads that stoke outrage run more efficiently.

Deepa Seetharaman of The Wall Street Journal surfaced this on the call saying, “I’m talking to a lot of campaign strategists coming up to the 2018 election. One theme that I continuously hear is that the more incendiary ads do better, but the effective CPMs on those particular ads are lower than, I guess, neutral or more positive messaging. Is that a dynamic that you guys are comfortable with? And is there anything that you’re doing to kind of change the kind of ads that succeeds through the Facebook ad auction system?”

Facebook’s Leathern fell back on a defense the company has used before to deflect questions about whether Donald Trump got cheaper ad rates during the 2016 election, claiming it’s too hard to assess, given all the factors that go into determining ad prices and reach. Meanwhile, he ignored the question of whether, regardless of the data, Facebook wanted to make changes to ensure divisive ads don’t get preference.

Leathern: “Look, I think that it’s difficult to take a very specific slice of a single ad and use it to draw a broad inference which is one of the reasons why we think it’s important in the spirit of the transparency here to continue to offer additional transparency and give academics, journalists, experts, the ability to analyze this data across a whole bunch of ads. That’s why we’re launching the API and we’re going to be starting to test it next month. We do believe it’s important to give people the ability to take a look at this data more broadly. That, I think, is the key here — the transparency and understanding of this when seen broadly will give us a fuller picture of what is going on.”

On whether there’s evidence of midterm election interference

Facebook failed to adequately protect the 2016 U.S. presidential election from Russian interference. Since then it’s taken a lot of steps to try to safeguard its social network, from hiring more moderators to political advertiser verification systems to artificial intelligence for fighting fake news and the fake accounts that share it.

Internal debates about approaches to the issue and a reorganization of Facebook’s security teams contributed to Facebook CSO Alex Stamos’ decision to leave the company next month. Yesterday, BuzzFeed’s Ryan Mac and Charlie Warzel published an internal memo by Stamos from March urging Facebook to change. “We need to build a user experience that conveys honesty and respect, not one optimized to get people to click yes to giving us more access . . . We need to listen to people (including internally) when they tell us a feature is creepy or point out a negative impact we are having in the world.” And today, Facebook’s Chief Legal Officer Colin Stretch announced his departure.

Facebook’s efforts to stop interference aren’t likely to have completely deterred those seeking to sway or discredit our elections, though. Evidence of Facebook-based attacks on the midterms could fuel calls for government regulation, investments in counter-cyberwarfare, and Robert Mueller’s investigation into Russia’s role.

David McCabe of Axios and Cecilia Kang of The New York Times pushed Facebook to be clear about whether it had already found evidence of interference in the midterms. But Facebook’s Gleicher refused to specify. While it’s reasonable that he didn’t want to jeopardize Facebook’s or Mueller’s investigations, this is something Facebook should at least ask the government for permission to disclose.

Gleicher: “When we find things and as we find things — and we expect that we will — we’re going to notify law enforcement and we’re going to notify the public where we can . . . And one of the things we have to be really careful with here is that as we think about how we answer these questions, we need to be careful that we aren’t compromising investigations that we might be running or investigations the government might be running.”

The answers we need

So Facebook, what’s the impact of a false rating from fact checkers on a story’s total views before and after it’s checked? Will you reveal when whole accounts are downranked and what the impact is on their future reach? Do politically incendiary ads that further polarize society cost less and perform better than politically neutral ads, and, if so, will Facebook do anything to change that? And does Facebook already have evidence that the Russians or anyone else are interfering with the U.S. midterm elections?

We’ll see if any of the analysts who get to ask questions on today’s Facebook earnings call will step up.

25 Jul 2018

The Traeger Timberline 850 turns BBQ from art to science

This review took a lot of pork. Over the last few months, I’ve used the Traeger Timberline 850 several times a week. Cooking on this grill is easier than using an oven. With a little bit of planning, a person can simultaneously grill a flock of chickens, a couple of pork butts and a load of veggies and have them turn out perfectly. I did, and it was the best Mother’s Day ever.

First the good.

It’s simple: This grill can cook the perfect brisket every time. It doesn’t take any skill. Just follow the instructions, and in 12-14 hours, an award-winning brisket will melt in your mouth. And therein lies the rub. This grill turns barbecuing from an art to a science.

My completely unscientific ranking of all the food I cooked on this grill:

  • Brisket: 10/10
  • Pork butt: 10/10
  • Pork belly: 10/10
  • Short ribs: 10/10
  • Country style ribs: 10/10
  • Beer can chicken: 8/10
  • Spatchcock chicken: 8/10
  • Chicken wings: 8/10
  • Roasted chicken: 7/10
  • Hamburgers: 7/10
  • Cookies: 7/10
  • Flank steak: 6/10
  • Thick, general cuts of beef: 5/10

Everything from chicken to every cut of pork to every sort of vegetable comes out nearly perfectly. Just follow the instructions, set the temperature and walk away. As long as the pellet hopper has enough fuel, most food will be a blue ribbon contender.

I cooked everything I could on this grill. It excels at long and slow. Items like ribs and pork shoulders and brisket are perfect for this grill. Poultry turns out picture-perfect. The indirect nature of the grill makes a perfect tray of veggies. But the grill isn’t ideal for everything. Items that need high, direct heat aren’t great on this Traeger grill. Steaks and hamburgers aren’t as good as what comes off other grills. That’s to be expected, though.

The grill uses little pellets of compressed wood as fuel. Loaded in a hopper on the side of the grill, they’re gravity-fed into an auger that methodically pulls the pellets to a small firebox at the bottom of the grill, where they’re burned, providing the right temperature and amount of smoke. A control panel on the front of the hopper lets the user select the desired temperature in single-digit increments from 165 to 500 degrees.

Once the appropriate temperature is selected, the grill’s computer makes the necessary adjustments. Want to crank the heat from 220 degrees to 500? It takes about 10 minutes and just a twist of a dial.
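
For the curious, here’s a toy sketch of what a pellet-grill control loop like this might be doing under the hood. To be clear, this is my own illustration, not Traeger’s firmware: the constants, the simple proportional feed logic and the crude heat-loss model are all assumptions.

```python
# Toy simulation of a pellet grill's closed-loop temperature control.
SETPOINT_F = 450.0   # temperature dialed in on the control panel (hypothetical)
AMBIENT_F = 70.0

temp_f = AMBIENT_F

def heat_from_auger(feed_seconds: float) -> float:
    """Crude model: each second of auger run burns pellets worth ~2 degrees F."""
    return feed_seconds * 2.0

for step in range(200):  # each step represents roughly 15 seconds
    error = SETPOINT_F - temp_f
    feed_s = max(0.0, min(error * 0.05, 5.0))  # feed more the colder we are, capped
    temp_f += heat_from_auger(feed_s)          # heat gained from burning pellets
    temp_f -= (temp_f - AMBIENT_F) * 0.02      # heat lost to the outside air
    if step % 25 == 0:
        print(f"step {step:3d}: temp {temp_f:6.1f}F, auger fed {feed_s:.2f}s")
```

The real controller is surely more sophisticated, but the shape is the same: measure, compare to the setpoint, feed pellets accordingly, repeat.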

I found the built-in probe thermometer accurate: it registered within a degree of my Weber meat thermometer. More importantly, during my time with the grill, the meat cooked on the grill was done when the thermometer said it was done.

The Timberline 850 is one of Traeger’s largest grills though it’s not evident from the outside. That’s part of the beauty. It’s compact but can hold a crazy amount of food thanks to three deep trays. For Mother’s Day I cooked six chickens on the bottom level, a pork belly on the middle level and veggies on top, and for a little bit, they all shared the grill. Other times, I cooked four pork butts and two racks of ribs, and there was still plenty of room left.

The grill’s vertical design allows it to hold a lot of food while minimizing hot spots. This design is what sets it apart from similar pellet grills. I didn’t experience a substantial difference in cooking ability on any of the levels.

This grill comes with wifi. Traeger calls it WiFire because that’s fun. It’s handy, and I use it a lot more than I expected. The app lets users see and adjust the temperature of the grill and monitor the temperature of the meat probe. The connection is rock-solid. Past experiences with wifi-enabled appliances set the expectation that I would have to continually re-connect the grill to my network. That’s not the case. The app has never lost connection. I wish there were an Alexa app so I could talk to my grill.

And now the bad.

This grill is expensive. It’s $1700. That’s crazy. I own several Weber grills, and after 20 years of practice, I can cook a chicken better on a Weber than on this Traeger grill. But it took years to get there. The Traeger makes cooking a great chicken possible the very first time. What’s more, a handful of Traeger competitors offer grills with similar features, often for half the price: Green Mountain Grills, Camp Chef, Z Grills, Pit Boss. Google “pellet grills.”

Is this grill worth $1700 when compared to the others? No, I don’t think so, though an argument could be made around its relatively small footprint compared to its capacity. A person can cook a lot on this thing, and it doesn’t take up more room than a standard gas grill. Still, unless you’re grilling for a family of 20 every Sunday, I would look at other models while considering this one.

I had some issues with the Timberline 850.

Grease fires. I had two over the last few months. Both were my fault, but the grill suffered. One time I had a tray overloaded with oiled veggies. Some oil seeped behind the drip plate that guards the firebox and caused a fire out of my reach. The temperature blasted to over 700 degrees, tripping a sensor and shutting off the grill. But the fire raged on for a few minutes in the closed grill. Something similar happened when I had to cook 50 of those horrible frozen hamburger patties. Grease from one dripped down the back of the grill and started a fire. Same thing: the sensor tripped and the grill shut off. But look at the rear of the grill. The paint is peeling, and I fear the steel is damaged though it feels fine.

The grill shut off twice in the middle of an 8-hour pork butt. I caught the first time within a few minutes; the second time it ruined my pork butt. The hopper is to blame.

The hopper in this model is poorly designed. In my mind, it’s reasonable to expect most of the hopper to empty itself without user intervention. That’s not the case. The auger easily grabs the pellets and pulls them in, but the hopper is too wide. This causes pellets to sit on the sides of the container where the auger can’t reach. By my estimate, nearly a quarter of the pellets can sit on the sidelines, useless until the owner pushes them down into the path of the auger. To be clear, if the hopper is more than half full, this is not an issue. It’s when the hopper is half exhausted that the owner needs to watch the levels.

This grill does a lot of things right. It cooks like a pro. The Timberline 850 makes you, the cook, look like a pit boss. I like it a lot. Getting over the initial price is hard. $1700 is a lot for a grill when similar grills can be had for less than half. Without a direct comparison, all I can say is the Traeger Timberline 850 is a rock solid barbecue grill with a few flaws. Its design lets it hold a lot of food without taking up a lot of deck space. It excels at low and slow cooking, and for my money, that’s the best way to cook.

25 Jul 2018

Virtu teams up with Google to bring its end-to-end encryption service to Google Drive

Virtu, which is best known for its email encryption service for both enterprises and consumers, is announcing a partnership with Google today that will bring the company’s encryption technology to Google Drive.

Only a few years ago, the company was still bolting its solution on top of Gmail without Google’s blessing, but these days, Google is fully on board with Virtu’s plans.

Its new Data Protection for Google Drive extends the company’s Gmail service to Google’s online file storage service. Files are encrypted before upload, so they remain protected even when they are shared outside of an organization. The customer remains in full control of the encryption keys, meaning Google, too, has no access to these files, and admins can set and manage access policies by document, folder and team drive.

Virtu’s service uses the Trusted Data Format, an open standard the company’s CTO Will Ackerly developed at the NSA.
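
To illustrate the model (this is not Virtu’s actual TDF implementation or API, just the general client-side encryption pattern, sketched with the open-source `cryptography` package and names of my choosing):

```python
from cryptography.fernet import Fernet

# The key is generated and held by the customer; Google never sees it.
customer_key = Fernet.generate_key()
cipher = Fernet(customer_key)

plaintext = b"contents of quarterly-financials.xlsx"
ciphertext = cipher.encrypt(plaintext)

# Only `ciphertext` would be uploaded to Drive, so even a file shared outside
# the organization stays unreadable without the customer-held key.
assert cipher.decrypt(ciphertext) == plaintext
```

Because only ciphertext ever reaches Drive, access control reduces to key management, which is exactly the layer where admin policies like Virtu’s can operate.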

While it started as a hack, Virtu is Google’s only data protection partner for G Suite today, and its CEO John Ackerly tells me the company now gets what he and his team are trying to achieve. Indeed, Virtu now has a team of engineers that works with Google. As John Ackerly also noted, GDPR and the renewed discussion around data privacy are helping it gain traction with many businesses, especially in Europe, where the company is opening new offices to support its customers there. In total, about 8,000 organizations now use its services.

It’s worth noting that while Virtu is announcing this new Google partnership today, the company also supports email encryption in Microsoft’s Office 365 suite.

25 Jul 2018

Coinbase lets you convert your tokens into gift cards

It’s still quite hard to buy physical goods using bitcoins or ethers. Coinbase plans to (partially) solve that issue with a new partnership with WeGift. Coinbase customers in Europe and Australia can now convert their tokens on their Coinbase account into digital gift cards for popular stores.

For instance, you’ll be able to buy gift cards for Uber, Tesco, Google Play, Marks and Spencer and more. The feature is now live in the U.K., Spain, France, Italy, Netherlands and Australia.

While WeGift promises gift cards for dozens of merchants, most of them are restricted to customers based in the U.K. If you live in another country, you’ll only get a handful of options. For instance, in France you can only buy gift cards for Décathlon, Bloom & Wild, Global Hotel Card and Ticketmaster. Coinbase says that it will be adding more retailers in the coming months.

In some cases, WeGift offers you the fiat equivalent of your cryptocurrencies as well as a tiny bonus. For instance, you get £102 in Uber gift cards if you spend the equivalent of £100 in bitcoins.

In the U.S., Coinbase also partnered with Shift for a traditional Visa card. But many European cryptocurrency companies that provided Visa cards had to go back to the drawing board because Visa stopped working with Wave Crest Holding, the card issuer for all European cryptocurrency cards.

Gift cards aren’t as convenient as receiving money in your bank account or on a debit card. But they’re a great way to avoid telling your bank that you made money by speculating on cryptocurrencies. Many banks directly report data on their users to local tax authorities. But don’t forget that Coinbase can track all your withdrawal events and notify tax authorities too.

Disclosure: I own small amounts of various cryptocurrencies.

25 Jul 2018

ColdQuanta raises $6.75M to make it easier to spin up a limited use-case quantum computer

Quantum computing may be a long way off, but early applications of it aren’t as far off as you might think, according to longtime researcher and ColdQuanta founder Dana Anderson.

The startup creates a device designed to make it easier to run quantum computing-like operations on near-term problems like signal processing or time measurement, the kind of low-hanging fruit that current technology might enable. Researchers using that approach — a set of atoms with practically no motion — need some way of keeping those atoms still, which in some cases involves refrigeration. ColdQuanta’s main product is a set of lasers that’s able to stabilize a set of atoms and allow them to operate with those properties. It’s certainly nowhere close to a server — or even a standard computer — but using this kind of tool, it might be easier to handle tasks like real-time signal processing. ColdQuanta said today that it has raised $6.75 million in a round led by Maverick Ventures and including Global Frontier Investments.

“If you weren’t looking out the window, and you turned off GPS because it’s a conflict or sunspots, you can ask, ‘can I fly to New York from San Francisco with my eyes closed,’” Anderson said. “The answer is no. These types of applications — real-world applications based on fundamental advances of physics — keep me thinking, and up at night. Clocks sound pretty boring, and you might ask why do I need something like that. But there’s enormous demand for improvements in time-keeping, whether for high-frequency trading, navigation, guidance or autonomous vehicles. We see those as early applications.”

The primary aim of ColdQuanta’s hardware is, Anderson says, to create a “neutral” set of atoms that all have properties identical to the ones next to them. It does that by using a set of lasers to bring them to a near standstill — within a millionth of a degree of absolute zero — and then controlling their properties with those lasers. That way, a researcher or team could scale that up to a larger system where they can start finding applications right away. That includes time-keeping, secure communications and others, now that a lot of the technology’s primary limitations have relaxed over time. ColdQuanta’s aim is to be able to do this in a normal, room-temperature environment. The lasers are tuned in such a way that a stream of photons hitting each atom slows it down until it’s largely stable (with another set of lasers holding it up to account for gravity).

“Laser technology was unreliable in the early days, that was mostly a time when things weren’t working, and most often it was the laser,” Anderson said. “What ColdQuanta is focused on, now for 11 years, is technology that could be manufactured in large quantities, making reliable, small, and robust equipment. If you looked at the initial quantum gas machine it took a couple of square meters of area on a table plus tons of electrics. Now we’ve made it small enough that there’s one sitting on the ISS. It’s a fairly small package, mostly because integration techniques, improvements in lasers, and developing key electronics components have helped us achieve this task.”

There may be an analogy between the emergence of widespread deep learning for a variety of tasks and the early stages of products like ColdQuanta’s. Deep learning, Anderson said, was the key innovation behind the shift in a lot of machine learning models, but there were plenty of smaller use cases where it was interesting and useful — even back in the 1990s. Anderson said there will probably be a similar situation going forward, as limited quantum computing finds some near-term applications and then exists on a similar timetable to other technological shifts while it waits for the biggest, cheapest and most powerful use case that demands widespread adoption.

“I see the path we’re going on is very familiar,” Anderson said. “I don’t think the technological challenges we face are improbable. We’ve been through other difficult technology roadmaps before and overcome them. The landscape is very familiar. The timescale of inserting them into real-world problems gets kind of fuzzy when you have to predict so far off, but I think quantum computers will get there. I’m quite convinced there will be modest applications of quantum computers that will show up very soon. Quantum simulation, I have almost no doubt, will find pure science uses and begin to apply at least in restricted spaces relative to national security and defense.”

25 Jul 2018

Google Drive will hit a billion users this week

Google loves to talk about how it has seven products with more than a billion users. Those are its flagship search service, Gmail, Chrome, Google Maps, YouTube, Android and the Google Play Store. Indeed, Android actually has more than 2 billion users now. Later this week, we will be able to add an eighth service to this list: Google Drive, the company’s online file storage service that launched back in 2012.

The company made the announcement at its Google Cloud Next conference in San Francisco — though somehow it doesn’t want to commit to saying that Drive has already hit that billion user number, or when exactly it’ll do so. “Later this week,” is about as good as it gets right now, but if you want to buy some fireworks to celebrate, you probably still have a day or two to prepare.

It’s actually been a while since we last got any updated stats about Google Drive. At last year’s Google I/O conference in May, the company said that Drive now stored 2 trillion files and that it had over 800 million daily active users. At this year’s Google I/O, the company didn’t offer any updated numbers for Drive, likely because it was still waiting to cross the billion users number.

Over the course of the last year, Google launched a number of business-focused features for Drive, including Team Drives and Drive File Stream, as well as new machine learning-powered features for all users. The company also launched its new Drive-centric backup and sync tool for Mac and PC last summer.

25 Jul 2018

Mayfield Robotics ceases production of Kuri robot amid a questionable future

In a letter to backers today, Bay Area-based Mayfield Robotics said it was “crushed” to announce that it has ceased manufacturing its home robot, Kuri. The note finds the Bosch-backed business grappling with an uncertain future as it pauses all operations and re-evaluates its options.

Launched in 2015 as part of Bosch’s Startup Platform, the company debuted its home robot at CES the following year. It took close to two years, but the company finally began shipping the adorable little robot to backers in late 2017. Kuri also appeared on stage at our robotics event, back in May.

According to the letter, however, Bosch struggled to find a good fit for the company in its broader portfolio.

“From the beginning, we have been constantly looking for the best paths to achieve scale and continue to advance our innovative technology,” the company writes. “Typically, startups in the Bosch Startup Platform are integrated into existing Bosch business units, but after extensive review, there was not a business fit within Bosch to support and scale our business.”

Home robots have, of course, had a famously difficult time finding mainstream success, through a combination of prohibitive pricing (Kuri carried a $700 price tag) and limited functionality. Only the hyper-focused Roomba has managed to effectively buck that trend.

Existing within the larger confines of Bosch likely sheltered the company from some of those harsher realities, but ultimately, corporations have little time for products that don’t play into their larger strategies. Without a support structure, the future remains one giant question mark for the company.

“Creating a robot like Kuri is a massive undertaking,” Mayfield writes. “We don’t know what the coming months will bring. Regardless, we stand firm in our belief that the home robot Renaissance is just beginning, and it’s going to be amazing.”

25 Jul 2018

Google is baking machine learning into its BigQuery data warehouse

There are still a lot of obstacles to building machine learning models, and one of them is that, in order to build those models, developers often have to move a lot of data back and forth between their data warehouses and wherever they are building their models. Google is now making this part of the process a bit easier for the developers and data scientists in its ecosystem with BigQuery ML, a new feature that builds some machine learning functionality right into its BigQuery data warehouse.

Using BigQuery ML, developers can build models using linear and logistic regression right inside their data warehouse, without having to transfer data back and forth as they build and fine-tune their models. And all they have to do to build these models and get predictions is write a bit of SQL.

Moving data doesn’t sound like it should be a big issue, but developers often spend a lot of their time on this kind of grunt work — time that would be better spent on actually working on their models.

BigQuery ML also promises to make it easier to build these models, even for developers who don’t have a lot of experience with machine learning. To get started, developers can use what’s basically a variant of standard SQL to say what kind of model they are trying to build and what the input data is supposed to be. From there, BigQuery ML then builds the model and allows developers to almost immediately generate predictions based on it. And they won’t even have to write any code in R or Python.
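
Here’s roughly what that flow looks like when submitted through the BigQuery Python client. The dataset, table and column names below are invented for illustration; the `CREATE MODEL` and `ML.PREDICT` constructs are BigQuery ML’s own.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Train a linear regression model entirely inside the warehouse: no data export.
client.query("""
    CREATE OR REPLACE MODEL `mydataset.trip_duration_model`
    OPTIONS (model_type = 'linear_reg', input_label_cols = ['duration_minutes']) AS
    SELECT distance_km, hour_of_day, duration_minutes
    FROM `mydataset.taxi_trips`
""").result()

# Predictions are just another SQL query; BigQuery ML prefixes the label
# column with `predicted_`.
rows = client.query("""
    SELECT predicted_duration_minutes
    FROM ML.PREDICT(MODEL `mydataset.trip_duration_model`,
                    (SELECT 4.2 AS distance_km, 17 AS hour_of_day))
""").result()

for row in rows:
    print(row.predicted_duration_minutes)
```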

These new features are now available in beta.

25 Jul 2018

Google launches a stand-alone version of Drive for businesses that don’t want the full G Suite

If you are a business and want to use Google Drive, until now your only option was to buy a full G Suite subscription, even if you didn’t want or need access to the rest of the company’s productivity tools. Starting today, though, businesses will be able to buy a subscription to a stand-alone version of Google Drive, too.

Google says that a stand-alone version of Drive has been at the top of the list of requests from prospective customers, so it’s now giving them this option in the form of this new service (though, to be honest, I’m not sure how much demand there really is for this product). Stand-alone Google Drive will come with the same online storage and sharing features as the G Suite version.

Pricing will be based on usage. Google will charge $8 per month per active user and $0.04 per GB stored in a company’s Drive.
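
To make that concrete: a hypothetical company with 100 active users storing a combined 5 TB would pay 100 × $8 + 5,120 GB × $0.04, or roughly $1,005 per month.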

Google’s idea here is surely to convert those stand-alone Drive users into full G Suite users over time, but it’s also an acknowledgement on Google’s part that not every business is ready to move away from legacy email tools and desktop-based productivity applications like Word and Excel just yet (and that its online productivity suite may not be right for all of those businesses, either).

Drive, by the way, is going to hit a billion users this week, Google keeps saying. I guess I appreciate that they don’t want to jump the gun and are actually waiting for that to happen instead of just announcing it now when it’s convenient. Once it does, though, it’ll become the company’s eighth product with more than a billion users.

25 Jul 2018

Google is making a fast specialized TPU chip for edge devices and a suite of services to support it

In a pretty substantial move into trying to own the entire AI stack, Google today announced that it will be rolling out a version of its Tensor Processing Unit — a custom chip optimized for its machine learning framework TensorFlow — optimized for inference in edge devices.

That’s a bit of a word salad to unpack, but here’s the end result: Google is looking to have a complete suite of customized hardware for developers looking to build products around machine learning, such as image or speech recognition, that it owns from the device all the way through to the server. Google will have the cloud TPU (the third version of which will soon roll out) to handle training models for various machine learning-driven tasks, and will then run the inference from that model on a specialized chip that runs a lighter version of TensorFlow and doesn’t consume as much power. Google is exploiting an opportunity to split training and inference onto two different sets of hardware and dramatically reduce the footprint required in a device that’s actually capturing the data. That would result in faster processing, less power consumption and, potentially more importantly, a dramatically smaller surface area for the actual chip.

Google is also rolling out a new set of services to compile TensorFlow models into a lighter-weight version that can run on edge devices without having to call back to the server for those operations. That, again, reduces latency and could have any number of benefits, from safety (in autonomous vehicles) to just a better user experience (voice recognition). As competition heats up in the chip space, both from the larger companies and from an emerging class of startups, nailing these use cases is going to be really important for larger companies. That’s especially true for Google, which also wants to own the actual development framework in a world where there are multiple options like Caffe2 and PyTorch.
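
Google didn’t detail the exact tooling on stage, but the general shape of “compile TensorFlow into a lighter-weight version” matches what the TensorFlow Lite converter does today. A minimal sketch, assuming a trained SavedModel on disk (the paths are placeholders):

```python
import tensorflow as tf

# Convert a trained model (e.g. one trained on cloud TPUs) into a compact
# flatbuffer an edge device can execute without calling back to a server.
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables e.g. weight quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

Running that flatbuffer on the edge TPU itself would involve additional device-specific steps, but the split is the point: heavy training in the cloud, lightweight inference at the edge.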

Google will be releasing the chip on a kind of modular board not so dissimilar to the Raspberry Pi, which will get it into the hands of developers that can tinker and build unique use cases. But more importantly, it’ll help entice developers who are already working with TensorFlow as their primary machine learning framework with the idea of a chip that’ll run those models even faster and more efficiently. That could open the door to new use cases and ideas, and should it be successful, will lock those developers further into Google’s cloud ecosystem on both the hardware (the TPU) and framework (TensorFlow) level. While Amazon owns most of the stack for cloud computing (with Azure being the other largest player), it looks like Google is looking to own the whole AI stack – and not just offer on-demand GPUs as a stopgap to keep developers operating within that ecosystem.

Thanks to the proliferation of GPUs, machine learning has become increasingly common across a variety of use cases, and it doesn’t just require the horsepower to train a model to identify what a cat looks like. It also needs the ability to take in an image and quickly determine that said four-legged animal is a cat, based on a model trained with tens of thousands (or more) of images of cats. GPUs were great for both use cases, but it’s clear that better hardware is necessary with the emergence of use cases like autonomous driving, photo recognition on cameras and a variety of others, for which even millisecond-level lag is too much and power consumption, or surface area, is a dramatic limiting factor.

The edge-specialized TPU is an ASIC, a breed of chip that’s increasingly popular for specific use cases like cryptocurrency mining (larger companies like Bitmain, for example, build such chips at scale). ASICs excel at doing one specific thing really well, which has opened up an opportunity to tap various niches with chips optimized for those calculations. These kinds of edge-focused chips tend to do a lot of low-precision calculations very fast, making the whole process of juggling runs between memory and the actual core significantly less complicated and less power-hungry as a result.

While Google’s entry into this arena has long been a whisper in the Valley, this is a stake in the ground for the company that it wants to own everything from the hardware all the way up to the end user experience, passing through the development layer and others on the way there. It might not necessarily alter the calculus of the ecosystem, as even though it’s on a development board to create a playground for developers, Google still has to make an effort to get the hardware designed into other pieces of hardware and not just its own if it wants to rule the ecosystem. That’s easier said than done, even for a juggernaut like Google, but it is a big salvo from the company that could have rather significant ramifications down the line as every big company races to create its own custom hardware stack that’s specialized for its own needs.