29 May 2019

iRobot’s newest mop and vacuum talk to each other to better clean up

Some spring cleaning news from iRobot. The Bedford, Mass.-based company just unveiled a pair of new robots designed to tag-team dirty floors. The Roomba s9+ and Braava Jet m6 both sport iRobot’s mapping technology, coupled with Imprint Link, which lets the two devices communicate in order to take turns on the floor.

The s9+ marks a new premium standard for the Roomba. That starts with arguably the most radical redesign in the robotic vacuum line’s 17-year history. The company’s moved away from the iconic fully round puck design that’s defined the product since its inception — and on that front, at least, it borrows from the Braava.

The front of the vacuum is flat, part of the new “PerfectEdge” tech that lets it get closer to walls — apparently one of the most requested features in recent Roomba generations. The corner brush has five 30mm arms that reach dirt earlier models couldn’t. The downside of the flat front, however, is that the system has to do much more maneuvering, which in turn requires more battery.

No word on specifics there, but iRobot says it’s amped up the mAh accordingly. The top, meanwhile, features a large brushed metal circle that opens up to grab and replace the filter. Like last year’s i7+, the system is available with the optional Clean Base (notably, however, they use different Clean Bases with different connectors, so they’re not cross-compatible), which empties out the dirt while docked.

The new model features an upgraded 3D sensor that helps the system map and navigate, scanning for obstacles at a rate of 25 times a second. Also new is the Imprint Link Technology, which is the next step in the company’s floor-cleaning domination plan. The tech allows the Roomba to communicate with the new Braava, so they can take turns cleaning up the floor.

As always, cleans are initiated via the Home app. That sends the s9+ out to clean an area, followed by the m6. iRobot CEO Colin Angle tells TechCrunch that the company’s positioned the new robots (and its lawn mowing robot, Terra) as “iRobot 2.0.”

“It’s a coherent — from a design and communication perspective — set of top-end robots that are designed to raise the bar of functionality,” he explains.

Certainly it’s a step forward in the company’s long-promised vision of home robots becoming an integral part of the smart home, particularly when coupled with mapping and Alexa and Google Assistant functionality.

The new Braava, meanwhile, operates as before, using a sprayed solution and dry pads, rather than the water tank used by the Scooba. It uses similar mapping technology to get a floor plan and avoid obstacles. The cleaning system has been updated throughout, as well, with improved spraying and larger pads in a range of different materials.

Naturally, none of this comes cheap — and let’s be honest, affordability has never really been iRobot’s strong suit. The s9+ runs $1,299 with the Clean Base and $999 without. The Braava m6 is $499, with a box of seven cleaning pads going for $8. They’ll both be available June 9.

29 May 2019

Flipboard hacks prompt password resets for millions of users

Social sharing site and news aggregator Flipboard has reset millions of user passwords after hackers gained access to its systems several times over a nine-month period.

The company confirmed in a notice Tuesday that the hacks took place between June 2, 2018 and March 23, 2019, and a second time on April 21-22, 2019, but the intrusions were only detected a day later, on April 23.

Hackers stole usernames, email addresses, passwords and account tokens for third-party services. According to the notice, “not all” Flipboard users’ account data were involved in the breaches but the company declined to say how many users were affected.

Flipboard has more than 150 million monthly users.

“We’re still identifying the accounts involved and as a precaution, we reset all users’ passwords and replaced or deleted all digital tokens,” the notice read.

Although the passwords were unreadable, Flipboard said passwords set prior to March 14, 2012 were scrambled using the older, weaker SHA-1 hashing algorithm. Any passwords changed after that date are scrambled using a much stronger algorithm that makes them far more difficult to recover in a usable format.
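The distinction matters because a fast, unsalted hash like SHA-1 can be brute-forced cheaply offline, while a salted, deliberately slow key-derivation function cannot. As a rough illustration of the difference (not Flipboard's actual scheme; the password and cost parameters below are illustrative assumptions), Python's standard library supports both:

```python
import hashlib
import os

password = b"correct horse battery staple"

# Legacy approach (pre-2012 accounts): a single fast SHA-1 digest.
# Fast hashes let attackers test billions of guesses per second offline,
# and without a salt, two users with the same password share a digest.
legacy_digest = hashlib.sha1(password).hexdigest()

# Modern approach: a salted, memory-hard KDF such as scrypt (in the
# stdlib since Python 3.6). The random salt defeats precomputed tables;
# the cost parameters (n, r, p) make each guess deliberately expensive.
salt = os.urandom(16)
strong_digest = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)

print(len(legacy_digest))   # 40 hex characters, identical for every user with this password
print(len(strong_digest))   # 64-byte derived key, unique per salt
```

In practice the cost parameters would be tuned to the server's hardware, and a dedicated library such as bcrypt or Argon2 is the usual production choice.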

The hacks also exposed account tokens, which give Flipboard access to data from accounts on other services, like Facebook, Google, and Samsung.

“We have not found any evidence the unauthorized person accessed third-party account(s) connected to users’ Flipboard accounts,” said the statement. “As a precaution, we have replaced or deleted all digital tokens.”

Flipboard becomes the latest tech company to be hit by hackers in recent months. Developer platform Stack Overflow earlier this month confirmed a breach involving some user data. Canva, one of the biggest sites on the internet, was also hacked. Last week, the Australia-based company admitted close to 140 million users had data stolen in the breach.

29 May 2019

Go chat yourself with Facebook’s new Portal companion app

Ignoring calls that it’s creepy, Facebook is forging onward with its Portal smart display. Today Facebook quietly launched iOS and Android Portal apps that let owners show off photos on the screen without sharing them to the social network, and video call their home while they’re out.

The app isn’t likely to move the needle for Portal, whose potential users fall into two camps: those so alarmed by Facebook’s privacy practices that they couldn’t imagine putting its camera and microphone in their home, and those ambivalent about or ignorant of the privacy backlash who see it as an Amazon Echo with a nice screen and an easy way to video call family. Critics were mostly surprised by the device’s quality but too freaked out to recommend it. Those willing to buy it have given it a 4- to 4.4-star average rating on Amazon, praising its AI camera that keeps people in frame of a video chat while they move, though jeering some setup difficulties.

Facebook announced at F8 a month ago that the Portal app was coming, and eventually so would encrypted WhatsApp video calls. It also extended sales to Europe and Canada, though the new app is currently only available in the US, according to Sensor Tower, which tipped us off to the launch. The $199 10-inch Portal and $349 15.6-inch Portal+ launched in October, soured by a swirl of Facebook privacy scandals. Last week, the company tried to score some points with the public by funding an art project displayed at the SF Museum of Modern Art. But the “immersive” exhibit was just some Portals stuck to some funky painted wooden backdrops, and it all felt smarmy and forced.

Portal’s app lets you video call your Portal so you can say hi to family while you’re out. That’s great for traveling parents or seeing who is around the house in the post-landline age. The app also allows you to add and remove accounts on Portal and manage who’s in your speed dial Favorites, which you could already do from the device. There’s still no Amazon Prime Video or smart home controls, as were promised at F8.

The option to send photos directly from your camera roll to Portal’s Superframe fixes the worst feature of the digital photo frame. Previously you’d have to select just from photo/video albums you’d shared to Facebook. That meant you were only showing off saccharine photos you were willing to post online, and if you selected Your Photos or Photos Of You, you might end up displaying shots that were embarrassing or that don’t make sense outside of the News Feed.

My workaround was to create a Facebook album of photos for Portal set to be visible only to me, but that was a hassle. Now you can manually grab pics and videos from your phone and send them to Portal without the worry they’ll show up on your profile. Portal also now can show off your Instagram photos, as was announced at F8. Still missing is Google Assistant support, which Facebook told me it was working to integrate last year.

Facebook’s steady improvements to Portal might not have shaken its paranoia-inducing reputation amongst tech news readers and privacy enthusiasts, but they’ve kept it perhaps the best big-screen, camera-equipped smart speaker. But in the seven months since launch, Google has copied Portal’s auto-framing camera for video chat in its new Nest Hub Max, while Amazon is making a slew of home appliances smart. Portal will need more marquee innovations and some brand rehabilitation if it’s going to stay competitive.

29 May 2019

Pokémon GO will soon use sleep data to “reward good sleep habits”

Well, here’s a bit of surprise news this evening: at some point in the future, Pokémon GO is going to wrap the player’s sleep habits into the gameplay.

It’ll come as part of a wider initiative by The Pokémon Company to — as CEO Tsunekazu Ishihara put it in a press conference this evening — “turn sleep into entertainment”. Which… well, we’ll see how that goes.

Niantic CEO John Hanke took the stage at the press conference for a moment, but didn’t really offer much in the way of details. Said Hanke:

Niantic pioneered a new kind of gaming by turning the whole world into a gameboard, where we can all play and explore. By creating a new way to see the world and an incentive to go outside and exercise, we hoped to encourage a healthy lifestyle and to make a positive impact on our players and on the world. We’re delighted to be working with The Pokémon Company on their efforts to encourage another part of a healthy lifestyle: getting a good night’s rest.

At Niantic, we love exploring the world on foot. And that can’t happen unless we have the energy to embark on these adventures. We’re excited to find ways to reward good sleep habits in Pokémon GO as part of a healthy lifestyle. You’ll be hearing more from us on this in the future.

Ishihara also announced that The Pokémon Company is working with SELECT BUTTON (the company behind the 2017 mobile title Magikarp Jump) to make a separate game called Pokémon SLEEP. Next to no details on that one yet, though, besides the logo that was shown.

All of it will tap a just-announced device called Pokémon GO+ Plus (Yeah. Two plusses, one written. Go Plus Plus). It’s a follow-up to the original GO+, which was built primarily to let you play Pokémon GO without actually looking at the screen. The GO Plus Plus will do everything the GO+ did — letting you tap a button to spin Pokéstops or catch nearby Pokémon — but also has a built-in accelerometer allowing it to be laid on your bed to track sleep habits and send ’em back to your phone via Bluetooth.
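Neither company has said how the GO Plus Plus will actually infer sleep, but accelerometer-based sleep tracking (actigraphy) generally works by splitting the motion signal into fixed epochs and flagging the low-movement ones. A minimal sketch of that general idea, with invented sample data and an invented threshold (this is generic actigraphy logic, not Niantic's or The Pokémon Company's algorithm):

```python
import statistics

def sleep_epochs(accel_magnitudes, epoch_size=60, threshold=0.02):
    """Classify each epoch (e.g. one minute of samples) as asleep (True)
    or awake (False) based on how much the accelerometer's acceleration
    magnitude varies within the window."""
    epochs = []
    for start in range(0, len(accel_magnitudes), epoch_size):
        window = accel_magnitudes[start:start + epoch_size]
        # Low variance in acceleration magnitude means the sleeper is still.
        epochs.append(statistics.pstdev(window) < threshold)
    return epochs

# Still readings cluster around 1 g (gravity); restless readings jitter.
still = [1.0, 1.001, 0.999, 1.0] * 15      # 60 quiet samples = one epoch
restless = [1.0, 1.3, 0.7, 1.1] * 15       # 60 noisy samples = one epoch
print(sleep_epochs(still + restless))       # [True, False]
```

A real device would smooth the signal, require sustained stillness before declaring sleep onset, and calibrate the threshold per user, but the epoch-and-variance structure is the common core.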

A video played alongside the announcement, showing the device as it’s meant to be used.

28 May 2019

FireEye snags security effectiveness testing startup Verodin for $250M

When FireEye reported its earnings last month, the outlook was a little light, so the security vendor decided to be proactive and make a big purchase. Today, the company announced it has acquired Verodin for $250 million. The deal closed today.

The startup had raised over $33 million since it opened its doors 5 years ago, according to Crunchbase data, and would appear to have given investors a decent return. With Verodin, FireEye gets a security validation vendor, that is, a company that can run a review against the existing security setup and find gaps in coverage.

That would seem to be a handy kind of tool to have in your security arsenal, and could possibly explain the price tag. Perhaps it could also help set FireEye apart from the broader market, or fill a gap in its own platform.

FireEye CEO Kevin Mandia certainly sees the potential of his latest purchase. “Verodin gives us the ability to automate security effectiveness testing using the sophisticated attacks we spend hundreds of thousands of hours responding to, and provides a systematic, quantifiable, and continuous approach to security program validation,” he said in a statement.

Chris Key, Verodin co-founder and chief executive officer, sees the purchase through the standard acquisition lens. “By joining FireEye, Verodin extends its ability to help customers take a proactive approach to understanding and mitigating the unique risks, inefficiencies and vulnerabilities in their environments,” he said in a statement. In other words, as part of a bigger company, we’ll do more faster.

While FireEye plans to incorporate Verodin into its on-prem and managed services, it will continue to sell the solution as a stand-alone product, as well.

28 May 2019

NIO shifts electric vehicle plans as losses pile up

Chinese automotive startup Nio is tweaking its plans for future electric vehicles, a move prompted by sluggish sales of its ES8 model.

Nio, which reported in its unaudited financial results Tuesday a loss of $390.9 million in the first quarter, is taking a number of measures in response to a slowdown in sales. Those changes include a shift in its vehicle production plans, a reduction in R&D spending and a 4.5 percent cut to its workforce, the company’s founder and CEO William Li said during an earnings call.

The drop in sales is primarily driven by the EV subsidy reduction in China and macroeconomic trends in the country that have been exacerbated by the U.S.-China trade war, Li said.

Nio shares still closed 3.6 percent higher Tuesday (and had popped as high as 7 percent) because the results still beat Wall Street’s expectations.

Nio began deliveries of the ES8, a seven-seater high-performance electric SUV, in China in June 2018. And while deliveries initially surpassed expectations, they have since slowed in 2019. Nio reported it delivered 3,989 ES8 electric SUVs in the first quarter, a 50 percent drop from the previous period.

Li suggested that the upcoming ES6, a cheaper SUV that will come to market next month, could be cannibalizing ES8 sales.

Meanwhile, Nio has changed its strategy for future electric vehicles. Instead of bringing its next-generation ET7 to production, the automaker will instead focus on a third vehicle under its ES series lineup.

“Looking ahead to the second quarter, we expect an even more challenging sales environment and anticipate overall sequential demand and deliveries to decrease, as competition continues to accelerate and the general automobile market in China remains muted. Against this backdrop, Nio is focusing on rolling out our ES6 nationwide, and at the same time, improving overall network utilization and operating efficiencies,” Nio CFO Louis T. Hsieh said in a statement.

Nio showcased a preview version of the ET7 during last month’s Shanghai Auto Show. The automaker has said it will design and develop the ET series with a new next-generation platform 2.0 (NP2) that will feature Level 4 autonomous driving capabilities, a designation by SAE that means all of the driving is handled by the vehicle under certain conditions.

The launch timeline of the ET series will largely hinge on how Nio’s joint venture with Beijing E-Town International Investment and Development unfolds. Under the joint venture, Beijing E-Town International Investment and Development will invest 10 billion yuan ($1.45 billion) and will support a new factory for its NP2 platform vehicles, Reuters reported. The company said the parties are still working towards a final binding definitive agreement on the investment.

As those discussions shake out,  Nio plans to “leverage the platform technologies from the ES8 and ES6 to create a new model design.” This third vehicle model tied to the ES8 and ES6 platform is expected to launch in 2020.

Meanwhile, the company has high hopes for its nearer-term ES6. The company has 12,000 pre-orders for the ES6; some 5,000 of those were added in the five weeks since the Shanghai Auto Show. The first deliveries of the smaller SUV will begin in June.

28 May 2019

SF will allow additional dockless bike-share operators

Uber’s JUMP will soon no longer be the sole operator of stationless bike-share programs in San Francisco. Today, the San Francisco Municipal Transportation Agency announced the expansion of its stationless bike-share program to allow additional companies to operate in the city.

JUMP received an exclusive, 18-month permit to operate a stationless electric bike-share service in the city last January. That meant companies like Lime and Spin, which has since gone all-in on scooters, were unable to deploy their bikes in the city. Now, following an application process, other companies may be able to operate stationless bike-share programs in San Francisco.

Interested companies can apply from now through June 24, 2019. The application requires companies to outline things like their pricing structures, proposed service area and fleet size, how they plan to ensure proper parking, and how they will ensure equal opportunity and fair wages for their workers.

While the city looks to expand its dockless bike-share offerings, the SFMTA says it has no plans to get rid of station-based bike-share systems like the one from Lyft’s Ford GoBike.

“A stationless system can help fill in the gaps where stations are yet to arrive or where stations are not required,” SFMTA’s Benjamin Barnett wrote in a blog post today. “We see the combination of both systems as the best fit for our unique City and an important part of our transportation network.”

28 May 2019

Is your autonomous vehicle Sally the sports car or blood-thirsty Christine?

When automobiles first started to appear alongside horse-drawn buggies, horses were the initial victims of the technology. They would not be struck by the slow-moving vehicles so much as be frightened into runaways. Sometimes the horses themselves suffered injury; other times it was property damage and pedestrian injury as the terrified steeds trampled everything in their paths.

As cars got faster and more numerous, pedestrians began to fall direct victim to moving vehicles, and it wasn’t long before rules of the road, and product and tort liability laws, imposed order to avoid carnage. Still, even today we have an ever-growing number of distracted and inept drivers turning our crowded highways into a dystopic version of real-life Frogger.  

Enter the autonomous vehicle. All the benefits of driving, without having to drive! Proponents of driverless cars believe that autonomous technology will make cars safer and lead to a 90% reduction in accident frequency by 2050, as more than 90% of car crashes are caused by driver error.

There is certainly no shortage of news stories about injuries and fatalities resulting from drunk or distracted driving, and other accidents caused by drivers’ behavior. Text your friends or binge watch Black Mirror; with an autonomous vehicle, it’s OK. Or is it? All goes well until your AV decides that the pedestrian in front of you isn’t really there, or mistakes the debris trailing a garbage truck for lane guidance, and steers you into a concrete barrier. 

Some companies are closer than others to completely driverless vehicles, but edge-case driving situations are still a challenge, as we sadly found not long ago when an AV hit a pedestrian walking her bicycle across a dark highway in Arizona. Although there was a driver present who could have taken the controls, she didn’t. One can hardly blame her for her inattention, for the whole point of autonomous driving technology is to allow for, if not encourage, drivers to disengage from the task.

The “autonomous vehicle paradox” of inducing drivers to disconnect because they are not needed most of the time is confounding. At least in the interim, until autonomous systems can reliably achieve better than a 98% safety rate (roughly the rate of human drivers), autonomous systems will need to be supplemented by a human driver for emergencies and other unexpected situations.

What happens and who is at fault when an accident occurs during or even after this transition period? Before the advent of autonomous vehicle technology, car accidents would typically invoke one of two legal theories:  driver negligence and manufacturers’ products liability. The legal theory of negligence seeks to hold people accountable for their actions and leads to financial compensation from drivers, or more commonly their insurance companies, for the drivers’ conduct behind the wheel. Products liability legal theories, on the other hand, are directed at companies that make and sell the injury-causing products, such as defective air bags, ignition switches, tires, or the cars themselves. Applying current legal theories to autonomous vehicle accident situations presents many challenges.   

Suppose artificial intelligence (AI), or whatever makes a car autonomous, fails to detect or correct for a slippery curve. Perhaps a coolant leak from some car ahead covers the road with antifreeze, which can be seen by the human behind the wheel, yet is all but invisible to the AI system. If the AV has manual override controls and an accident occurs, is the driver at fault for not taking the controls to avoid the crash? Is the car manufacturer at fault for not sensing the road condition or correcting for it? If both, how should fault be apportioned?

If a conventional vehicle was involved, the case against the driver may depend on proof that their behavior fell below an applicable standard of care. Not having one’s hands on the steering wheel would most likely be considered negligent behavior with such a car, and likely, so would being distracted by texting on a smartphone. But the self-driving feature of an autonomous vehicle by its very nature encourages driver inattention and lack of engagement with the controls. So would we be willing to find the driver at fault in the above instance for not taking over?

As to the manufacturer of a conventional vehicle, liability might depend on whether a system or part was defective. A conventional vehicle in good condition, with no suspension, brake or steering defects, would likely allow the manufacturer to escape the brunt of liability in the above scenario. The manufacturer of an autonomous vehicle with human override controls, however, might try to shift at least some portion of fault to the driver, but would or should society allow that? The driver might argue he or she reasonably relied upon the AV, but should the manufacturer instead be held responsible where the hazard was visible and driver intervention could have avoided the accident?

The outcome might differ if the vehicle was completely autonomous and no human possibly could have intervened, but that vehicle may be years away.

When such an AV comes to market, would, or should it be considered “defective” if it fails to detect or correct for the unexpectedly slippery surface? And if so, would it be considered defective merely because the failures occurred, or would proof also require some showing of errors in the AI software? Given that AI algorithms can evolve on their own and be dependent on millions of miles or hours of training data, how would one prove a “defect” in the software? Would it be fair to hold the programmer or software supplier accountable if the algorithm at the time of the accident differed substantially from the original, and the changes were effected by the AI algorithm having “taught” itself?

Another issue is the “hive mind.” One way AI could learn is by processing the collective experiences of other connected AVs, a process at one time used by Tesla. But if a significant proportion of other AVs upload erroneous data that is acted upon, what then?

In light of these issues, and as technology moves toward complete control of the vehicle with increasingly less human intervention, we may see the law evolve to place more emphasis on products liability theories and perhaps strict liability rather than negligence. It is not far-fetched that the price tag of a future AV will include not only the R&D and component costs, but an “insurance” component to cover the costs of accidents. Such an evolution would be consistent with the decreasing role of the human driver, though it is somewhat inconsistent with a car manufacturer’s inability to exert full control over an AI system’s learning process, not to mention the driving environment.

In the present interim period when at least some human intervention is required, car manufacturers have expressed different views on liability. Some, like Volvo, have publicly stated that they will accept full responsibility whenever one of their vehicles is involved in an accident while in autonomous mode. But others, like Tesla, are attempting to shift liability to drivers when accidents happen by requiring some modicum of driver engagement, even in autonomous mode.

For example, to activate the capability to pass other cars in autonomous mode, drivers of Teslas once had to trigger the turn signal (Tesla recently announced a new version that would dispense with this requirement). Having drivers perform this seemingly insignificant but deliberate action could help auto manufacturers shift legal liability to the driver. Performing that simple action not only tells the car to pass, but suggests the driver has made a decision that the maneuver is safe and therefore is willing to, or should, accept responsibility for the consequences if it is not.  

The underlying technology itself presents further complications in ascertaining who is at fault. As alluded to above, one aspect of AI, better characterized as “machine learning,” is that its behavior is more or less a “black box” developed from millions of varied inputs, and cannot be understood as well as a strictly math-based algorithm might be.

Put another way, we might be incapable of knowing exactly how the machine decided to act as it did. In such an instance, if the AI box was negligently trained, or “trained” on a simulator rather than based on real-world driving, could the author of the simulator instead be held accountable for the box’s failure to handle the edge case scenario that resulted in the accident? 

What about the ethics of the AI programming or training? A recent study found that current AI systems are perhaps 20% less likely to identify pedestrians if they are people of color. Was that due to the AI training on an insufficiently diverse subject base, or is there some other explanation? A recent survey conducted by MIT concluded that people ascribe a hierarchy to whose lives might be spared in edge cases where a crash is unavoidable and the question is not whether, but which, lives will perish. According to survey participants, human lives should be spared over those of animals; the lives of many should be spared over those of a few; and the young should be spared at the expense of the aged.

Interestingly, people also thought there should be a preference for someone pushing a stroller and observing traffic laws. The bottom line: if an AV is programmed according to such ethics, your odds of being hit by an AV might increase significantly if you are a lone person jaywalking across a busy highway. In the moral hierarchy of the study, being a cat, dog or criminal is at the lowest level of protection, though one must wonder how a vehicle will be able to distinguish a human criminal from a non-criminal — real-time connection to prison records? And what happens if, say, animal activist hackers alter the programming to prefer saving animals over people?
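The survey's preferences amount to a ranking function over whoever is in the vehicle's path. A purely hypothetical sketch (no AV maker has published such a scoring scheme, and every weight below is invented) shows why the lone jaywalker fares worst:

```python
def protection_score(subject):
    """Rank whom to spare, loosely following the survey's reported
    preferences: humans over animals, groups over individuals, the young
    over the old, lawful pedestrians over jaywalkers. Entirely illustrative;
    the weights are arbitrary placeholders, not anyone's real policy."""
    score = 0
    if subject["human"]:
        score += 100                         # humans outrank animals
    score += 10 * subject.get("group_size", 1)  # many outrank few
    if subject.get("young"):
        score += 5                           # the young are preferred
    if subject.get("lawful", True):
        score += 5                           # obeying traffic laws helps
    return score

lone_jaywalker = {"human": True, "group_size": 1, "lawful": False}
stroller_pair = {"human": True, "group_size": 2, "young": True, "lawful": True}
dog = {"human": False, "group_size": 1}

# The lawful pair with the stroller outranks the lone jaywalker; both outrank the dog.
assert protection_score(stroller_pair) > protection_score(lone_jaywalker) > protection_score(dog)
```

Even this toy version makes the hacking concern concrete: flip one weight (say, the human bonus) and the ranking inverts.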

If the MIT survey is to be believed, such a hierarchy and variability exist today, only tucked away in the subconsciousness of human drivers rather than in machines. Think about that the next time you cross the street.

28 May 2019

Google rolls out dining and translation filters to Lens

Google Lens users on iOS and ARCore-compatible Android phones are getting some added utility when it comes to ordering at restaurants or translating foreign languages on the go.

The functionality was announced earlier this month at Google I/O.

With the new “dining” features, users can point their phone at a menu and the Lens app will highlight its most popular dishes or surface food information and photos from the restaurant’s Google Maps profile. The company also detailed that you would be able to snap a picture of the bill and split the check directly inside the app.

On the foreign language translation front, the Google Translate app has long enabled language translation of signs that matches the style and typeface of what’s being parsed, but now a more lightweight version of this functionality is cooked directly into Lens.

Google has generally liked to take its sweet time when it comes to rolling out any I/O announcements related to Lens, so the promptness of this launch just a few weeks after the conference is probably the biggest surprise.

28 May 2019

Looking Beyond Meat, the future of food investment looks pretty cheesy

As Beyond Meat continues its reign as one of the kings of this year’s IPO mountain and Impossible Foods serves up impossibly good numbers for Burger King, venture capitalists seem ready to feast on new food deals.

And judging by market size and the returns that some companies have already realized by targeting the dairy aisle, the next big wave in food tech might just come with a whiff of Camembert. Meat alternatives and cultured meat may be grabbing headlines, but a wave of early-stage companies are looking at the dairy business for the next big thing.

There’s nothing cheesy about the size of the check that Danone wrote for WhiteWave Foods. That over-$10 billion payout for WhiteWave’s dairy alternatives was one of the single biggest acquisitions in the new food space. And consumers spent a whopping $61.9 billion on cheese in 2018 — a number that’s expected to reach $99.4 billion by 2024, according to data just published by the research group iMarc.

But before determining which venture capitalists are going to be moving the cheese (or cutting it), it’s worth examining what’s driving the latest food tech craze right now.

Investors have long been eyeing a slice of the food business for the simple reason that it, along with healthcare, is one of the largest industries in the world. U.S. consumers, businesses and government services will shovel $1.62 trillion down the giant gaping maw of food and beverage businesses — spending more in a year on food and drink than they will on either healthcare or personal insurance, according to data from the Bureau of Labor Statistics (as CNBC noted).