Year: 2019

13 Feb 2019

Check out the first interior view of Honda’s Urban EV prototype

Honda began teasing its all-electric urban vehicle in 2017, when the automaker showed off its vision of the future — one with a distinct 1970s first-generation Civic flair.

Now, two years later, a production version of the Urban EV is nearly here. And the automaker is finally giving a glimpse — albeit the tiniest of peeks — of the Urban EV’s interior.

The full reveal of the Urban EV will come next month at the 2019 Geneva Motor Show. And from there, it won’t be long before the electric vehicle hits the marketplace. The automaker has said it plans to bring the EV to the European market by late 2019.

Honda reveals first glimpse at interior of electric vehicle prototype bound for the 2019 Geneva Motor Show

The image shows a dash with a tech-forward and uncluttered feel: an expansive digital screen on the right and a digital instrument cluster in the driver’s line of sight. The steering wheel is equipped with toggles, which will likely be used to access features in the vehicle.

Observers will note two areas, one directly to the right of the steering wheel and the other on the far right, which appear to be designed for the driver and the passenger, respectively. The placement and layout suggest this is a touchscreen display.

Honda says the “interior is designed to create a warm and engaging atmosphere inspired by the Urban EV Concept launched at the 2017 Frankfurt Motor Show.”

Honda has big plans for the Urban EV, and more broadly electric vehicles. Way back in 2017, Honda Motor Co. President and CEO Takahiro Hachigo emphasized that the Urban EV wasn’t some “vision of the distant future.”

Honda plans to bring electrification, which can mean hybrid, plug-in or all-electric, to every new car model launched in Europe. The automaker is aiming for two-thirds of European sales to feature electrified technology by 2025.

13 Feb 2019

HyperSciences wants to ‘gamechange’ spaceflight with hypersonic drilling tech

It’s no coincidence that Elon Musk wants to both tunnel down into and soar above the Earth. If you ask the team at HyperSciences, the best way to get to space is to flip drilling technology upside down and point it at the sky. In the process, that would mean ditching the large, expensive fuel stages that propel what we generally think of as a rocket — massive cylindrical thing, tiny payload at the tip — into space.

This month, the company hit a major milestone on its quest to get to suborbital space, capping off Phase I of a NASA research grant with a pair of successful proof-of-concept launches demonstrating the company’s one-two punch of ram acceleration and chemical combustion.

HyperSciences put its vision to the test at Spaceport America, conducting a series of low altitude tests at the desolate launch site an hour outside of Truth or Consequences, New Mexico. The company launched “a number of projectiles,” ranging from 1.5 ft long to over 9 ft long. HyperSciences sent up some off-the-shelf electronics in the process, in a partnership with an aerospace research group at the University of Texas.

“We targeted hitting 600 to 1000 G’s (multiples of Earth’s gravity) on the payloads and accomplished that,” HyperSciences Senior Adviser Raymond Kaminski said. “The payloads felt similar levels to what commercial off-the-shelf electronics (like a cell phone) would feel when getting dropped on the floor.” Kaminski returned to aerospace with HyperSciences after a turn in the startup world following an earlier career with NASA, where he worked as an engineer for the International Space Station.

While the 1.5 ft. system launch was enough to meet its goals for NASA’s purposes, the company was testing the waters with an admittedly more impressive 9-ft., 18-in. projectile. “We’re going to launch a nine foot section — you can’t deny this anymore,” Kaminski said.

Oddly enough, the whole thing started after HyperSciences founder and CEO Mark Russell drilled a bunch of really, really deep holes. Russell formerly led crew capsule development at Jeff Bezos’ space gambit Blue Origin before leaving to get involved in his family’s mining business. At Blue Origin, he was employee number ten. Russell’s experience with mining and drilling led him to the idea that by elongating the chemical-filled tubes he’d used to drill in the past, the system he used to break up rock could go to space.

“You have a tube and you have a projectile. It’s got a sharp nose and you’ve pre-filled your tube with natural gas and air,” Russell explained. “It rides on the shock wave like a surfer rides on the ocean.”

The team believes that launching something into space can be faster, cheaper and far more efficient, but it requires a total reimagining of the process. If SpaceX’s reusable first stages were a sea change for spaceflight, the technology behind HyperSciences would be a revelation, but that’s assuming the vision — and the hypersonic tech that propels it — could be scaled up and adapted to the tricky, high-stakes business of sending things to space.

A hypersonic propulsion system launches a projectile to Mach 5 or higher — at least five times the speed of sound, or more than a mile a second. Most of the buzz in hypersonic tech right now is around defense technology — missiles that travel fast enough to evade even sophisticated missile defense systems or strike targets so quickly they can’t be intercepted — but aerospace and geothermal energy are two other big areas of interest.
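For a quick sanity check on that figure, here is the arithmetic behind “more than a mile a second,” assuming the standard sea-level speed of sound of roughly 343 m/s (it varies with altitude and temperature):

```python
# Rough check of the Mach 5 claim, assuming sea-level speed of sound (~343 m/s).
speed_of_sound_ms = 343.0            # m/s, assumed sea-level value
mach5_ms = 5 * speed_of_sound_ms     # 1,715 m/s

meters_per_mile = 1609.344
print(f"Mach 5 ~ {mach5_ms:.0f} m/s ~ {mach5_ms / meters_per_mile:.2f} miles per second")
# -> about 1.07 miles per second, i.e. "more than a mile a second"
```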

Last December, the Washington Post reported that moving from rocket-boosted weapons to hypersonic weapons is the “first, second, and third” priority for defense right now. The Pentagon’s 2019 budget currently has $2 billion earmarked for its hypersonics program, and that funding grew by almost a third year-over-year. “You never want to put out a tech when the government is asking for it,” Kaminski said. “At that point it’s too late and you’re playing catch up.”

In spite of the opportunity, HyperSciences isn’t keen to get into the world of weaponry. “We are a platform hypersonics company, we are not weapons designers,” the team told TechCrunch. “We do not plan on being a weapon provider. HyperSciences is focused on making the world a better place.”

To that end, HyperSciences is maneuvering to the fore of non-weaponry hypersonics applications. The company sponsors the University of Washington lab that pioneered applications for the ram accelerator technology it uses, and it has sole rights to the tech invented there.

On the geothermal energy note, with $1 million from Shell, HyperSciences was able to develop what it calls a “common engine” — a hypersonic platform that can drill deep to reach geothermal energy stores or point upward to launch things toward the stars. “HyperSciences is about getting really good on earth first,” Russell said, pointing to one advantage of the cross-compatible system that lets the company apply lessons it learns from drilling to its plans for flight.

“Our HyperDrone technology can be used to test new air-breathing hypersonic engines for NASA or aircraft companies that want to build the next gen super- and hypersonic aircraft to go point-to-point around the world in an hour or two,” the team explained. “Right now, you need a rocket on a big aircraft, just to get experiments up to speed. We can do that at the end of our tube right from the ground.”

Though there have been rumors of acquisition interest, for now HyperSciences is pursuing an offbeat crowdfunding model that’s certainly out of the ordinary in a literally nuts-and-bolts aerospace business. The company is currently running a SeedInvest campaign that allows small, unaccredited investors to put as little as a thousand dollars toward the team’s vision. At the time of writing, the campaign was sitting at around five million dollars raised from nearly 2,000 relatively small-time investors.

“SpaceX’s seed rounds were run by big VCs,” Russell said. “Where do you get access? These are big industries the public never usually gets to invest in.”

Russell prefers to keep HyperSciences flexible in its pursuits and believes that relying on venture capital would force the company to narrow the scope of its mission.  The team is quick to note that in spite of its relationship with Shell, the oil and energy giant doesn’t own any equity in the company. By hopping between industry-specific contracts with a boost from crowdfunding, HyperSciences hopes to continue pursuing its platform’s applications in parallel.

“The next overall architecture for spaceflight will be using hypersonics,” Russell said. “We obviously started this with the idea that you could gamechange spaceflight. By removing the first and potentially the second stage of a rocket [and] putting all of that energy in the ground… you could gamechange spaceflight, no doubt.”

13 Feb 2019

Opportunity Mars Rover goes to its last rest after extraordinary 14-year mission

Opportunity, one of two rovers sent to Mars in 2004, is officially offline for good, NASA and JPL officials announced today at a special press conference. “I declare the Opportunity mission as complete, and with it the Mars Exploration Rover mission as complete,” said NASA’s Thomas Zurbuchen.

The cause of Opportunity’s demise was a planet-scale dust storm that obscured its solar panels too completely, and for too long, for its onboard power supply to survive and keep even its most elementary components running. It last communicated on June 10, 2018, but could easily have lasted a few months more as its batteries ran down — a sad picture to be sure. Even a rover designed for the harsh Martian climate can’t handle being trapped under a cake of dust at -100 degrees Celsius for long.

The team has been trying to reach it for months, employing a variety of increasingly desperate techniques to get the rover to at least respond; even if its memory had been wiped clean or instruments knocked out, it could be reprogrammed and refreshed to continue service if only they could set up a bit of radio rapport. But every attempt, from ordinary contact methods to “sweep and beep” ploys, was met with silence. The final transmission from mission control was last night.

Spirit and Opportunity, known together as the Mars Exploration Rovers mission, were launched individually in the summer of 2003 and touched down in January of 2004 — 15 years ago! — in different regions of the planet.

Each was equipped with a panoramic camera, a macro camera, spectrometers for identifying rocks and minerals, and a little drill for taking samples. The goal was to operate for 90 days, traveling about 40 meters each day and ultimately covering about a kilometer. Both exceeded those goals by incredible amounts.

Spirit ended up traveling about 7.7 kilometers and lasting about 7 years. But Opportunity outshone its twin, going some 45 kilometers over 14 years — well over a marathon.
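Put side by side, those figures show just how far both rovers overshot the 90-day, roughly one-kilometer design goal (using the distances and durations reported above, with lifetimes converted to Earth days for simplicity):

```python
# Design goal vs. actual performance, using the figures reported above.
goal_km, goal_days = 1.0, 90

rovers = [("Spirit", 7.7, 7), ("Opportunity", 45.0, 14)]  # (name, km traveled, years active)

for name, km, years in rovers:
    days = years * 365.25
    print(f"{name}: {km / goal_km:.0f}x the planned distance, "
          f"~{days / goal_days:.0f}x the planned lifetime")
# Opportunity's ~45 km also edges past a full marathon (42.195 km).
```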

And of course both rovers contributed immensely to our knowledge of the Red Planet. It was experiments by these guys that really established that Mars once had not only water, but bio-friendly liquid water that might have supported life.

Opportunity did a lot of science but always had time for a selfie, such as this one at the edge of Erebus Crater.

It’s always sad when a hard-working craft or robot finally shuts down for good, especially when it’s one that’s been as successful as “Oppy.” The Cassini probe went out in a blaze of glory, and Kepler has quietly gone to sleep. But ultimately these platforms are instruments of science and we should celebrate their extraordinary success as well as mourn their inevitable final days.

“Spirit and Opportunity may be gone, but they leave us a legacy — a new paradigm for solar system exploration,” said JPL head Michael Watkins. “That legacy continues not just in the Curiosity rover, which is currently operating healthily after about 2,300 days on the surface of Mars. But also in our new 2020 rover, which is under construction here at the Jet Propulsion Laboratory.”

“But Spirit and Opportunity did something more than that,” he continued. “They energized the public about the spirit of robotic Mars exploration. The infectious energy and electricity that this mission created was obvious to the public.”

Mars of course is not suddenly without a tenant. The InSight lander touched down last year and has been meticulously setting up its little laboratory and testing its systems. And the Mars 2020 rover is well on its way to launch. It’s a popular planet.

Perhaps some day we’ll scoop up these faithful servants and put them in a Martian museum. For now let’s look forward to the next mission.

13 Feb 2019

Reddit says government data requests more than doubled in 2018

Reddit says the number of government requests for user data more than doubled in 2018 compared with the previous year.

The news and content sharing site said in its latest transparency report, posted Wednesday, that it received 752 requests from governments during the year, up from 310 requests a year earlier.

Broken down, that’s 171 requests to preserve account data — up from 79 requests in 2017; and 581 requests to produce user data — up from 231 requests.

Reddit said it complied with 77 percent of requests to turn over user data, and 91 percent of preservation requests. However, the company says it “only processes preservation requests” that originate in the U.S.
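Those figures are internally consistent; here is a quick tally (the compliance counts are approximations implied by the stated percentages):

```python
# Tallying Reddit's reported 2018 figures against 2017.
preserve_2018, produce_2018 = 171, 581
preserve_2017, produce_2017 = 79, 231

total_2018 = preserve_2018 + produce_2018   # 752, matching the headline figure
total_2017 = preserve_2017 + produce_2017   # 310
print(f"{total_2018} vs. {total_2017} requests: {total_2018 / total_2017:.2f}x year over year")

# Approximate counts implied by the stated compliance rates
print(f"~{round(0.77 * produce_2018)} of {produce_2018} production requests complied with (77%)")
print(f"~{round(0.91 * preserve_2018)} of {preserve_2018} preservation requests complied with (91%)")
```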

For the year, the company said it was asked by the U.S. government to remove “an image and a large volume of comments made underneath it for potential breach of a federal law,” without saying what the post was. But Reddit said it did not comply with the “overbroad” request as the government didn’t demonstrate illegality.

Noticeably absent from the transparency report are any figures relating to national security requests; Reddit hasn’t provided an update on those since 2016.

The company had posted a warrant canary in its debut 2014 report, confirming that at the time it had “never received a National Security Letter, an order under the Foreign Intelligence Surveillance Act, or any other classified request for user information.” In its transparency report a year later, the notice was removed, indicating that Reddit had received a national security request but was barred from disclosing it.

We contacted Reddit for comment but didn’t hear back at the time of writing.

Reddit, a platform known (sometimes infamously) for its freedom of speech, has come under increased scrutiny from its users in recent days following a $300 million Series D investment from Chinese tech giant Tencent. The deal prompted the mass posting of Winnie the Pooh, a symbol said to represent Chinese president Xi Jinping, as a protest against Beijing’s vast internet censorship.

In response to questions taken by users following the posting of its transparency report, Reddit chief executive Steve Huffman, who goes by the username u/spez, said:

13 Feb 2019

Google says it’ll invest $13B in U.S. data centers and offices this year

Google today announced that it will invest $13 billion in data centers and offices across the U.S. in 2019. That’s up from $9 billion in investments last year. Many of these investments will go to states like Nebraska, Nevada, Ohio, Texas, Oklahoma, South Carolina and Virginia, where Google plans new or expanded data centers. And, as in most years, it’ll also continue to expand many of its existing offices in Seattle, Chicago and New York, as well as in its home state of California.

Given Google’s push for more cloud customers, it’s also interesting to see that the company continues to expand its data center presence across the country. Google will soon open its first data centers in Nevada, Nebraska, Ohio and Texas, for example, and it will expand its Oklahoma, South Carolina and Virginia data centers. Google clearly isn’t slowing down in its race to compete with AWS and Azure.

“These new investments will give us the capacity to hire tens of thousands of employees, and enable the creation of more than 10,000 new construction jobs in Nebraska, Nevada, Ohio, Texas, Oklahoma, South Carolina and Virginia,” Google CEO Sundar Pichai writes today. “With this new investment, Google will now have a home in 24 total states, including data centers in 13 communities. 2019 marks the second year in a row we’ll be growing faster outside of the Bay Area than in it.”

Given the current backlash against many tech companies and automation in general, it’s probably no surprise that Google wants to emphasize the number of jobs it is creating (and especially jobs in Middle America). The construction jobs are obviously temporary, though, and data centers don’t need a lot of employees once they are up and running. Still, Google promises that this will give it the “capacity to hire tens of thousands of employees.”

13 Feb 2019

SEC charges former Apple compliance lawyer with insider trading, avoiding $382K in losses

The shine on Apple is getting a little tarnished. Today, the SEC filed a suit against Gene Levoff, a lawyer who used to work for the iPhone giant, accusing him of insider trading: selling millions of dollars in stock ahead of earnings and saving himself some $382,000 in losses in the process, as well as making $245,000 in profits in a separate, earlier period.

Levoff started to work for Apple in 2008, first as director of corporate law and then senior director. He was put on leave from Apple in July 2018, and his employment was terminated in September 2018.

The suit covers activities in 2015 and 2016, years when Apple saw a dip in performance before it roared back to a trillion-dollar market cap in 2018.

The news is especially ironic — although perhaps not surprising, considering the information Levoff had at his disposal: he had been Apple’s Senior Director of Corporate Law and Corporate Secretary and was “responsible for ensuring compliance with the company’s insider trading policy and determining the criteria for those employees (including himself) restricted from trading around quarterly earnings announcements.”

It also worked in the other direction. The SEC alleges that Levoff made trades in 2011 and 2012, also ahead of market-moving news, that helped him make profits of $245,000.

The SEC is requesting that Levoff pay a civil monetary penalty, disgorging “an amount equal to the profits gained and losses avoided as a result of the actions described herein,” and that he be prohibited from serving as an officer or director of a public company.
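For context, the disgorgement figure alone, before any added civil penalty, would simply be the sum of the two amounts the SEC cites:

```python
# Disgorgement as described in the complaint: profits gained plus losses avoided.
losses_avoided = 382_000   # avoided on the 2015-2016 trades
profits_gained = 245_000   # made on the 2011-2012 trades

print(f"${losses_avoided + profits_gained:,}")   # $627,000, before any civil penalty on top
```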

The SEC suit covers trades on “at least” three occasions between 2015 and 2016, where Levoff would have access to financial data before it was released to the public and subsequently make trades on that information.

One example cited in the suit: he sold $10 million in stock in July ahead of Apple reporting that it would miss expectations on iPhone sales.

The SEC makes a point of noting the disconnect between Levoff’s actions for his own gain and his role at the company. Among his duties was serving on Apple’s Disclosure Committee,

“established to assist the Chief Executive Officer and Chief Financial Officer in fulfilling their responsibility for oversight of the accuracy and timeliness of disclosures made by Apple; determine Apple’s disclosure obligations and ensure information contained in Apple’s filings to the SEC and all other disclosures are timely, accurate, complete, and a fair representation of Apple’s financial condition and results of operations; and ensure that Apple’s disclosure controls and procedures are properly designed, adopted and implemented.”

Levoff was involved with some of the stealthier parts of Apple’s dealings. He also appears in a number of Apple’s M&A deals, with his name coming up as a director on legal documents of startups that Apple had quietly acquired in Europe. And he was named in an investigation into how Apple funnels profits into offshore accounts.

But in this suit, the SEC makes a point of clearing Apple itself of wrongdoing in the specific case of insider trading, noting that the company took several steps to warn employees of blackout periods and general legal and illegal practices regarding trading and financial information (some of which Levoff himself even penned):

“Prior to Levoff’s illegal trading, Apple took steps to prevent employees from trading on material nonpublic information, including the undisclosed financial results Levoff received,” it notes. “Apple had an insider trading policy that applied to all employees. Many employees, including Levoff, also received notice when restricted trading periods, known as “blackout” periods, were in effect. The notices, emailed to employees subject to the blackout periods, reminded them of the insider trading policy, and since at least 2015, included a link to the insider trading policy.”

The news is pretty explosive in context: Apple is one of the more tight-lipped companies and generally positions itself as a model corporate citizen, taking a strong stand on issues like user privacy and priding itself on strong customer products and service, at a premium price compared to much of the competition.

We have reached out to Apple for comment, and will update this post as we learn more.

13 Feb 2019

Daily Crunch: Apple’s subscription fix

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Apple’s iOS update makes it easier to get to your subscriptions

Moving the Manage Subscriptions menu so that it’s just one click away from your App Store profile might seem like a minor change, but it was needed: As more mobile apps have adopted subscriptions as a means of generating revenue, it’s become critical to ensure consumers know how to turn off their subscriptions.

Plus, Apple is expected to launch some subscriptions of its own, namely for its streaming video and news services.

2. Instagram confirms that a bug is causing follower counts to change

Don’t panic! Instagram says it’s “aware of an issue that is causing a change in account follower numbers for some people right now” and is “working to resolve this as quickly as possible.”

3. Autonomous truck startup TuSimple hits unicorn status in latest round

Today, TuSimple is taking three to five fully autonomous trips per day for customers on three different routes in Arizona.

4. Sixteen percent of US adults own a smartwatch

The latest figures out of NPD show a continued uptick in smartwatch sales here in the States. The category has been a rare bright spot in an overall flagging wearable space, and the new numbers show gains pretty much across the board.

5. JibJab, one of the first silly selfie video makers, acquired by private equity firm Catapult Capital

Founded in 1999 by brothers Evan and Gregg Spiridellis after they saw “an animated dancing doodie streaming over a 56K modem,” JibJab’s big break came during the 2004 presidential campaign, when its satirical “This Land” racked up more than 80 million views.

6. Eight Sleep unveils The Pod, a bed that’s smarter about temperature

Eight has been focused on bed temperature for a while, first by offering a smart mattress cover and then a smart mattress that allows owners to adjust the surface temperature and even set different temperatures for different sides of the bed. But The Pod goes even further, with a smart temperature mode that will change bed temperature throughout the night to improve your sleep.

7. Ubisoft and Mozilla team up to develop Clever-Commit, an AI coding assistant

Clever-Commit is an assistant that learns from your code base’s bug and regression data to analyze and flag potential new bugs as new code is committed.

13 Feb 2019

Xnor’s saltine-sized, solar-powered AI hardware redefines the edge

“If AI is so easy, why isn’t there any in this room?” asks Ali Farhadi, founder and CEO of Xnor, gesturing around the conference room overlooking Lake Union in Seattle. And it’s true — despite a handful of displays, phones, and other gadgets, the only things really capable of doing any kind of AI-type work are the phones each of us has set on the table. Yet we are always hearing about how AI is so accessible now, so flexible, so ubiquitous.

And in many cases, even the devices that can aren’t employing machine learning techniques themselves, but rather sending data off to the cloud, where it can be done more efficiently. That’s because the processes that make up “AI” are often resource-intensive, sucking up CPU time and battery power.

That’s the problem Xnor aimed to solve, or at least mitigate, when it spun off from the Allen Institute for Artificial Intelligence in 2017. Its breakthrough was to make the execution of deep learning models on edge devices so efficient that a $5 Raspberry Pi Zero could perform state-of-the-art computer vision processes nearly as well as a supercomputer.
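The article doesn’t spell out how Xnor shrinks models this far. One well-known trick from the binarized-network research its name presumably nods to is constraining weights and activations to +1/-1, so a dot product reduces to an XNOR plus a bit count. A minimal sketch of that idea, purely illustrative and not Xnor’s actual implementation:

```python
# Toy sketch of binarized inference (XNOR + popcount) -- illustrative only.
import numpy as np

def binarize(x):
    # Map real-valued weights/activations to {-1, +1}
    return np.where(x >= 0, 1, -1).astype(np.int8)

def xnor_popcount_dot(a_bin, b_bin):
    # For vectors in {-1, +1}, the dot product equals
    # (# matching signs) - (# mismatching signs), which hardware can
    # compute cheaply with XNOR + popcount on packed bits.
    matches = np.sum(a_bin == b_bin)
    return 2 * matches - a_bin.size

rng = np.random.default_rng(0)
w = rng.standard_normal(1024)    # full-precision weights
x = rng.standard_normal(1024)    # full-precision activations

wb, xb = binarize(w), binarize(x)
assert xnor_popcount_dot(wb, xb) == int(np.dot(wb, xb))  # same result, far cheaper in silicon
```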

The team achieved that, and Xnor’s hyper-efficient ML models are now integrated into a variety of devices and businesses. As a follow-up, the team set their sights higher — or lower, depending on your perspective.

Answering his own question on the dearth of AI-enabled devices, Farhadi pointed to the battery pack in the demo gadget the team made to show off the Pi Zero platform: “This thing right here. Power.”

Power was the bottleneck they overcame to get AI onto CPU- and power-limited devices like phones and the Pi Zero. So the team came up with a crazy goal: Why not make an AI platform that doesn’t need a battery at all? Less than a year later, they’d done it.

That thing right there performs a serious computer vision task in real time: It can detect in a fraction of a second whether and where a person, or car, or bird, or whatever, is in its field of view, and relay that information wirelessly. And it does this using the kind of power usually associated with solar-powered calculators.

The device Farhadi and hardware engineering head Saman Naderiparizi showed me is very simple — and necessarily so. A tiny camera with a 320×240 resolution, an FPGA loaded with the object recognition model, a bit of memory to handle the image and camera software, and a small solar cell. A very simple wireless setup lets it send and receive data at a very modest rate.

“This thing has no power. It’s a two dollar computer with an uber-crappy camera, and it can run state of the art object recognition,” enthused Farhadi, clearly more than pleased with what the Xnor team has created.

For reference, this video from the company’s debut shows the kind of work it’s doing inside:

As long as the cell is in any kind of significant light, it will power the image processor and object recognition algorithm. It needs about a hundred millivolts coming in to work, though at lower levels it could just snap images less often.

It can run on that current alone, but of course it’s impractical to not have some kind of energy storage; to that end this demo device has a supercapacitor that stores enough energy to keep it going all night, or just when its light source is obscured.

As a demonstration of its efficiency, suppose you decided to equip it with a watch battery. Naderiparizi said it could probably run on that at one frame per second for more than 30 years.
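A back-of-envelope check on what that claim implies, assuming a common CR2032 coin cell (roughly 225 mAh at 3 V; the article doesn’t specify the battery):

```python
# What "1 fps for 30+ years on a watch battery" implies per frame.
# Assumes a CR2032 coin cell (~225 mAh at 3 V); the battery isn't specified
# in the article, so these numbers are illustrative only.
capacity_mah, voltage = 225, 3.0
energy_joules = capacity_mah / 1000 * 3600 * voltage       # ~2,430 J total

seconds_per_year = 365.25 * 24 * 3600
frames = 30 * seconds_per_year * 1.0                       # one frame per second for 30 years

print(f"{energy_joules:.0f} J / {frames:.2e} frames "
      f"= {energy_joules / frames * 1e6:.1f} microjoules per frame")
# -> roughly 2-3 microjoules per frame, consistent with the microjoule-level
#    inference cost mentioned below for a future custom chip.
```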

Not a product

Of course the breakthrough isn’t really that there’s now a solar-powered smart camera. That could be useful, sure, but it’s not really what’s worth crowing about here. It’s the fact that a sophisticated deep learning model can run on a computer that costs pennies and uses less power than your phone does when it’s asleep.

“This isn’t a product,” Farhadi said of the tiny hardware platform. “It’s an enabler.”

The energy necessary for performing inference processes such as facial recognition, natural language processing, and so on puts hard limits on what can be done with them. A smart light bulb that turns on when you ask it to isn’t really a smart light bulb. It’s a board in a light bulb enclosure that relays your voice to a hub and probably a datacenter somewhere, which analyzes what you say and returns a result, turning the light on.

That’s not only convoluted, but it introduces latency and a whole spectrum of places where the process could break or be attacked. And meanwhile it requires a constant source of power or a battery!

On the other hand, imagine a camera you stick into a house plant’s pot, or stick to a wall, or set on top of the bookcase, or anything. This camera requires no more power than some light shining on it; it can recognize voice commands and analyze imagery without touching the cloud at all; it can’t really be hacked because it barely has an input at all; and its components cost maybe $10.

Only one of these things can be truly ubiquitous. Only the latter can scale to billions of devices without requiring immense investment in infrastructure.

And honestly, the latter sounds like a better bet for a ton of applications where there’s a question of privacy or latency. Would you rather have a baby monitor that streams its images to a cloud server where it’s monitored for movement? Or a baby monitor that absent an internet connection can still tell you if the kid is up and about? If they both work pretty well, the latter seems like the obvious choice. And that’s the case for numerous consumer applications.

Amazingly, the power cost of the platform isn’t anywhere near bottoming out. The FPGA used to do the computing on this demo unit isn’t particularly efficient for the processing power it provides. If they had a custom chip baked, they could get another order of magnitude or two out of it, lowering the work cost for inference to the level of microjoules. The size is more limited by the optics of the camera and the size of the antenna, which must have certain dimensions to transmit and receive radio signals.

And again, this isn’t about selling a million of these particular little widgets. As Xnor has done already with its clients, the platform and software that runs on it can be customized for individual projects or hardware. One even wanted a model to run on MIPS — so now it does.

By drastically lowering the power and space required to run a self-contained inference engine, entirely new product categories can be created. Will they be creepy? Probably. But at least they won’t have to phone home.

13 Feb 2019

Audio tech supplier to Rolls Royce and Xiaomi secures another $13.2M in funding

As autonomous driving eventually transforms cars from transportation devices to mobile theaters or conference rooms, we will need better audio inside them. And we’ve already seen that VCs like Andreessen Horowitz say ‘audio is the future.’

So it’s interesting that Swedish sound pioneer Dirac has completed a new $13.2 million round of financing led by current investors. Previous investors included Swedish angel network Club Network Investments, Erik Ejerhed and Staffan Persson.

Dirac makes sophisticated audio technology for customers including BMW, OnePlus, Rolls-Royce, Volvo and Xiaomi.

Its platform is used by those firms for everything from capture to playback – regardless of device size or form factor.

“As consumer devices decrease in size and expand in complexity, digital signal processing is the key to unlocking their full audio potential and creating premium sound experiences,” says Dirac CEO Mathias Johansson. “With this new funding, we can take our approach to digitizing sound systems even further – creating more intelligent and adaptive audio processing solutions that establish new standards in both audio playback and capture across a variety of applications.”

Dirac has now appointed former Harman International executive Armin Prommersberger as CTO and opened a research and development center in Copenhagen.

Johansson says new 5G networks are set to create new use-cases for current and emerging technologies, including audio.