Year: 2018

11 May 2018

Boston Dynamics will start selling its dog-like SpotMini robot in 2019

After 26 years, Boston Dynamics is finally getting ready to start selling some robots. Founder Marc Raibert says that the company’s dog-like SpotMini robot is in pre-production and preparing for commercial availability in 2019. The announcement came onstage at TechCrunch’s TC Sessions: Robotics event today at UC Berkeley.

“The SpotMini robot is one that was motivated by thinking about what could go in an office — in a space more accessible for business applications — and then, the home eventually,” Raibert said onstage.

Boston Dynamics’ SpotMini was introduced late last year, taking its design cues from its “bigger brother,” the quadruped Spot. While the company has often showcased advanced demos of its emerging projects, SpotMini has seemed uniquely productized from the start.

On its website, Boston Dynamics highlights that SpotMini is the “quietest robot [they] have built.” The device weighs around 66 pounds and can operate for about 90 minutes on a charge.

The company says it has lined up contract manufacturers to build the first 100 SpotMinis later this year for commercial purposes, and it will begin scaling production with the goal of selling SpotMini in 2019. It isn’t ready to talk about a price tag yet, but it says the latest SpotMini prototype cost one-tenth as much to build as the previous iteration.

Just yesterday, Boston Dynamics posted a video of SpotMini in autonomous mode navigating with the curiosity of a flesh-and-blood animal.


The company, perhaps best known for gravely frightening conspiracy theorists and AI doomsdayers with advanced robotics demos, has had quite the interesting history.

It was founded in 1992 after being spun out of MIT. After a stint inside Alphabet, the company was purchased by SoftBank last year. SoftBank has staked significant investments in the robotics space through its Vision Fund, and, in 2015, it began selling Pepper, a humanoid robot far less sophisticated than what Boston Dynamics has been working on.

You can watch the entire presentation below, which includes a demonstration of the latest iteration of the SpotMini.

11 May 2018

Deep learning with synthetic data will democratize the tech industry

The visual data sets of images and videos amassed by the most powerful tech companies have been a competitive advantage, a moat that keeps the advances of machine learning out of reach from many. This advantage will be overturned by the advent of synthetic data.

The world’s most valuable technology companies, such as Google, Facebook, Amazon and Baidu, among others, are applying computer vision and artificial intelligence to train their computers. They harvest immense data sets of images, videos and other visual material from their consumers.

These data sets have been a competitive advantage for major tech companies, keeping the advances of machine learning, and the processes that allow computers and algorithms to learn faster, out of reach for many others.

Now, this advantage is being disrupted by the ability for anyone to create and leverage synthetic data to train computers across many use cases, including retail, robotics, autonomous vehicles, commerce and much more.

Synthetic data is computer-generated data that mimics real data; in other words, data that is created by a computer, not a human. Software algorithms can be designed to create realistic simulated, or “synthetic,” data.

This synthetic data then assists in teaching a computer how to react to certain situations or criteria, replacing real-world-captured training data. One of the most important aspects of real or synthetic data is having accurate labels, so computers can translate visual data into meaning.
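To make that concrete, here is a minimal, hypothetical sketch in Python of what a tiny synthetic data generator might look like: it renders simple shapes and records pixel-perfect labels alongside each image. The shapes, file names and label format are illustrative assumptions, not any particular company’s pipeline.

```python
# A minimal, illustrative synthetic data generator: because we construct the scene
# ourselves, the ground-truth label for each image is known exactly and for free.
import json
import random
from PIL import Image, ImageDraw

def make_synthetic_sample(size=128):
    """Render one synthetic image and return it with its ground-truth label."""
    img = Image.new("RGB", (size, size), color=(255, 255, 255))
    draw = ImageDraw.Draw(img)

    shape = random.choice(["circle", "square"])
    x0, y0 = random.randint(0, size - 40), random.randint(0, size - 40)
    x1, y1 = x0 + 40, y0 + 40
    color = tuple(random.randint(0, 200) for _ in range(3))

    if shape == "circle":
        draw.ellipse([x0, y0, x1, y1], fill=color)
    else:
        draw.rectangle([x0, y0, x1, y1], fill=color)

    # Pixel-perfect label: class and bounding box, recorded at render time.
    label = {"class": shape, "bbox": [x0, y0, x1, y1]}
    return img, label

if __name__ == "__main__":
    for i in range(100):
        image, label = make_synthetic_sample()
        image.save(f"synthetic_{i:03d}.png")
        with open(f"synthetic_{i:03d}.json", "w") as f:
            json.dump(label, f)
```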

Since 2012, we at LDV Capital have been investing in deep technical teams that leverage computer vision, machine learning and artificial intelligence to analyze visual data across any business sector, such as healthcare, robotics, logistics, mapping, transportation, manufacturing and much more. Many startups we encounter have the “cold start” problem of not having enough quality labelled data to train their computer algorithms. A system cannot draw any inferences for users or items about which it hasn’t yet gathered sufficient information.

Startups can gather their own contextually relevant data or partner with others to gather relevant data, such as retailers for data of human shopping behaviors or hospitals for medical data. Many early-stage startups are solving their cold start problem by creating data simulators to generate contextually relevant data with quality labels in order to train their algorithms.

Big tech companies do not have the same challenge gathering data, and they continue to expand their initiatives to gather more unique and contextually relevant data.

Cornell Tech professor Serge Belongie, who has been doing research in computer vision for more than 25 years, says,

In the past, our field of computer vision cast a wary eye on the use of synthetic data, since it was too fake in appearance. Despite the obvious benefits of getting perfect ground truth annotations for free, our worry was that we’d train a system that worked great in simulation but would fail miserably in the wild. Now the game has changed: the simulation-to-reality gap is rapidly disappearing. At the very minimum, we can pre-train very deep convolutional neural networks on near-photorealistic imagery and fine-tune them on carefully selected real imagery.
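As a rough illustration of the workflow Belongie describes, the sketch below pre-trains a small convolutional network on a folder of synthetic renders and then fine-tunes it on a much smaller folder of real images, using PyTorch. The directory names, model choice and hyperparameters are assumptions made for illustration only.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def train(model, loader, epochs, lr):
    """One plain supervised training loop, reused for both stages."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

# Stage 1: pre-train on plentiful synthetic renders with "free" pixel-perfect labels.
synthetic = datasets.ImageFolder("data/synthetic", transform=transform)  # assumed path
model = models.resnet18(num_classes=len(synthetic.classes))
train(model, DataLoader(synthetic, batch_size=32, shuffle=True), epochs=10, lr=0.01)

# Stage 2: fine-tune on a smaller set of carefully selected real images, at a lower
# learning rate so the synthetic pre-training is refined rather than overwritten.
real = datasets.ImageFolder("data/real", transform=transform)  # assumed path
train(model, DataLoader(real, batch_size=32, shuffle=True), epochs=3, lr=0.001)
```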

AiFi is an early-stage startup building a computer vision and artificial intelligence platform to deliver a more efficient checkout-free solution to both mom-and-pop convenience stores and major retailers. They are building a checkout-free store solution similar to Amazon Go.

Amazon.com Inc. employees shop at the Amazon Go store in Seattle. ©Amazon Go; Photographer: Mike Kane/Bloomberg via Getty Images

As a startup, AiFi faced the typical cold start challenge: a lack of visual data from real-world situations with which to start training its computers. Amazon, by contrast, likely gathered real-life data to train its algorithms while Amazon Go was in stealth mode.

Avatars help train AiFi shopping algorithms. ©AiFi

AiFi’s solution of creating synthetic data has also become one of their defensible and differentiated technology advantages. Through AiFi’s system, shoppers will be able to come into a retail store and pick up items without having to use cash, a card or scan barcodes.

These smart systems will need to continuously track hundreds or thousands of shoppers in a store and recognize or “re-identify” them throughout a complete shopping session.

AiFi store simulation with synthetic data. ©AiFi

Ying Zheng, co-founder and chief science officer at AiFi, previously worked at Apple and Google. She says,

The world is vast, and can hardly be described by a small sample of real images and labels. Not to mention that acquiring high-quality labels is both time-consuming and expensive, and sometimes infeasible. With synthetic data, we can fully capture a small but relevant aspect of the world in perfect detail. In our case, we create large-scale store simulations and render high-quality images with pixel-perfect labels, and use them to successfully train our deep learning models. This enables AiFi to create superior checkout-free solutions at massive scale.

Robotics is another sector leveraging synthetic data to train robots for various activities in factories, warehouses and across society.

Josh Tobin is a research scientist at OpenAI, a nonprofit artificial intelligence research company that aims to promote and develop friendly AI in such a way as to benefit humanity as a whole. Tobin is part of a team working on building robots that learn. The team has trained a system entirely with simulated data and deployed it on a physical robot, which, amazingly, can now learn a new task after seeing an action done once.

They developed and deployed a new algorithm called one-shot imitation learning, allowing a human to communicate how to do a new task by performing it in virtual reality. Given a single demonstration, the robot is able to solve the same task from an arbitrary starting point and then continue the task.

©OpenAI

Their goal was to learn behaviors in simulation and then transfer these learnings to the real world. The hypothesis was that a robot could do precise things just as well from simulated data. They started with 100 percent simulated data, expecting it would not work as well as using real data to train computers. However, the simulated data for training robotic tasks worked much better than they expected.

Tobin says,

Creating an accurate synthetic data simulator is really hard. There is a factor of 3-10x in accuracy between a well-trained model on synthetic data versus real-world data. There is still a gap. For a lot of tasks the performance works well, but for extreme precision it will not fly — yet.

Osaro is an artificial intelligence company developing products based on deep reinforcement learning technology for industrial robotics automation. Osaro co-founder and CEO Derik Pridmore says that “There is no question simulation empowers startups. It’s another tool in the toolbox. We use simulated data both for rapidly prototyping and testing new models as well as in trained models intended for use in the real world.”

Many large technology companies, auto manufacturers and startups are racing toward delivering the autonomous vehicle revolution. Developers have realized there aren’t enough hours in a day to gather enough real data of driven miles needed to teach cars how to drive themselves.

One solution that some are using is synthetic data from video games such as Grand Theft Auto; unfortunately, some say that the game’s maker, Rockstar, is not happy about driverless cars learning from its game.

A street in GTA V (left) and its reconstruction through capture data (right). ©Intel Labs, Technische Universität Darmstadt

May Mobility is a startup building a self-driving microtransit service. Their CEO and founder, Edwin Olson, says,

One of our uses of synthetic data is in evaluating the performance and safety of our systems. However, we don’t believe that any reasonable amount of testing (real or simulated) is sufficient to demonstrate the safety of an autonomous vehicle. Functional safety plays an important role.

The flexibility and versatility of simulation make it especially valuable and much safer to train and test autonomous vehicles in these highly variable conditions. Simulated data can also be more easily labeled as it is created by computers, therefore saving a lot of time.

Jan Erik Solem is the CEO and co-founder of Mapillary*, which is helping create better maps for smarter cities, geospatial services and automotive. According to Solem,

Having a database and an understanding of what places look like all over the world will be an increasingly important component for simulation engines. As the accuracy of the trained algorithms improves, the level of detail and diversity of the data used to power the simulation matters more and more.

Neuromation is building a distributed synthetic data platform for deep learning applications. Their CEO, Yashar Behzadi, says,

To date, the major platform companies have leveraged data moats to maintain their competitive advantage. Synthetic data is a major disruptor, as it significantly reduces the cost and speed of development, allowing small, agile teams to compete and win.

The challenge and opportunity for startups competing against incumbents with inherent data advantage is to leverage the best visual data with correct labels to train computers accurately for diverse use cases. Simulating data will level the playing field between large technology companies and startups. Over time, large companies will probably also create synthetic data to augment their real data, and one day this may tilt the playing field again. Many speakers at the annual LDV Vision Summit in May in NYC will enlighten us as to how they are using simulated data to train algorithms to solve business problems and help computers get closer to general artificial intelligence.

*Mapillary is an LDV Capital portfolio company.

11 May 2018

Google Clips gets better at capturing candids of hugs and kisses (which is not creepy, right?)

Google Clips’ AI-powered “smart camera” just got even smarter, Google announced today, revealing improved functionality around Clips’ ability to automatically capture specific moments — like hugs and kisses. Or jumps and dance moves. You know, in case you want to document all your special, private moments in a totally non-creepy way.

I kid, I kid!

Well, not entirely. Let me explain.

Look, Google Clips comes across to me as more of a proof-of-concept device that showcases the power of artificial intelligence as applied to the world of photography rather than a breakthrough consumer device.

I’m the target market for this camera — a parent and a pet owner (and look how cute she is) — but I don’t at all have a desire for a smart camera designed to capture those tough-to-photograph moments, even though neither my kid nor my pet will sit still for pictures.

I’ve tried to articulate this feeling, and I find it’s hard to say why I don’t want this thing, exactly. It’s not because the photos are automatically uploaded to the cloud or made public — they are not. They are saved to the camera’s 16 GB of onboard storage and can be reviewed later with your phone, where you can then choose to keep them, share them or delete them. And it’s not even entirely because of the price point — though, arguably, even with the recent $50 discount it’s quite the expensive toy at $199.

Maybe it’s just the camera’s premise.

That in order for us to fully enjoy a moment, we have to capture it. And because some moments are so difficult to capture, we spend too much time with phone-in-hand, instead of actually living our lives — like playing with our kids or throwing the ball for the dog, for example. And that the only solution to this problem is more technology. Not just putting the damn phone down.

What also irks me is the broader idea behind Clips that all our precious moments have to be photographed or saved as videos. They do not. Some are meant to be ephemeral. Some are meant to be memories. In aggregate, our hearts and minds tally up all these little life moments — a hug, a kiss, a smile — and then turn them into feelings. Bonds. Love.  It’s okay to miss capturing every single one.

I’m telling you, it’s okay.

At the end of the day, there are only a few times I would have even considered using this product — when baby was taking her first steps, and I was worried it would happen while my phone was away. Or maybe some big event, like a birthday party, where I wanted candids but had too much going on to take photos. But even in these moments, I’d rather prop my phone up and turn on a “Google Clips” camera mode, rather than shell out hundreds for a dedicated device.

Just saying.

You may feel differently. That’s cool. To each their own.

Anyway, what I think is most interesting about Clips is the actual technology. That it can view things captured through a camera lens and determine the interesting bits — and that it’s already getting better at this, only months after its release. That we’re teaching AI to understand what’s actually interesting to us humans, with our subjective opinions. That sort of technology has all kinds of practical applications beyond a physical camera that takes spy shots of Fido.

The improved functionality is rolling out to Clips with the May update, and will soon be followed by support for family pairing, which will let multiple family members connect the camera to their device to view content.

Here’s an intro to Clips, if you missed it the first time.

Note that it’s currently on sale for $199. Yeah, already. Hmmm. 

11 May 2018

Hollywood producer plans to incentivise content viewers with tokens

With so much controversy swirling around the advertising-driven business models typified by Facebook and Google, and the increasing rigours of regulations like GDPR, it’s no wonder the Blockchain world is starting to whet its appetite at the prospect of paying users for attention with crypto assets.

Now a company involved in the production of Hollywood blockbusters featuring the likes of James Franco, Selena Gomez, Alec Baldwin, Heidi Klum and Al Pacino is backing a new startup to reward viewers in this manner.

Hollywood producer, Andrea Iervolino (best known for backing the James Franco film “In Dubious Battle” based on the novel by the Nobel Prize-winning author John Steinbeck) has decided to enter the fray by launching a new blockchain platform called TaTaTu. The startup’s aim is to bring a social, crypto economy to the entertainment industry.

Iervolino says the platform will allow users to get rewarded for the content they watch and share with others through the use of crypto tokens. Of course, whether it can actually pull that off remains to be seen. Many other startups are trying to play in this space. But where Iervolino might just have an edge is in his Hollywood connections.

The idea is that the TaTaTu token can also be used by advertisers to run their ads on the platform. Organisations will also be able to earn tokens by uploading content to the platform. The more content an organisation brings to the platform, the more revenue they earn. TaTaTu aims to show ads to viewers and will even share advertising revenues with them in return for their attention.

But it doesn’t stop there. Users are supposed to invite their friends via their social media to join TaTaTu, and then watch and create videos that can be shared with friends, chat with other members, and share the content they like. TaTaTu will give its users the possibility to be rewarded for their social entertainment activity. TaTaTu plans to offer not only movies and videos, but also music, sports and games. So this is quite a grand vision which, frankly, will be tricky to pull off outside of perhaps just sticking to one vertical like movies. This is like trying to do YouTube and Netflix at the same time, on a blockchain. Good luck with that.

But Iervolino is putting his money where his mouth is. The AMBI Media Group, a consortium of vertically integrated film development, production, finance and distribution companies (which counts End of Watch, Apocalypto and The Merchant of Venice among its titles) and which he co-runs with Monaco-based businesswoman Lady Monika Bacardi, is said to have put in $100M via a token pre-sale.

Building the platform will be CTO Jonathan Pullinger, who started working in the Bitcoin space in late 2012, developing crypto mining software and building mining rigs. Since then he has worked on several blockchain projects, including Ethereum smart contracts (ERC-20 tokens and other Solidity-based solutions), Hyperledger Fabric, the Waves Platform and Lightning nodes.

11 May 2018

YouTube rolls out new tools to help you stop watching

Google’s YouTube is the first streaming app that will actually tell users to stop watching. The company at its Google I/O conference this week introduced a series of new controls for YouTube that will allow users to set limits on their viewing, and then receive reminders telling them to “take a break.” The feature is rolling out now in the latest version of YouTube’s app along with others that limit YouTube’s ability to send notifications, and soon, one that gives users an overview of their binge behavior so they can make better-informed decisions about their viewing habits.

With “Take a Break,” available from YouTube’s mobile app Settings screen, users can set a reminder to appear every 15, 30, 60, 90 or 180 minutes, at which point the video will pause. You can then choose to dismiss the reminder and keep watching, or close the app.

The setting is optional, and is turned off by default so it’s not likely to have a large impact on YouTube viewing time at this point.

Also new is a feature that lets you disable notifications and sounds during a specified time period each day – say, for example, from bedtime until the next morning. When users turn on the setting to disable notifications, it will, by default, disable them from 10 PM to 8 AM local time, but this can be changed.

Combined with this is an option to get a scheduled digest of notifications as an alternative. This setting combines all the daily push notifications into a single combined notification that is sent out only once per day. This is also off by default, but can be turned on in the app’s settings.

And YouTube is preparing to roll out a “time watched profile” that will appear in the Account menu and display your daily average watch time, and how long you’ve watched YouTube videos today, yesterday, and over the past week, along with a set of tools to help you manage your viewing habits.

While these changes to YouTube are opt-in, it’s an interesting – and arguably responsible – position to take in terms of helping people manage their sometimes addictive behaviors around technology.

And it’s not the only major change Google is rolling out on the digital well-being front – the company also announced a series of Android features that will help you get a better handle on how often you’re using your phone and apps, and give you tools to limit distractions – like a Do Not Disturb setting, alerts that are silenced when the phone is flipped over, and a “Wind Down” mode for nighttime usage that switches on the Do Not Disturb mode and turns the screen to gray scale.

The digital well-being movement at Google got its start with a 144-page Google Slides presentation from product manager Tristan Harris, who was working on Google’s Inbox app at the time. After a trip to Burning Man, he came back convinced that technology products weren’t always designed with users’ best interests in mind. The memo went viral and found its way to then-CEO Larry Page, who promoted Harris to “design ethicist” and made digital well-being a company focus.

There’s now a Digital Wellbeing website, too, that talks about Google’s broader efforts on this front. On the site, the company touts features in other products that save people time, like Gmail’s high-priority notifications that only alert you to important emails; Google Photos’ automated editing tools; Android Auto’s distracted driving reduction tools; Google Assistant’s ability to turn on your phone’s DND mode or start a “bedtime routine” to dim your lights and quiet your music; Family Link’s tools for reducing kids’ screen time; Google WiFi’s support for “internet breaks;” and more.

Google is not the only company rethinking its role with regard to how much its technology should infiltrate our lives. Facebook, too, recently re-prioritized well-being over time spent on the site reading news, and saw its daily active users decline as a result.

But in Google’s case, some are cynical about the impact of the new tools – unlike Facebook’s changes, which the social network implemented itself, Google’s tools are opt-in. That means it’s up to users to take control over their own technology addictions, whether that’s their phone in general, or YouTube specifically. Google knows that the large majority won’t take the time to configure these settings, so it can pat itself on the back for its prioritization of digital well-being without taking a real hit to its bottom line.

Still, it’s notable that any major tech platform is doing this at all – and it’s at least a step in the right direction in terms of allowing people to reset their relationship with technology.

And in YouTube’s case, the option to “Take a Break” is at the very top of its Settings screen. If anyone ever heads into their settings for any reason, they’ll be sure to see it.

The new features are available in version 13.17 and higher of the YouTube mobile app on both iOS and Android, which is live now.

The changes were announced on May 8 during the I/O keynote, and will take a few days to roll out to all YouTube users. The “time watched profile,” however, will ship in the “coming months,” Google says.

11 May 2018

Wes Blackwell joins Scout Ventures to invest in early-stage, veteran-led startups

We haven’t written much about Scout Ventures, but the New York City-based firm has built up a big portfolio over nearly a decade of investing, with exits like Olapic (acquired by Monotype for $130 million) and Kanvas (acquired by TechCrunch’s parent company AOL).

And, it’s done all of this with just one full-time partner, Bradley C. Harrison — until recently, when the firm brought on Wes Blackwell as partner.

Blackwell is an advisor to Washington, D.C. startup studio DataTribe and previously led enterprise implementation, account management and tech support at LiveSafe. And like Harrison (who graduated from West Point and served in the Army for five years), Blackwell is a veteran of the U.S. Armed Forces, having spent more than a decade flying helicopters in the Navy.

“If you’d asked me five years ago if I would have partnered with an Annapolis Navy brat, the answer would have been an unequivocal no,” Harrison said. But he said that as he and Blackwell started spending more time together, he realized that their backgrounds were complementary: “It made all the sense in the world.”

And the Armed Forces background isn’t just another line in their bios — Harrison said that about half of the companies that Scout has invested in were founded by veterans.

“We don’t find a lot of competition in this stuff,” he explained. “It’s a pretty tight community.”

Scout typically writes initial checks of between $500,000 and $750,000 and aims to take a stake of around 10 percent. And while Harrison has been the only full-time partner until now, the firm has a team that also includes several venture partners and Principal Brendan Syron.

“Like any good investors, our thesis evolves over time,” Syron told me. He said the firm has become increasingly interested in frontier technology, with investments in its “core sectors” of AI, machine learning, autonomy and mobility, and “a big focus” on data and cybersecurity — an area where Blackwell has strong connections.

“Some of the folks in this industry, by their nature, they’re not very trusting,” Blackwell said. “So by virtue of Brad and I’s background and character, there’s a trust factor there.”

Blackwell has already made his first investment as part of Scout, leading a $1.5 million round in DeepSig, a startup working to improve wireless technology by applying deep learning to radio signal data.

11 May 2018

Cleveland offered $120 million in freebies to lure Amazon to the city

A Cleveland.com article detailed the lengths the small midwestern city would go to lure Amazon’s 50,000-person HQ2. In a document obtained by reporter Mark Naymik, we learn that Cleveland was ready to give over $120 million in free services to Amazon, including considerably reduced fares on Cleveland-area trains and buses.

The document, available here, focuses on the Northeast Ohio Areawide Coordinating Agency (NOACA)’s ideas regarding the key component in many of Amazon’s decisions – transportation.

Ohio has a budding but often contentious relationship with public transport. Cities like Columbus have no light rail, while Cincinnati just installed a rudimentary system. Cleveland, for its part, has a solid if underused system already in place.

That the city would offer discounts is not surprising. Cities were falling over themselves to win what many – including Amazon itself – would consider a costly incursion on the city chosen. However, given the perceived importance of having Amazon land in a small city – including growth of the startup and tech ecosystems – you can see why Cleveland would want to give away plenty of goodies.

Ultimately the American Midwest is at a crossroads. It could go either way, with small cities growing into vibrant artistic and creative hubs or those same cities falling into further decline. And the odds are stacked against them.

The biggest city, Chicago, is a transport, finance, and logistics hub and draws talent from smaller cities that orbit it. Further, “smart” cities like Pittsburgh and Ann Arbor steal the brightest students who go on to the coasts after graduation. As Richard Florida noted, the cities with a vibrant Creative Class are often the ones that succeed in this often rigged race and many cities just can’t generate any sort of creative ecosystem – cultural or otherwise – that could support a behemoth like Amazon landing in its midst.

What Cleveland did wasn’t wrong. However, it did work hard to keep the information secret, a consideration that could be dangerous. After all, as Maryland Transportation Secretary Pete K. Rahn told reporters: “Our statement for HQ2 is we’ll provide whatever is necessary to Amazon when they need it. For all practical purposes, it’s a blank check.”

11 May 2018

VC firm SparkLabs launches a security token to let anyone invest in its accelerator programs

Ardent crypto enthusiasts believe ICOs and cryptocurrencies will replace venture capital, but what if VC investors absorb crypto into their existing operations?

That’s the thesis that SparkLabs, a U.S.-Korean firm that runs multiple global funds and early-stage accelerator programs, is putting to the test with the introduction of a security token today. The firm said it is aiming to “democratize” investment opportunities by allowing anyone to buy into two of its accelerator programs via the token, essentially letting them become LP-like investors.

SparkLabs’ past successes include Siri (sold to Apple) and DeepMind (sold to Google), and it claims a portfolio of over 160 startups from more than 60 countries. Its accelerator program has graduated over 80 companies, 80 percent of which the firm said have gone on to raise funding at an average of $3.5 million.

The experiment covers two of SparkLabs’ new accelerator programs: a six-month IoT-focused initiative in the Korean smart city of Songdo, and Cultiv8, an accelerator for agriculture and food tech in Australia.

The firm has already raised capital for both initiatives — $5.6 million for Cultiv8 and $500,000 for the IoT program — but it is aiming to bring in at least $6 million from the token. That’s the minimum sale, while the hard cap is $30 million.

SparkLabs is working with two crypto platforms to handle the token sale in terms of KYC, operations and tapping into audiences. They are Argon Group, which has a community of crypto investors, and Swarm, a platform that connects retail investors with crypto opportunities in PE and VC funds.

ICOs and tokens are in a precarious position in the U.S. while the SEC conducts an investigation into companies that raised money via ICOs and investors who backed them. Wary of that, SparkLabs is primarily targeting non-U.S.-based investors, but it said that the token is open to accredited investors in the U.S.

Unlike traditional LPs, who must wait out the fund’s lifecycle to see financial returns unless they can arrange a secondary share sale, SparkLabs plans to introduce liquidity by listing the token on security exchanges in the future. That’ll make it tradeable. But the firm doesn’t advise U.S.-based investors to trade it since that is almost certain to violate the law.

Despite the legal grey areas, the firm is keen to experiment with a token, having backed a number of crypto-based companies via traditional equity investments since 2014 and also launched its SparkChain fund.

“We think the ICO market is here to stay, it’s an avenue for fundraising [that] we think will be complementary to Series B and Series C rounds,” SparkLabs co-founder and partner Jimmy Kim told TechCrunch in an interview. “As a fund, we believe in this space, and we thought we might as well dip our toes into the water and test it out.”

A number of companies from 500 Startups’ recent batch banded together to offer their own security token earlier this year, but SparkLabs may be the first established firm to adopt the strategy officially. Already it is seeing strong interest from crypto hedge funds and individuals who are looking to diversify their crypto assets, Kim said, but the theory is fairly untested, so it will be interesting to see how it is received by the wider market.

Certainly, it could be the first of many.

“We’re opening the doors to investors that we wouldn’t usually reach out to,” Kim explained. “If it works out well, we’ll obviously do it with other funds in the future.”

Note: The author owns a small amount of cryptocurrency. Enough to gain an understanding, not enough to change a life.

11 May 2018

Capital One acquires digital identity and fraud alert startup Confyrm

Capital One has acquired the San Francisco-based digital identity and fraud alert startup Confyrm, the company announced through a blog post on Thursday. The deal will bring Confyrm’s technology to the bank in order to help speed its development and implementation of consumer identity services at scale.

CEO Andrew Nash founded Confyrm in 2013, along with Dale Olds and Emma Lindley, with a vision of restoring trust in digital identities, he says.

“We recognized that despite an increasing reliance on digital identities, consumer trust in those identities continued to erode,” explains Nash. “We wanted to make a real difference to reducing online fraud and to make the internet a safer place for everyone engaged in it, but critically to do this without abusing customer privacy and storing personal data.”

The company created a system to offer early notifications of suspicious account activity, in order to mitigate the impact of fraud or account theft for identity providers and consumers alike. The system also uses privacy-enhancing mechanisms to protect the identities of the individual consumers and the event publishers.

For example, if a financial service was processing a password reset request but detected that the consumer’s email account had been taken over by a fraudster, it could stop the attack on the consumer’s account immediately. Meanwhile, the consumer could be alerted at the same time to take additional steps to secure their account.
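As a purely hypothetical illustration of that kind of flow (not Confyrm’s actual API), the Python sketch below shows a service consulting a shared feed of suspicious-activity events before honoring a password reset; the class names, event types and account identifiers are invented for the example.

```python
# Hypothetical sketch: a service checks a shared suspicious-activity feed before
# sending a password reset, blocking the reset and alerting the user if needed.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AccountEvent:
    account_id: str          # e.g., a hashed email address, to preserve privacy
    event_type: str          # e.g., "account_takeover_suspected"
    reported_at: datetime

class FraudEventFeed:
    """In-memory stand-in for a shared event-notification service."""
    def __init__(self):
        self._events = []

    def publish(self, event: AccountEvent):
        self._events.append(event)

    def recent_events(self, account_id: str, window: timedelta):
        cutoff = datetime.utcnow() - window
        return [e for e in self._events
                if e.account_id == account_id and e.reported_at >= cutoff]

def handle_password_reset(feed: FraudEventFeed, account_id: str) -> str:
    """Block the reset and alert the consumer if the email account looks compromised."""
    if feed.recent_events(account_id, window=timedelta(hours=24)):
        return "reset_blocked_and_user_alerted"
    return "reset_email_sent"

# Example: an email provider publishes a takeover alert; the bank's reset is blocked.
feed = FraudEventFeed()
feed.publish(AccountEvent("hash(alice@example.com)",
                          "account_takeover_suspected", datetime.utcnow()))
print(handle_password_reset(feed, "hash(alice@example.com)"))
```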

Before starting Confyrm, Nash had previously served as Director of Identity Services at Google, one of the largest providers of consumer identity services in the world, with over a billion consumer and enterprise accounts. He also served as Senior Director of Consumer Identity at PayPal, managing over 350 million identities validated for use in the financial services space, and was Director of Technologies at RSA Security.

So for Capital One, the acquisition of Confyrm isn’t just about the technology itself – it’s about bringing Nash on board.

Following the deal’s close, Nash will become Managing Vice President of Consumer Identity Services.

He says working at Capital One will help the team reach more consumers than a startup could on its own, allowing them to “massively increase the set of consumers that we can help to protect.”

It’s unclear how far along Confyrm was on actually bringing its product to consumers – its website touted a few pilot programs several years ago, but hadn’t been updated in some time. Some of the site’s text is still “Lorem Ipsum” filler text, in fact, and there’s been little coverage by press in the years since its founding. The company hadn’t talked much about its pilot partners, but the list was reported to include an internet email provider, mobile operator, financial services company, and multiple e-commerce sites. Likely, Capital One was the early partner, which is what later led to this acquisition.

On the National Institute of Standards and Technology (NIST) website, one of Confyrm’s pilot programs was listed, noting pilot partners included InCommon, Google, AOL, LinkedIn, and Microsoft. (AOL merged with Yahoo to form Oath, which also now owns TechCrunch.)

Deal terms regarding the Capital One acquisition were not shared, but Confyrm had raised $1.2 million, according to Crunchbase, which attributes the funding to a grant. (Another source states the grant was for $2.4 million, however.)

Acquiring an early stage startup isn’t rare for Capital One, which regularly picks up young companies to fuel its company with fresh talent and unique IP. Over the past several years, it’s acquired mobile savings startup BankOns, local business directory Bundle, budgeting app Level Money, design and development firm Monsoon, design firm Adaptive Path, price tracker Paribus (which launched at TechCrunch Disrupt), and secure container orchestration platform Critical Stack. 

There’s a video of Nash explaining how Confyrm works, here.