Year: 2019

02 Aug 2019

Babylon Health confirms $550M raise at $2B+ valuation to expand its AI-based health services

Babylon Health, the UK-based startup that has developed a number of AI-based health services, including a chatbot used by the UK’s National Health Service to help diagnose ailments, has confirmed a massive investment that it plans to use to expand its business to the US and Asia, and expand its R&D to diagnose more serious, chronic conditions. It has closed a $550 million round of funding, valuing Babylon Health at over $2 billion, it announced today.

This is the largest-ever fundraise in Europe or the US for digital health delivery, Babylon said.

“Our mission at Babylon is to put accessible and affordable healthcare into the hands of everyone on earth,” said Dr Ali Parsa, founder and CEO of Babylon, in a statement. “This investment will allow us to maximise the number of lives we touch across the world. We have a long way to go and a lot still to deliver. We are grateful to our investors, our partners and 1,500 brilliant Babylonians for allowing us to forge ahead with our mission. Chronic conditions are an increasing burden to affordability of healthcare across the globe. Our technology provides a solid base for a comprehensive solution and our scientists, engineers, and clinicians are excited to work on it. We have seen significant demand from partners across the US and Asia. While the burden of healthcare is global, the solutions have to be localised to meet the specific needs and culture of each country.”

Before today’s announcement, the investment — a Series C — had been the subject of a lot of leaks, with reports over recent days suggesting the investment was anywhere between $100 million and $500 million.

The round brings together a number of strategic and financial investors including PIF (Saudi Arabia’s Public Investment Fund); a large US-based health insurance company (which reports suggest to be Centene Corporation, although Babylon is not disclosing the name); Munich Re’s ERGO Fund; and returning investors Kinnevik and Vostok New Ventures.

This is a big leap for the company, which had raised more modest rounds in the past such as this $60 million round three years ago. Babylon said that $450 million has been secured already, with another $50 million agreed to be exercised at a later date, and the remainder getting closed “shortly.” (The PIF has been a prolific, if controversial, investor in a number of huge startups such as Uber and wider investment vehicles like SoftBank’s Vision Fund.)

We’re at a moment right now when it seems like a daily occurrence that a new company or service launches using AI to advance health.

But even within that bigger trend, Babylon has emerged as one of the key players. In addition to its work in the UK — which includes an NHS service that it offers to “take over” a user’s local GP relationship to diagnose minor ailments remotely, as well as a second-track Babylon Private paid tier that it’s built in partnership with private insurer Bupa — it says other partners include Prudential, Samsung and Telus.

The NHS deal is an interesting one: the state’s health service is thought of by many as a national treasure, but it’s been very hard hit by budget problems, the strain of an ageing and growing population, and what sometimes seems like a slow-release effort to remove some of its most important and reliable services and bring more privatisation into the mix.

Bringing in AI-based services that remove some of the overhead of people managing problems that machines can handle just as well is one way of taking some of that pressure off the system — or so the logic goes, at least. The idea is that by handling some of the smaller issues, it helps prioritise the more urgent and difficult problems for people and face-to-face meetings.

That additionally gives Babylon (and others in digital health) a big opportunity to break down some of the more persistent problems in healthcare, such as providing services in developing economies and remote regions: one of its big efforts alongside rollouts in mature markets like the UK and Canada has been a service in Rwanda to bring health services to digital platforms for the first time.

Babylon has been growing and says it delivers 4,000 clinical consultations each day, or one patient interaction every 10 seconds. It says that it now covers 4.3 million people worldwide, with more than 1.2 million digital consultations completed to date and more than 160,000 five-star ratings for its appointments.

That is the kind of size and potential that has interested investors.

02 Aug 2019

Digital identity startup Yoti raises additional £8M at a valuation of £82M

Yoti, the London startup offering a digital identity platform and app that lets you prove who you say you are when accessing services or making age-verified purchases, has raised £8 million in additional funding.

Backing the round are unnamed private investors, Yoti employees, and Robin Tombs, the startup’s co-founder and CEO, who previously founded and exited online gambling company Gamesys. I’m told that the startup has had around £65 million in investment in total since being founded in 2014, the majority of which has been made by Tombs and another Yoti co-founder, Noel Hayden.

Notably, Yoti says the injection of capital comes with a new valuation of £82 million, up from £40 million when Yoti raised £8 million about a year and a half ago. The caveat being, of course, that Tombs and Hayden have effectively helped to set that valuation from both sides of the table.

“The current identity system is broken, outdated and insecure; we still have to show physical identity documents simply to prove who we are,” says Tombs, explaining the problem Yoti has set out to solve. “But this results in us sharing an excessive amount of personal information, putting us at risk of identity fraud. Additionally, millions of ID documents are lost and stolen every year, and our online accounts are vulnerable to data hacks”.

Launched in November 2017, Yoti’s solution includes the Yoti digital identity app, which claims over 4.7 million installs. It essentially replaces a traditional ID card or other paper proof of identity. Yoti also has various partnerships that see organisations use its ID verification technology within their own apps and websites.

The idea is that Yoti can be used to prove your age on nights out, to check out faster when buying age restricted items at a store, for safer online dating and other social interactions online, or for accessing various business or government services.

The underlying system is granular, too: a company or organisation can ask to verify only certain aspects of your identity that you choose to share on a need-to-know basis.

“At Yoti we believe in putting people in control to share less personal information and enabling businesses to know who they are dealing with using less, higher quality verified data,” says Tombs. “For instance, someone could use Yoti to prove their age to buy age-restricted goods, but only share that they are 18+ to the business. This helps protect the individual’s personal data and privacy, whilst giving the company the details they need to be compliant. Everyone wins”.
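The need-to-know model Tombs describes can be sketched in a few lines. To be clear, this is a hypothetical illustration, not Yoti’s actual API: a wallet holds verified attributes, and the relying party receives only the derived claim it asks for (such as “over 18”), never the raw date of birth.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VerifiedIdentity:
    """A hypothetical identity wallet holding verified attributes."""
    full_name: str
    date_of_birth: date

    def derived_claim(self, claim: str) -> dict:
        """Answer a need-to-know question without exposing the raw data."""
        if claim == "over_18":
            today = date.today()
            # Compute age, subtracting one if this year's birthday hasn't passed.
            age = today.year - self.date_of_birth.year - (
                (today.month, today.day)
                < (self.date_of_birth.month, self.date_of_birth.day)
            )
            return {"over_18": age >= 18}
        raise ValueError(f"unsupported claim: {claim}")

wallet = VerifiedIdentity("Jane Doe", date(1990, 5, 1))
# The retailer sees only the boolean claim, not the name or birth date.
print(wallet.derived_claim("over_18"))  # {'over_18': True}
```

The point of the sketch is the asymmetry: the business gets a compliant yes/no answer while the individual’s personal data stays in the wallet.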

Yoti can also potentially be used to help children be safer online by reducing the number of fake accounts and ensuring age guidelines are more strictly adhered to.

“As a parent, it’s very concerning just how easy it is for young kids to create social media accounts and access explicit age-restricted content online unchecked,” he says. “It’s too easy to create a fake profile online and give false details, so we can’t be confident about who we are meeting online”.

More broadly, Tombs argues that a digital identity platform can also support social inclusion for people who otherwise have no form of identity at all. “Over 1.1 billion people around the world don’t have any form of identification; leaving them socially excluded, left behind and unable to access essential services. We want to help fix these issues. We believe everyone, no matter who they are or where they’re from, deserves a safe way of proving their identity,” he says.

To that end, Yoti has formed a variety of partnerships spanning retail, government, travel and social media. These include Heathrow Airport, which is working with Yoti to explore biometric travel for passengers; NCR, which is using Yoti to improve age-verification at self-checkouts; and Yubo, which is deploying Yoti to verify the age of users and to “safeguard” young people online.

Last year, Yoti was selected by the Government of Jersey as its digital identity provider. This, we are told, has seen 10% of the Jersey adult population use Yoti.

Meanwhile, Yoti says it has developed a “private and secure” browser-based age verification solution called ProveMyAge, as it looks to cash in on the U.K.’s Digital Economy Act. The product is designed to help adult websites comply with the age verification requirements of the legislation, which are set to come into force later this year.

02 Aug 2019

Africa Roundup: Canal+ acquires ROK, Flutterwave and Alipay partner, OPay raises $50M

In July, French television company Canal+ acquired the ROK film studio from VOD company IROKOtv.

Canal+ would not disclose the acquisition price, but confirmed there was a cash component of the deal.

Founded by Jason Njoku in 2010 — and backed by $45 million in VC — IROKOtv boasts the world’s largest online catalog of Nollywood: a Nigerian movie genre that has become Africa’s de facto film industry and one of the largest globally (by production volume).

Based in Lagos, ROK film studios was incubated to create original content for IROKOtv, which can be accessed digitally anywhere in the world.

ROK studio founder and producer Mary Njoku will stay on as director general under the Canal+ acquisition.

With the ROK deal, Canal+ looks to bring the Nollywood production ethos to other African countries and regions. The new organization plans to send Nigerian production teams to French speaking African countries starting this year.

The ability to reach a larger advertising network of African consumers on the continent and internationally was a big acquisition play for Canal+.

San Francisco and Lagos-based fintech startup Flutterwave partnered with Chinese e-commerce company Alibaba’s Alipay to offer digital payments between Africa and China.

Flutterwave is a Nigerian-founded B2B payments service (primarily) for companies in Africa to pay other companies on the continent and abroad.

Alipay is Alibaba’s digital wallet and payments platform. In 2013, Alipay surpassed PayPal in payments volume and currently claims a global network of more than 1 billion active users, per Alibaba’s latest earnings report.

A large portion of Alipay’s network is in China, which makes the Flutterwave integration significant to capturing payments activity around the estimated $200 billion in China-Africa trade.

Flutterwave will earn revenue from the partnership by charging its standard 3.8% on international transactions. The company currently has more than 60,000 merchants on its platform, according to CEO Olugbenga Agboola.
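As a back-of-envelope check of that revenue model, the stated 3.8% fee can be applied to a hypothetical cross-border payment (the transaction figure below is illustrative, not from the article):

```python
# Flutterwave's standard fee on international transactions, per the article.
FEE_RATE = 0.038

def flutterwave_revenue(transaction_usd: float) -> float:
    """Fee earned on a single international transaction, rounded to cents."""
    return round(transaction_usd * FEE_RATE, 2)

# A hypothetical $1,000 China-Africa payment yields $38.00 in fees.
print(flutterwave_revenue(1000.00))  # 38.0
```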

In a recent Extra Crunch feature, TechCrunch tracked Flutterwave as one of several Africa-focused fintech companies that have established headquarters in San Francisco and operations in Africa to tap the best of both worlds in VC, developers, clients and digital finance.

Flutterwave’s Alipay collaboration also tracks a trend of increased presence of Chinese companies in African tech. July saw Chinese-owned Opera raise $50 million in venture funding to support its growing West African digital commercial network, which includes browser, payments and ride-hail services. The funds are predominantly for OPay, an Opera-owned, Africa-focused mobile payments startup.

Lead investors included Sequoia China, IDG Capital and Source Code Capital. Opera also joined the round in the payments venture it created.

OPay will use the capital (which wasn’t given a stage designation) primarily to grow its digital finance business in Nigeria — Africa’s most populous nation and largest economy.

OPay will also support Opera’s growing commercial network in Nigeria, which includes motorcycle ride-hail app ORide and OFood delivery service.

Opera founded OPay in 2018 on the popularity of its internet browser. Opera’s web browser has ranked No. 2 in usage in Africa, after Chrome, for the last four years.

July also saw transit tech news in East Africa. Global ride-hail startup InDriver launched its app-based service in Kampala (Uganda), bringing its Africa operating countries to four: Kenya, Uganda, South Africa and Tanzania. InDriver’s mobile app allows passengers to name their own fare for nearby drivers to accept, decline or counter.

Nairobi-based internet hardware and service startup BRCK and Egyptian ride-hail venture Swvl are partnering to bring Wi-Fi and online entertainment to on-demand bus service in Kenya.

BRCK is installing its routers on Swvl vehicles in Kenya to run its Moja service, which offers free public Wi-Fi — internet, music and entertainment — subsidized by commercial partners.

Founded in Cairo in 2017, Swvl is a mass transit service that has positioned itself as an Uber for shared buses.

The company raised a $42 million Series B round in June, with intent to expand in Africa, Swvl CEO Mostafa Kandil said in an interview.

BRCK and Swvl wouldn’t confirm plans on expanding their mobile internet partnership to additional countries outside of Kenya.

Africa’s ride-hail markets are becoming a multi-wheeled and global affair, making the continent home to a number of fresh mobility use cases, including the BRCK and Swvl Wi-Fi partnership.

More Africa-related stories @TechCrunch

African tech around the ‘net

02 Aug 2019

UrbanClap, India’s largest home services startup, raises $75M

UrbanClap, a startup that offers home services across India and UAE, has raised $75 million to expand its business.

The Series E round for the four-and-a-half-year-old Gurgaon-based startup was led by Tiger Global. Existing investors Steadview Capital, which led the startup’s Series D round, and Vy Capital also participated in the round. The startup has raised about $185 million to date, according to Crunchbase.

The financing round was split into two parts — a primary round which resulted in a share subscription by the aforementioned investors and a secondary share sale by some of its early backers, the startup said in a brief statement.

Through its platform, UrbanClap matches service people such as cleaners, repair staff and beauticians with customers across 10 cities in India, as well as Dubai and Abu Dhabi. As of early this year, the startup was supporting 15,000 “micro-franchisees” with around 450,000 transactions taking place each month, co-founder and CEO Abhiraj Bhal told TechCrunch.

Bhal said that UrbanClap helps offline service workers in India, who have traditionally relied on getting work through middlemen such as local stores or word-of-mouth networks, to find more work. And they earn more, too. UrbanClap offers a more direct model, with workers keeping 80% of the cost of their jobs. That, Bhal said, means workers can earn multiples more and manage their own working hours.

“The UrbanClap model really allows them to become service entrepreneurs. Their earnings will shoot up two or three-fold, and it isn’t uncommon to see it rise as much as 8X — it’s a life-changing experience,” he said.

In recent years, UrbanClap has also begun offering training, credit and basic banking services through its platform. Bhal said that around 20-25% of applicants are accepted onto the platform, a decision based on in-person meetings, background and criminal checks, as well as a “skills” test. Workers are encouraged to work exclusively — though it isn’t a requirement — and they wear UrbanClap outfits and represent the brand with customers.

More to follow…

02 Aug 2019

Apple suspends Siri response grading in response to privacy concerns

In response to concerns raised by a Guardian story last week over how recordings of Siri queries are used for quality control, Apple is suspending the program worldwide. Apple says it will review the process it uses, called grading, to determine whether Siri is hearing queries correctly, or being invoked by mistake.

In addition, it will be issuing a software update in the future that will let Siri users choose whether they participate in the grading process or not. 

The Guardian story from Alex Hern quoted extensively from a contractor at a firm hired by Apple to perform part of a Siri quality control process it calls grading. This takes snippets of audio, which are not connected to names or IDs of individuals, and has contractors listen to them to judge whether Siri is accurately hearing them — and whether Siri may have been invoked by mistake.

“We are committed to delivering a great Siri experience while protecting user privacy,” Apple said in a statement to TechCrunch. “While we conduct a thorough review, we are suspending Siri grading globally. Additionally, as part of a future software update, users will have the ability to choose to participate in grading.”

The contractor claimed that the audio snippets could contain personal information, audio of people having sex and other details like finances that could be identifiable, regardless of the process Apple uses to anonymize the records. 

They also questioned how clear it was to users that their raw audio snippets may be sent to contractors to evaluate in order to help make Siri work better. When this story broke, I dipped into Apple’s terms of service myself and, though there are mentions of quality control for Siri and data being shared, I found that it did fall short of explicitly and plainly making it clear that live recordings, even short ones, are used in the process and may be transmitted and listened to. 

The figures Apple has cited put the number of queries that may be selected for grading at under 1 percent of daily requests.
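A rough sketch of what sub-1% sampling looks like in practice. The uniform-random selection and the exact 1% rate are assumptions for illustration; Apple hasn’t described its actual selection mechanism.

```python
import random

# Illustrative grading rate: fewer than 1 in 100 requests get reviewed.
GRADING_RATE = 0.01

def select_for_grading(requests: list, rate: float = GRADING_RATE,
                       seed: int = 0) -> list:
    """Uniformly sample roughly `rate` of the requests for human review."""
    rng = random.Random(seed)  # seeded for a reproducible illustration
    return [r for r in requests if rng.random() < rate]

day = [f"query-{i}" for i in range(100_000)]
sampled = select_for_grading(day)
print(len(sampled) / len(day))  # roughly 0.01
```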

The process of taking a snippet of audio a few seconds long and sending it to either internal personnel or contractors to evaluate is, essentially, industry standard. Audio recordings of requests made to Amazon and Google assistants are also reviewed by humans. 

An explicit way for users to agree to the audio being used this way is table stakes in this kind of business. I’m glad Apple says it will be adding one. 

It also aligns better with the way that Apple handles other data like app performance data that can be used by developers to identify and fix bugs in their software. Currently, when you set up your iPhone, you must give Apple permission to transmit that data. 

Apple has embarked on a long campaign of positioning itself as the most privacy conscious of the major mobile firms and therefore holds a heavier burden when it comes to standards. Doing as much as the other major companies do when it comes to things like using user data for quality control and service improvements cannot be enough if it wants to maintain the stance and the market edge that it brings along with it.

02 Aug 2019

Dasha AI is calling so you don’t have to

While you’d be hard pressed to find any startup not brimming with confidence over the disruptive idea they’re chasing, it’s not often you come across a young company as calmly convinced it’s engineering the future as Dasha AI.

The team is building a platform for designing human-like voice interactions to automate business processes. Put simply, it’s using AI to make machine voices a whole lot less robotic.

“What we definitely know is this will definitely happen,” says CEO and co-founder Vladislav Chernyshov. “Sooner or later the conversational AI/voice AI will replace people everywhere where the technology will allow. And it’s better for us to be the first mover than the last in this field.”

“In 2018 in the US alone there were 30 million people doing some kind of repetitive tasks over the phone. We can automate these jobs now or we are going to be able to automate it in two years,” he goes on. “If you multiply it with Europe and the massive call centers in India, Pakistan and the Philippines you will probably have something like close to 120M people worldwide… and they are all subject for disruption, potentially.”

The New York based startup has been operating in relative stealth up to now. But it’s breaking cover to talk to TechCrunch — announcing a $2M seed round, led by RTP Ventures and RTP Global: An early stage investor that’s backed the likes of Datadog and RingCentral. RTP’s venture arm, also based in NY, writes on its website that it prefers engineer-founded companies — that “solve big problems with technology”. “We like technology, not gimmicks,” the fund warns with added emphasis.

Dasha’s core tech right now includes what Chernyshov describes as “a human-level, voice-first conversation modelling engine”; a hybrid text-to-speech engine which he says enables it to model speech disfluencies (aka, the ums and ahs, pitch changes etc that characterize human chatter); plus “a fast and accurate” real-time voice activity detection algorithm which detects speech in under 100 milliseconds, meaning the AI can turn-take and handle interruptions in the conversation flow. The platform can also detect a caller’s gender — a feature that can be useful for healthcare use-cases, for example.
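Dasha’s actual detection algorithm isn’t public, but a minimal energy-threshold voice activity detector gives a feel for the decision a real-time VAD must make within that 100-millisecond budget. This naive sketch is for intuition only; production systems use far more sophisticated models.

```python
def frame_energy(samples: list) -> float:
    """Mean squared amplitude of an audio frame."""
    return sum(s * s for s in samples) / len(samples)

def is_speech(samples: list, threshold: float = 0.01) -> bool:
    """Crude speech/silence decision on one frame.

    At 16 kHz, a 100 ms decision window holds ~1600 samples, so even
    this naive check runs well inside the latency budget.
    """
    return frame_energy(samples) > threshold

silence = [0.001] * 1600      # near-zero amplitude: no speech
voiced = [0.5, -0.4] * 800    # sustained amplitude: speech-like
print(is_speech(silence), is_speech(voiced))  # False True
```

The interesting engineering is in the gap between this and reality: distinguishing speech from background noise, keyboard clatter or music requires spectral features and learned models, not a single energy threshold.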

Another component Chernyshov flags is “an end-to-end pipeline for semi-supervised learning” — so it can retrain the models in real time “and fix mistakes as they go” — until Dasha hits the claimed “human-level” conversational capability for each business process niche. (To be clear, the AI cannot adapt its speech to an interlocutor in real-time — as human speakers naturally shift their accents closer to bridge any dialect gap — but Chernyshov suggests it’s on the roadmap.)

“For instance, we can start with 70% correct conversations and then gradually improve the model up to say 95% of correct conversations,” he says of the learning element, though he admits there are a lot of variables that can impact error rates — not least the call environment itself. Even cutting edge AI is going to struggle with a bad line.

The platform also has an open API so customers can plug the conversation AI into their existing systems — be it telephony, Salesforce software or a developer environment, such as Microsoft Visual Studio.

Currently they’re focused on English, though Chernyshov says the architecture is “basically language agnostic” — but does require “a big amount of data”.

The next step will be to open up the dev platform to enterprise customers, beyond the initial 20 beta testers, which include companies in the banking, healthcare and insurance sectors — with a release slated for later this year or Q1 2020.

Test use-cases so far include banks using the conversation engine for brand loyalty management to run customer satisfaction surveys that can turnaround negative feedback by fast-tracking a response to a bad rating — by providing (human) customer support agents with an automated categorization of the complaint so they can follow up more quickly. “This usually leads to a wow effect,” says Chernyshov.

Ultimately, he believes there will be two or three major AI platforms globally providing businesses with an automated, customizable conversational layer — sweeping away the patchwork of chatbots currently filling in the gap. And of course Dasha intends their ‘Digital Assistant Super Human Alike’ to be one of those few.

“There is clearly no platform [yet],” he says. “Five years from now this will sound very weird that all companies now are trying to build something. Because in five years it will be obvious — why do you need all this stuff? Just take Dasha and build what you want.”

“This reminds me of the situation in the 1980s when it was obvious that the personal computers are here to stay because they give you an unfair competitive advantage,” he continues. “All large enterprise customers all over the world… were building their own operating systems, they were writing software from scratch, constantly reinventing the wheel just in order to be able to create this spreadsheet for their accountants.

“And then Microsoft with MS-DOS came in… and everything else is history.”

That’s not all they’re building, either. Dasha’s seed financing will be put towards launching a consumer-facing product atop its b2b platform to automate the screening of recorded message robocalls. So, basically, they’re building a robot assistant that can talk to — and put off — other machines on humans’ behalf.

Which does kind of suggest the AI-fuelled future will entail an awful lot of robots talking to each other…

Chernyshov says this b2c call screening app will most likely be free. But then if your core tech looks set to massively accelerate a non-human caller phenomenon that many consumers already see as a terrible plague on their time and mind then providing free relief — in the form of a counter AI — seems the very least you should do.

Not that Dasha can be accused of causing the robocaller plague, of course. Recorded messages hooked up to call systems have been spamming people with unsolicited calls for far longer than the startup has existed.

Dasha’s PR notes Americans were hit with 26.3BN robocalls in 2018 alone — up “a whopping” 46% on 2017.

Its conversation engine, meanwhile, has only made some 3M calls to date, clocking its first call with a human in January 2017. But the goal from here on in is to scale fast. “We plan to aggressively grow the company and the technology so we can continue to provide the best voice conversational AI to a market which we estimate to exceed $30BN worldwide,” runs a line from its PR.

After the developer platform launch, Chernyshov says the next step will be to open up access to business process owners by letting them automate existing call workflows without needing to be able to code (they’ll just need an analytic grasp of the process, he says).

Later — pegged for 2022 on the current roadmap — will be the launch of “the platform with zero learning curve”, as he puts it. “You will teach Dasha new models just like typing in a natural language and teaching it like you can teach any new team member on your team,” he explains. “Adding a new case will actually look like a word editor — when you’re just describing how you want this AI to work.”

His prediction is that a majority — circa 60% — of all major cases that businesses face — “like dispatching, like probably upsales, cross sales, some kind of support etc, all those cases” — will be able to be automated “just like typing in a natural language”.

So if Dasha’s AI-fuelled vision of voice-based business process automation comes to fruition then humans getting orders of magnitude more calls from machines looks inevitable — as machine learning supercharges artificial speech by making it sound slicker, act smarter and seem, well, almost human.

But perhaps a savvier generation of voice AIs will also help manage the ‘robocaller’ plague by offering advanced call screening? And as non-human voice tech marches on from dumb recorded messages to chatbot-style AIs running on scripted rails to — as Dasha pitches it — fully responsive, emoting, even emotion-sensitive conversation engines that can slip right under the human radar maybe the robocaller problem will eat itself? I mean, if you didn’t even realize you were talking to a robot how are you going to get annoyed about it?

Dasha claims 96.3% of the people who talk to its AI “think it’s human”, though it’s not clear what sample size the claim is based on. (To my ear there are definite ‘tells’ in the current demos on its website. But in a cold-call scenario it’s not hard to imagine the AI passing, if someone’s not paying much attention.)

The alternative scenario, in a future infested with unsolicited machine calls, is that all smartphone OSes add kill switches, such as the one in iOS 13 — which lets people silence calls from unknown numbers.

And/or more humans simply never pick up phone calls unless they know who’s on the end of the line.

So it’s really doubly savvy of Dasha to create an AI capable of managing robot calls — meaning it’s building its own fallback — a piece of software willing to chat to its AI in future, even if actual humans refuse.

Dasha’s robocall screener app, which is slated for release in early 2020, will also be spammer-agnostic — in that it’ll be able to handle and divert human salespeople too, as well as robots. After all, a spammer is a spammer.

“Probably it is the time for somebody to step in and ‘don’t be evil’,” says Chernyshov, echoing Google’s old motto, albeit perhaps not entirely reassuringly given the phrase’s lapsed history — as we talk about the team’s approach to ecosystem development and how machine-to-machine chat might overtake human voice calls.

“At some point in the future we will be talking to various robots much more than we probably talk to each other — because you will have some kind of human-like robots at your house,” he predicts. “Your doctor, gardener, warehouse worker, they all will be robots at some point.”

The logic at work here is that if resistance to an AI-powered Cambrian Explosion of machine speech is futile, it’s better to be at the cutting edge, building the most human-like robots — and making the robots at least sound like they care.

Dasha’s conversational quirks certainly can’t be called a gimmick. Even if the team’s close attention to mimicking the vocal flourishes of human speech — the disfluencies, the ums and ahs, the pitch and tonal changes for emphasis and emotion — might seem so at first airing.

In one of the demos on its website you can hear a clip of a very chipper-sounding male voice, who identifies himself as “John from Acme Dental”, taking an appointment call from a female (human), and smoothly dealing with multiple interruptions and time/date changes as she changes her mind. Before, finally, dealing with a flat cancelation.

A human receptionist might well have got mad that the caller essentially just wasted their time. Not John, though. Oh no. He ends the call as cheerily as he began, signing off with an emphatic: “Thank you! And have a really nice day. Bye!”

If the ultimate goal is Turing Test levels of realism in artificial speech — i.e. a conversation engine so human-like it can pass as human to a human ear — you do have to be able to reproduce, with precision timing, the verbal baggage that’s wrapped around everything humans say to each other.

This tonal layer does essential emotional labor in the business of communication, shading and highlighting words in a way that can adapt or even entirely transform their meaning. It’s an integral part of how we communicate. And thus a common stumbling block for robots.

So if the mission is to power a revolution in artificial speech that humans won’t hate and reject then engineering full spectrum nuance is just as important a piece of work as having an amazing speech recognition engine. A chatbot that can’t do all that is really the gimmick.

Chernyshov claims Dasha’s conversation engine is “at least several times better and more complex than [Google] Dialogflow, [Amazon] Lex, [Microsoft] Luis or [IBM] Watson”, dropping a laundry list of rival speech engines into the conversation.

He argues none are on a par with what Dasha is being designed to do.

The difference is the “voice-first modelling engine”. “All those [rival engines] were built from scratch with a focus on chatbots — on text,” he says, couching modelling voice conversation “on a human level” as much more complex than the more limited chatbot-approach — and hence what makes Dasha special and superior.

“Imagination is the limit. What we are trying to build is an ultimate voice conversation AI platform so you can model any kind of voice interaction between two or more human beings.”

Google did demo its own stuttering voice AI — Duplex — last year, when it also took flak for a public demo in which it appeared not to have told restaurant staff up front they were going to be talking to a robot.

Chernyshov isn’t worried about Duplex, though, saying it’s a product, not a platform.

“Google recently tried to headhunt one of our developers,” he adds, pausing for effect. “But they failed.”

He says Dasha’s engineering staff make up more than half (28) its total headcount (48), and include two doctorates of science; three PhDs; five PhD students; and ten masters of science in computer science.

It has an R&D office in Russia, which Chernyshov says helps make the funding go further.

“More than 16 people, including myself, are ACM ICPC finalists or semi finalists,” he adds — likening the competition to “an Olympic game but for programmers”. A recent hire — chief research scientist, Dr Alexander Dyakonov — is both a doctor of science professor and former Kaggle No.1 GrandMaster in machine learning. So with in-house AI talent like that you can see why Google, uh, came calling…


But why not have Dasha ID itself as a robot by default? On that Chernyshov says the platform is flexible — which means disclosure can be added. But in markets where it isn’t a legal requirement the door is being left open for ‘John’ to slip cheerily by. Blade Runner, here we come.

The team’s driving conviction is that emphasis on modelling human-like speech will, down the line, allow their AI to deliver universally fluid and natural machine-human speech interactions which in turn open up all sorts of expansive and powerful possibilities for embeddable next-gen voice interfaces. Ones that are much more interesting than the current crop of gadget talkies.

This is where you could raid sci-fi/pop culture for inspiration. Such as KITT, the dryly witty talking car from the 1980s TV series Knight Rider. Or, to throw in a British TV reference, Holly, the self-deprecating yet sardonic human-faced computer in Red Dwarf. (Or indeed Kryten, the guilt-ridden android butler.) Chernyshov’s suggestion is to imagine Dasha embedded in a Boston Dynamics robot. But surely no one wants to hear those crawling nightmares scream…

Dasha’s five-year+ roadmap includes the eyebrow-raising ambition to evolve the technology to achieve “a general conversational AI”. “This is a science fiction at this point. It’s a general conversational AI, and only at this point you will be able to pass the whole Turing Test,” he says of that aim.

“Because we have a human level speech recognition, we have human level speech synthesis, we have generative non-rule based behavior, and this is all the parts of this general conversational AI. And I think that we can we can — and scientific society — we can achieve this together in like 2024 or something like that.

“Then the next step, in 2025, this is like autonomous AI — embeddable in any device or a robot. And hopefully by 2025 these devices will be available on the market.”

Of course the team is still dreaming distance away from that AI wonderland/dystopia (depending on your perspective) — even if it’s date-stamped on the roadmap.

But if a conversational engine ends up in command of the full range of human speech — quirks, quibbles and all — then designing a voice AI may come to be thought of as akin to designing a TV character or cartoon personality. So very far from what we currently associate with the word ‘robotic’. (And wouldn’t it be funny if the term ‘robotic’ came to mean ‘hyper entertaining’ or even ‘especially empathetic’ thanks to advances in AI.)

Let’s not get carried away though.

In the meanwhile, there are ‘uncanny valley’ pitfalls of speech disconnect to navigate if the tone being (artificially) struck hits a false note. (And, on that front, if you didn’t know ‘John from Acme Dental’ was a robot you’d be forgiven for misreading his chipper sign off to a total time waster as pure sarcasm. But an AI can’t appreciate irony. Not yet anyway.)

Nor can robots appreciate the difference between ethical and unethical verbal communication they’re being instructed to carry out. Sales calls can easily cross the line into spam. And what about even more dystopic uses for a conversation engine that’s so slick it can convince the vast majority of people it’s human — like fraud, identity theft, even election interference… the potential misuses could be terrible and scale endlessly.

Although if you straight out ask Dasha whether it’s a robot Chernyshov says it has been programmed to confess to being artificial. So it won’t tell you a barefaced lie.


How will the team prevent problematic uses of such a powerful technology?

“We have an ethics framework and when we will be releasing the platform we will implement a real-time monitoring system that will monitor potential abuse or scams, and also it will ensure people are not being called too often,” he says. “This is very important. That we understand that this kind of technology can be potentially probably dangerous.”

“At the first stage we are not going to release it to all the public. We are going to release it in a closed alpha or beta. And we will be curating the companies that are going in to explore all the possible problems and prevent them from being massive problems,” he adds. “Our machine learning team are developing those algorithms for detecting abuse, spam and other use cases that we would like to prevent.”

There’s also the issue of verbal ‘deepfakes’ to consider. Especially as Chernyshov suggests the platform will, in time, support cloning a voiceprint for use in the conversation — opening the door to making fake calls in someone else’s voice. Which sounds like a dream come true for scammers of all stripes. Or a way to really supercharge your top performing salesperson.

Safe to say, the counter technologies — and thoughtful regulation — are going to be very important.

There’s little doubt that AI will be regulated. In Europe policymakers have tasked themselves with coming up with a framework for ethical AI. And in the coming years policymakers in many countries will be trying to figure out how to put guardrails on a technology class that, in the consumer sphere, has already demonstrated its wrecking-ball potential — with the automated acceleration of spam, misinformation and political disinformation on social media platforms.

“We have to understand that at some point this kind of technologies will be definitely regulated by the state all over the world. And we as a platform we must comply with all of these requirements,” agrees Chernyshov, suggesting machine learning will also be able to identify whether a speaker is human or not — and that an official caller status could be baked into a telephony protocol so people aren’t left in the dark on the ‘bot or not’ question. 

“It should be human-friendly. Don’t be evil, right?”

Asked whether he considers what will happen to the people working in call centers whose jobs will be disrupted by AI, Chernyshov is quick with the stock answer — that new technologies create jobs too, saying that’s been true right throughout human history. Though he concedes there may be a lag — while the old world catches up to the new.

Time and tide wait for no human, even when the change sounds increasingly like we do.

02 Aug 2019

Lux Capital just closed on a whopping $1 billion in capital, doubling the amount of money it manages

When founders think about the venture firms most likely to invest in space or robotics or other bleeding edge technologies, a handful of firms tend to jump immediately to mind.

One of these is Lux Capital, a venture capital firm that has offices in New York and Menlo Park, Ca., and whose bets include Zoox, the robotics company that’s trying to pioneer autonomous mobility as-a-service; Bright Machines, a manufacturing startup that aims to eliminate manual labor from manufacturing electronic devices; and AirMap, an airspace intelligence platform for drones.

While one might argue whether Lux has bolder ambitions than its venture competitors, its consistent messaging — it says it invests at the “outermost edges of what is possible” — has enabled it to carve space for itself in an increasingly crowded market of investors.

It also just helped the firm secure $1.1 billion in capital commitments across two funds, including a $500 million early-stage fund and a separate $550 million opportunity fund that it will use to support its breakout investments.

Fortune reported on the two funds earlier today.

Even during a time when billion-dollar funds have become routine, the amount of money is notable. Lux had last closed an early-stage fund with $400 million in 2017, a fund that brought its total assets under management to $1.1 billion. That was across its then 17-year history.

The firm, now 19 years old, just doubled that amount.

No doubt the sale of the surgical robotics company Auris Health helped toward that end. Lux was part of the company’s $34.4 million Series A round in 2014 (and part of subsequent rounds); presumably, it saw a nice return when Auris was acquired for $3.4 billion in cash by healthcare giant Johnson & Johnson in February.

Other portfolio companies, like Desktop Metal, a four-year-old company that designs and markets metal 3D printing systems, have meanwhile seen their valuations soar, even if they haven’t sold or gone public.

As part of the new fund, Lux has brought aboard Deena Shakir as a partner. Shakir was formerly an investor with Alphabet’s venture arm, GV.

Earlier this year, another of Lux’s partners, Renata Quintini, transitioned to a role as venture partner as she raises a venture fund with fellow venture capitalist Roseanne Wincek, long of IVP.

02 Aug 2019

Clothing marketplace Poshmark confirms data breach

Poshmark, an online marketplace for buying and selling clothes, has reported a data breach.

The company said in a brief blog post that user profile information, including names and usernames, gender and city data was taken by an “unauthorized third party.” Email addresses, size preferences, and scrambled passwords were also taken.

Poshmark did not say which hashing algorithm was used to scramble the passwords. Some algorithms are stronger than others.
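As a rough illustration of why the choice of algorithm matters: a fast, general-purpose hash like SHA-256 can be brute-forced offline at enormous speed, while a deliberately slow, salted key-derivation function such as PBKDF2 makes each guess expensive. A minimal Python sketch (illustrative only; nothing here reflects how Poshmark actually stores passwords):

```python
import hashlib

password = b"hunter2"
salt = b"per-user-random-salt"

# Fast, unsalted hash: trivial for attackers to brute-force offline,
# and identical passwords always produce identical digests.
weak = hashlib.sha256(password).hexdigest()

# Slow, salted key-derivation function: 100,000 SHA-256 rounds per guess,
# and the per-user salt means identical passwords hash differently.
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

print(weak[:16], strong.hex()[:16])
```

Stolen hashes from a slow, salted scheme (PBKDF2, bcrypt, scrypt, Argon2) buy users time to change their passwords; stolen unsalted SHA-1 or MD5 hashes often crack in minutes.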

The company also said “internal” preferences, such as email and push notifications, were taken.

Poshmark said it retained an outside security firm but did not say which company. It also said it has rolled out “enhanced security measures” without elaborating. We’ve contacted Poshmark for answers, but did not immediately hear back.

Financial data and physical address information was not compromised, the company said.

Poshmark has upwards of 50 million users.


01 Aug 2019

The galaxy is not flat, researchers show in new 3D model of the Milky Way

Six years of tracking a special class of star have yielded a new and improved 3D model of our galaxy, based on direct observation rather than theoretical frameworks. And although no one ever really thought the Milky Way was flat flat, the curves at its edges have now been characterized in better detail than ever before.

Researchers at the University of Warsaw in Poland took on this challenge some time ago with the desire to observe the shape of the galaxy directly rather than indirectly; although we have a good idea of the shape, that idea is based on models that involve assumptions or observations of other galaxies.

Imagine if you wanted to know the distance to the store, but the only way you could tell was by looking out the window and observing how long it took for someone to get there and back; by calculating their average walking speed you can get a general idea. Sure, it works to a point — but wouldn’t it be nice to just lean out the window and see exactly how far it is?

The trouble in astronomy is it can be incredibly difficult to make such direct observations with our present tools, so we rely on indirect ones like timing people above, something that can be helpful and even accurate but is no substitute for the real thing. Fortunately, the researchers found that a certain type of star has special qualities that allow us to tell exactly how far away it is.

“Cepheid variable stars” are young stellar bodies that burn far brighter than our own sun, but also pulse in a very stable pattern. Not only that, but the frequency of that pulsing corresponds directly to how bright it gets — sort of like a strobe that, as you turn the speed up or down, also makes it dimmer or brighter.

What this means is that if you know the frequency of the pulses, you know objectively how much light the star puts out. And by comparing that absolute amount to the amount that reaches us, you can tell how far that light has had to travel with remarkable precision.
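That reasoning can be written down in a few lines. The sketch below uses an illustrative period-luminosity (Leavitt law) calibration — the coefficients are representative textbook values, not those used in the study — together with the standard distance-modulus relation:

```python
import math

def cepheid_distance_parsecs(period_days: float, apparent_mag: float) -> float:
    """Estimate the distance to a classical Cepheid from its pulsation
    period and its observed (apparent) magnitude."""
    # Leavitt law: brighter Cepheids pulse more slowly. Illustrative
    # calibration; real surveys fit these coefficients carefully.
    absolute_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    # Distance modulus: m - M = 5 * log10(d) - 5, with d in parsecs.
    return 10 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# A 10-day Cepheid observed at magnitude 12 comes out at roughly 16,000
# parsecs -- tens of thousands of light-years, i.e. galactic-disk scale.
print(f"{cepheid_distance_parsecs(10.0, 12.0):.0f} pc")
```

The pulsation period gives the absolute brightness; comparing it with the apparent brightness pins down the distance directly, with no intermediate model of the galaxy required.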


“Distances to Cepheids can be measured with an accuracy better than 5 percent,” said lead author Dorota Skowron in a video explaining the findings. In comments to Space.com, she added: “It is not some statistical fact available only to a scientist’s understanding. It is apparent by eye.”

Not only are these beacons reliable, they’re everywhere — the team located thousands of Cepheid variable stars in the sky via the Optical Gravitational Lensing Experiment, a project that tracks the brightness of billions of stellar objects.

They carefully catalogued and observed these Cepheids (highlighted in the top image) for years, and from repeated measurements emerged a portrait of the galaxy — a curved portrait.


“Our map shows the Milky Way disk is not flat. It is warped and twisted far away from the galactic center,” said co-author Przemek Mroz. “This is the first time we can use individual objects to show this in three dimensions,” some, he said, “as distant as the expected boundary of the Galactic disk.”

The galaxy curves “up” on one side and “down” on the other, a bit like a hat with the brim down in front and up in back. What caused this curvature is unknown, but of course there are many competing theories. A close call with another galaxy? Dark matter? They’re working on it.

The researchers were also able to show by measuring the age of the stars that they were created not regularly but in bunches — direct evidence that star formation is not necessarily constant, but can happen in bursts.

Their findings were published today in the journal Science.

01 Aug 2019

StockX admits ‘suspicious activity’ led to resetting passwords without warning

StockX, a popular site for buying and selling sneakers and other apparel, has admitted it reset customer passwords after it was “alerted to suspicious activity” on its site, despite telling users it was a result of “system updates.”

“We recently completed system updates on the StockX platform,” said the email to customers sent to TechCrunch on Thursday. The email provided a link to a password reset page but said nothing more.

The company was only last month valued at over $1 billion after a $110 million fundraise.

Companies reset passwords all the time for various reasons. Some security teams obtain lists of previously breached passwords that make their way online, scramble them in the same format that the company stores passwords, and find matches. By triggering the reset, it prevents passwords stolen from other sites from being used against one of a company’s own customers. In less than desirable circumstances, passwords are reset following a data breach.
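That credential-matching process looks something like the following sketch. SHA-256 with per-user salts stands in for whatever scheme a real site uses, and the function and data names are hypothetical:

```python
import hashlib

def find_at_risk_users(breached_passwords, stored_hashes, salts):
    """Hash each known-breached plaintext the same way the site stores
    passwords, and flag any user whose stored hash matches."""
    at_risk = set()
    for user, stored in stored_hashes.items():
        for pw in breached_passwords:
            candidate = hashlib.sha256((salts[user] + pw).encode()).hexdigest()
            if candidate == stored:
                at_risk.add(user)
                break
    return at_risk

salts = {"alice": "a1", "bob": "b2"}
stored_hashes = {
    "alice": hashlib.sha256(("a1" + "hunter2").encode()).hexdigest(),
    "bob": hashlib.sha256(("b2" + "uniquepw!").encode()).hexdigest(),
}

# "hunter2" appears in the breach dump, so only alice gets a forced reset.
print(find_at_risk_users(["hunter2", "password1"], stored_hashes, salts))
```

Only users whose password appears in a breach dump are flagged, which is why such resets typically hit a subset of accounts rather than everyone.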

But the company admitted it was not “system updates” as it had told its customers.

“StockX was recently alerted to suspicious activity potentially involving our platform,” said StockX spokesperson Katy Cockrel. “Out of an abundance of caution, we implemented a security update and proactively asked our community to update their account passwords.”

“We are continuing to investigate,” said the spokesperson.

The password reset email sent by StockX on Thursday (Image: supplied)

We asked several follow-up questions — including who alerted StockX to the suspicious activity, if any customer data was compromised and why it misrepresented the reason for the password reset. We’ll have more when we know it.

Throughout the day customers were tweeting screenshots of the email, worried that their accounts had been compromised. Others questioned whether the email was genuine or if it was part of a phishing attack.

“Did they get hacked, find out somehow, and then to cover it up send out that email and ask for a password change?” one of the affected customers told TechCrunch.

Customers were given no prior warning of the password reset.

StockX founder Josh Luber kept with the company’s line, telling a customer in a tweet that the password reset was “legit” but did not respond to users asking why.

StockX tweeted back to several customers with a boilerplate response: “The password reset email you received is legitimate and came from our team,” and to contact the support email with any questions. We did just that — from our TechCrunch email address — and heard nothing back hours later.

Security experts expressed doubt that a company would reset passwords over a “systems update” as StockX had claimed.

Security researcher John Wethington said it is “rare” to see security overhauls that require password resets. “You wouldn’t just send out a random email about it,” he said. Jake Williams, founder of Rendition Infosec, said it was “bad communication” in any case.

Several took to Twitter to criticize StockX for its handling of the password reset.

One customer called the email “fishy,” another called it “suspicious” and another called on the company to explain why they had to reset passwords in this unorthodox way. Another said in a tweet that he asked StockX twice but they “refused to provide an answer.”

“Guess I’m closing my account,” he said.

Read more:
Slack resets user passwords after 2015 data breach
Capital One breach also hit other major companies, say researchers
An exposed password let a hacker access internal Comodo files
Security lapse exposed weak points on Honda’s internal network
Cryptocurrency loan site YouHodler exposed unencrypted user credit cards and transactions