Year: 2018

15 Jun 2018

Teaching computers to plan for the future

As humans, we’ve gotten pretty good at shaping the world around us. We can choose the molecular design of our fruits and vegetables, travel faster and farther and stave off life-threatening diseases with personalized medical care. However, what continues to elude our molding grasp is the airy notion of “time” — how to see further than our present moment, and ultimately how to make the most of it. As it turns out, robots might be the ones that can answer this question.

Computer scientists from the University of Bonn in Germany reported this week that they had designed software that can predict a sequence of events up to five minutes into the future, with accuracy between 15 and 40 percent. Those values might not seem like much on paper, but researcher Dr. Juergen Gall says the work represents a step toward a new area of machine learning that goes beyond single-step prediction.

Although Gall’s goal of teaching a system how to understand a sequence of events is not new (after all, this is a primary focus of the fields of machine learning and computer vision), his approach is unique. Thus far, research in these fields has focused on interpreting a current action or predicting the single action expected next. This was seen recently in the news when a paper from Stanford AI researchers reported designing an algorithm that could achieve up to 90 percent accuracy in its predictions regarding end-of-life care.

When researchers provided the algorithm with data from more than two million palliative-care patient records, it was able to analyze patterns in the data and predict when a patient would pass away with high accuracy. However, unlike Gall’s research, this algorithm focused on a retrospective, single prediction.

Accuracy itself is a contested question in the field of machine learning. While it appears impressive on paper to report accuracies ranging upwards of 90 percent, there is debate about the over-inflation of these values through cherry-picking “successful” data in a process called p-hacking.
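
The effect is easy to demonstrate. The sketch below uses invented, simulated data (nothing from the studies mentioned here) to show how scoring a hand-picked slice of a test set inflates the reported number compared with an honest evaluation on the whole set:

```python
import random

# Simulated results, invented for illustration: a model that is right
# 60 percent of the time on 1,000 test items.
random.seed(0)
results = [random.random() < 0.60 for _ in range(1000)]

honest_accuracy = sum(results) / len(results)

# Cherry-picking: score many small random slices of the same predictions
# and report only the best-looking one.
slices = [random.sample(results, 50) for _ in range(200)]
best_slice_accuracy = max(sum(s) / len(s) for s in slices)

print(f"honest accuracy:     {honest_accuracy:.0%}")      # about 60%
print(f"best-slice accuracy: {best_slice_accuracy:.0%}")  # often 75% or more
```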

In their experiment, Gall and his team used hours of video data demonstrating different cooking actions (e.g., frying an egg or tossing a salad). They presented the software with only a portion of each action sequence and tasked it with predicting the remainder based on what it had “learned.” Through this approach, Gall hopes the field can take a step closer to true human-machine symbiosis.
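
The Bonn paper uses far richer video models, but the core idea of anticipating the rest of a sequence from an observed prefix can be sketched with a toy first-order Markov model over action labels. The action vocabulary and training sequences below are invented for illustration, not the team’s data:

```python
from collections import defaultdict

def train(sequences):
    """Count action-to-action transitions across labeled sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for current, nxt in zip(seq, seq[1:]):
            counts[current][nxt] += 1
    return counts

def predict_remainder(observed, counts, steps):
    """Greedily extend an observed prefix by the most likely next action."""
    seq = list(observed)
    for _ in range(steps):
        options = counts.get(seq[-1])
        if not options:
            break
        seq.append(max(options, key=options.get))
    return seq[len(observed):]

training = [
    ["crack_egg", "whisk", "pour_pan", "fry", "plate"],
    ["crack_egg", "whisk", "add_salt", "pour_pan", "fry", "plate"],
    ["wash_lettuce", "chop", "add_dressing", "toss", "plate"],
]

model = train(training)
# Observe the start of an activity, anticipate the next few actions.
print(predict_remainder(["crack_egg", "whisk"], model, steps=3))
# e.g. ['pour_pan', 'fry', 'plate'] under this toy data
```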

“[In the industry] people talk about human robot collaboration but in the end there’s still a separation; they’re not really working close together,” says Gall.

Instead of only reacting or anticipating, Gall proposes that, given a proper hardware body, this software could help human workers in industrial settings by intuitively knowing the task at hand and helping them complete it. Gall also sees a purpose for this technology in domestic settings.

“There are many older people and there’s efforts to have this kind of robot for care at home,” says Gall. “In ten years I’m very convinced that service robots [will] support care at home for the elderly.”

The number of Americans over the age of 65 today is approximately 46 million, according to a Population Reference Bureau report, and is predicted to double by the year 2060. Of that population, roughly 1.4 million live in nursing homes, according to a 2014 CDC report. The impact that intuitive software like Gall’s could have has been explored in Japan, where just over one-fourth of the country’s population is elderly. From PARO, a soft, robotic therapy seal, to the sleek companion robot Pepper from SoftBank Robotics, Japan is beginning to embrace the calm, nurturing assistance of these machines.

With this advance in technology for the elderly also comes the bitter taste that perhaps these technologies will only create a further divide between the generations — outsourcing love and care to a machine. For an industry that has yet to mature, it’s hard to say where this path will conclude, but ultimately that is in the hands of developers to decide, not the software or robots they develop. These machines may be getting better at predicting the future, but even to them their fates are still being coded.

15 Jun 2018

Elizabeth Holmes reportedly steps down at Theranos after criminal indictment

Elizabeth Holmes has left her role as CEO of Theranos and has been charged with wire fraud, CNBC and others report. The company’s former president, Ramesh “Sunny” Balwani, was also indicted today by a grand jury.

These criminal charges are separate from the civil ones filed in March by the SEC and already settled.

Theranos’s general counsel, David Taylor, has been appointed CEO. What duties the position actually entails in the crumbling enterprise is unclear. Holmes, meanwhile, remains chairman of the board.

The FBI Special Agent in Charge of the case against Theranos, John Bennett, said the company engaged in “a corporate conspiracy to defraud financial investors,” and “misled doctors and patients about the reliability of medical tests that endangered health and lives.”

This story is developing. I’ve asked Theranos for comment and will update if I hear back; indeed I’m not even sure anyone is there to respond.

15 Jun 2018

Judge says ‘literal but nonsensical’ Google translation isn’t consent for police search

Machine translation of foreign languages is undoubtedly a very useful thing, but if you’re going for anything more than directions or recommendations for lunch, its shallowness is a real barrier. And when it comes to the law and constitutional rights, a “good enough” translation doesn’t cut it, a judge has ruled.

The ruling (PDF) is not hugely consequential, but it is indicative of the evolving place in which translation apps find themselves in our lives and legal system. We are fortunate to live in a multilingual society, but for the present and foreseeable future it seems humans are still needed to bridge language gaps.

The case in question involved a Mexican man named Omar Cruz-Zamora, who was pulled over by cops in Kansas. When they searched his car, with his consent, they found quite a stash of meth and cocaine, which naturally led to his arrest.

But there’s a catch: Cruz-Zamora doesn’t speak English well, so the consent to search the car was obtained via an exchange facilitated by Google Translate — an exchange that the court found was insufficiently accurate to constitute consent given “freely and intelligently.”

The Fourth Amendment prohibits unreasonable searches and seizures, and lacking a warrant or probable cause, the officers needed Cruz-Zamora to understand that he could refuse to let them search the car. That understanding is not evident from the exchange, during which both sides repeatedly fail to comprehend what the other is saying.

Not only that, but the actual translations provided by the app weren’t good enough to accurately communicate the question. For example, the officer asked “¿Puedo buscar el auto?” — the literal meaning of which is closer to “can I find the car,” not “can I search the car.” There’s no evidence that Cruz-Zamora made the connection between this “literal but nonsensical” translation and the real question of whether he consented to a search, let alone whether he understood that he had a choice at all.
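
One rough way to catch this kind of drift is a round-trip (“back-translation”) check. The sketch below assumes the unofficial googletrans Python package; its output depends on the service’s current model and is not what the officers’ app produced in this case:

```python
# Round-trip sanity check using the unofficial googletrans package
# (pip install googletrans). Results vary over time and are not the
# translations from the Cruz-Zamora stop.
from googletrans import Translator

translator = Translator()

phrase_en = "Can I search the car?"
to_es = translator.translate(phrase_en, src="en", dest="es").text
back_to_en = translator.translate(to_es, src="es", dest="en").text

print("English:        ", phrase_en)
print("Spanish:        ", to_es)
print("Back to English:", back_to_en)
# If the round trip drifts (say, "search" coming back as "find"), the
# forward translation was probably too literal to rely on.
```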

With consent invalidated, the search of the car is rendered unconstitutional and the evidence it produced is suppressed.

It doesn’t mean that consent is impossible via Google Translate or any other app — for example, if Cruz-Zamora had himself opened his trunk or doors to allow the search, that likely would have constituted consent. But it’s clear that app-based interactions are not a sure thing. This will be a case to consider not just for cops on the beat looking to help or investigate people who don’t speak English, but in courts as well.

Providers of machine translation services would have us all believe that those translations are accurate enough to use in most cases, and that in a few years they will replace human translators in all but the most demanding situations. This case suggests that machine translation can fail even the most basic tests, and as long as that possibility remains, we have to maintain a healthy skepticism.

15 Jun 2018

Machines learn language better by using a deep understanding of words

Computer systems are getting quite good at understanding what people say, but they also have some major weak spots. Among them is the fact that they have trouble with words that have multiple or complex meanings. A new system called ELMo adds this critical context to words, producing better understanding across the board.

To illustrate the problem, think of the word “queen.” When you and I are talking and I say that word, you know from context whether I’m talking about Queen Elizabeth, or the chess piece, or the matriarch of a hive, or RuPaul’s Drag Race.

This ability of words to have multiple meanings is called polysemy. And really, it’s the rule rather than the exception. Which meaning it is can usually be reliably determined by the phrasing — “God save the queen!” versus “I saved my queen!” — and of course all this informs the topic, the structure of the sentence, whether you’re expected to respond, and so on.

Machine learning systems, however, don’t really have that level of flexibility. The way they tend to represent words is much simpler: the system looks at all those different definitions of the word and comes up with a sort of average — a complex representation, to be sure, but not reflective of the word’s true complexity. When it’s critical that the correct meaning of a word gets through, these representations can’t be relied on.
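
To make the limitation concrete, here is a minimal sketch of how a conventional static embedding behaves; the vectors are invented for illustration (real embeddings like word2vec or GloVe are learned and have hundreds of dimensions, but share this property):

```python
import numpy as np

# Invented 4-d vectors for illustration only.
static_embeddings = {
    "queen": np.array([0.52, 0.31, -0.44, 0.10]),  # one vector, all senses
    "chess": np.array([-0.20, 0.77, 0.05, -0.31]),
    "royal": np.array([0.61, 0.12, -0.50, 0.22]),
}

def embed(token):
    """A static lookup: context around the token is ignored entirely."""
    return static_embeddings[token]

v1 = embed("queen")  # "God save the queen!"
v2 = embed("queen")  # "I saved my queen!"
print(np.array_equal(v1, v2))  # True: the two senses are indistinguishable
```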

A new method called ELMo (“Embeddings from Language Models”), however, lets the system handle polysemy with ease; as evidence of its utility, it was awarded best paper honors at NAACL last week. At its heart it uses its training data (a huge collection of text) to determine whether a word has multiple meanings and how those different meanings are signaled in language.

For instance, you could probably tell in my example “queen” sentences above, despite their being very similar, that one was about royalty and the other about a game. That’s because the way they are written contains clues for your own context-detection engine, telling you which queen is which.

Informing a system of these differences can be done by manually annotating the text corpus from which it learns — but who wants to go through millions of words making a note on which queen is which?

“We were looking for a method that would significantly reduce the need for human annotation,” explained Matthew Peters, lead author of the paper. “The goal was to learn as much as we can from unlabeled data.”

In addition, he said, traditional language learning systems “compress all that meaning for a single word into a single vector. So we started by questioning the basic assumption: let’s not learn a single vector, let’s have an infinite number of vectors. Because the meaning is highly dependent on the context.”

ELMo learns this information by ingesting the full sentence in which the word appears; it would learn that when a king is mentioned alongside a queen, it’s likely royalty or a game, but never a beehive. When it sees pawn, it knows that it’s chess; jack implies cards; and so on.
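
For comparison with the static lookup above, here is a sketch of what a contextual embedder does with those two “queen” sentences. It assumes the AllenNLP library’s ElmoEmbedder interface (which downloads pretrained weights on first use); the exact similarity number will vary:

```python
# Contextual embeddings: the vector for "queen" now depends on the whole
# sentence. Assumes AllenNLP's ElmoEmbedder (pip install allennlp).
from allennlp.commands.elmo import ElmoEmbedder
from scipy.spatial.distance import cosine

elmo = ElmoEmbedder()

royal = ["God", "save", "the", "queen", "!"]
chess = ["I", "saved", "my", "queen", "!"]

# embed_sentence returns a stack of layer outputs per token; take the
# top layer ([-1]) and the vector at the position of "queen" (index 3).
v_royal = elmo.embed_sentence(royal)[-1][3]
v_chess = elmo.embed_sentence(chess)[-1][3]

# Unlike a static lookup, the two "queen" vectors now differ, so their
# cosine similarity falls below 1.
print(1 - cosine(v_royal, v_chess))
```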

An ELMo-equipped language engine won’t be nearly as good as a human with years of experience parsing language, but even a working knowledge of polysemy is hugely helpful in understanding a language.

Not only that, but taking the whole sentence into account when determining a word’s meaning also allows the structure of that sentence to be mapped more easily, automatically labeling clauses and parts of speech.

Systems using the ELMo method had immediate benefits, improving on even the latest natural language algorithms by as much as 25 percent — a huge gain for this field. And because it is a better, more context-aware style of learning, but not a fundamentally different one, it can be integrated easily even into existing commercial systems.

In fact, Microsoft is reportedly already using it with Bing. After all, it’s crucial in search to determine intention, which of course requires an accurate reading of the query. ELMo is open source, too, like all the work from the Allen Institute for AI, so any company with natural language processing needs should probably check this out.

The paper lays the groundwork for using ELMo in English-language systems, but because its power derives essentially from a close reading of the data it’s fed, there’s no theoretical reason why it shouldn’t be applicable not just to other languages but to other domains. In other words, if you feed it a bunch of neuroscience texts, it should be able to tell the difference between “temporal” as it relates to time and as it relates to that region of the brain.

This is just one example of how machine learning and language are rapidly developing around each other; although the technology is already good enough for basic translation, speech-to-text and so on, there’s quite a lot more that computers could do via natural language interfaces — if only they knew how.

15 Jun 2018

Apple and Oprah sign a multi-year partnership on original content

Apple announced today a multi-year content partnership with Oprah Winfrey to produce programs for the tech company’s upcoming video streaming service. Apple didn’t provide any specific details as to what sort of projects Winfrey will be involved in, but it seems there will be more than one.

Apple shared the news of its deal with Winfrey in a brief statement on its website, which read:

Apple today announced a unique, multi-year content partnership with Oprah Winfrey, the esteemed producer, actress, talk show host, philanthropist and CEO of OWN.

Together, Winfrey and Apple will create original programs that embrace her incomparable ability to connect with audiences around the world.

Winfrey’s projects will be released as part of a lineup of original content from Apple.

The deal is a significant, high-profile win for Apple, which has been busy filling out its lineup with an array of talent in recent months.

The streaming service will also include a reboot of Steven Spielberg’s “Amazing Stories,” a Reese Witherspoon- and Jennifer Aniston-starring series set in the world of morning TV, an adaptation of Isaac Asimov’s “Foundation” books, a thriller starring Octavia Spencer, a Kristen Wiig-led comedy, a Kevin Durant-inspired scripted basketball show, a series from “La La Land’s” director and several other shows.

Winfrey, however, is not just another showrunner or producer. She’s a media giant who has worked across film, network and cable TV, print, and more as an actress, talk show host, creator, and producer.

She’s also a notable philanthropist, having contributed more than $100 million to provide education to academically gifted girls from disadvantaged backgrounds, and is continually discussed as a potential presidential candidate, though she said that’s not for her.

On television, Winfrey’s Harpo Productions developed daytime TV shows like “Dr. Phil,” “The Dr. Oz Show” and “Rachael Ray.” Harpo Films produced several Academy Award-winning movies including “Selma,” which featured Winfrey in a starring role. She’s also acted in a variety of productions over the years, like “The Color Purple,” which scored her an Oscar nom, “Lee Daniels’ The Butler,” “The Immortal Life of Henrietta Lacks” and Disney’s “A Wrinkle in Time.”

Winfrey also founded the cable network OWN in 2011 in partnership with Discovery Communications, and has exec produced series including “Queen Sugar,” “Oprah’s Master Class,” and the Emmy-winning “Super Soul Sunday.”

The latter has a connection with Apple, as it debuted as a podcast called “Oprah’s SuperSoul Conversations” and became a #1 program on Apple Podcasts.

Winfrey recently extended her contract with OWN through 2025, so it’s unclear how much time she’ll devote specifically to her Apple projects. Apple also didn’t say whether Winfrey will star or guest in any of the programs, but with a deal like this, that’s always an option on the table.

15 Jun 2018

Kustomer gets $26M to take on Zendesk with an omnichannel approach to customer support

The CRM industry is now estimated to be worth some $4 billion annually, and today a startup announced a round of funding that it hopes will help it claim one lucrative slice of that pie: customer support. Kustomer, a startup out of New York that integrates a number of data sources to give support staff a complete picture of a customer when he or she contacts the company, has raised $26 million.

The funding, a series B, was led by Redpoint Ventures (notably, an early investor in Zendesk, which Kustomer cites as a key competitor), with existing investors Canaan Partners, Boldstart Ventures, and Social Leverage also participating.

Cisco Investments was also a part of this round as a strategic investor: Cisco (along with Avaya) is one of the world’s biggest PBX equipment vendors, and customer support is one of the biggest users of this equipment, but the segment is also under pressure as more companies move these services to the cloud (and consider alternative options). Potentially, you could see how Cisco might want to partner with Kustomer to provide more services on top of its existing equipment, and potentially as a standalone service — although for now the two have yet to announce any actual partnerships.

Given that Kustomer has been approached already for potential acquisitions, you could see how the Ciscos of the world might be one possible category of buyers.

Kustomer is not discussing valuation, but it has raised a total of $38.5 million. Its customers include brands in fashion, e-commerce and other sectors that regularly provide customer support for their products, such as Ring, Modsy, Glossier, SmugMug and more.

When we last wrote about Kustomer, when it raised $12.5 million in 2016, the company’s mission was to effectively turn anyone at a company into a customer service rep — the idea being that some issues are better answered by specific people, and a CRM platform for all employees to engage could help them fill that need.

Today, Brad Birnbaum, the co-founder and CEO, says that this concept has evolved. Half of the business model, he said, still involves the idea of everyone being on the platform. For example, an internal sales rep can collaborate with someone in a company’s shipping department — “but the only person who can communicate with the customer is the full-fledged agent,” he said. “That is what the customers wanted so that they could better control the messaging.”

The collaboration, meanwhile, has taken an interesting turn: it’s not just about employees communicating better to develop a more complete picture of a customer and his or her history with the company; it’s also about a company’s systems integrating better to give reps that more complete view. Integrations include data from e-commerce platforms like Shopify and Magento; voice and messaging platforms like Twilio, TalkDesk, Twitter and Facebook Messenger; feedback tools like Nicereply; analytics services like Looker, Snowflake, Jira and Redshift; and Slack.

Birnbaum previously founded and sold Assistly to Salesforce, which turned it into Desk.com (his Kustomer co-founder, Jeremy Suriel, was Assistly’s chief architect), and between that and Kustomer he also had a go at building out Airtime, Sean Parker’s social startup. Kustomer, he says, is not only competing against Salesforce but perhaps even more specifically against Zendesk, in offering a new take on customer support.

Zendesk, he said, had really figured out how to make customer support ticketing work efficiently, “but they don’t understand the customer at all.”

“We are a much more modern solution in how we see the world,” he continued. “No one does omni-channel customer service properly, where you can see a single threaded conversation speaking to all of a customer’s points.”

Going forward, Kustomer will be using the funding to expand its platform with more capabilities, including some of its own automations and insights (rather than those provided by way of integrations). This will also see the company expand into services adjacent to taking inbound customer requests, such as reaching out to customers, potentially to sell to them. “We plan to go broadly with engagement as an example,” Birnbaum said. “We already know everything about you so if we see you on a website, we can proactively reach out to you and engage you.”

“It is time for disruption in customer support industry, and Kustomer is leading the way,” said Tomasz Tunguz, partner at Redpoint Ventures, in a statement. “Kustomer has had impressive traction to date, and we are confident the world’s best B2C and B2B companies will be able to utilize the platform in order to develop meaningful relationships, experiences, and lifetime value for their customers. This is an exciting and forward-thinking platform for companies as well as their customers.”

15 Jun 2018

With its new in-car operating system, BMW slowly breaks with tradition

When you spend time with a lot of BMW folks, as I did during a trip to Germany earlier this month, you’ll regularly hear the word “heritage.” Maybe that’s no surprise, given that the company is now well over 100 years old. But in a time of rapid transformation that’s hitting every car manufacturer, engineers and designers have to strike a balance between honoring that history and looking forward. With the latest version of its BMW OS in-car operating system and its accompanying design language, BMW is breaking with some traditions to allow it to look into the future while also sticking to its core principles.

If you’ve driven a recent luxury car, the instrument cluster in front of you was likely one large screen. But even in the most recent BMWs, you’ll still see the standard round gauges that have adorned cars since their invention. That’s what drivers expect and that’s what the company gave them, down to the point where it essentially glued a few plastic strips onto the large screen that now makes up the dashboard to give drivers an even more traditional view of their Autobahn speeds.

With BMW OS 7.0, which I got some hands-on time with in the latest BMW 8-series model that’s making its official debut today (and where the OS update will also make its first appearance), the company stops pretending that the screen is a standard set of gauges. Sure, some of the colors remain the same, but users looking for the classic look of a BMW cockpit are in for a surprise.

“We first broke up the classic round instruments back in 2015 so we could add more digital content to the middle, including advanced driving assistance systems,” one of BMW’s designers told me. “And that was the first break [with tradition]. Now in 2018, we looked at the interior and exterior design of our cars — and took all of those forms — and integrated them into the digital user interface of our cars.”

The overall idea behind the design is to highlight relevant information when it’s needed but to let it fade back when it’s not, allowing the driver to focus on the task at hand (which, at least for the next few years, is mostly driving).

So when you enter the car, you’ll get the standard BMW welcome screen, which is now integrated with your digital BMW Connected profile in the cloud. When you start driving, the new design comes to life, with all of the critical information you need for driving on the left side of the dashboard, as well as data about the state of your driving assistance systems. That’s a set of digital gauges that remains on the screen at all times. On the right side of the screen, though, you’ll see all of the widgets that can be personalized. There are six of those, and they range from G meters for when you’re at a track day to a music player that uses the space to show album art.

The middle of the screen focuses on navigation. But as the BMW team told me, the idea here isn’t to just copy the map that’s traditionally on the tablet-like screen in the middle of the dashboard. What you’ll see here is a stripped-down map view that only shows you the navigational data you need at any given time.

And because the digital user interface isn’t meant to be a copy of its analog counterpart from yesteryear, the team also decided that it could play with more colors. That means that as you move from sport to eco mode, for example, the UI’s primary color changes from red to blue.

The instrument cluster is only part of the company’s redesign. It also took a look at what it calls the “Control Display” in the center console. That’s traditionally where the company has displayed everything from your music player to its built-in GPS maps (and Apple CarPlay, if that’s your thing). Here, BMW has simplified the menu structure by making it much flatter, and has also made some tweaks to the overall design. It went for a design language that’s still occasionally playful but does away with many of the 3D effects, opting instead for something more akin to Google’s Material Design or Microsoft’s Fluent Design System. This is a subtle change, but the team told me that it very deliberately tried to go with a more modern and flatter look.

This display now also offers more tools for personalization, with the ability to change the layout to show more widgets, if the driver doesn’t mind a more cluttered display, for example.

Thanks to its integration with BMW Connected, the company’s cloud-based tools and services for saving and syncing data, managing in-car apps and more, the updated operating system also lays the foundation for the company’s upcoming e-commerce play. Dieter May, BMW’s VP for digital products and services, has talked about this quite a bit in the past, and the updated software and fully digital cockpit are what will enable the company’s next moves in this direction. Because the new operating system puts a new emphasis on the user’s digital account, which is encoded in your key fob, the car becomes part of the overall BMW ecosystem, which includes other mobility services like ReachNow (though you obviously don’t need a BMW Connected account just to drive the car).

Unsurprisingly, the new operating system will launch with a couple of the company’s more high-end vehicles like the 8-series car that is launching today, but it will slowly trickle down to other models, as well.

15 Jun 2018

Showcase your country or state’s startups at Startup Alley

Disrupt SF is just a few months away (September 5-7 at Moscone Center West) and we’re looking for delegations of international startup groups, government innovation centers, incubators and accelerators to organize a country, state or regional pavilion in Startup Alley. Are you ready to step on a world stage, show off your emerging companies and be recognized as a leader in tech innovation?

Startup Alley is prime real estate, where hundreds of founders from everywhere in the world — and investors looking to fund them — gather to meet, connect and network. And maybe even produce a unicorn or two.

If you want to exhibit in Startup Alley as part of a country, state or region, your delegation’s startups must meet just one requirement: they must be pre-Series A. If they do, shoot our Startup Alley manager, Priya, an email at priya@techcrunch.com. Tell us about your delegation and where you’re from, and we’ll provide more information about the application process.

Regions that have participated in previous TechCrunch events include St. Louis, Argentina, Austria, Belgium, Brazil, the Caribbean, Catalonia, the Czech Republic, Germany, Hungary, Hong Kong, Korea, Japan, Lithuania, Taiwan, Ukraine and Uruguay. We believe that innovation and great ideas know no geographical boundaries, and we strive to increase the diversity within our regional pavilions at every Disrupt.

Organize a minimum of eight (8) startups in your region and you’ll receive a discount off each Startup Alley company’s exhibitor package — and you’ll get organizer passes to the event. Plus, if you book your pavilion before July 25, your startups will receive one additional Founder ticket to attend Disrupt SF. Email startupalley@techcrunch.com for more pricing information.

15 Jun 2018

Lemonade files lawsuit against wefox for IP infringement

Lemonade, the insurance platform based out of NYC, has filed a lawsuit against German company ONE Insurance, its parent company wefox, and founder Julian Teicke.

The complaint, filed in the U.S. District Court Southern District of NY, alleges that wefox reverse engineered Lemonade to create ONE, infringing Lemonade’s intellectual property, violating the Computer Fraud and Abuse Act, and breaching its contractual obligations to Lemonade not to “copy content… to provide any service that is competitive…or to…create derivative works.”

In the filing, Lemonade alleges that Teicke repeatedly registered for insurance on Lemonade under various names and for various addresses, some of which do not exist. Teicke also allegedly filed claims in what appeared to be an attempt to assess and copy the arrangement of those flows.

Lemonade’s counsel says Teicke started seven claims over the course of 20 days, prompting Lemonade to cancel his policy.

Alongside Teicke, a number of other executives and members of leadership at wefox also filed fake claims, despite having opted in to Lemonade’s user agreement and taking an honesty pledge, which is required of all Lemonade users.

This, according to Lemonade, violates the Computer Fraud and Abuse Act. Lemonade also alleges that the ONE app infringes Lemonade’s IP, and that in assessing the Lemonade app and building a competitor, Teicke also violated Lemonade’s terms of service.

Lemonade has revolutionized the insurance business in two key ways: First, it made the process of actually buying insurance as easy as a few clicks on your smartphone. Digitizing the process makes the issue of getting home or renters insurance far less daunting and more approachable to consumers. Secondly, Lemonade rethought the business model of insurance.

Normally, insurance providers charge you a monthly rate based on the value of the property or items being insured. But at the end of the year, the money remaining in that policy becomes profit, putting the insurance company in direct opposition to the consumer any time a claim is filed.

Lemonade takes its profit directly out of each payment, and at the end of the year it sends any money left over after claims are paid to the charity of your choice, ensuring that Lemonade and the consumer are on the same side when a claim is filed.
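
As a toy illustration of that model (with invented numbers, not Lemonade’s actual fee structure): the insurer keeps a fixed cut of premiums, claims are paid from the rest, and whatever remains goes to charity rather than becoming profit.

```python
# Invented numbers for illustration; not Lemonade's actual fee structure.
premiums_collected = 100_000.00  # total premiums from a group of policies
flat_fee_rate = 0.20             # insurer keeps a fixed cut as its profit
claims_paid = 55_000.00          # claims paid out during the year

insurer_take = premiums_collected * flat_fee_rate
pool = premiums_collected - insurer_take
to_charity = max(pool - claims_paid, 0.0)  # leftover goes to charity

print(f"insurer keeps: ${insurer_take:,.2f}")  # $20,000.00
print(f"claims paid:   ${claims_paid:,.2f}")   # $55,000.00
print(f"to charity:    ${to_charity:,.2f}")    # $25,000.00
```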

In keeping with that thesis, any proceeds generated from this lawsuit will go directly to Code.org.

“We’re not trying to enrich ourselves by poking another startup,” said Lemonade CEO Daniel Schreiber. “We’re not anti-competition. We’re just saying ‘Play by the rules, play fair and square.'”

Folks interested in the lawsuit can check out the complaint here.