Year: 2019

05 Feb 2019

Healthcare by 2028 will be doctor-directed, patient-owned and powered by visual technologies

Visual assessment is critical to healthcare – whether that is a doctor peering down your throat as you say “ahhh” or an MRI of your brain. Since the X-ray was invented in 1895, medical imaging has evolved into many modalities that empower clinicians to see into and assess the human body. Recent advances in visual sensors, computer vision and compute power are powering a new wave of innovation in legacy visual technologies (like the X-ray and MRI) and sparking entirely new realms of medical practice, such as genomics.

Over the next 10 years, healthcare workflows will become mostly digitized, with wide swaths of personal data captured, and computer vision and artificial intelligence will automate the analysis of that data for precision care. Much of the digitized data across healthcare will be visual, and the technologies that capture and analyze it are visual technologies.

These visual technologies traverse a patient’s journey from diagnosis, to treatment, to continuing care and prevention. They capture, analyze, process, filter and manage visual data from images, videos, thermal imaging, X-rays, ultrasound, MRI, CT scans, 3D and more. Computer vision and artificial intelligence are core to the journey.

Three powerful trends — the miniaturization of diagnostic imaging devices, next-generation imaging for the earliest stages of disease detection and virtual medicine — are shaping the ways in which visual technologies are poised to improve healthcare over the next decade.

Miniaturization of Hardware, Along with Computer Vision and AI, Will Allow Diagnostic Imaging to Be Mobile

Medical imaging is dominated by large incumbents that are slow to innovate. Most imaging devices (e.g. MRI machines) have not changed substantially since the 1980s and still have major limitations:

  • Complex workflows: large, expensive machines that require expert operators and have limited compatibility in hospitals.

  • Strict patient requirements: such as lying still or holding their breath (a problem for cases such as pediatrics or elderly patients).

  • Expensive solutions: limited to large hospitals and imaging facilities.

But thanks to innovations in visual sensors and AI algorithms, “modern medical imaging is in the midst of a paradigm shift, from large carefully-calibrated machines to flexible, self-correcting, multi-sensor devices,” says Daniel K. Sodickson, MD, PhD, NYU School of Medicine, Department of Radiology.

MRI glove-shaped detector proved capable of capturing images of moving fingers.  ©NYU Langone Health

Visual data capture will be done with smaller, easier to use devices, allowing imaging to move out of the radiology department and into the operating room, the pharmacy and your living room.

Smaller sensors and computer vision-enabled image capture will lead to imaging devices redesigned at a fraction of their current size, with:

  • Simpler imaging process: with quicker workflows and lower costs.

  • Lower expertise requirements: less complexity will move imaging from the radiology department to anywhere the patient is.

  • Live imaging via ingestible cameras: innovations include powering ingestibles via stomach acid and using bacteria for chemical detection, making live imaging feasible in a wider range of cases.

“The use of synthetic neural network-based implementations of human perceptual learning enables an entire class of low-cost imaging hardware and can accelerate and improve existing technologies,” says Matthew Rosen, PhD, MGH/Martinos Center at Harvard Medical School.

©Matthew Rosen and his colleagues at the Martinos Center for Biomedical Imaging in Boston want to liberate the MRI.

Next Generation Sequencing, Phenotyping and Molecular Imaging Will Diagnose Disease Before Symptoms Are Presented

Genomics, the sequencing of DNA, has grown at a 200% CAGR since 2015, propelled by Next Generation Sequencing (NGS), which uses optical signals to read DNA, like our LDV portfolio company Geniachip (acquired by Roche). These techniques are helping genomics become a mainstream tool for practitioners, and will hopefully make carrier screening part of routine patient care by 2028.

Liquid biopsies, in which blood, urine or saliva is tested for tumor DNA or RNA, are poised to take a prime role in early cancer screening by identifying the genetic makeup of a disease. The company GRAIL, for instance, raised $1B for a cancer blood test that uses NGS and deep learning to detect circulating tumor DNA before a lesion is identified.

Phenomics, the analysis of observable traits (phenotypes) that result from interactions between genes and their environment, will also contribute to earlier disease detection. Phenotypes are expressed physiologically and most will require imaging to be detected and analyzed.

Next Generation Phenotyping (NGP) uses computer vision and deep learning to analyze physiological data, recognize particular phenotype patterns and correlate those patterns to genes. For example, FDNA’s Face2Gene technology can identify 300-400 disorders with 90%+ accuracy using images of a patient’s face. Additional data (images or videos of hands, feet, ears, eyes) can allow NGP to detect a wide range of disorders, earlier than ever before.

Molecular imaging uses DNA nanotech probes to quantitatively visualize chemicals inside of cells, thus measuring the chemical signature of diseases. This approach may enable early detection of neurodegenerative diseases such as Alzheimer’s, Parkinson’s and dementia.

Telemedicine to Overtake Brick-and-Mortar Doctor Visits

By 2028 it will be more common to visit the doctor via video over your phone or computer than it will be to go to an office.

Telemedicine will make medical practitioners more accessible and easier to communicate with. It will create a fully digitized record of visits for a patient’s profile, and it will reduce the costs of logistics and close regional gaps in specific medical expertise. One example is the telemedicine services rendered to the 1.9M people injured in the war in Syria.

The integration of telemedicine into ambulances has led to stroke patients being treated twice as fast.  Doctors will increasingly call in their colleagues and specialists in real time.

Screening technologies will be integrated into telemedicine so it won’t just be about video calling a doctor. Pre-screening your vitals via remote cameras will deliver extensive efficiencies and hopefully health benefits.

“The biggest opportunity in visual technology in telemedicine is in solving specific use cases. Whether it be detecting your pulse, blood pressure or eye problems, visual technology will be key to collecting data,” says Jeff Nadler of Teladoc Health.
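As a purely illustrative sketch of the pulse use case (not any vendor’s actual method): camera-based pulse detection, known as remote photoplethysmography, tracks the tiny periodic brightness changes in skin pixels caused by each heartbeat. The snippet below fakes a clean one-minute brightness trace for a 72 bpm pulse and recovers the rate by counting rising zero crossings; all the signal parameters are invented for the example.

```python
import math

# Hypothetical illustration of camera-based pulse estimation: skin pixels
# brighten and darken slightly with each heartbeat. We synthesize a clean
# 60-second brightness trace sampled at 30 fps with a 72 bpm pulse.
FPS = 30
DURATION_S = 60
TRUE_BPM = 72

brightness = [math.sin(2 * math.pi * (TRUE_BPM / 60) * (i / FPS))
              for i in range(FPS * DURATION_S)]

def estimate_bpm(samples, fps):
    """Estimate beats per minute by counting rising zero crossings."""
    beats = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    seconds = len(samples) / fps
    return beats * 60 / seconds
```

Running `estimate_bpm(brightness, FPS)` on this synthetic trace recovers roughly 71-72 bpm; a real system would first have to isolate the pulse component from noise, lighting changes and patient motion.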

Remote patient monitoring (RPM) will be a major factor in the growth of telemedicine and the overall personalization of care. RPM devices, like we are seeing with the Apple Watch, will be a primary source of real-time patient data used to make medical decisions that take into account everyday health and lifestyle factors. This personal data will be collected and owned by patients themselves and provided to doctors.

Visual Tech Will Power the Transformation of Healthcare Over the Next Decade

Visual technologies have deep implications for the future of personalized healthcare and will hopefully improve the health of people worldwide. They represent unique investment opportunities, and we at LDV Capital have reviewed over 100 research papers from BCC Research, CBInsights, Frost & Sullivan, McKinsey, Wired, IEEE Spectrum and many more to compile our 2018 LDV Capital Insights report. The report highlights the sectors with the power to improve healthcare, based on the transformative nature of the technology in each sector, projected growth and business opportunity.

There are tremendous investment opportunities in visual technologies across diagnosis, treatment and continuing care & prevention that will help make people healthier across the globe.

05 Feb 2019

SF also denies JUMP’s electric scooter appeal

A neutral hearing officer in the San Francisco Municipal Transportation Agency has denied Uber-owned JUMP’s appeal regarding the SFMTA’s decision not to provide JUMP a permit to operate shared electric scooters in the city.

“We are pleased the hearing officer validated our permitting process, which above all, prioritized the public interest,” SFMTA Communications Manager Ben Jose said in a statement to TechCrunch.

JUMP argued that the SFMTA did not fairly judge its offering against the likes of Skip and Scoot — the two operators granted permits for shared electric scooter services. But SFMTA Hearing Officer James Doyle determined the SFMTA did not improperly deny JUMP a permit to operate in the program.

“Given the equal opportunity afforded to each applicant, I cannot find that the SFMTA’s Pilot Program proposal was fundamentally unfair to any applicant, least of all to JUMP, which has been engaged in the alternative mobility industry for a considerable period of time before its e-scooter efforts commenced in San Francisco,” he wrote in his decision.

JUMP also argued it was given only a “fair” rating in the experience category due to its relationship with Uber, which bought the company last year. Uber, of course, has had a contentious relationship with the city of San Francisco as a result of its ride-hailing business. In the SFMTA’s decision, it did point to JUMP’s relationship with Uber as an area of concern.

“The SFMTA’s ‘concern’ about the Uber connection does seem somewhat inappropriate, given how dissimilar the ride hailing industry is from the business of e-scooters for hire,” Doyle writes. “But again, even if JUMP had merited a strong rating in this category, I find that there is still a preponderance of evidence in the record to support the lower overall comparison of its application, and for the SFMTA’s denial of a permit to JUMP. As to the Uber connection, there remains absolutely no evidence that Uber’s parent company status has or will in any way jeopardize the strong working relationship that JUMP has already enjoyed with the City and with the SFMTA.”

In the event the SFMTA does decide to add more scooters during the second half of the pilot program, Doyle recommends the agency allow JUMP to participate “at some level.”

After the first six months of the program, in April, the SFMTA can potentially increase the number of scooters from the current max of 625 to 2,500. This juncture, Doyle said, should be able to accommodate the addition of other operators.

This decision, however, is not surprising given Ford-owned Spin received the same outcome last week. In his decision, Doyle said “Spin appears to be an experienced and capable operator.”

Similar to what Doyle said about Spin, he’s convinced JUMP “has the expertise and operational capacity to meet each of the terms of the Pilot Program proposal based on a history of successful past experience with its shared e-bikes in San Francisco — experience that may not have been adequately taken into account.  JUMP’s application may not be one of the most thoroughly detailed, but I find that certainly by now JUMP has all of the information it might need about the Agency’s requirements for its effective future participation in the 2d Phase of the Pilot Program.”

Still, as a condition for JUMP to be able to participate in phase two of the pilot, Doyle recommends JUMP be required to submit additional documentation to show it would comply with operational conditions and requirements the SFMTA might impose, “especially with respect to those categories in JUMP’s application that were deemed to be comparatively rated as ‘poor’ as well as ‘fair.'” JUMP received a poor rating pertaining to safety, and a fair rating for its experience.

I’ve reached out to Uber and will update this story if I hear back.

05 Feb 2019

Google Home can now translate conversations on-the-fly

Just last month, Google showed off an “Interpreter mode” that would let Google Home devices act as an on-the-fly translator. One person speaks one language, the other person speaks another, and Google Assistant tries to be the middle man between the two.

Google was only testing it in select locations (hotel front desks, mostly) at the time, but it looks like it’s gotten a much wider rollout now.

Though Google hasn’t officially announced it, AndroidPolice noticed that a support page for the feature just went public. We tested it on our own Google Home devices, and sure enough: interpreter mode fired right up.

To get started, you just say something like “Hey Google, be my Spanish interpreter,” or “Hey Google, help me speak Italian.”

Curiously, you currently have to say the initial command in English, French, German, Italian, Japanese, or Spanish, but once it’s up and running you should be able to translate between the following languages:


• Czech
• Danish
• Dutch
• English
• Finnish
• French
• German
• Greek
• Hindi
• Hungarian
• Indonesian
• Italian
• Japanese
• Korean
• Mandarin
• Polish
• Portuguese
• Romanian
• Russian
• Slovak
• Spanish
• Swedish
• Thai
• Turkish
• Ukrainian
• Vietnamese

It works pretty well for basic conversations in our quick testing, but it has its quirks. Saying “Goodbye”, for example, ends the translation rather than translating it into the target language, which might be a little confusing if one half of the conversation didn’t realize the chat was nearing its end.

The new feature should work on any Google Home device — and if it’s one with a screen (like Google’s Home Hub), you’ll see the words as they’re translated.

05 Feb 2019

NASA cubecraft WALL-E and EVE sign off after historic Mars flyby

A NASA mission that sent two tiny spacecraft farther out than any like them before appears to have come to an end: Cubesats MarCO-A and B (nicknamed WALL-E and EVE) are no longer communicating from their positions a million and two million miles from Earth respectively.

The briefcase-sized craft rode shotgun on the InSight Mars lander launch in May, detaching shortly after leaving orbit. Before long they had gone farther than any previous cubesat-sized craft, and after about a million kilometers EVE took a great shot of the Earth receding in its wake (if wake in space were a thing).

They were near Mars when InSight made its descent onto the Red Planet, providing backup observation and connectivity, and having done that, their mission was pretty much over. In fact, the team felt that if they made it that far it would already be a major success.

“This mission was always about pushing the limits of miniaturized technology and seeing just how far it could take us,” said the mission’s chief engineer, JPL’s Andy Klesh, in a news release. “We’ve put a stake in the ground. Future CubeSats might go even farther.”

The two craft together cost less than $20 million to make, a tiny fraction of what traditionally sized orbiters and probes cost, and of course their size makes them much easier to launch as well.

However, in the end these were experimental platforms not designed to last years — or decades, like Voyager 1 and 2. The two craft have ceased communicating with mission control, and although this was expected, the cause is still undetermined:

The mission team has several theories for why they haven’t been able to contact the pair. WALL-E has a leaky thruster. Attitude-control issues could be causing them to wobble and lose the ability to send and receive commands. The brightness sensors that allow the CubeSats to stay pointed at the Sun and recharge their batteries could be another factor. The MarCOs are in orbit around the Sun and will only get farther away as February wears on. The farther they are, the more precisely they need to point their antennas to communicate with Earth.

There’s a slim chance that when WALL-E and EVE’s orbits bring them closer to the sun, they’ll power back on and send a bit more information, and the team will be watching this summer to see if that happens. But it would just be a cherry on top of a cherry at this point.

You can learn more about the MarCO project here, and all the images the craft were able to take and send back are collected here.

05 Feb 2019

Play Iconary, a simple drawing game that hides a deceptively deep AI

It may not seem like it takes a lot of smarts to play a game like Pictionary, but in fact it involves a lot of subtle and abstract visual and linguistic skills. This AI built to play a game like it is similarly complex, and its interpretations and creations when you play it (as you can now) may seem eerily human — but it’s also refreshing to have such an agent working collaboratively with you rather than beating you with superhuman skills.

Iconary, as the game’s creators at the Allen Institute for AI decided to call it to avoid lawsuits from Mattel, has you drawing and arranging icons to form phrases, or guessing at the arrangements of the computer player.

For instance, if you were to get the phrase “woman drinking milk from a glass,” you’d probably draw a woman — a stick figure, probably, and then select the “woman” icon from the computer’s interpretations of your sketch. Then you’d draw a glass, and place that near the woman. Then… milk? How do you draw milk? There is actually a milk bottle icon if you look for it, but you could also draw a cow and put that in or next to the glass.

The computer then guesses at what you’ve put together, and after a few tries it would probably get it. You can also play it the other way, where the computer arranges icons and you have to guess.

Now, let’s get this right out of the way: this is very different from Google’s superficially similar “Quick, Draw!” game. In that one, the system can only guess whether your drawing is one of a few hundred pre-selected objects it’s been specifically trained to recognize.

Not only are there some 75,000 phrases supported in Iconary, with more being added regularly, but there’s no way to train the AI on them — the way that any one of them can be represented is uncountable.

“When you start bringing in phrases, the problem space explodes,” explained Ali Farhadi, one of the creators of the project; I talked with him and researcher Aniruddha Kembhavi about Iconary ahead of its release. “Sure, you can easily recognize a cat or a dog. But can you recognize a cat purring, or a dog scratching its back? There’s a huge diversity in elements people choose and how they position them.”

Although Pictionary may seem at first like a game that depends on your drawing skill, it’s really much more about arranging ideas and understanding the relationships between them — seeing the intent behind the drawing. How else can some people manage to recognize a word or phrase from a handful of crude shapes and some arrows?

The AI behind Iconary, then, isn’t a drawing recognition engine at all but one that has been trained to recognize relationships between objects, informed by their type, position, number, and everything else. This is, the researchers say, the most significant example of AI collaborating meaningfully with humans yet created.

And this logic is kept fuzzy enough that several “person” icons gathered together could mean women, men, people, group, crowd, team, or anything else. How would you know if it was a “team?” Well, if you put a soccer ball near it or put them on a play field, it becomes obvious. If there’s a blackboard there, it’s probably a class. And so on.

Of course, I say “and so on,” but that small phrase in a way encompasses the entirety of human intuition and years of training on how to view and interpret the visual world. Naturally Iconary isn’t nearly as good at it as we are, but its logic is frequently surprisingly human.

If you can only get part of the answer, you can ask the AI to draw again, and just like we do in Pictionary it will adapt its representation to address your needs.

It was of course trained on human drawings collected via Mechanical Turk, but it isn’t just replicating what people drew. If the only thing it ever saw to represent a scientist was a man next to a microscope, how would it know to recognize the same idea in a woman, or standing next to an atom or rocket? In fact, the model has never been exposed to the phrases you can play with now. As the researchers write:

AllenAI has never before encountered the unique phrases in Iconary, yet our preliminary games have shown that our AI system is able to both successfully depict and understand phrases with a human partner with an often surprising deftness and nuance. This feat requires combining natural language understanding, computer vision, and the use of common sense to reason about phrases and their depictions within the constraints of a small vocabulary of possible icons. Being successful at Iconary requires skills beyond basic pattern recognition, including multi-hop reasoning, abstraction, collaboration, and adaptation.

Instead of simply pairing “ball” with “sport,” it learned about why those objects are related, and how to exert some basic common sense — a sort of holy grail in AI, though this is only a small step in that direction. If one person draws “father” as a man bigger than a smaller person, it isn’t obvious to the computer that the father is the big one, not the small. And it’s another logical jump that a “mother” would be a similarly-sized woman, or that the small one is a child.

But by observing how people used the objects and how they relate to one another, the AI built up a network of ideas about how different things are represented or related. “Child” is closer to “student” than “horse,” for instance. And “student” is close to “desk” and “laptop.” So if you draw a child by a desk, maybe it’s a student? This kind of robust logic is so simple to us that we don’t even recognize we’re doing it, but incredibly hard to build into a machine learning agent.
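The intuition of “closer” and “farther” concepts can be sketched with embedding vectors, where relatedness is measured by cosine similarity. To be clear, this is not Iconary’s actual model, and the 3-d vectors below are hand-invented toy values; a real system would learn high-dimensional vectors from thousands of games.

```python
import math

# Toy illustration of relatedness between concepts (NOT Iconary's actual
# model): each concept gets an embedding vector, and relatedness is the
# cosine similarity between vectors. These numbers are hand-invented.
EMBEDDINGS = {
    "child":   [0.9, 0.1, 0.2],
    "student": [0.8, 0.3, 0.1],
    "horse":   [0.1, 0.9, 0.7],
    "desk":    [0.7, 0.2, 0.0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def relatedness(w1, w2):
    return cosine(EMBEDDINGS[w1], EMBEDDINGS[w2])
```

With these toy numbers, “child” comes out far closer to “student” than to “horse,” which is the kind of signal that lets a drawing of a child by a desk suggest “student.”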

This type of AI is deceptively broad and intelligent, but it isn’t flashy the way that the human-destroying AlphaStar or AlphaGo are. It isn’t superhuman — in fact, it’s not even close to human. But board and PC games are tightly bounded problem spaces with set rules and limits. Visual expression of a complex phrase like “crowd celebrating a victory on a street” isn’t a question of how fast the computer can process, but the depth of its understanding of the concepts involved, and how others think about them.

This kind of learning is also more broadly applicable in the real world. Robots and self-driving cars do need to know how to exceed human capacity in some cases, but it’s also massively important to be able to understand the world around them in the same way people do. When it sees a person by a hospital bed holding a book, what does that mean? When a person leaves a knife out next to a whole tomato? And so on.

“Real life problems involve semantics, abstraction, and collaboration,” said Farhadi. “They involve theory of mind.”

Interestingly, the agent is biased a bit (as these things tend to be) owing to the natural bias of our language. Images “read” from left to right, as people tend to draw them, since we also read in that direction, so keep that in mind.

Try playing a couple games both drawing and guessing, and you may be surprised at the cleverness and weirdness of the AI’s suggestions. Don’t feel bad about skipping one — the agent is still learning, and sometimes its attempts to represent ideas are a bit too abstract. But I certainly found myself impressed more than baffled.

If you’d like to learn more, stay tuned: the team behind the system will be publishing a paper on it later this year. I’ll update this post when that happens.

05 Feb 2019

Showing the power of startup women’s health brands, P&G buys This is L

The P&G acquisition of This is L., a startup retailer of period products and prophylactics, shows just how profitable investing in women’s healthcare brands and products can be.

A person with knowledge of the investment put the price tag at roughly $100 million — a healthy outcome for investors and company founder Talia Frenkel. But just as important as the financial outcome are the deal’s implications for other mission-driven companies.

This is L launched from Y Combinator in August 2015 with a service distributing condoms in New York and San Francisco and steadily expanded into feminine hygiene products.

Frenkel, a former photojournalist who worked for the United Nations and Red Cross, started the company in 2013 — roughly three years after an assignment in Africa revealed the toll that HIV/AIDS was taking on women and girls on the continent.

“I didn’t realize the No. 1 killer of women was completely preventable and I think that really inspired me to action,” Frenkel told TechCrunch at the time of the company’s launch.

Now the company has distributed roughly 250 million products to customers around the world.

“Our strong growth has enabled us to stand in solidarity with women in more than 20 countries,” said Talia Frenkel, CEO of This Is L., in a statement following the acquisition. “Our support has ranged from partnering with organizations to send period products to Native communities in South Dakota, to supplying pad-making machines to a women-led business in Tamil Nadu. Pairing our purpose with P&G’s expertise, scale and resources provides an extraordinary opportunity to contribute to a more equitable world.”

The company is available in more than 5,000 stores across the U.S. and is working with women entrepreneurs in countries from Uganda to India and beyond.

“This acquisition is a perfect complement to our Always and Tampax portfolio, with its commitment to a shared mission to advocate for girls’ confidence and serve more women,” said Jennifer Davis, President, P&G Global Feminine Care. “We feel this is a strong union and together we can be a greater force for good.”

For investors with knowledge of the company, the P&G acquisition is a harbinger of things to come. The combination of a non-technical, female founder operating in the consumer packaged goods market with a mission-driven company was an anomaly in the Silicon Valley of four years ago, but Frenkel’s success shows what kind of opportunities exist in the market.

“With this acquisition investors need to update their patterns,” said one investor with knowledge of the company.

05 Feb 2019

PSA: Go back up your Flickr photos before they’re deleted

Do you have a Flickr account? Does it have over 1,000 photos?

Go back them up, or you might lose a bunch of them forever.

We’ve known for a few months now that Flickr was prepping to drop its storage limit for non-Pro accounts from 1TB to just 1,000 photos following its acquisition by SmugMug — and that anything over the 1,000 photo cap would be deleted, starting with the oldest.

If you kept telling yourself that you’d “back it all up later”, “later” is now. Flickr has said they’d start deleting things after February 5th… and, well, that’s today.

So how do you back it all up? You can go through and download them one by one, but that’s pretty painful. Fortunately, there’s a quicker way:

  1. Go to Flickr.com on a desktop browser
  2. Log in
  3. Tap your profile picture in the upper right, then hit “Settings”
  4. Scroll down, and look for “Your Flickr Data” in the bottom right.
  5. Double check that the email address listed is your current one. If not, change it.
  6. Hit the “Request my Flickr data” button.
  7. Wait.

Within a few hours, you should get an email with a big ol’ zip file with all of your pictures. Take those and put them somewhere else — an external hard drive, Google Photos, a spare SD card, all of the above, whatever. Just go back them up. Even photos that you don’t really care about now can end up meaning a lot in a few years.

SmugMug outlined its thinking on why the 1 terabyte limit wasn’t working (and how the new 1,000 photo limit was chosen) in a post back in November.

(Disclosure: Though Flickr is now owned by SmugMug, it was owned by Yahoo/Oath before that. Oath owns TechCrunch. I don’t think there’s a conflict there, I just like to make these things clear.)

05 Feb 2019

Lawsuits no longer lingering, Hippo brings its service for buying discount drugs to market

The long and litigious saga of Hippo Technologies may finally be over now that the company is launching its service to sell discounted prescription drugs to members, just three months after settling a lawsuit with its chief competitor, Blink Health.

Hippo and Blink were locked in a lawsuit for much of last year with Blink accusing the company of stealing pretty much every aspect of its business. For its part, Hippo’s chief executive and co-founder had sued Blink for wrongful termination under whistleblower protection rules after he allegedly uncovered corporate malfeasance at the discount drug membership service.

Both companies use a mobile app and online tool to help consumers find low prices on medications. In its March 2018 suit against Hippo, Blink wanted up to $250 million and had accused the company, which was founded by former Blink employees, of obtaining trade secrets, sabotaging existing contracts and unfairly competing with Blink’s business.

There’s no doubt that bad blood exists between Blink Health’s co-founders, the brothers Geoffrey and Matthew Chaiken, and Hippo co-founder Eugene Kakaulin.

A former chief financial officer at Blink, Kakaulin filed a suit in 2016 claiming that Blink had fired him in retaliation for alerting the company founders to security violations.

With most of those lawsuits now settled, Hippo is bringing its service to market. The company said that it can save patients up to 97 percent on their prescription drugs at almost any pharmacy in the country.

“People deserve to know how much they will pay for meds and get access to the lowest prices available. This is why we started Hippo,” said Kakaulin, the company’s co-founder, in a statement.

Kakaulin’s co-founder Charles A. Jacoby grew up in the healthcare business, watching his father work as a general practitioner and grapple with prescribing drugs that his patients could afford.

“Markets are only fair and efficient when people are presented with pricing options. Whether people have good insurance, bad insurance or no insurance at all, they should check the Hippo price before going to the pharmacy,” Jacoby said in a statement.

Access to low-cost medicine is a significant part of what’s broken about healthcare in the U.S. today. Blink and Hippo are among a slew of companies tackling the problem, including GoodRX, Amazon (through its PillPack acquisition) and RxSave.

Hippo and its competitors operate on a simple premise: they cut out middlemen and guide consumers to generic drugs, taking a cut of the sales from the drug manufacturers. The process saves customers money and can also generate some revenue for pharmacies that agree to work with the companies.

Pharmacy benefits managers aggregate the purchasing power of buyers through insurance networks to cut the prices that customers have to pay for their medications. But many people argue that the discounts are not significant, and most of the difference in cost just goes to line the pockets of the benefits managers themselves.

What companies like GoodRX, Hippo and Blink do is bring those benefits to anyone who signs up. Hippo gives participating pharmacies a guaranteed rate for drugs in exchange for lower prices. Sometimes the company will make money on the sale of a drug and sometimes it will lose money, but it ideally is profitable by arbitraging costs across a population.
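The cross-population arbitrage described above can be made concrete with a toy model. All figures here are invented and are not Hippo’s actual economics: the service commits to a guaranteed rate per fill, loses money where the member price sits below that rate, earns the spread where it sits above, and targets a positive margin over the whole book of fills.

```python
# Toy model of pricing arbitrage across a population of prescription fills.
# `paid` is the hypothetical guaranteed rate the service pays the pharmacy
# per fill; `charged` is the price the member pays through the service.
FILLS = [
    # (paid to pharmacy, charged to member, number of fills)
    (12.00, 10.00, 500),   # popular generic sold below cost: a loss-maker
    (8.00, 11.50, 1200),   # profitable fill
    (20.00, 19.00, 300),   # small loss
]

def portfolio_profit(fills):
    # Profit on any single drug can be negative; only the total matters.
    return sum((charged - paid) * volume for paid, charged, volume in fills)
```

With these made-up numbers, two of the three drugs lose money per fill, yet the book as a whole clears $2,900 across 2,000 fills.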

To sign up for Hippo, potential customers can text “Hello” to Hippo (44776) on their phone or visit the company’s website to receive an individual, digital Hippo card.

Users can then compare costs between medications at local pharmacies and see which location is offering the best price. Once in the pharmacy a user just shows the pharmacist their Hippo card and can start saving.

05 Feb 2019

Reddit is raising a huge round near a $3 billion valuation

Reddit is raising $150 million to $300 million to keep the front page of the Internet running, multiple sources tell TechCrunch. The forthcoming Series D round is said to be led by Chinese tech giant Tencent at a $2.7 billion pre-money valuation. Depending on how much follow-on cash Reddit drums up from Silicon Valley investors and beyond, its post-money valuation could reach an epic $3 billion.

As more people seek esoteric community and off-kilter entertainment online, Reddit continues to grow its link-sharing forums. 330 million monthly active users now frequent its 150,000 Subreddits. That warrants the boost to its valuation, which previously reached $1.8 billion when it raised $200 million in July 2017. As of then, Reddit’s majority stake was still held by publisher Conde Nast, which bought in back in 2006, just a year after the site launched. Reddit had raised $250 million previously, so the new round will push it to $400 million to $550 million in total funding.

It should have been clear that Reddit was on the prowl after a month of pitching its growth to the press and beating its own drum. In December Reddit announced it had reached 1.4 billion video views per month, up a staggering 40 percent from just two months earlier, after first launching a native video player in August 2017. And it made a big deal out of starting to sell cost-per-click ads in addition to promoted posts, cost-per-impression ads, and video ads. A 22 percent increase in engagement and a 30 percent rise in total views in 2018 pushed it past $100 million in revenue for the year, CNBC reported.

The exact details of the Series D could fluctuate before it’s formally announced, and Reddit and Tencent declined to comment. But supporting and moderating all that content isn’t cheap. The company had 350 employees just under a year ago, and is headquartered in pricey San Francisco — though in one of its cheaper but troubled neighborhoods. Until Reddit’s newer ad products rev up, it’s still relying on venture capital.

Tencent’s money will give Reddit time to hit its stride. It’s said to be kicking in the first $150 million of the round. The Chinese conglomerate owns all-in-one messaging app WeChat and is the biggest gaming company in the world thanks to ownership of League Of Legends and stakes in Clash Of Clans-maker Supercell and Fortnite developer Epic. But China’s crackdown on gaming addiction has been rough for Tencent’s valuation, and Chinese competitor Bytedance’s news reader app Toutiao has grown enormous. Both of those facts make investing in American news board Reddit a savvy diversification, even if Reddit isn’t accessible in China.

Reddit could seek to fill out its round with up to $150 million in additional cash from previous investors like Sequoia, Andreessen Horowitz, Y Combinator, or YC’s president Sam Altman. They could see potential in one of the web’s most unique and internet-native content communities. Reddit is where the real world is hashed out and laughed about by a tech savvy audience that often produces memes that cross over into mainstream culture. And with all those amateur curators toiling away for Internet points, casual users are flocking in for an edgier look at what will be the center of attention tomorrow.

Reddit has recently avoided much of the backlash hitting fellow social site Facebook, despite having to remove 1,000 Russian trolls pushing political propaganda. But in the past, the anonymous site has had plenty of problems with racist, misogynistic, and homophobic content. In 2015 it finally implemented quarantines and shut down some of the most offensive Subreddits. But harassment by users contributed to the departure of CEO Ellen Pao, who was replaced by Steve Huffman, Reddit’s co-founder. Huffman went on to abuse that power, secretly editing some user comments on Reddit to frame them for insulting the heads of their own Subreddits. He escaped the debacle with a slap on the wrist and an apology, claiming “I spent my formative years as a young troll on the Internet.”

Investors will have to hope Huffman has the composure to lead Reddit as it inevitably encounters more scrutiny as its valuation scales up. Its policy choices about what constitutes hate speech and harassment, its own company culture, and its influence on public opinion will all come under the microscope. Reddit has the potential to give a voice to great ideas at a time when flashy visuals rule the web. And as local journalism wanes, the site’s breed of vigilante web sleuths could be more in demand, for better or worse. But that all hinges on Reddit defining clear, consistent, empathetic policy that will help it surf atop the sewage swirling around the internet.

05 Feb 2019

Daily Crunch: Facebook lets you unsend recent messages

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Facebook now lets everyone unsend messages for 10 minutes

For up to 10 minutes after sending a Facebook Message, the sender can tap on it and they’ll find the delete button has been replaced by “Remove for you” and “Remove for everyone” options. If you select the latter, recipients will see an alert saying that you removed a message, and they can still flag the message.

The feature could come in handy in those moments when you realize, right after hitting send, that you’ve made an embarrassing typo or said something dumb. It won’t, however, let people change ancient history.

2. Alphabet revenues are up 22% but the stock is still dropping

The company’s beat of analyst estimates would have been a miss if not for a $1.3 billion unrealized gain “related to a non-marketable debt security.”

3. Toyota’s new car subscription company Kinto is gamifying driving behavior

Toyota has officially launched Kinto, a company first revealed late last year that will manage a car subscription program and other mobility services in Japan, including the sale and purchase of used vehicles as well as automotive repair and inspection.

4. Apple pays millions in backdated taxes to French authorities

“The French tax administration recently concluded a multi-year audit on the company’s French accounts, and those details will be published in our public accounts,” the company told Reuters. French authorities can’t confirm the transaction due to tax secrecy.

5. Self-driving truck startup Ike raises $52 million

The startup was founded by veterans of Apple and Google, as well as Uber Advanced Technologies Group’s self-driving truck program. Its mission — expand and deploy — sounds a lot like other autonomous vehicle startups, but that’s where the parallels end.

6. Facebook bans four armed groups in Myanmar

Facebook has introduced new security features and announced plans to increase its team of Burmese language content translators to 100 people. While it doesn’t intend to open an office in Myanmar, it has ramped up its efforts to expel bad actors.

7. Backed by Benchmark, Blue Hexagon just raised $31 million for its deep learning cybersecurity software

According to co-founder Nayeem Islam, Blue Hexagon has created a real-time, cybersecurity platform that he says can detect known and unknown threats at first encounter, then block them in “sub seconds” so the malware doesn’t have time to spread.