Author: azeeadmin

09 Jun 2021

Health clouds are set to play a key role in healthcare innovation

The U.S. healthcare industry is in the midst of one of the biggest transformations any industry has seen since the dot-com boom of the late 1990s. This massive change is being driven by federal mandates, technological innovation, and the need to improve clinical outcomes and communication between providers, patients, and payers.

An aging population, increase in chronic diseases, lower reimbursement rates, and a shift to value-based payments—plus the COVID-19 pandemic—have added pressure and highlighted the need for new technology to enhance virtual and value-based care.

Improving medical outcomes now requires processing massive amounts of healthcare data, and the cloud plays a pivotal role in meeting the current needs of healthcare organizations.

Challenges in healthcare

Most of today’s healthcare challenges fall into two broad categories: rapidly rising costs, and an increased burden on resources. Rising costs — and the resulting inadequacy of healthcare resources — can stem from:

An aging population: As people live longer, healthcare gets more expensive. People aged 65 and above are expected to account for 20% of the U.S. population by 2030, per the U.S. Census Bureau, and because older people spend more on healthcare, this demographic shift is expected to push costs up over time.

Prevalence of chronic illnesses: According to a National Center for Biotechnology Information report, chronic disease treatment makes up 85% of healthcare costs, and more than half of all Americans have a chronic illness (diabetes, high blood pressure, depression, lower back and neck pain, etc.).

Higher ambulatory costs: The cost of ambulatory care, including outpatient hospital services and emergency room care, increased the most of all treatment categories covered in a 2017 study in the Journal of the American Medical Association.

Rising healthcare premiums, out-of-pocket costs, and Medicare and Medicaid: Healthcare premiums rose by an estimated 54% between 2009 and 2019. The COVID-19 pandemic has spurred enrollment in government programs like Medicaid and Medicare, which has increased the overall demand for medical services, contributing to rising costs. A 2021 IRS report highlighted that a shift to high-deductible health plans — with out-of-pocket costs of up to $14,000 per family — has also increased the cost of healthcare.

Delayed care and surgeries due to COVID-19: A poll by the Kaiser Family Foundation (KFF) in May 2020 indicated that up to 48% of people had avoided or postponed medical care due to concerns about the COVID-19 pandemic, and about 11% of those people reported that their condition worsened after skipping or postponing care. Non-emergency surgeries were frequently postponed as resources were set aside for COVID-19 patients. Such delays allow treatable conditions to worsen, driving up overall costs.

A lack of pricing transparency: Without transparency, it’s difficult to know the actual cost of healthcare. A fragmented data landscape and complex medical bills leave patients without a complete view of what they are paying for.

The need to modernize

To mitigate the impact of increased costs and inadequate resources, healthcare organizations need to replace legacy IT programs and adopt modern systems designed to support rapid innovation for site-agnostic, collaborative, whole-person care — all while being affordable and accessible.

09 Jun 2021

Seoul-based Ringle raises $18M Series A for its one-on-one English tutoring platform

Many of the highest-profile English tutoring platforms focus on children, including VIPKID and Magic Ears. Ringle created a niche for itself by focusing on adults first, with courses like business English and interview preparation. The South Korea-based startup announced today it has raised an $18 million Series A led by returning investor Must Asset Management, at a valuation of $90 million. Ringle is preparing to launch a program for school kids later this year, and also plans to open offline educational spaces in South Korea and the United States.

Other participants in the round, which brings Ringle’s total raised to $20 million, include returning investors One-asset management and MoCA Ventures, along with new backer Xoloninvest. Ringle claims its revenue has grown three times every year since it was founded in 2015, and that bookings for lessons have increased by 390% compared to the previous year.

Ringle currently has 700 tutors, who are pre-screened by the company, and 100,000 users. About 30% of its students, who learn through one-on-one live video sessions, are based outside of Korea, including in the U.S., the United Kingdom, Japan, Australia and Singapore.

Ringle’s co-founders are Seunghoon Lee and Sungpah Lee, who both earned MBAs from the Stanford University Graduate School of Business. They developed Ringle based on the challenges they faced as non-native English speakers and graduate students in the U.S. The startup was first created to serve professionals who are already established in their careers or in academia. Its students include people who have worked for companies like Google, Amazon, BCG, McKinsey and Samsung Electronics.

Seunghoon Lee told TechCrunch that Ringle creates proprietary learning materials based on current events to keep its students interested. For example, recent topics have included blockchain and NFT technology, how the movie “Parasite” portrayed class conflict, and global inequalities in vaccine access.

Ringle’s tutors are recruited from top universities and need to submit proof of education and verify their school emails. The company’s vetting process also includes a mock session with Ringle staff. Lee said applicants are asked to familiarize themselves with some of Ringle’s learning materials and lead a full lesson based on its guidance. Ringle assesses candidates on their teaching skills and ability to lead engaging discussions that also hone their students’ language skills.

Part of Ringle’s new funding is earmarked for its tech platform. With researchers at KAIST (Korea Advanced Institute of Science and Technology), it is developing a language diagnostics system that tracks the complexity and accuracy of students’ spoken English.

The company already has an AI-based analytics system that uses speech-to-text to measure speech pacing (words spoken per minute), the number of filler words, and the range of words and expressions used in lessons. It delivers feedback that lets students compare their performance on each metric with the top 20% of Ringle users.
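Ringle hasn’t published how those metrics are computed, but the underlying idea is straightforward. Here is a minimal sketch of how pacing, filler-word and vocabulary-range metrics could be derived from a speech-to-text transcript; the filler list and metric definitions are illustrative assumptions, not Ringle’s actual implementation:

```python
# Illustrative sketch: derive speech-pacing and filler-word metrics
# from a transcribed lesson. The filler list and metric definitions
# are assumptions for illustration, not Ringle's actual system.

FILLERS = {"um", "uh", "like"}

def speech_metrics(transcript: str, duration_minutes: float) -> dict:
    """Return words-per-minute, filler ratio, and a vocabulary-range proxy."""
    words = transcript.lower().split()
    filler_count = sum(1 for w in words if w in FILLERS)
    return {
        "words_per_minute": len(words) / duration_minutes,
        "filler_ratio": filler_count / len(words) if words else 0.0,
        "unique_words": len(set(words)),  # crude proxy for range of expressions
    }

m = speech_metrics("um I think the uh market will grow", 0.5)
print(m["words_per_minute"])  # 8 words over half a minute -> 16.0
```

A production system would work on timestamped tokens from the speech-to-text engine, but per-minute rates and per-word ratios like these are the natural building blocks for the percentile comparisons Ringle describes.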

The new language diagnostics system under development with KAIST will begin rolling out features over the next few months, including speech fluency scoring, a personalized dictionary and auto-paraphrasing suggestions.

The funding will also be used to create more original learning content, and hire for Ringle’s offices in Seoul and San Mateo, California. Ringle also plans to diversify its revenue sources by providing premium content on a subscription basis, and will launch its junior program for students aged 10 and above later this year.

 


09 Jun 2021

A revival at the intersection of open source and open standards

Our world has big problems to solve, and something desperately needed in that pursuit is the open-source and open-standards communities working together.

Let me give you a stark example, taken from the harsh realities of 2020. Last year, the United States experienced nearly 60,000 wildland fires that burned more than 10 million acres, resulting in more than 9,500 homes destroyed and at least 43 lives lost.

I served as a volunteer firefighter in California for 10 years and witnessed firsthand the critical importance of technology in helping firefighters communicate efficiently and deliver safety-critical information quickly. Typically, multiple agencies show up to fight these fires, bringing with them radios made by different manufacturers that each use proprietary software to set radio frequencies. As a result, reprogramming these radios so that teams can communicate with one another is an unnecessarily slow — and potentially life-threatening — process.

If the radio manufacturers had instead all contributed to an open-source implementation conforming to a standard, the radios could have been quickly aligned to the same frequencies. Radio manufacturers could have provided a valuable, life-saving tool rather than a time-wasting obstacle, and they could have shared the cost of developing such software. In this situation, like so many others, there is no competitive advantage to be gained from proprietary radio-programming software and many priceless benefits to gain by standardizing.
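To make the interoperability argument concrete, here is a minimal sketch of what a shared, openly specified channel plan could look like: any manufacturer whose programming software reads the same format can retune its radios from a single file. The format, field names and frequencies below are invented for illustration and do not represent any real radio standard:

```python
# Hypothetical illustration of an openly specified channel plan that any
# manufacturer's programming software could read. The format, field names
# and frequencies are invented for illustration, not a real standard.
import json

CHANNEL_PLAN = """
{
  "incident": "example-wildfire",
  "channels": [
    {"name": "COMMAND", "mhz": 154.280},
    {"name": "TACTICAL-1", "mhz": 154.265}
  ]
}
"""

def program_radio(plan_json: str) -> dict:
    """Parse the shared plan into a name -> frequency map that any
    vendor's firmware could apply, regardless of manufacturer."""
    plan = json.loads(plan_json)
    return {ch["name"]: ch["mhz"] for ch in plan["channels"]}

freqs = program_radio(CHANNEL_PLAN)
print(freqs["COMMAND"])  # 154.28
```

The point is not the format itself but that, once every vendor parses the same specification, retuning a mixed fleet of radios becomes a file transfer rather than a per-device reprogramming exercise.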

Open source and open standards are obviously different, but the objectives of these communities are the same: interoperability, innovation and choice.

The benefit of coherent standards and corresponding open-source implementations is not unique to safety-critical situations like wildfires. There are many areas of our lives that could significantly benefit from a better integration of standards and open source.

Open source and open standards: What’s the difference?

“Open source” describes software that is publicly accessible and free for anyone to use, modify and share. It also describes a collaborative, community-oriented software development philosophy, with an open exchange of ideas, open participation, rapid prototyping, and open governance and transparency.

By contrast, the term “standard” refers to agreed-upon definitions of functionality. These requirements, specifications and guidelines ensure that products, services and systems perform in an interoperable way with quality, safety and efficiency.

Dozens of organizations exist for the purpose of establishing and maintaining standards. Examples include the International Organization for Standardization (ISO), the European Telecommunications Standards Institute (ETSI), and the World Wide Web Consortium (W3C). OASIS Open belongs in this category as well. A standard is “open” when it is developed via a consensus-building process, guided by organizations that are open, fair and transparent. Most people would agree that the standard-building process is careful and deliberate, ensuring consensus through compromise and resulting in long-lasting specifications and technical boundaries.

Where’s the common ground?

Open source and open standards are obviously different, but the objectives of these communities are the same: interoperability, innovation and choice. The main difference is how they accomplish those goals, and by that I’m referring primarily to culture and pace.

Chris Ferris, an IBM fellow and CTO of Open Technology, recently told me that with standards organizations, it often seems the whole point is to slow things down. Sometimes it’s with good reason, but I’ve seen competition get the best of people, too. Open source seems to be much more collaborative and less contentious or competitive. That doesn’t mean that there aren’t competitive projects out there that are tackling the same domain.

Another culture characteristic that affects pace is that open source is about writing code and standards organizations are about writing prose. Words outlive code with respect to long-term interoperability, so the standards culture is much more deliberate and thoughtful as it develops the prose that defines standards. Although standards are not technically static, the intent with a standard is to arrive at something that will serve without significant change for the long term. Conversely, the open-source community writes code with an iterative mindset, and the code is essentially in a state of continuous evolution. These two cultures sometimes clash when the communities try to move in concert.

If that’s the case, why try to find harmony?

Collaboration between open source and open standards will fuel innovation

The internet is a perfect example of what harmony between the open-source and open-standards communities can achieve. When the internet began as ARPANET, it relied on common shared communications standards that predated TCP/IP. With time, standards and open-source implementations brought us TCP/IP, HTTP, NTP, XML, SAML, JSON and many others, and also enabled the creation of additional key global systems implemented in open standards and code, like disaster warnings (OASIS CAP) and standardized global trade invoicing (OASIS UBL).

The internet has literally transformed our world. That level of technological innovation and transformative power is possible for the future, too, if we re-energize the spirit of collaboration between the open-standards and open-source communities.

Finding harmony and a natural path of integration

With all of the critical open-source projects residing in repositories today, there are many opportunities for collaboration on associated standards to ensure the long-term operability of that software. Part of our mission at OASIS Open is identifying those open-source projects and giving them a collaborative environment and all the scaffolding they need to build a standard without it becoming a difficult process.

Another point Ferris shared with me is the necessity for this path of integration to grow. For instance, this need is particularly prevalent if you want your technology to be used in Asia: If you don’t have an international standard, Asian enterprises don’t even want to hear from you. We’re seeing the European community asserting a strong preference for standards as well. That is certainly a driver for open-source projects that want to play with some of the heavy hitters in the ecosystem.

Another area where you can see a growing need for integration is when an open-source project becomes bigger than itself, meaning it begins to impact a whole lot of other systems, and alignment is needed between them. An example would be a standard for telemetry data, which is now being used for so many different purposes, from observability to security. Another example is the software bill of materials, or SBOM. I know some things are being done in the open-source world to address the challenge of tracking the provenance of software. This is another case where, if we’re going to be successful at all, we need a standard to emerge.
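The SBOM case illustrates what such a standard has to pin down: at its core, an SBOM is a machine-readable inventory of a program’s components and versions that any consumer can parse to check provenance. The sketch below is deliberately simplified for illustration; it is not the actual SPDX or CycloneDX schema:

```python
# Minimal illustration of what an SBOM captures: a machine-readable
# inventory of components and versions. The record layout here is a
# simplified illustration, not the actual SPDX or CycloneDX schema.

def make_sbom(name: str, components: list[tuple[str, str]]) -> dict:
    """Build a simple inventory record for a product and its components."""
    return {
        "product": name,
        "components": [{"name": c, "version": v} for c, v in components],
    }

sbom = make_sbom("example-app", [("openssl", "3.0.8"), ("zlib", "1.2.13")])
print(len(sbom["components"]))  # 2
```

Without an agreed-upon schema, every producer would emit a different shape of this record and every consumer would need bespoke parsing — which is exactly the alignment problem a standard resolves.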

It’s going to take a team effort

Fortunately, the ultimate goals of the open-source and open-standards communities are the same: interoperability, innovation and choice. We also have excellent proof points of how and why we need to work together, from the internet to Topology and Orchestration Specification for Cloud Applications (TOSCA) and more. In addition, major stakeholders are carrying the banner, acknowledging that for certain open-source projects we need to take a strategic, longer-term view that includes standards.

That’s a great start to a team effort. Now it’s time for foundations to step up to the plate and collaborate with each other and with those stakeholders.


09 Jun 2021

“Alzheimer’s is open for business”: Controversial FDA approval could pave the way for future drugs

On Monday, a 17-year drought in the world of Alzheimer’s drugs ended with the FDA approval of Biogen’s Aduhelm (aducanumab). The controversy behind the FDA’s decision was considerable, but it doesn’t seem to be spooking drug developers, who are now zeroing in on the degenerative brain disease.

In a nutshell, the approval of Aduhelm came after conflicting results from clinical trials. In November 2020, an independent FDA advisory committee recommended against endorsing the drug, but in June the agency approved it anyway via its Accelerated Approval Program.

Aduhelm is now the first novel treatment to address one underlying cause of Alzheimer’s – beta-amyloid plaques that accumulate in the brain. 

The drug received support from patient and industry groups (the FDA also noted that the “need for treatment is urgent” in a statement explaining the agency’s choice). Still, a number of doctors have expressed concern, and one member of the expert committee that voted against recommending Aduhelm back in November has resigned since the announcement.

However, the inconsistent science and the highly public debate around Aduhelm’s approval don’t seem to have dampened enthusiasm within the pharmaceutical industry. Rather, the decision may signal a new wave of treatments over the next few years, piggybacking off the approval of Aduhelm (however controversial that approval may be).

“This is great news for investors and for drug companies that are working towards new drugs,” says Alison Ward, a research scientist at the USC Schaeffer Center for Health Policy and Economics. 

Historically, a few factors have made the development of a drug for Alzheimer’s an uphill battle.

The first is a 17-year history of failure to bring a drug through clinical trials. Even Biogen’s clinical trials for Aduhelm were halted in 2019 because it wasn’t clear that they would reach their clinical endpoints (effectively, the target outcomes of the trial). In fact, Aduhelm was approved based on a “surrogate endpoint,” the decline of beta-amyloid, not the primary endpoint, cognitive function.

Trials for Alzheimer’s drugs have also historically been expensive. A 2018 paper in Alzheimer’s and Dementia: Translational Research and Clinical Interventions (a journal run by the Alzheimer’s Association) estimated that the cost of developing an Alzheimer’s drug was about $5.6 billion. By comparison, the mean investment needed to bring a new drug to market is about $1.3 billion, according to an analysis of SEC filings for companies that applied for FDA approval between 2009 and 2018 (though the median cost was about $985 million). Older estimates have put the cost of bringing a drug to market at $2.8 billion.

For Alzheimer’s specifically, phase 3 trials are still largely sponsored by industry, but over the past five years, trials sponsored solely by industry have decreased. Government grants and funding via public-private partnerships have made up an increasing share of available funds.

Martin Tolar, the founder and CEO of Alzheon, another company pursuing an oral treatment for Alzheimer’s (currently in a phase 3 clinical trial), says that attracting other forms of funding was a challenge.

“It was impossible to finance anything,” he says. “It was impossible to get Wall Street interested because everything was failing one after the other after the other.” 

He expects the recent approval of Aduhelm to change that outlook considerably. There are already signs of increased interest in companies with drugs in phase 3 clinical trials: after the FDA announcement, shares of Eli Lilly, which is also running a phase 3 trial, surged by 10 percent.

“I’ve had probably hundreds of discussions, of calls, from bankers, investors, collaborators, pharma, you name it,” Tolar says. “Alzheimer’s is open for business.”

With renewed interest and what appears like a pathway to approval at the FDA, the environment for the next generation of Alzheimer’s drugs seems to be ripening. Right now, there are about 130 phase 3 clinical trials on Alzheimer’s drugs that are either completed, active or recruiting. 

Tolar sees the FDA decision, based on imperfect data, as a “signal of urgency” to approve the new treatments that are on the way.

As Ward pointed out in a white paper on in-class drug innovation, “follow on” drugs often go on to become leaders in the industry, especially if they demonstrate better safety or efficacy than the drug that was first to market. That, the paper argues, suggests a first approval may “pave the way” for more effective drugs in the future.

In the case of Alzheimer’s, it might not be one drug that dominates, even as more get approved, she notes. Rather, a cadre of new, approved drugs may go on to complement one another.

“The way that the medical community is thinking of AD [Alzheimer’s Disease] now is that it’s likely going to be a combination of drugs or a cocktail of drugs that comes together to have true success at delaying progression,” she says.  

“If we’re looking to treat AD with a cocktail of drugs, history suggests it’s individually approved drugs that come together to make those drug cocktails.” 

There are still some potential pitfalls for future drugs to consider. One argument is that with an approved drug available, it may be more difficult to recruit participants for clinical trials, slowing the pace of drug discovery. Ward argues that this effect will ultimately be dwarfed by the number of patients who will seek a potential Alzheimer’s diagnosis now that there’s something to treat it with.

There’s also the fact that Aduhelm’s costs are high (about $56,000 for a year’s supply, the brunt of which will be borne by Medicare), and the data remains questionable. Those factors may push patients towards other drugs, even if they’re in clinical trials. 

Additionally, there is the question of how well Aduhelm actually performs during the critical follow-up study mandated by the FDA as a condition of the drug’s approval. Whether Aduhelm can truly slow cognitive decline, as well as reduce levels of beta-amyloid in the brain, remains questionable based on current data.

Still, Tolar doesn’t see the results of that study as particularly relevant because the industry will have moved on. Biogen CEO Michel Vounatsos has said it may not share results of this trial for as many as nine years, though he noted the company would try to deliver data sooner. 

 “There will be better drugs by then,” Tolar predicts. 

Tolar’s phase 3 clinical trial just began dosing this week and is scheduled to end by 2024.

Biogen and Eisai will likely also have another drug ready for evaluation by then, as two phase 3 clinical trials for another beta-amyloid antibody treatment called lecanemab are scheduled for completion in 2024 and 2027, respectively.

The signal sent by Monday’s approval may be a pathway for future drugs, rather than an end in itself. The data is imperfect, the costs high, and the controversy considerable, but the band-aid has been ripped off.

09 Jun 2021

Joby Aviation eyes Asia and Europe as early markets alongside North America

While electric vertical take-off and landing passenger aircraft startup Joby Aviation is targeting North America for its initial commercial launch, founder and CEO JoeBen Bevirt expects the company to have an early presence in Asia and Europe as well.

Bevirt, who joined TC Sessions: Mobility 2021 on June 9, didn’t give away the first location, although recent announcements suggest the list has been narrowed to Los Angeles, Miami, New York and the San Francisco Bay Area. But he did weigh in on what those first cities will look like.

“I imagine that we will have early markets in each of the three regions,” he said. “Our initial launch market will be in North America just for proximity to the nexus of where most of our team is currently. But there are incredible opportunities and cities around the world and we want to provide as much benefit to as many people as quickly as we possibly can. And so that’s why we’re so focused on scaling manufacturing.”

Joby Aviation is expected to begin construction on a 450,000-square-foot manufacturing facility, designed in conjunction with Toyota, later this year. The company has completed a pilot manufacturing facility already.

Joby, once a secretive startup, has had a far more public six months of late. The company reached a deal to merge with special purpose acquisition company Reinvent Technology Partners, formed by well-known investor and LinkedIn co-founder Reid Hoffman, Michael Thompson and Zynga founder Mark Pincus. Hoffman also joined Bevirt at the TC Sessions: Mobility event.

Prior to its SPAC deal, Joby had gained attention and investors over the years as it developed its eVTOL. Toyota became an important backer and partner, leading a $620 million Series C round of funding in January 2020. Nearly a year later, Joby acquired Uber’s air taxi moonshot Elevate as part of a complex deal.

Today, Joby is focused on certification, which it has been working on with the FAA since 2018, as well as manufacturing its eVTOL aircraft. The company is also starting to put the pieces together for how and where it will operate. And that’s adding to its size. In the past year, Joby has doubled its workforce, which now sits at about 800.

Earlier this month, Joby Aviation announced a partnership with REEF Technology, one of the country’s largest parking garage operators, and real estate acquisition company Neighborhood Property Group to build out its network of vertiports, with an initial focus on Los Angeles, Miami, New York and the San Francisco Bay Area.

When 2024 arrives, Bevirt anticipates launching in one to two cities in that first year of operation. 

“We do want to provide sufficient depth of coverage that consumers get to experience the transformative experience,” Bevirt said. “There have been cases where if a new service launches, and there’s not enough supply, consumers can be frustrated, right. And so we want to make sure we can, that we can really service, at least a portion of the demand and provide a really gratifying experience to our customers. I think that that’s the piece that we really care about as a company, is making customers into raving fans.”

09 Jun 2021

Spotlight gets more powerful in iOS 15, even lets you install apps

With the upcoming release of iOS 15 for Apple mobile devices, Apple’s built-in search feature known as Spotlight will become a lot more functional. In what may be one of its bigger updates since it introduced Siri Suggestions, the new version of Spotlight is becoming an alternative to Google for several key queries, including web images and information about actors, musicians, TV shows and movies. It will also now be able to search across your photo library, deliver richer results for contacts, and connect you more directly with apps and the information they contain. It even allows you to install apps from the App Store without leaving Spotlight itself.

Spotlight is also more accessible than ever before.

Years ago, in iOS 7, Spotlight moved from its dedicated page to the left of the Home screen to being available with a swipe down in the middle of any screen, which helped grow user adoption. Now, it’s available with the same swipe-down gesture on the iPhone’s Lock Screen, too.

Apple showed off a few of Spotlight’s improvements during its keynote address at its Worldwide Developer Conference, including the search feature’s new cards for looking up information on actors, movies and shows, as well as musicians. This change alone could redirect a good portion of web searches away from Google or dedicated apps like IMDb.

For years, Google has been offering quick access to common searches through its Knowledge Graph, a knowledge base that allows it to gather information from across sources and then use that to add informational panels above and to the side of its standard search results. Panels on actors, musicians, shows and movies are available as part of that effort.

But now, iPhone users can just pull up this info on their home screen.

The new cards include more than the typical Wikipedia bio and background information you may expect — they also showcase links to where you can listen to or watch content from the artist, actor, movie or show in question. They include news articles, social media links, official websites, and even direct you to where the searched person or topic may be found inside your own apps. (e.g., a search for “Billie Eilish” may direct you to her tour tickets inside SeatGeek, or a podcast where she’s a guest).

Image Credits: Apple

For web image searches, Spotlight also now allows you to search for people, places, animals, and more from the web — eating into another search vertical Google today provides.

Image Credits: iOS 15 screenshot

Your personal searches have been upgraded with richer results, too, in iOS 15.

When you search for a contact, you’ll be taken to a card that does more than show their name and how to reach them. You’ll also see their current status (thanks to another iOS 15 feature), as well as their location from Find My, your recent conversations on Messages, your shared photos, calendar appointments, emails, notes, and files. It’s almost like a personal CRM system.

Image Credits: Apple

Personal photo searches have also been improved. Spotlight now uses Siri intelligence to allow you to search your photos by the people, scenes, and elements in your photos, as well as by location. And it’s able to leverage the new Live Text feature in iOS 15 to recognize the text in your photos and return relevant results.

This could make it easier to pull up photos where you’ve screenshotted a recipe, a store receipt, or even a handwritten note, Apple said.

Image Credits: Apple

A couple of features related to Spotlight’s integration with apps weren’t mentioned during the keynote.

Spotlight will now display action buttons on the Maps results for businesses that will prompt users to engage with that business’s app. In this case, the feature is leveraging App Clips, which are small parts of a developer’s app that let you quickly perform a task even without downloading or installing the app in question. For example, from Spotlight you may be prompted to pull up a restaurant’s menu, buy tickets, make an appointment, order takeout, join a waitlist, see showtimes, pay for parking, check prices and more.

The feature will require the business to support App Clips in order to work.

Image Credits: iOS 15 screenshot

Another under-the-radar change — but a significant one — is the new ability to install apps from the App Store directly from Spotlight.

This could prompt more app installs, as it reduces the steps from a search to a download, and makes querying the App Store more broadly available across the operating system.

Developers can additionally choose to add a few lines of code to their apps to make data from the app discoverable within Spotlight and customize how it’s presented to users. This means Spotlight can work as a tool for searching content from inside apps — another way Apple is redirecting users away from traditional web searches in favor of apps.

However, unlike Google’s search engine, which relies on crawlers that browse the web to index the data it contains, Spotlight’s in-app search requires developer adoption.

Still, it’s clear Apple sees Spotlight as a potential rival to web search engines, including Google’s.

“Spotlight is the universal place to start all your searches,” said Apple SVP of Software Engineering Craig Federighi during the keynote event.

Spotlight, of course, can’t handle “all” your searches just yet, but it appears to be steadily working towards that goal.


09 Jun 2021

Decades-old ASCII adventure Nethack may hint at the future of AI

Machine learning models have already mastered Chess, Go, Atari games, and more, but in order for AI to ascend to the next level, researchers at Facebook intend for it to take on a different kind of game: the notoriously difficult and infinitely complex Nethack.

“We wanted to construct what we think is the most accessible ‘grand challenge’ with this game. It won’t solve AI, but it will unlock pathways towards better AI,” said Facebook AI Research’s Edward Grefenstette. “Games are a good domain to find our assumptions about what makes machines intelligent and break them.”

You may not be familiar with Nethack, but it’s one of the most influential games of all time. You’re an adventurer in a fantasy world, delving through the increasingly dangerous depths of a dungeon that’s different every time. You must battle monsters, navigate traps and other hazards, and meanwhile stay on good terms with your god. It’s the first “roguelike” (after Rogue, its immediate and much simpler predecessor) and arguably still the best — almost certainly the hardest.

(It’s free, by the way, and you can download and play it on nearly any platform.)

Its simple ASCII graphics, using a g for a goblin, an @ for the player, lines and dots for the level’s architecture, and so on, belie its incredible complexity. That’s because Nethack, which made its debut in 1987, has been under active development ever since, with its shifting team of developers expanding its roster of objects and creatures, its rules, and the countless interactions between them all.

And this is part of what makes Nethack such a difficult and interesting challenge for AI: It’s so open-ended. Not only is the world different every time, but every object and creature can interact in new ways, most of them hand-coded over decades to cover every possible player choice.

Nethack with a tile-based graphics update – all the information is still available via text.

“Atari, Dota 2, StarCraft 2… the solutions we’ve had to make progress there are very interesting. Nethack just presents different challenges. You have to rely on human knowledge to play the game as a human,” said Grefenstette.

In these other games, there’s a more or less obvious strategy to winning. Of course it’s more complex in a game like Dota 2 than in an Atari game, but the idea is the same — there are pieces the player controls, a game board or environment, and win conditions to pursue. That’s kind of the case in Nethack, but it’s weirder than that. For one thing, the game is different every time, and not just in the details.

“New dungeon, new world, new monsters and items, you don’t have a save point. If you make a mistake and die you don’t get a second shot. It’s a bit like real life,” said Grefenstette. “You have to learn from mistakes and come to new situations armed with that knowledge.”

Drinking a corrosive potion is a bad idea, of course, but what about throwing it at a monster? Coating your weapon with it? Pouring it on the lock of a treasure chest? Diluting it with water? We have intuitive ideas about these actions, but a game-playing AI doesn’t think the way we do.

The depth and complexity of the systems in Nethack are difficult to explain, but that diversity and difficulty make the game a perfect candidate for a competition, according to Grefenstette. “You have to rely on human knowledge to play the game,” he said.

For many years, people have been designing bots to play Nethack that rely not on neural networks but on decision trees as complex as the game itself. The team at Facebook Research hopes to engender a new approach by building a training environment that people can use to test machine learning-based game-playing algorithms.

Nethack screens with labels showing what the AI is aware of.

The Nethack Learning Environment was actually put together last year, but the Nethack Challenge is only just now getting started. The NLE is basically a version of the game embedded in a dedicated computing environment that lets an AI interact with it through text commands (directions, actions like attack or quaff).
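That interaction model is the standard reinforcement-learning loop: reset the environment, read an observation, issue an action, collect a reward, repeat until the agent dies (or ascends). Here's a minimal, runnable sketch of that loop — note that `MockDungeon` and its action list are invented stand-ins for illustration; the real NLE exposes a Gym-style interface over actual NetHack, with the game's text screen as the observation.

```python
# Toy stand-in illustrating the reset/step loop the NLE exposes.
# The real NLE returns NetHack's text screen as observations; this
# mock just counts turns so the control flow is runnable anywhere.
import random


class MockDungeon:
    """Hypothetical stand-in for an NLE-style environment."""

    ACTIONS = ["north", "south", "east", "west", "search", "quaff"]

    def __init__(self, max_turns=100):
        self.max_turns = max_turns
        self.turn = 0

    def reset(self):
        """Start a fresh episode and return the initial text observation."""
        self.turn = 0
        return "@ you stand at the dungeon entrance"

    def step(self, action):
        """Advance one turn; return (observation, reward, done)."""
        self.turn += 1
        reward = 1.0 if action == "search" else 0.0   # toy reward signal
        done = self.turn >= self.max_turns            # permadeath: no save point
        return f"turn {self.turn}: you go {action}", reward, done


def run_episode(env, seed=0):
    """Random agent — the baseline most such experiments start from."""
    rng = random.Random(seed)
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done = env.step(rng.choice(env.ACTIONS))
        total += reward
    return total


if __name__ == "__main__":
    print(run_episode(MockDungeon()))
```

Because everything flows through a loop like this over plain text, episodes can be simulated orders of magnitude faster than any human could play.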

It’s a tempting target for ambitious AI designers. While games like StarCraft 2 may enjoy a higher profile in some ways, Nethack is legendary and the idea of building a model on completely different lines from those used to dominate other games is an interesting challenge.

It’s also, as Grefenstette explained, a more accessible one than many in the past. If you wanted to build an AI for StarCraft 2, you needed a lot of computing power available to run visual recognition engines on the imagery from the game. But in this case the entire game is transmitted via text, making it extremely efficient to work with. It can be played thousands of times faster than any human could with even the most basic computing setup. That leaves the challenge wide open to individuals and groups who don’t have access to the kind of high-power setups necessary to power other machine learning methods.

“We wanted to create a research environment that had a lot of challenges for the AI community, but not restrict it to only large academic labs,” he said.

For the next few months, NLE will be available for people to test on, and competitors can basically build their bot or AI by whatever means they choose. But when the competition itself starts in earnest on October 15, they’ll be limited to interacting with the game in its controlled environment through standard commands — no special access, no inspecting RAM, etc.

The goal of the competition will be to complete the game, and the Facebook team will track how many times the agent “ascends,” as it’s called in Nethack, in a set amount of time. But “we’re assuming this is going to be zero for everyone,” Grefenstette admitted. After all, this is one of the hardest games ever made, and even humans who have played it for years have trouble winning even once in a lifetime, let alone several times in a row. There will be other scoring metrics to judge winners in a number of categories.

The hope is that this challenge provides the seed of a new approach to AI, one that more fundamentally resembles actual human thinking. Shortcuts, trial and error, score-hacking, and zerging won’t work here — the agent needs to learn systems of logic and apply them flexibly and intelligently, or die horribly at the hands of an enraged centaur or owlbear.

You can check out the rules and other specifics of the Nethack Challenge here. Results will be announced at the NeurIPS conference later this year.

09 Jun 2021

Todd and Rahul’s Angel Fund closes new $24 million fund

After making investments in 57 startups together, Superhuman CEO Rahul Vohra and Eventjoy founder Todd Goldberg are back at it with a new $24 million fund and big ambitions amid a venture capital renaissance with fast-moving deals aplenty.

“Todd and Rahul’s Angel Fund” announced its first $7.3 million fund just weeks before the pandemic hit stateside last year, and the pair were soon left with more access to deals than they had funding to support; they went on to raise $3.5 million in a rolling fund designed around making investments in later-stage deals beyond seed and Series A rounds.

“We closed right before Covid hit and we had one plan but then everything accelerated,” Goldberg tells TechCrunch. “A lot of our companies started raising additional rounds.”

With their latest raise, Vohra and Goldberg are looking to maintain their wide outlook with a single fund, saying they plan to invest three-quarters of the fund in early stage deals while saving a quarter of the $24 million for later stage opportunities. Still, the duo know they likely could’ve chosen to raise more.

“A lot of our peers were scaling up into much larger funds,” Vohra says. “For us, we wanted to stay small and collaborative.”

Some of the firm’s investments from their first fund include NBA Top Shot creator Dapper Labs, open source Firebase alternative Supabase, D2C liquor brand Haus, alternative asset platform Alt, biowearable maker Levels and location analytics startup Placer. Their biggest hit was an early investment in audio chat app Clubhouse before Andreessen Horowitz led its buzzy seed round at a $100 million valuation. Clubhouse most recently raised at a $4 billion valuation.

The pair say they’ve learned a ton through the past year of navigating increasingly competitive rounds and that fighting for those deals has helped the duo hone how they market themselves to founders.

“You never want to be a passive check,” Goldberg says. “We do three things: help companies find product/market fit, we help them super-charge distribution … and we help them find the best investors.”

A big part of the firm’s appeal to founders has been the “operator” status of its founders. Goldberg’s startup Eventjoy was acquired by Ticketmaster, and Vohra’s Rapportive was bought by LinkedIn, while his current startup Superhuman has maintained buzz for its premium email service and has raised $33 million from investors including Andreessen Horowitz and First Round Capital.

Their new fund has an unusual LP base that’s made up of over 110 entrepreneurs and investors, including 40 founders that Vohra and Goldberg have previously backed themselves. Backers of their second fund include Plaid’s William Hockey, Behance’s Scott Belsky, Haus’s Helena Price Hambrecht, Lattice’s Jack Altman and Loom’s Shahed Khan.