Year: 2018

16 Apr 2018

Domino’s will now deliver to 150,000 parks, pools and other non-traditional locations

Domino’s will now deliver your pizza to the beach – well, sort of! Or the park, the sports field, that one gas station down the road, or some notable landmark in your city where it will be easy for your delivery driver to find you, among other places. The company announced today the launch of over 150,000 “Domino’s Hotspots,” which are locations that don’t have a traditional delivery address, like a home or business address. Instead, hotspots are just places where customers can meet up with their driver to accept a delivery order, when they’re not at home or work.

But while the headlines proclaim Domino’s is coming to you at the beach or park, don’t expect the delivery driver to traipse across the hot sand to your towel or down a walking path into the woods – there are limits to how off-the-grid these deliveries will go.

Instead, “beach” and “park” deliveries are about getting pizza to your unconventional but easily accessible location – like poolside at your beach hotel, a public beach access parking lot, or a covered shelter at a park. (I scanned a good chunk of the Florida coast and found no hotspots actually delivering “beachside.” Sorry!)

Of course, it’s been possible to convince a delivery driver to meet you somewhere unusual in the past. But you’d normally have to negotiate that over the phone – with varying degrees of success and confusion. With Domino’s Hotspots, however, you’ll actually be able to search for these meet-up spots online or in the Domino’s mobile app and place an order digitally.

In addition to non-traditional locations like parks and pools, other hotspots available at launch include the Tommy Lasorda Field of Dreams in Los Angeles and the James Brown statue in Augusta, Georgia, for example. We also poked around in the app and found hotspots for locations like the pool and clubhouse in a neighborhood or hotel, the office at schools and colleges, and spots nearby other businesses, like grocery stores, malls, gas stations, and more.

Those latter locations are the kind of places where people outside of Domino’s delivery zones in more rural areas will often meet up to take a pizza delivery.

To use the new feature, customers will select the hotspot as the delivery destination in the app or online, then leave additional instructions that can help the driver find them, including their mobile phone number. When their order is complete, they’ll get text message alerts about the delivery’s progress, including the estimated time of arrival at the hand-off spot.
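
For illustration only, here is a minimal sketch of how such a hotspot hand-off might be modeled in code; the data structure, field names and ETA message are hypothetical and do not come from Domino’s actual app or API.

from dataclasses import dataclass


@dataclass
class Hotspot:
    """A delivery meet-up point without a traditional street address (hypothetical model)."""
    hotspot_id: str
    name: str          # e.g. "Covered shelter, Riverside Park"
    latitude: float
    longitude: float


@dataclass
class HotspotOrder:
    """An order destined for a hotspot rather than a home or business address (hypothetical)."""
    order_id: str
    hotspot: Hotspot
    customer_phone: str       # used for the text alerts described above
    meetup_instructions: str  # free-text note to help the driver find the customer


def eta_alert(order: HotspotOrder, minutes_out: int) -> str:
    """Compose the kind of progress text a customer might receive."""
    return (f"Your order {order.order_id} is about {minutes_out} minutes from "
            f"{order.hotspot.name}. Note for driver: {order.meetup_instructions}")


spot = Hotspot("HS-001", "Covered shelter, Riverside Park", 27.77, -82.64)
order = HotspotOrder("A1234", spot, "+1-555-0100", "Blue umbrella near the east entrance")
print(eta_alert(order, 8))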

Despite the name, Domino’s Hotspots don’t offer a Wi-Fi connection. So plan to bring a phone with a good signal so you can receive those alerts.

“We listened to customers and their need for pizza delivery to locations without a traditional address,” said Russell Weiner, president of Domino’s USA, in a statement about the launch. “We know that delivery is all about convenience, and Domino’s Hotspots are an innovation that is all about flexible delivery options for customers.”

In recent years, Domino’s has invested heavily in digital, including its Anywhere platform, which allows for ordering via SMS, voice, smart TVs, connected devices, cars, smartwatches, chat apps like Messenger and Slack, and more. It also offers in-app order tracking, and experiments with future tech, like delivery by robot and drone.

While some of its initiatives feel more gimmicky than others (e.g. order by tweeting a pizza emoji), the company’s commitment to digital along with other changes like store remodels has helped push its stock up 5,000 percent since 2008.

It now sees over 60 percent of orders in the U.S. coming in from digital platforms, and delivery accounts for around 65 percent of orders.

Digital technology investments are also critical given the competition not only from other pizza chains, but newer food delivery services like Uber Eats, GrubHub/Seamless, DoorDash, and those that will bring you groceries – including stores’ quick meal kits and prepared foods to your door, like Amazon’s Whole Foods (via Prime Now), Shipt, Instacart, and more.

You can check to see which Domino’s Hotspots are near you from the website www.dominos.com/hotspots.

16 Apr 2018

Porsche plans 500 fast charging stations across US

Porsche is planning a Supercharger-like network of fast charging stations at dealerships and highway locations across the US. There will be at least 500 by the end of 2019 according to Klaus Zellmer, CEO of Porsche Cars North America, speaking to Automotive News.

The timing coincides with the launch of its Mission E electric vehicle, scheduled for 2019, which will be followed by a crossover EV in 2020.

“If you want to buy that car, you want to know what happens if I go skiing and go further than 300 miles,” Zellmer told Automotive News. “What do I do? So we need to have answers for that.”

According to the report, Porsche is considering charging a fee for the use of chargers located away from dealerships; at dealerships, it will be up to the independent dealer whether to charge for use. If the plan follows Tesla’s pricing, drivers can expect to pay slightly different rates in different states. Tesla charges a flat rate of $0.26 per kWh in California, while in Michigan the cost is $0.24 per minute above 60 kW and $0.12 per minute at or below 60 kW.
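
To make those two rate structures concrete, here is a minimal sketch comparing what one charging session would cost under each, using the Tesla rates quoted above; the session itself (50 kWh delivered over 40 minutes) is an assumed example, not Porsche or Tesla data.

def cost_flat_per_kwh(energy_kwh: float, rate_per_kwh: float = 0.26) -> float:
    """Flat per-kWh billing, like the California rate quoted above."""
    return energy_kwh * rate_per_kwh


def cost_tiered_per_minute(minutes_above_60kw: float, minutes_at_or_below_60kw: float,
                           high_rate: float = 0.24, low_rate: float = 0.12) -> float:
    """Tiered per-minute billing, like the Michigan rates quoted above."""
    return minutes_above_60kw * high_rate + minutes_at_or_below_60kw * low_rate


# Assumed example session: 50 kWh delivered over 40 minutes,
# 30 of them charging above 60 kW and 10 at or below 60 kW.
print(f"Per-kWh bill:    ${cost_flat_per_kwh(50):.2f}")           # 50 * 0.26 = $13.00
print(f"Per-minute bill: ${cost_tiered_per_minute(30, 10):.2f}")  # 30 * 0.24 + 10 * 0.12 = $8.40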

Porsche will not pay to have the chargers installed at its US-based dealerships. It will be up to the dealerships to decide if they want to cover the cost of installing the chargers.

Porsche is right to follow Tesla’s proven example. EV owners need places to recharge their vehicles, and automakers have stepped up to build the infrastructure in place of third-party companies.

16 Apr 2018

Tesla updates user interface, web browser in older Model S and Model X vehicles

A new update is bringing an improved user interface to older Tesla vehicles. According to a report citing forum users, the v8.1 (2018.12) update improves the speed and capability of the user interface in Model S and Model X vehicles equipped with the Nvidia Tegra 3-powered MCU. This was expected; Musk stated in late December 2017 that Tesla was working to improve the browser for all its vehicles.

Users discovered the browser is dramatically faster, able to download at an average of over 5 Mbps, and its HTML5 capabilities have also improved. This is just the latest in Tesla’s ongoing mission to improve its vehicles after customers buy them.

Tesla launched the Model S with the Tegra 3 SoC and ran with it until late 2017 when the company switched to new x86_64-powered MCUs. Last month Elon Musk confirmed through Twitter that it was possible to retrofit older vehicles with new MCUs.

Though it’s possible to retrofit older vehicles, it’s better for the consumer, and likely for the company, to improve the existing hardware through software updates than to make drivers bring their vehicles in for a hardware swap.

16 Apr 2018

Bolt Threads joins Modern Meadow in the quest to bring lab-grown leather to market

There’s a new world of lab-grown replacements coming for everything from the meat department in your grocery store to a department store near you.

Lab-made leather replacements will soon join vegetable-based meat replacements on store shelves thanks to startups like Bolt Threads, which today announced that it would join companies like Modern Meadow in the quest to bring lab-grown replacements for animal hides to market.

Earlier this year, Silicon Valley-based Bolt Threads raised $123 million in financing to expand its business beyond the manufacture of spider silk, which had brought the company acclaim — and an initial slate of products.

The announcement today of its new product, Mylo, is the first step on that path.

Working with established partner Stella McCartney, and using technology licensed from the biomaterials company Ecovative Design, Bolt is bringing Mylo’s mushroom-based leather replacement to the world with the debut of one of McCartney’s Falabella bag designs made from the material.

The first bag will be available at the Victoria and Albert Museum’s Fashioned from Nature exhibit, open to the public on April 21st in London.

In an interview with Fast Company last year, McCartney discussed her commitment to sustainability. “I don’t think you should compromise anything for sustainability,” McCartney told the magazine. “The ultimate achievement for me is when someone comes into one of my stores and buys a Falabella bag thinking it’s real leather.”

While Bolt Threads is licensing its technology from Ecovative Design, Modern Meadow is choosing to develop its own intellectual property for growing a replacement leather.

Taking a different path from its California-based competitor, Brooklyn-based Modern Meadow is going for the mass market while Bolt Threads is more bespoke.

The East Coast company partnered with the European chemical giant Evonik — and has raised over $40 million from billionaire backers like Peter Thiel’s Breakout Ventures and Horizons Ventures (financed by Li Ka Shing — one of China’s wealthiest men) — along with the Singaporean investment giant, Temasek.

Both companies are examples of how animal husbandry is being replaced by technology in the search for a more sustainable way to feed and clothe the world’s growing population. It’s a population that’s demanding quality goods without sacrificing sustainable industrial practices — all things that are made possible by new material — and data — science along with novel manufacturing capabilities that show promise in taking things from the laboratory to the heart of the animal industries they’re looking to replace.

This is a pattern that’s not just happening in fashion, but being replicated in food science as well.

How quickly the change will come — and how viable these alternatives will be — depend on them scaling to meet a broad consumer demand. One purse in a museum show isn’t enough. Once there are hundreds of handbags on Target shelves — that’s when the revolution won’t need to be televised, because it will already have been commercialized.

16 Apr 2018

U.S. companies banned from selling components to ZTE

This time last year, Chinese electronics giant ZTE pled guilty to violating sanctions on Iran and North Korea. This morning, the U.S. Department of Commerce announced a seven-year export restriction for the company, resulting in a ban on U.S. component makers selling to ZTE. 

The company’s initial guilty plea was met with up to $1.2 billion in penalties and fines, the dismissal of four senior employees, and further fallout for lower-level employees. As part of the initial agreement, ZTE was allowed to continue to work with U.S. companies, assuming it adhered to the rules laid out in the agreement. The DOC, however, contends that ZTE failed to significantly penalize those employees.

“ZTE made false statements to the U.S. Government when they were originally caught and put on the Entity List, made false statements during the reprieve it was given, and made false statements again during its probation,” Commerce Secretary Wilbur Ross said in a statement provided to TechCrunch. “ZTE misled the Department of Commerce.  Instead of reprimanding ZTE staff and senior management, ZTE rewarded them.  This egregious behavior cannot be ignored.”

A senior department official tells Reuters, in no uncertain terms, that the company, “provided information back to us basically admitting that they had made these false statements.”

The penalty is steep, given that U.S. companies are believed to provide more than a quarter of the components used in ZTE telecom equipment and mobile devices. The list includes names like San Diego-based Qualcomm, which provides Snapdragon processors for the company’s flagship devices.

The news arrives amid fears of a looming trade war between the U.S. and China. ZTE has also been repeatedly name-checked by U.S. intelligence officials over spying concerns, along with fellow Chinese smartphone maker Huawei — though ZTE has managed to make more inroads with U.S. carriers over the years, regularly showing up around fourth place in market share. 

TechCrunch reached out to ZTE for a response, but has yet to hear back.

16 Apr 2018

Russia starts blocking Telegram for failing to turn over encryption keys

The Russian state telecommunications regulator has begun blocking Telegram, as expected. This comes after the messaging company refused to give Russian security services its encryption keys. The service is expected to be fully blocked within the coming hours.

According to several reports, Telegram is still operational in the country, though several service providers have started blocking the company’s website.

Run by its Russian founder, Pavel Durov, Telegram has over 200 million users and is a top-ten messaging service made popular by its strong stance on privacy.

Telegram is recognized as an operator of information dissemination in Russia, and the company is therefore required by Russian law to provide the keys to its encryption service to the Federal Security Service, so that the FSS can reportedly read the messages of suspected terrorists. On March 20, the Russian communications regulator Roskomnadzor gave Telegram 15 days to comply. Durov publicly decried the order, saying Telegram would stand for freedom and privacy.

“The terrorist threat in Russia will stay at the same level, because extremists will continue to use encrypted communication channels – in other messengers, or through a VPN,” he said according to a report by Reuters.

Durov has long stood by this stance. Back in 2015 at TechCrunch Disrupt San Francisco, Durov revealed that ISIS was using Telegram. When asked if it concerns him, he said “I think that privacy, ultimately, and the right for privacy is more important than our fear of bad things happening, like terrorism. If you look at ISIS — yes, there’s a war going on in the Middle East. It’s a series of tragic events. But ultimately, the ISIS will always find a way to communicate within themselves. And if any means of communication turns out to be not secure for them, they’ll just switch to another one. So I don’t think we are actually taking part in these activities. I don’t think we should be guilty or feel guilty about it. I still think we’re doing the right thing, protecting our users’ privacy.”

It’s unclear how this block will change Telegram’s plan for a billion-dollar ICO. We’ve reached out to Telegram for comment.

16 Apr 2018

UK report urges action to combat AI bias

The need for diverse development teams and truly representational data-sets to avoid biases being baked into AI algorithms is one of the core recommendations in a lengthy Lords committee report looking into the economic, ethical and social implications of artificial intelligence, and published today by the upper House of the UK parliament.

“The main ways to address these kinds of biases are to ensure that developers are drawn from diverse gender, ethnic and socio-economic backgrounds, and are aware of, and adhere to, ethical codes of conduct,” the committee writes, chiming with plenty of extant commentary around algorithmic accountability.

“It is essential that ethics take centre stage in AI’s development and use,” adds committee chairman, Lord Clement-Jones, in a statement. “The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences.”

The report also calls for the government to take urgent steps to help foster “the creation of authoritative tools and systems for auditing and testing training datasets to ensure they are representative of diverse populations, and to ensure that when used to train AI systems they are unlikely to lead to prejudicial decisions” — recommending a publicly funded challenge to incentivize the development of technologies that can audit and interrogate AIs.

“The Centre for Data Ethics and Innovation, in consultation with the Alan Turing Institute, the Institute of Electrical and Electronics Engineers, the British Standards Institute and other expert bodies, should produce guidance on the requirement for AI systems to be intelligible,” the committee adds. “The AI development sector should seek to adopt such guidance and to agree upon standards relevant to the sectors within which they work, under the auspices of the AI Council” — the latter being a proposed industry body it wants established to help ensure “transparency in AI”.

The committee is recommending a cross-sector AI Code to try to steer developments in a positive, societally beneficial direction — though not for this to be codified in law (the suggestion is that it could “provide the basis for statutory regulation, if and when this is determined to be necessary”).

Among the five principles they’re suggesting as a starting point for the voluntary code are that AI should be developed for “the common good and benefit of humanity”, and that it should operate on “principles of intelligibility and fairness”.

Though, elsewhere in the report, the committee points out it can be a challenge for humans to understand decisions made by some AI technologies — going on to suggest it may be necessary to refrain from using certain AI techniques for certain types of use-cases, at least until algorithmic accountability can be guaranteed.

“We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take,” it writes in a section discussing ‘intelligible AI’. “In cases such as deep neural networks, where it is not yet possible to generate thorough explanations for the decisions that are made, this may mean delaying their deployment for particular uses until alternative solutions are found.”

A third principle the committee says it would like to see included in the proposed voluntary code is: “AI should not be used to diminish the data rights or privacy of individuals, families or communities”.

Though this is a curiously narrow definition — why not push for AI not to diminish rights, period?

“It’s almost as if ‘follow the law’ is too hard to say,” observes Sam Smith, a coordinator at patient data privacy advocacy group, medConfidential, discussing the report.

Looking at the tech industry as a whole, it’s certainly hard to conclude that self-defined ‘ethics’ appear to offer much of a meaningful check on commercial players’ data processing and AI activities.

Topical case in point: Facebook has continued to claim there was nothing improper about the fact millions of people’s information was shared with professor Aleksandr Kogan. People “knowingly provided their information” is the company’s defensive claim.

Yet the vast majority of people whose personal data was harvested from Facebook by Kogan clearly had no idea what was possible under its platform terms — which, until 2015, allowed one user to ‘consent’ to the sharing of all their Facebook friends. (Hence ~270,000 downloaders of Kogan’s app being able to pass data on up to 87M Facebook users.)
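
A quick back-of-the-envelope calculation, using only the figures cited above, shows the amplification that friend-level sharing allowed:

# Rough amplification implied by the figures above: on average, each app
# downloader exposed the data of a few hundred Facebook friends.
downloaders = 270_000
affected_users = 87_000_000
print(f"~{affected_users / downloaders:.0f} affected users per downloader")  # prints ~322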

So Facebook’s self-defined ‘ethical code’ has been shown to be worthless — aligning completely with its commercial imperatives, rather than supporting users to protect their privacy. (Just as its T&Cs are intended to cover its own “rear end”, rather than clearly inform people about their rights, as one US congressman memorably put it last week.)

“A week after Facebook were criticized by the US Congress, the only reference to the Rule of Law in this report is about exempting companies from liability for breaking it,” Smith adds in a MedConfidential response statement to the Lords report. “Public bodies are required to follow the rule of law, and any tools sold to them must meet those legal obligations. This standard for the public sector will drive the creation of tools which can be reused by all.”

Health data “should not be shared lightly”

The committee, which took evidence from Google-owned DeepMind as one of a multitude of expert witnesses during more than half a year’s worth of enquiry, touches critically on the AI company’s existing partnerships with UK National Health Service Trusts.

The first of which, dating from 2015 — and involving the sharing of ~1.6 million patients’ medical records with the Google-owned company — ran into trouble with the UK’s data protection regulator. The UK’s information commissioner concluded last summer that the Royal Free NHS Trust’s agreement with DeepMind had not complied with UK data protection law.

Patients’ medical records were used by DeepMind to develop a clinical task management app wrapped around an existing NHS algorithm for detecting a condition known as acute kidney injury. The app, called Streams, has been rolled out for use in the Royal Free’s hospitals — complete with PR fanfare. But it’s still not clear what legal basis exists to share patients’ data.

“Maintaining public trust over the safe and secure use of their data is paramount to the successful widespread deployment of AI and there is no better exemplar of this than personal health data,” the committee warns. “There must be no repeat of the controversy which arose between the Royal Free London NHS Foundation Trust and DeepMind. If there is, the benefits of deploying AI in the NHS will not be adopted or its benefits realised, and innovation could be stifled.”

The report also criticizes the “current piecemeal” approach being taken by NHS Trusts to sharing data with AI developers — saying this risks “the inadvertent under-appreciation of the data” and “NHS Trusts exposing themselves to inadequate data sharing arrangements”.

“The data held by the NHS could be considered a unique source of value for the nation. It should not be shared lightly, but when it is, it should be done in a manner which allows for that value to be recouped,” the committee writes.

A similar point — about not allowing a huge store of potential value which is contained within publicly-funded NHS datasets to be cheaply asset-stripped by external forces — was made by Oxford University’s Sir John Bell in a UK government-commissioned industrial strategy review of the life sciences sector last summer.

Despite similar concerns, the committee also calls for a framework for sharing NHS data to be published by the end of the year, and is pushing for NHS Trusts to digitize their current practices and records — with a target deadline of 2022 — in “consistent formats” so that people’s medical records can be made more accessible to AI developers.

But worryingly, given the general thrust towards making sensitive health data more accessible to third parties, the committee does not seem to have a very fine-grained grasp of data protection in a health context — where, for example, datasets can be extremely difficult to render truly anonymous given the level of detail typically involved.

Although they are at least calling for the relevant data protection and patient data bodies to be involved in provisioning the framework for sharing NHS data, alongside Trusts that have already worked with DeepMind (and in one case received an ICO wrist-slap).

They write:

We recommend that a framework for the sharing of NHS data should be prepared and published by the end of 2018 by NHS England (specifically NHS Digital) and the National Data Guardian for Health and Care should be prepared with the support of the ICO [information commissioner’s office] and the clinicians and NHS Trusts which already have experience of such arrangements (such as the Royal Free London and Moorfields Eye Hospital NHS Foundation Trusts), as well as the Caldicott Guardians [the NHS’ patient data advocates]. This framework should set out clearly the considerations needed when sharing patient data in an appropriately anonymised form, the precautions needed when doing so, and an awareness of the value of that data and how it is used. It must also take account of the need to ensure SME access to NHS data, and ensure that patients are made aware of the use of their data and given the option to opt out.

As the Facebook-Cambridge Analytica scandal has clearly illustrated, opt-outs alone cannot safeguard people’s data or their legal rights — which is why incoming EU data protection rules (GDPR) beef up consent requirements to require a clear affirmative. (And it goes without saying that opt-outs are especially concerning in a medical context where the data involved is so sensitive — yet, at least in the case of a DeepMind partnership with Taunton and Somerset NHS Trust, patients do not even appear to have been given the ability to say no to their data being processed.)

Opt-outs (i.e. rather than opt-in systems) for data-sharing and self-defined/voluntary codes of ‘ethics’ demonstrably do very little to protect people’s legal rights where digital data is concerned — even if it’s true, for example, that Facebook holds itself in check vs what it could theoretically do with data, as company execs have suggested (one wonders what kind of stuff they’re voluntarily refraining from, given what they have been caught trying to manipulate).
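
To illustrate that distinction in practice, here is a minimal sketch of the two consent defaults; it is a toy model, not any real platform’s logic. Under an opt-out regime data flows unless the person acts, while under a GDPR-style opt-in it does not flow until they give a clear affirmative.

def may_share_opt_out(user_declined: bool) -> bool:
    """Opt-out regime: sharing is the default unless the user takes action."""
    return not user_declined


def may_share_opt_in(user_affirmed: bool) -> bool:
    """Opt-in regime: sharing requires a clear affirmative action first."""
    return user_affirmed


# A user who never saw, or never understood, the setting:
print(may_share_opt_out(user_declined=False))  # True  -> data is shared by default
print(may_share_opt_in(user_affirmed=False))   # False -> data stays put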

The wider risk of relying on consumer savvy to regulate commercial data sharing is that an educated, technologically aware few might be able to lock down — or reduce — access to their information; but the mainstream majority will have no clue they need to or even how it’s possible. And data protection for a select elite doesn’t sound very equitable.

Meanwhile, at least where this committee’s attitude to AI is concerned, developers and commercial entities are being treated with favorable encouragement — via the notion of a voluntary (and really pretty basic) code of AI ethics — rather than being robustly reminded they need to follow the law.

Given the scope and scale of current AI-fueled scandals, that risks the committee looking naive.

The government has, though, made AI a strategic priority, and policies to foster and accelerate data-sharing to drive tech developments are a key part of its digital and industrial strategies, so the report needs to be read within that wider context.

The committee does add its voice to questions about whether/how legal liability will mesh with automated decision making — writing that “clarity is required” on whether “new mechanisms for legal liability and redress” are needed or not.

“We recommend that the Law Commission consider the adequacy of existing legislation to address the legal liability issues of AI and, where appropriate, recommend to Government appropriate remedies to ensure that the law is clear in this area,” it says on this. “At the very least, this work should establish clear principles for accountability and intelligibility. This work should be completed as soon as possible.”

But this isn’t exactly cutting edge commentary. Last month the government announced a three-year regulatory review focused on self-driving cars and the law, for instance. And the liability question is already well-aired — and in the autonomous cars case, at least, already having its tires extensively kicked in the UK.

What’s less specifically discussed in government circles is how AIs are demonstrably piling pressure on existing laws. And what — if anything — should be done to address those kinds of AI-fueled breaking points. (Terrorist content has been an exception for some years, with government ministers more than happy to make platforms and technologies their scapegoat; more recently hate speech on online platforms has also become a major political target for governments in Europe.)

The committee briefly touches on some of these pressure points in a section on AI’s impact on “social and political cohesion”, noting concerns raised to it about issues such as filter bubbles and the risk of AIs being used to manipulate elections. “[T]here is a rapidly growing need for public understanding of, and engagement with, AI to develop alongside the technology itself. The manipulation of data in particular will be a key area for public understanding and discussion in the coming months and years,” it writes here. 

However it has little in the way of gunpowder — merely recommending that research is commissioned into “the possible impact of AI on conventional and social media outlets”, and to investigate “measures which might counteract the use of AI to mislead or distort public opinion as a matter of urgency”.

Elsewhere in the report, it does also raise an interesting concern about data monopolies — noting that investments by “large overseas technology companies in the UK economy” are “increasing consolidation of power and influence by a select few”, which it argues risks damaging the UK’s home-grown AI start-up sector.

But again there’s not much of substance in its response. The committee doesn’t seem to have formed its own ideas on how, or even whether, the government needs to address data concentrating power in the hands of big tech — beyond calling for “strong” competition frameworks. This lack of conviction is attributed to hearing mixed messages on the topic from its witnesses. (Though it may well also be related to the economic portion of the enquiry’s focus.)

“The monopolisation of data demonstrates the need for strong ethical, data protection and competition frameworks in the UK, and for continued vigilance from the regulators,” it concludes. “We urge the Government, and the Competition and Markets Authority, to review proactively the use and potential monopolisation of data by the big technology companies operating in the UK.”

The report also raises concerns about access to funding for UK AI startups to ensure they can continue scaling domestic businesses — recommending that a chunk of the £2.5BN investment fund at the British Business Bank, which the government announced in the Autumn Budget 2017, is “reserved as an AI growth fund for SMEs with a substantive AI component, and be specifically targeted at enabling such companies to scale up”.

No one who supports the startup cause would argue with trying to make more money available. But if data access has been sealed up by tech giants all the scale up funding in the world won’t help domestic AI startups break through that algorithmic ceiling.

Also touched on: The looming impact of Brexit, with the committee calling on the government to “commit to underwriting, and where necessary replacing, funding for European research and innovation programmes, after we have left the European Union”. Which boils down to another whistle in a now very long score of calls for replacement funding after the UK leaves the EU.

Funding for regulators is another concern, with a warning that the ICO must be “adequately and sustainably resourced” — as a result of the additional burden the committee expects AI to put on existing regulators.

This issue is also on the radar of the UK’s digital minister, Matt Hancock, who has said he’s considering what additional resources the ICO might need — such as the power to compel testimony from individuals. (Though the ICO itself has previously raised concerns that the minister and his data protection bill are risking undermining her authority.) For now it remains to be seen how well armed the agency will be to meet the myriad challenges generated and scaled by AI’s data processors.

“Blanket AI-specific regulation, at this stage, would be inappropriate,” the report adds. “We believe that existing sector-specific regulators are best placed to consider the impact on their sectors of any subsequent regulation which may be needed. We welcome that the Data Protection Bill and GDPR appear to address many of the concerns of our witnesses regarding the handling of personal data, which is key to the development of AI. The Government Office for AI, with the Centre for Data Ethics and Innovation, needs to identify the gaps, if any, where existing regulation may not be adequate. The Government Office for AI must also ensure that the existing regulators’ expertise is utilised in informing any potential regulation that may be required in the future.”

The committee’s last two starter principles for their voluntary AI code serve to underline how generously low the ethical bar is really being set here — boiling down to: AI shouldn’t be allowed to kill off free schools for our kids, nor be allowed to kill us — which may itself be another consequence of humans not always being able to clearly determine how AI does what it does or exactly what it might be doing to us.

16 Apr 2018

iHeartRadio opens up its playlists to all users with launch of Playlist Radio

iHeartRadio is best known for its free service offering thousands of live, streaming AM and FM radio stations and its ability to create your own custom station, similar to Pandora. Today, the company is adding a new feature for all users – both free and paid – that blurs the lines between streaming radio and the typically premium-only option of using playlists: Playlist Radio.

Like most playlists, Playlist Radio isn’t a random assortment of songs.

Instead, the songs it plays are curated and programmed by radio DJs and other iHeartRadio staff. That means there isn’t an algorithm deciding what to play next – you’re listening to a selection of songs an actual person has put together.

However, because it’s still “radio” you can’t do some of the things you could with the premium product’s playlists – like reorganizing tracks, adding or removing songs, or playing a particular song in the playlist on-demand. Instead, the songs will play in their given order, though you can skip up to six songs per hour within a playlist – the same as free users have when they’re listening to iHeartRadio’s artist stations.
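
As an illustration of the skip rule described above, here is a sketch of one plausible way such a per-hour limit could be enforced; it is an assumption for illustration, not iHeartRadio’s actual implementation.

import time
from collections import deque
from typing import Optional


class SkipLimiter:
    """Allow at most `max_skips` skips within any rolling one-hour window."""

    def __init__(self, max_skips: int = 6, window_seconds: int = 3600):
        self.max_skips = max_skips
        self.window = window_seconds
        self.skip_times = deque()

    def try_skip(self, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        # Drop skips that have aged out of the rolling window.
        while self.skip_times and now - self.skip_times[0] >= self.window:
            self.skip_times.popleft()
        if len(self.skip_times) < self.max_skips:
            self.skip_times.append(now)
            return True
        return False


limiter = SkipLimiter()
results = [limiter.try_skip(now=float(t)) for t in range(7)]  # seven rapid skip attempts
print(results)  # the first six succeed, the seventh is refused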

The addition of Playlist Radio opens up iHeartRadio’s over 1,000 existing playlists to a wider audience.

This includes nearly all of the artist-created, genre-based, activity-focused, musical era-focused and theme-based playlists, with the exception of a handful of playlists that have too few songs to turn into a radio experience.

Before now, those playlists were only available behind a paywall for iHeartRadio Plus, the $4.99/month on-demand music service, and iHeartRadio All Access, which offers unlimited access to millions of songs and offline listening.

In addition, the playlists will be updated every week, save for those where it doesn’t make sense – like those focused on a particular era, such as ’60s music.

“One of the things we’re most excited about and the area where I feel like we really excel is in music curation,” explains iHeart’s Chief Product Officer, Chris Williams, of how Playlist Radio came to be. “We have some of the greatest music curators on the planet within iHeartRadio. We have the best radio programmers, music directors, and program directors who are out there curating every single day for their radio stations. So we tapped into the resources that we had there, as well as finding some external expertise.”

The idea is that these programmers have already built these great, curated listening experiences, but because free products can only offer radio play as opposed to on-demand streams, the subset of iHeartRadio’s 110+ million registered users who aren’t on a subscription tier were missing out.

However, Playlist Radio could also drive those free users to upgrade, in order to better take advantage of the on-demand options.

“I think it’s exposing a great listening experience to our existing free users, and offering them up a listening opportunity that doesn’t exist on the free tier right now,” says Williams. “I think what radio does a brilliant job at is programming formatically. And I think what Playlist Radio does a great job of is offering listening occasions that are thematic,” he notes. The new product aims to marry the two.

While on-demand music services are growing, there’s an increased interest in lean-back modes of listening, even for on-demand users who can play whatever they choose. For example, Pandora just challenged Spotify with the launch of dozens of personalized playlists based on its Music Genome; and Spotify, of course, is still well-loved for its popular “Discover Weekly” personalized playlist and its curated trendsetters, like RapCaviar.

Of course, the launch also comes at a time when iHeartRadio is facing steep competition from those competitors and others, including Apple and Amazon, in music.

In fact, the streaming service’s parent company, iHeartMedia – which also owns hundreds of radio stations, a concert business, and a 90% stake in Clear Channel Outdoor’s billboard company – recently filed for Chapter 11 bankruptcy. Consumers won’t know the difference when it comes to using iHeartRadio’s streaming service in the near-term. However, Pandora investor Liberty Media (SiriusXM’s owner) was interested in a deal with iHeartMedia which could impact iHeartRadio’s business in the future.

Playlist Radio is rolling out today to all iHeartRadio users on iOS, Android and desktop, before making its way to other platforms.

16 Apr 2018

Elon Musk’s latest SpaceX idea involves a party balloon and bounce house

Elon Musk took to Twitter Sunday night to announce a new recovery method for an upper stage SpaceX rocket. A balloon — a “giant party balloon” to quote him directly — will ferry part of a rocket to a bounce house. Seriously.

If anyone else proposed this idea they would be ignored, but Elon Musk lately has a way of turning crazy ideas into reality.

It was just in 2012 that SpaceX launched and landed its first rocket, and now the company is doing it with significantly larger rockets. Then, early this year, SpaceX made a surprise announcement that it would attempt to use a high-speed boat and a large net to catch part of a rocket. And it worked after a failed first attempt.

This isn’t the first time a balloon has been tried as a way to return a rocket. Legendary programmer John Carmack’s rocket company attempted to use a ballute in 2012 to return a rocket body and nose cone. It didn’t work as planned, and according to officials at the time, the rocket made a “hard landing” around the Spaceport America property in New Mexico.

As with SpaceX’s self-landing rockets and its giant net boat, the goal is to reduce the cost of launching rockets by reusing parts. It’s unclear when this latest plan will be implemented, but chances are SpaceX will at least attempt it in the near future.