05 Apr 2018

Should AI researchers kill people?

AI research is increasingly being used by militaries around the world for offensive and defensive applications. This past week, groups of AI researchers began to fight back against two separate programs located halfway around the world from each other, generating tough questions about just how much engineers can affect the future uses of these technologies.

From Silicon Valley, The New York Times published an internal protest memo signed by several thousand Google employees that vociferously opposed the company’s work on Project Maven, a Defense Department-led initiative that aims to use computer vision algorithms to analyze vast troves of image and video data.

As the department’s news service quoted Marine Corps Col. Drew Cukor last year about the initiative:

“You don’t buy AI like you buy ammunition,” he added. “There’s a deliberate workflow process and what the department has given us with its rapid acquisition authorities is an opportunity for about 36 months to explore what is governmental and [how] best to engage industry [to] advantage the taxpayer and the warfighter, who wants the best algorithms that exist to augment and complement the work he does.”

Google’s employees are demanding that the company step back from exactly that sort of partnership, writing in their memo:

Amid growing fears of biased and weaponized AI, Google is already struggling to keep the public’s trust. By entering into this contract, Google will join the ranks of companies like Palantir, Raytheon, and General Dynamics. The argument that other firms, like Microsoft and Amazon, are also participating doesn’t make this any less risky for Google. Google’s unique history, its motto Don’t Be Evil, and its direct reach into the lives of billions of users set it apart.

Meanwhile, in South Korea, there is growing outrage over a program to develop offensive robots jointly created by the country’s top engineering university KAIST — the Korea Advanced Institute of Science and Technology — and Korean conglomerate Hanwha, which among other product lines is one of the largest producers of munitions for the country. Dozens of AI academics around the world have initiated a protest of the collaboration, writing that:

At a time when the United Nations is discussing how to contain the threat posed to international security by autonomous weapons, it is regrettable that a prestigious institution like KAIST looks to accelerate the arms race to develop such weapons. We therefore publicly declare that we will boycott all collaborations with any part of KAIST until such time as the President of KAIST provides assurances, which we have sought but not received, that the Center will not develop autonomous weapons lacking meaningful human control.

Here’s the thing: These so-called “killer robots” are seriously the least of our concerns. Such offensive technology is patently obvious, and researchers are free to decide whether or not they want to participate in such endeavors.

The wider challenge for the field is that all artificial intelligence research is as applicable to offensive technologies as it is to improving the human condition. The entire research program around AI is to create new capabilities for computers to perceive, predict, decide and act without human intervention. For researchers, the best algorithms are idealized and generalizable, meaning that they should apply to any new subject with some tweaks and maybe more training data.

Practically, there is no way to prevent these newfound capabilities from entering offensive weapons. Even if the best researchers in the world refused to work on technologies that abetted offensive weapons, others could easily take these proven models “off the shelf” and apply them relatively straightforwardly to new applications. That’s not to say that battlefield applications don’t have their own challenges that need to be figured out, but developing core AI capabilities is the critical building block in launching these sorts of applications.

AI is a particularly vexing problem of dual-use — the ability of a technology to be used for both positive applications and negative ones. A good example is nuclear theory, which can be used to massively improve human healthcare through magnetic resonance imaging and to power our societies with nuclear reactors, or it can be used in a bomb to kill hundreds of thousands.

AI is challenging because unlike, say, nuclear weapons, which require unique hardware that signals their development to other powers, AI has no such requirements. For all the talk of Tensor Processing Units, the key innovations in AI are mathematical and software-based; specialized hardware mostly optimizes their performance. We could build an autonomous killing drone today with a consumer-grade drone, a robotic gun trigger and computer vision algorithms downloaded from GitHub. It may not be perfect, but it would “work.” In this way, AI is similar to bioweapons, which can likewise be built with standard lab equipment.

Short of stopping the development of artificial intelligence capabilities entirely, this technology is going to get built, which means it will be entirely possible to build these weapons and launch them against adversaries.

In other words, AI researchers are going to kill people, whether they like it or not.

Given that context, the right mode for organizing isn’t to stop Google from working with the Pentagon; it is to encourage Google, which is among the most effective lobbying forces in Washington, to push for more international negotiations to ban these sorts of offensive weapons in the first place. Former Alphabet chairman Eric Schmidt chairs the Defense Innovation Board and has a perfect perch from which to make these concerns known to the right policymakers. Such negotiations have been effective in limiting bioweapons, chemical warfare and weapons in outer space, even during the height of the Cold War. There is no reason to believe that success is out of reach.

That said, one challenge with this vision is competition from China. China has made autonomous warfare a priority, investing billions into the industry in pursuit of new tools to fight American military hegemony. Even if the U.S. and the world wanted to avoid these weapons, we may not have much of a choice. I, for one, would prefer to see the world’s largest dictatorship not acquire these weapons without any sort of countermeasure from the democratic world.

It’s important to note, though, that such fears about war and technology are hardly new. Computing power was at the heart of the “precision” bombing campaigns in Vietnam throughout the 1960s, and significant campus protests were focused on stopping newly founded computation centers from conducting their work. In many cases, classified research was banned from campus, and ROTC programs were similarly removed, only to be reinstated in recent years. The Pugwash conferences were conceived in the 1950s as a forum for scientists concerned about the global security implications of emerging technologies, namely nuclear weapons.

These debates will continue, but we need to be aware that all AI developments will likely lead to better offensive weapons capabilities. Better to accept that reality today and work to protect the ethical norms of war than try to avoid it, only to discover that other adversaries have taken the AI lead — and international power with it.

05 Apr 2018

Now you can use your Echo to call the kids for dinner

Amazon just rolled out a long-promised feature for the Echo. Alexa Announcements turns the company’s smart speaker line into a one-way intercom system, letting owners communicate with family across the household.

At its base level, it’s kind of a relay system. Tell the device, “Alexa, announce that dinner is ready,” and Alexa will say, “Dinner is ready” through all connected Echos in the home. It’s like the world’s most boring game of Telephone. You can also speak directly through the device, by saying “Alexa, tell everyone…” or “Alexa, broadcast…”

It should be a pretty handy addition for those who have multiple Echo devices in different rooms. It joins existing communication features like Alexa Calling, Messaging and Drop In — only this one is, as they say in horror movies, coming from inside the house. It’s also a good way for Amazon to sell second and third Echos to people who are already on board with Amazon’s smart speaker.

The feature has started rolling out to Echo users in the US and Canada.

05 Apr 2018

Instacart is reportedly raising another $150 million

Instacart, the grocery delivery startup that has a partnership of sorts with Whole Foods, is raising $150 million in funding, Axios first reported. This is on top of Instacart’s $200 million raise at a $4.2 billion valuation in February.

The additional funding comes shortly after Amazon, owner of Whole Foods, announced free two-hour delivery of natural and organic products from Whole Foods via Prime Now.

I’ve reached out to Instacart and will update this story if I hear back.

05 Apr 2018

Google pay discrimination lawsuit is moving forward

A class-action lawsuit alleging pay discrimination at Google based on gender is moving forward, California Superior Court Judge Mary E. Wiss recently ordered.

The original suit was dismissed in December because the plaintiffs had defined the class of affected workers too broadly. In January, the plaintiffs filed a revised lawsuit, which added Heidi Lamar, a former teacher at Google’s Children Center in Palo Alto. The revised lawsuit focuses on those who hold engineer, manager, sales or early childhood education positions, which comes out to 30 covered positions. The suit also alleges Google has a history of improperly asking about prior salaries and assigning women to lower job levels with lower salaries.

In court, Google objected to the class-action designation pertaining to “Engineer Covered Positions” and “Program Manager Covered Positions,” saying the plaintiffs did not allege specific violations. But Judge Wiss has since said the court was not persuaded by Google’s argument.

“It is enough for Plaintiffs to allege a numerous and ascertainable class with a well-defined community of interest, as well as a pattern or practice of gender discrimination across all Covered Positions in Google,” Wiss wrote in her order.

I’ve reached out to Google and will update this story if I hear back.

05 Apr 2018

Mission Bit receives $1 million to expand computer science education in SF

Mission Bit, a nonprofit organization that teaches high school students computer science, has received a $1 million five-year grant from the San Francisco Department of Children, Youth and Their Families.

Each semester, Mission Bit offers after-school computer science classes to high school students. The fall and spring courses run for 13 weeks, requiring four hours per week from students. The semester-long course covers HTML, CSS and JavaScript.

Mission Bit also offers a six-week summer program for students. This fall, Mission Bit will launch a two-year program in order to facilitate ongoing learning and development, Mission Bit CEO Stevon Cook told me.

The two-year course aligns well with the DCYF’s goals for Mission Bit, Cook said. The organization plans to use the funding to focus more on youth who are disconnected or disenfranchised, such as those in foster care or public housing, or those who have immigrated to the U.S. To do that, Mission Bit will partner with existing organizations that already work with marginalized kids, Cook said.

Throughout the San Francisco Bay Area, 100,000 high school students lack access to computer science classes at their schools, according to a study conducted by consulting firm Inspire on behalf of Mission Bit. By 2020, Mission Bit hopes to serve 10,000 students in the area, specifically focusing on black and Latinx students, as well as students on free or reduced-price lunch programs.

To date, 1,600 students have participated in Mission Bit’s programs, and there are 150 students in the current cohort.

05 Apr 2018

Careship, the German marketplace for in-home care, scores further €6M funding

Careship, the German marketplace for in-home senior care, has raised €6 million in further funding. The round is led by Creandum, the European early-stage investor best known for being an early backer of Spotify, and will be used by the Berlin startup to further expand nationally.

In addition to Creandum, European ‘impact’ investor Ananda Ventures joined the round, with participation from existing backers Spark Capital, Atlantic Labs, and Axel Springer Plug and Play. Careship had previously raised €4 million, disclosed in January 2017.

Founded in 2015 by siblings Antonia and Nikolaus Albert after they could not find a suitable caregiver for their grandmother, Careship operates a marketplace for caregivers that aims to disrupt the traditional agency model. The marketplace connects families needing elderly care with access to qualified personnel using a “matchmaking algorithm” to help solve the suitability problem.

There are other value-adds, too, such as advising on insurance benefits. In Germany, elderly care is funded by a state health insurance system and Careship co-founder Antonia Albert tells me the company has a 50/50 mix of state-funded and private customers. “If you are care dependent in Germany, you are eligible to care and companionship services and get state-funded budgets for them,” she says. Careship also offers general consultation to help you choose the best care option.

Notably, caregivers on the platform set their own price, similar to the U.K.’s HomeTouch. Careship also handles billing and insurance coordination, in addition to keeping families connected, and therefore remains a central point of contact that is arguably not prone to disintermediation.

As it stands, Careship is based in Berlin and the service is also available in three other major German cities as well as the area of North Rhine-Westphalia. Albert tells me the new funding will see the startup add more coverage across Germany in 2018 and that the broader aim is to go international. “[The] long-term vision is to make Careship available to as many families as possible in Europe,” she says.

Meanwhile, direct competitors are cited as traditional “ambulatory care providers” in Germany as well as other care platforms, such as Rocket Internet’s Pflegetiger, which operates a full-stack rather than a marketplace model, and care marketplace Pflegix.

05 Apr 2018

3 tests show Facebook is determined to make Stories the default

Facebook isn’t backing down from Stories despite criticism that it copied Snapchat and that Instagram Stories is enough. Instead, it’s committed to figuring out how to adapt the slideshow format into the successor to the status update. That’s why today the company is launching three significant tests that make Facebook Stories a default way to share.

“The way people share and connect is changing; it’s quickly becoming more real-time and visual. We’re testing new creative tools to bring pictures and videos to life, and introducing easier ways to find and share stories,” a Facebook spokesperson told me.

Meanwhile, Facebook has been fixing the biggest problems with its Stories: redundancy between Facebook, Messenger and Instagram. Now you can set your Instagram Stories to automatically be reposted to your Facebook Story, and Stories on Facebook and Messenger sync with each other. That means you can just post to Instagram and have your Story show up on all three apps. That way if you want extra views or to include friends who aren’t Insta-addicts, you can show them your Story with no extra uploads.

It was a year ago that Facebook rolled out Stories. But Facebook has so many features that it has to make tough decisions about which to promote and which to bury. It often launches features with extra visibility at first, but forces them to grow popular on their own before giving them any additional attention.

Facebook is vulnerable to competitors if it doesn’t make Stories work, and users may eventually grow tired of a News Feed full of text updates from distant acquaintances. But Instagram Stories and WhatsApp’s version, Status, have both grown to more than 250 million daily users, showing there’s obviously demand for this product if Facebook can figure out how Stories fit in its app.

Hence, these tests:

  1. The Facebook status composer on mobile will immediately show an open camera window and the most recent images in your camera roll to spur Stories sharing. Given that Facebook has as many as 17 choices for status updates, from check-ins to recommendations to GIFs, the new camera and camera roll previews make Stories a much more prominent option. Facebook isn’t going so far as to launch with the camera as the home screen like Snapchat, or half the screen like it once tried, but it clearly believes it will be able to ride the trend and that people will get more out of sharing if they choose Stories. This starts testing today with a small subset of users around the world.
  2. When you shoot something with the augmented reality-equipped Facebook Camera feature, the sharing page will now default to having Stories selected. Previously, users had to choose if they wanted to post to Stories, News Feed or send their creation to someone through Messenger. Facebook is now nudging users to go with Stories, seemingly confident of its existing dominance over the ranked feed and messaging spaces. This test will begin with all users in the Dominican Republic.
  3. Above the News Feed, Facebook Stories will show up with big preview tiles behind the smaller profile pictures of the people who created them. Teasing what’s inside a Story could make users a lot more likely to click to watch them. Facebook uses a similar format, but with smaller preview circles, on Messenger. And while Instagram leaves more room for the main feed by just showing profile pic bubbles for Stories, if you keep scrolling you might see a call-out in the feed for Stories you haven’t watched, using a big preview tile format similar to what Facebook Stories is trying. More views could encourage users to share more Stories, helping to dismantle the ghost town perception of Facebook Stories. This will also be tested with a small percentage of users around the world.

    One of Facebook’s new Stories tests shows big preview tiles behind people’s profile bubbles

If these tests prove popular, the changes could roll out everywhere and make Stories a much more central part of the app’s experience. Facebook will have to avoid making users feel like Stories are being crammed down their throats. But the open camera, Stories default and bigger previews all disappear with a quick tap or swipe.

The fact is that the modern world of computing affords a very different type of social media than when Facebook launched 14 years ago. Then, you’d update your status with a line of text from your desktop computer because your phone didn’t have a good camera (or maybe even the internet), screens were small, mobile networks were slow and it was tough to compute on the go. Now, with every phone equipped with a great camera and a nice screen, mobile networks increasingly fast and everyone staring at their phones all the time, it makes sense to share through photos and videos you post throughout the day.

This isn’t a shift driven by Facebook, or even really Snapchat. Visual communication is an inevitable evolution. For Facebook, Stories aren’t an “if,” just a “how.”

05 Apr 2018

Stripe launches a new billing tool to tap demand from online businesses

As more and more spending moves online — whether that’s shopping or subscribing to services like Netflix and Spotify — there’s increasing demand for tools that allow those companies, especially smaller ones, to start getting paid.

Stripe has made its name by providing developers with a simpler way to start charging customers and handling transactions, but today it hopes to take another step by launching a billing product for online businesses. That will allow those businesses to handle recurring subscription revenue, as well as invoicing, within the Stripe platform and keep everything in the same place. The goal is to replace previously hand-built setups, whether that meant analog methods for invoicing or painstakingly putting together a set of subscription tools, and make the experience as seamless as charging for products on Stripe.

“These large enterprise companies have the resources to build internal recurring billing in house,” Tara Seshan, PM on the billing product, said. “Even then they would tell us what a challenge it was. What we did was take a step back and think about how this should work, how we can make billing tools that are only available to enterprises available to everyone. That meant something really flexible and really easy to implement. If you’re [running a small operation], you should have the same subscription tools as Spotify. What we have here is a set of building blocks so you get the speed and flexibility you need.”
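
To make those “building blocks” concrete, here is a minimal sketch of what recurring billing and one-off invoicing can look like through Stripe’s existing API, using the Stripe Python client. The API key, customer details and plan ID are placeholders, and the new Billing product’s exact surface may differ from this.

    import stripe

    # Placeholder test key; substitute your own.
    stripe.api_key = "sk_test_placeholder"

    # Create a customer with a payment source (a test card token here).
    customer = stripe.Customer.create(
        email="jane@example.com",
        source="tok_visa",
    )

    # Start recurring billing by subscribing the customer to a
    # pre-created monthly plan (hypothetical plan ID).
    subscription = stripe.Subscription.create(
        customer=customer.id,
        items=[{"plan": "plan_pro_monthly"}],
    )

    # One-off invoicing: queue an invoice item, then generate the invoice.
    stripe.InvoiceItem.create(
        customer=customer.id,
        amount=5000,  # amount in cents
        currency="usd",
        description="Onboarding fee",
    )
    invoice = stripe.Invoice.create(customer=customer.id)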

Indeed, a lot of the Internet has slowly but surely shifted to a subscription model. There’s a good chance that even the phone in your pocket is paid for through a subscription that amortizes the big-ticket price of the device over the course of many months. Larger companies have had these tools in place, but not having the resources to build them, even by cobbling together online payments tools, is a classic startup problem. Startups often have a long list of priorities, and they need to start generating revenue immediately if they want to continue growing.

This launch is, in part, a response to customers demanding a billing product that gets all these invoices and subscription expenses into a single spot. Stripe at its heart is an enterprise company, which means it has to keep close tabs on the needs of its customers while still creating new products that smaller businesses may not have realized would solve their problems in an elegant way. That’s especially true when it comes to Internet-oriented businesses, which often change their business models over time, Seshan said.

“Unlike something like Instagram or Facebook, where you’re doing analytics A/B testing voodoo to figure out what you should build, with Stripe, our businesses know what they want,” Seshan said. “They have clear requests, so we’re much more inclined to listen to our users as opposed to sitting in an ivory tower coming up with a strategy. As they look to add new products, that applies to everything from the startup selling fast and iterating to the large tech companies about to launch a new subscription line or add a “for work” side of their product. What we saw often was that billing was the limiting factor to getting a product to market.”

In addition to all this, Stripe looks to apply the machine learning tools it has created for things like fraud prevention to a new area. One example is figuring out when to intelligently retry a recurring billing charge, which may fail for any number of reasons. Stripe works around problems like lost credit cards, or anything along those lines, to keep the experience as seamless as possible. Seshan said Stripe businesses that implement billing see a 10% increase in revenue — which, for flipping a switch, is pretty substantial.
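
To illustrate what “intelligently” replaces, a naive retry scheme might simply re-attempt a failed charge on a fixed schedule, as in the rough sketch below; attempt_charge is a hypothetical stand-in for whatever actually hits the card network, and none of this reflects Stripe’s actual retry logic, which the company says is informed by machine learning.

    import time

    # Hypothetical stand-in for the call that actually charges the card.
    def attempt_charge(invoice_id):
        print(f"Attempting charge for {invoice_id}")
        return False  # pretend the card keeps declining

    # Naive fixed schedule: retry a failed charge after 1, 3 and 7 days.
    RETRY_DELAYS_IN_DAYS = [1, 3, 7]

    def collect_with_retries(invoice_id):
        if attempt_charge(invoice_id):
            return True
        for delay in RETRY_DELAYS_IN_DAYS:
            # A real system would schedule a job rather than sleep.
            time.sleep(delay * 24 * 60 * 60)
            if attempt_charge(invoice_id):
                return True
        return False  # give up and mark the subscription past due

A smarter system could instead pick retry times based on signals such as the decline code or when similar charges have historically succeeded, which is the kind of decision machine learning can inform.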

As companies get bigger and bigger, they will also likely graduate beyond a simple subscription. An enterprise software company, for example, will probably have to start targeting larger customers, which requires a sales force and a different approach to implementing new technology. That means handling invoice-based revenue, which has different implementation requirements than normal subscription billing. In that case, it’s not as if the CIO of a Fortune 100 company can just put a credit card number into a billing service; those deals require more robust research and a partnership in place.

While this tool is a natural fit for something like Stripe, it also addresses a substantial business opportunity. Last month, Zuora — an enterprise subscription services company — filed to go public amid a fresh wave of enterprise IPOs that included Dropbox and Zscaler (and also, to a certain extent, Salesforce’s big acquisition of MuleSoft). Zuora’s subscription services revenue continues to grow, showing that Stripe will certainly have competition here, but also that there’s a large market opportunity.

“We want to think about Stripe as growing the economic infrastructure to increase the GDP of the Internet,” Seshan said. “What we noticed is, we invested in marketplaces in the past, but we’re investing in the next wave of software-as-a-service businesses. We want to power that next trend, and it’s gonna accelerate in the year ahead. We’re really thrilled to power that with billing and subscriptions and we want to make that available to companies of all sizes.”

05 Apr 2018

Xprize is relaunching its Moon challenge without Google, but it needs a new sponsor

The deadline for Google’s Lunar Xprize passed just days ago without a winner, but it had been clear for months that none of the five teams in the lengthy 10-year competition to send a robot to the Moon’s surface would be ready for launch by the extended deadline of March 31, 2018. As a result, back in January, Google announced it was taking its $30 million in prize money back, leaving the exciting challenge with a bit of an anticlimactic end.

Xprize is back, however, announcing today that the show will go on without a cash award or Google’s support, though the organization is looking for a new sponsor to step in and fund the prize.

“We are extraordinarily grateful to Google for funding the $30 million Google Lunar XPRIZE between September 2007 and March 31st, 2018. While that competition is now over, there are at least five teams with launch contracts that hope to land on the Lunar surface in the next two years,” said Xprize founder Peter H. Diamandis, M.D. in a statement. “Because of this tremendous progress, and near-term potential, XPRIZE is now looking for our next visionary Title Sponsor who wants to put their logo on these teams and on the lunar surface.”

The big focus of the whole endeavor was to give private companies a chance, and a major incentive, to join the short list of government programs that have landed a craft on the Moon. Xprize says that Google’s pledge and support eventually netted the teams involved as much as $300 million in investment to fund their missions.

Though the Google Lunar Xprize stretched on through many deadline extensions only to end without a winner, the organization hopes this relaunched competition can capture the public’s imagination once again, while also soon capturing the support of a mega-donor to put their name on the competition.

“At this point, we don’t want to give up on these teams, these teams are going to make it,” Diamandis said.

 

05 Apr 2018

Twitter claims more progress on squeezing terrorist content

Twitter has put out its latest Transparency Report providing an update on how many terrorist accounts it has suspended on its platform — with a cumulative 1.2 million+ suspensions since August 2015.

During the reporting period of July 1, 2017 through December 31, 2017 — for this, Twitter’s 12th Transparency Report — the company says a total of 274,460 accounts were permanently suspended for violations related to the promotion of terrorism.

“This is down 8.4% from the volume shared in the previous reporting period and is the second consecutive reporting period in which we’ve seen a drop in the number of accounts being suspended for this reason,” it writes. “We continue to see the positive, significant impact of years of hard work making our site an undesirable place for those seeking to promote terrorism, resulting in this type of activity increasingly shifting away from Twitter.”

Six months ago the company claimed big wins in squashing terrorist activity on its platform — attributing drops in reports of pro-terrorism accounts then to the success of in-house tech tools in driving terrorist activity off its platform (and perhaps inevitably rerouting it towards alternative platforms — Telegram being chief among them, according to experts on online extremism).

At that time Twitter reported a total of 299,649 pro-terrorism accounts had been suspended — which it said was a 20 per cent drop on figures reported for July through December 2016.

So the size of the drops is also shrinking. Though Twitter suggests that’s because it’s winning the battle to discourage terrorists from trying in the first place.

For its latest reporting period, ending December 2017, Twitter says 93% of the accounts were flagged by its internal tech tools — with 74% of those also suspended before their first tweet, i.e. before they’d been able to spread any terrorist propaganda.

Which means that around a quarter of the internally flagged accounts did manage to get out at least one terror tweet before being suspended.

This proportion is essentially unchanged since the last report period (when Twitter reported suspending 75% before their first tweet) — so whatever tools it’s using to automate terror account identification and blocking appear to be in a steady state, rather than gaining in ability to pre-filter terrorist content.

Twitter also specifies that government reports of violations related to the promotion of terrorism represent less than 0.2% of all suspensions in the most recent reporting period — or 597 to be exact.

As with its prior transparency report, a far larger number of Twitter accounts are being reported by governments for “abusive behavior” — which refers to long-standing problems on Twitter’s platform such as hate speech, racism, misogyny and trolling.

And in December a Twitter policy staffer was roasted by UK MPs during a select committee session after the company was again shown failing to remove violent, threatening and racist tweets — tweets that committee staffers had reported months earlier.

Twitter’s latest Transparency Report specifies that governments reported 6,254 Twitter accounts for abusive behavior — yet the company only actioned a quarter of these reports.

That’s still up on the prior reporting period, though, when it reported actioning a paltry 12% of these types of reports.

The issue of abuse and hate speech on online platforms generally has rocketed up the political agenda in recent years, especially in Europe — where Germany now has a tough new law to regulate takedowns.

Platforms’ content moderation policies certainly remain a bone of contention for governments and lawmakers.

Last month the European Commission set out a new rule of thumb for social media platforms — saying it wants them to take down illegal content within an hour of it being reported.

This is not legislation yet, but the threat that EU-wide laws could be drafted to regulate content takedowns remains on the table, as a way to encourage platforms to improve their performance voluntarily.

Where terrorist content specifically is concerned, the Commission has also been pushing for increased use by tech firms of what it calls “proactive measures”, including “automated detection”.

And in February the UK government also revealed it had commissioned a local AI firm to build an extremist content blocking tool — saying it could decide to force companies to use it.

So political pressure remains especially high on that front.

Returning to abusive content, Twitter’s report specifies that the majority of the tweets and accounts reported to it by governments which it did remove violated its rules in the following areas: impersonation (66%), harassment (16%), and hateful conduct (12%).

This is an interesting shift on the mix from the last reported period when Twitter said content was removed for: harassment (37%), hateful conduct (35%), and impersonation (13%).

It’s difficult to interpret exactly what that development might mean. One possibility is that impersonation could cover disinformation agents, such as Kremlin bots, which Twitter has been suspending in recent months as part of investigations into election interference — an issue that’s been shown to be a problem across social media, from Facebook to Tumblr.

Governments may also have become more focused on reporting accounts to Twitter that they believe are fronts for foreign agents seeking to spread false information and meddle with democratic processes.

In January, for example, the UK government announced it would be setting up a civil service unit to combat state-led disinformation campaigns.

And removing an account that’s been identified as a fake — with the help of government intelligence — is perhaps easier for Twitter than judging whether a particular piece of robust speech might have crossed the line into harassment or hate speech.

Judging the health of conversations on its platform is also something the company recently asked outsiders to help it with. So it doesn’t appear overly confident in making those kinds of judgement calls.