Category: UNCATEGORIZED

13 Jun 2019

Microsoft will offer console streaming for free to Xbox One owners

Microsoft’s Sunday E3 press conference was all about the games. In fact, while the company did offer some information about hardware and services, that information all arrived fast and furious at the end of the conference. While it’s probably unsurprising that the company had very little to offer in the way of information about its upcoming 8K-capable console, Project Scarlett, most of us expected Project xCloud to get a lot more face time on stage.

The company powered through a whole lot of information about its upcoming streaming offering like it was going out of style (or, perhaps, like the lights were going out at its own theater). The speed and brevity of it all left a number of audience members confused on the specifics — and caused some to speculate that the service might not be as far along as Microsoft had hoped.

We caught up with a few Microsoft reps on our final day at the show to answer some questions. The company is unsurprisingly still mum on a number of key details around the offering. A couple of key things are worth clarifying, though. For starters, console streaming is not considered a part of Project xCloud. Rather, the ability to play games on one’s own Xbox One remotely is a separate feature that will be coming to users via a software update.

Asked what advantages console streaming has over the parallel xCloud offering, Microsoft’s answer was simple: it’s free. Fair enough. This serves a two-fold purpose. First, it helps differentiate Microsoft’s streaming offerings from Stadia, and second, it provides another value proposition for the console itself. As to how performance is expected to differ between console streaming and xCloud, the company wouldn’t comment.

As I wrote earlier today, the company does see the potential of a large-scale move to the cloud, but anticipates that such a shift is a long way off. After all, if it didn’t, it likely wouldn’t have announced a new console this week at E3.

12 Jun 2019

Fall Guys is a kinder, gentler battle royale

You’d be forgiven for assuming that “battle royale” is an inherently violent genre. Hell, the word battle is right there, staring you in the face. Certainly the most prominent examples of the category, like Fortnite and PUBG, represent a particular strand of gun-toting mayhem. Mediatonic’s Fall Guys, on the other hand, presents an interesting example of a warmer, fuzzier direction for the category.

I ventured across the street from E3 earlier today, to the series of trailers where Devolver set up shop this week, in staunch defiance of the conference’s over-the-top show floor. Inside one (mercifully air conditioned), Mediatonic was showing a demo with a kind of nursery-school rumpus-room aesthetic — a fitting choice for the subject matter.

I sat down in one of the bean bag chairs and demoed a trio of short “qualifying” games. The game will support 100 players when it launches on PS4 and Steam. For the sake of the demo, it was me and a handful of human players pitted against a whole bunch of less-sophisticated bots.

The first level involved racing through a series of walls. Some crumbled on contact and others were solid as concrete. You can either follow behind and let the first couple of waves of players test their density, or lead the way and risk losing precious time by slamming headlong into a solid one.

The second level was a version of tag that revolved around snatching a tail from one of your oblong compatriots. They’ll almost immediately steal it back. The only rule is that you must have the tail in your possession when the clock runs out.

The third level is a kind of catchall uphill obstacle course, requiring you to dodge hazards like swinging hammers. I was awesome on the first two and utterly sucked at the third. There’s plenty of room for self-improvement is my point. Ultimately there will be around 30 levels in all.

It’s a fun time, but I couldn’t shake the feeling that I was playing a casual, mobile-style game on the PS4. Certainly there are no hardware demands that require such hardware. It seems like an easy thing to port to iOS or Android — particularly in the age of cross-platform battle royales like Fortnite. Mediatonic senior developer Stephen Taylor says the company opted for the most advanced platform for control/interface reasons, though it’s exploring the possibility.

I suspect Devolver’s involvement played a role in this as well. The publisher’s been far more interested in console and PC gaming, along with a premium charge up front, rather than the free-to-play Fortnite model. Fall Guys will follow this model — though pricing has yet to be announced. Ultimately, of course, paying upfront is generally cheaper for many gamers than the death by a thousand cuts that is in-game purchases.

12 Jun 2019

We won’t be listening to music in a decade according to Vinod Khosla

Depending on whom you ask, the future promised by technology based on artificial or machine intelligence could be a topsy-turvy funhouse-mirror world – even in some very fundamental ways.

“I actually think 10 years from now, you won’t be listening to music,” is a thing venture capitalist Vinod Khosla said on stage today during a fireside chat at Creative Destruction Lab’s second annual Super Session event.

Instead, he believes we’ll be listening to custom song equivalents that are automatically designed specifically for each individual, and tailored to their brain, their listening preferences and their particular needs.

Khosla noted that AI-created music is already making big strides – and it’s true that it’s come a long way in the past couple of years, as noted recently by journalist Stuart Dredge writing on Medium.

As Dredge points out, one recent trend is the rise of mood- or activity-based playlists on Spotify and channels on YouTube. There are plenty of these where the artist, album and song name are not at all important, or even really surfaced. Not to mention that there’s a big financial incentive for an entity like Spotify to prefer machine-made alternatives, since they could help alleviate or eliminate the licensing costs that severely limit its ability to make margin on its primary business of serving up music to customers.

AI-generated chart toppers and general mood music is one thing, but a custom soundtrack specific to every individual is another. It definitely sidesteps the question of what happens to the communal aspect of music when everyone’s music-replacing auditory experience is unique to the person. Guess we’ll find out in ten years.

12 Jun 2019

Every secure messaging app needs a self-destruct button

The growing presence of encrypted communications apps makes a lot of communities safer and stronger. But the possibility of physical device seizure and government coercion is growing as well, which is why every such app should have some kind of self-destruct mode to protect its user and their contacts.

End-to-end encryption like that used by Signal and (if you opt into it) WhatsApp is great at preventing governments and other malicious actors from accessing your messages while they are in transit. But as with nearly all cybersecurity matters, physical access to the device, the user or both changes things considerably.

For example, take this Hong Kong citizen who was forced to unlock their phone and reveal their followers and other messaging data to police. It’s one thing to do this with a court order to see if, say, a person was secretly cyberstalking someone in violation of a restraining order. It’s quite another to use as a dragnet for political dissidents.

This particular protestor ran a Telegram channel that had a number of followers. But it could just as easily be a Slack room for organizing a protest, or a Facebook group, or anything else. For groups under threat from oppressive government regimes it could be a disaster if the contents or contacts from any of these were revealed to the police.

Just as you should be able to choose exactly what you say to police, you should be able to choose how much your phone can say as well. Secure messaging apps should be the vanguard of this capability.

There are already some dedicated “panic button” type apps, and Apple has thoughtfully developed an “emergency mode” (activated by hitting the power button five times quickly) that locks the phone to biometrics and will wipe it if it is not unlocked within a certain period of time. That’s effective against “Apple pickers” trying to steal a phone or during border or police stops where you don’t want to show ownership by unlocking the phone with your face.

Those are useful and we need more like them — but secure messaging apps are a special case. So what should they do?

The best case scenario, where you have all the time in the world and internet access, isn’t really an important one. You can always delete your account and data voluntarily. What needs work is deleting your account under pressure.

The next best case scenario is that you have perhaps a few seconds or at most a minute to delete or otherwise protect your account. Signal is very good about this: The deletion option is front and center in the options screen, and you don’t have to input any data. WhatsApp and Telegram require you to put in your phone number, which is not ideal — fail to do this correctly and your data is retained.

Signal, left, lets you get on with it. You’ll need to enter your number in WhatsApp (right) and Telegram.

Obviously it’s also important that these apps don’t let users accidentally and irreversibly delete their account. But perhaps there’s a middle road whereby you can temporarily lock it for a preset time period, after which it deletes itself if not unlocked manually. Telegram does have self-destructing accounts, but the shortest self-destruct period you can set is a month.

What really needs improvement is emergency deletion when your phone is no longer in your control. This could be a case of device seizure by police, or perhaps being forced to unlock the phone after you have been arrested. Whatever the case, there need to be options for a user to delete their account outside the ordinary means.

Here are a couple options that could work:

  • Trusted remote deletion: Selected contacts are given the ability via a one-time code or other method to wipe each other’s accounts or chats remotely, no questions asked and no notification created. This would let, for instance, a friend who knows you’ve been arrested remotely remove any sensitive data from your device.
  • Self-destruct timer: Like Telegram’s feature, but better. If you’re going to a protest, or have been “randomly” selected for additional screening or questioning, you can just tell the app to delete itself after a certain duration (as little as a minute perhaps) or at a certain time of the day. Deactivate any time you like, or stall for the five required minutes for it to trigger.
  • Poison PIN: In addition to a normal unlock PIN, users can set a poison PIN that when entered has a variety of user-selectable effects. Delete certain apps, clear contacts, send prewritten messages, unlock or temporarily hard-lock the device, etc.
  • Customizable panic button: Apple’s emergency mode is great, but it would be nice to be able to attach conditions like the poison PIN’s. Sometimes all someone can do is smash that button.
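The poison PIN option above can be sketched in a few lines. This is a hypothetical illustration only: the function names, PIN values and returned action strings are all made up for the sketch, not any shipping app’s API or behavior.

```python
import hashlib

# Hypothetical sketch of a "poison PIN" unlock check.
# All names and values here are illustrative, not a real app's design.

def _digest(pin: str) -> str:
    # A real app would use a salted, hardware-backed KDF;
    # a bare SHA-256 keeps the sketch short.
    return hashlib.sha256(pin.encode()).hexdigest()

NORMAL_PIN_HASH = _digest("1234")  # the user's everyday unlock PIN
POISON_PIN_HASH = _digest("9999")  # the duress PIN set up in advance

def unlock(entered: str) -> str:
    """Return the action the app should take for an entered PIN."""
    d = _digest(entered)
    if d == POISON_PIN_HASH:
        # User-selected duress effects would run here: delete chats,
        # clear contacts, send prewritten messages, hard-lock, etc.
        # The app could still unlock normally afterward, so a coercer
        # sees nothing amiss.
        return "wipe_and_unlock"
    if d == NORMAL_PIN_HASH:
        return "unlock"
    return "reject"
```

A real implementation would also need the duress path to be indistinguishable from a normal unlock in timing and UI, otherwise the feature itself becomes evidence.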

Obviously these open new avenues for calamity and abuse as well, which is why they will need to be explained carefully and perhaps initially hidden in “advanced options” and the like. But overall I think we’ll be safer with them available.

Eventually these roles may be filled by dedicated apps or by the developers of the operating systems on which they run, but it makes sense for the most security-forward app class out there to be the first in the field.

12 Jun 2019

As payment and surveillance technologies collide, free speech could be a victim

Anyone who has traveled to Hong Kong knows how ubiquitous the Octopus Card is. Distributed by a company that is majority-owned by the Hong Kong government, the cards are used to pay for everything from public transit to groceries to Starbucks coffee. It’s an incredible payment solution that’s used by almost everyone in the city.

But as hundreds of thousands of people gather in the city center to protest against proposed regulations that residents view as tearing down the last protections against the authoritarian control of mainland China, those same citizens are viewing their Octopus cards in a different light.

Protestors in Hong Kong are waiting in line to pay cash for a single-use card rather than use an Octopus card that’s tied to their bank accounts and identity. Their fear, as Quartz journalist Mary Hui notes, is that the government will track their data and location.

Already, China’s security apparatus is leaning on citizens. In one instance it allegedly requested that the organizer of a large Telegram group open their phone and reveal their contacts.

Potential privacy concerns are a huge downside for cashless technology. While electronic payments can make things more convenient for the people who can afford them, they open up new avenues for government or corporate surveillance and monitoring.

On the mainland, the Chinese government is already experimenting with social credit scoring that can affect a citizen’s access to everything from home and personal loans to public transportation.

Examples like this are another argument against the push for cashless systems.

Indeed, as some cities in the U.S. consider — or enact — bans on cashless stores, companies are shifting their policies on how to develop the technology. Philadelphia became the first city to ban cashless stores in March and the state of New Jersey quickly followed suit. Other cities considering the bans include New York, San Francisco and Chicago.

As nations like China and India push to go cashless, it’s worth noting that the ease of use promised by integrated electronic payment systems can be coupled with increasingly sophisticated forms of surveillance. Locking citizens into a model where all financial transactions can be tracked by — or are intrinsically linked to — a smartphone may be great for governments, but it’s potentially terrible for democracy and its support for free speech and assembly.


12 Jun 2019

Newly public CrowdStrike wants to become the Salesforce of cybersecurity

Like many good ideas, CrowdStrike, a seller of subscription-based software that protects companies from breaches, began as a few notes scribbled on a napkin in a hotel lobby.

The idea was to leverage new technology to create an endpoint protection platform powered by artificial intelligence that would blow incumbent solutions out of the water. McAfee, Palo Alto Networks and Symantec, long-time leaders in the space, had been too slow to embrace new technologies and companies were suffering, the CrowdStrike founding team surmised.

Co-founders George Kurtz and Dmitri Alperovitch, a pair of former McAfee executives, weren’t strangers to legacy cybersecurity tools. McAfee had for years been a dominant player in endpoint protection and antivirus. At least, until the emergence of cloud computing.

Since 2012, CrowdStrike’s Falcon Endpoint Protection platform has been pushing those incumbents into a new era of endpoint protection. By helping enterprises across the globe battle increasingly complex attack scenarios more efficiently, CrowdStrike, as well as other fast-growing cybersecurity upstarts, has redefined company security standards much like Salesforce redefined how companies communicate with customers.

“I think we had the foresight that [CrowdStrike] was going to be a foundational element for security,” CrowdStrike chief executive officer George Kurtz told TechCrunch this morning. The full conversation can be read further below.

CrowdStrike co-founder and CEO George Kurtz.

12 Jun 2019

Laundry startup FlyCleaners confirms major layoffs

FlyCleaners, a New York startup offering on-demand laundry pickup and delivery, has laid off “a large number” of its employees, co-founder and CEO David Salama told TechCrunch.

This confirms a story earlier this week in Crain’s New York reporting that FlyCleaners filed a notification with the Department of Labor outlining plans to close its Long Island City plant and lay off 116 employees.

As Salama explained when we profiled him several years ago, FlyCleaners customers can use the mobile app whenever they want someone to pick up their laundry — the startup handles pickup and return, while the actual cleaning is handled by local businesses.

In an email about the layoffs, Salama told me that the company (which raised a $2 million round led by Zelkova Ventures back in 2013) created its own team for pickup and delivery because “when we started FlyCleaners six years ago, the last mile logistics industry was simply not where we needed it to be in order to effectively service our customers.” More recently, however, the company has been testing out partnerships with other logistics companies as a way to “supplement” its own team.

“Recently, it became clear to us that the cost of our internal team was just too large to bear and it was starting to hamper our ability to execute strategically and to sustain and grow our business,” Salama continued. “And so, that [led] to the painful decision to lay off a large number of employees and to proceed as a more asset light organization.”

He added, “We don’t anticipate that this change will materially decrease the service we offer our customers. If anything, by partnering with larger scale logistics providers, our service should be more efficient and resilient than it currently is.”

But if partners are handling pickups, delivery and the laundry, what does FlyCleaners bring to the table? When I asked what the company will focus on moving forward, Salama said, “I prefer to be discreet about it[,] but I’m comfortable saying that our plan is to leverage our technology to create the best customer experience possible.”

He also said that the startup is working with its logistics partners to find new positions for laid-off employees.

12 Jun 2019

Why Vinod Khosla thinks radiologists still practicing in 10 years will be ‘causing deaths’

Doubling down on comments he’s made throughout the years regarding AI’s potential impact on the medical industry, legendary Silicon Valley investor and Sun Microsystems co-founder Vinod Khosla said on Wednesday that he believes “any radiologist who plans to practice in ten years will be killing patients every day” because machine-powered solutions will have advanced to such a point that they’ll be far more effective than professional human practitioners.

Speaking at the closing keynote of Creative Destruction Lab’s Super Session in Toronto, Khosla also said on stage that “radiologists are toast,” and that they flat out “shouldn’t be a job,” continuing that in a decade when AI-based diagnostic technology has advanced, people in this profession will “be causing deaths, because [they] choose to practice.”

The position was in keeping with his past statements on the subject, dating back to as early as 2017, when he expressed the belief that some types of doctors would be “obsolete” within five years (the timeline seems to have gotten a bit longer in the interim, though he later qualified that it includes the time it will take for the medical community and general public to accept that the tech is better). Khosla added that he also believes oncologists will be surpassed by alternatives based on domain-specific AI solutions, but that that’s probably further out, on a 15-year horizon.

Instead, he believes that human general practitioners will become more valuable, working alongside AI solutions that handle the more specialized medical fields currently considered more highly skilled. This is in keeping with the general thinking that narrowly focused AI is easier to accomplish than machine intelligence that addresses more general problems.

Khosla further noted that oncology is “much easier to automate” than the job of a factory worker, since the latter “has much more dimensionality.”

The investor qualified the strength of his statements by adding that he believes the time for being polite is over, since on balance, in his view, people will be more dangerous than machine intelligence in the specific domain of radiology within that ten-year timeframe.

12 Jun 2019

Facebook collected device data on 187,000 users using banned snooping app

Facebook obtained personal and sensitive device data on about 187,000 users of its now-defunct Research app, which Apple banned earlier this year after the app violated its rules.

The social media giant said in a letter to lawmakers — which TechCrunch obtained — that it collected data on 31,000 users in the U.S., including 4,300 teenagers. The rest of the collected data came from users in India.

Earlier this year, a TechCrunch investigation found both Facebook and Google were abusing their Apple-issued enterprise developer certificates, which are designed to allow employees to run iPhone and iPad apps used only inside the company. The investigation found the companies were building and providing apps for consumers outside Apple’s App Store, in violation of Apple’s rules. The apps paid users in return for access to all of the network data in and out of their devices, which the companies used to collect data on how participants used their devices and to understand their app habits.

Apple banned the apps by revoking Facebook’s enterprise developer certificate — and later Google’s enterprise certificate. In doing so, the revocation knocked offline both companies’ fleets of internal iPhone and iPad apps that relied on the same certificates.

But in response to lawmakers’ questions, Apple said it didn’t know how many devices installed Facebook’s rule-violating app.

“We know that the provisioning profile for the Facebook Research app was created on April 19, 2017, but this does not necessarily correlate to the date that Facebook distributed the provisioning profile to end users,” said Timothy Powderly, Apple’s director of federal affairs, in his letter.

Facebook said the app dated back to 2016.

TechCrunch also obtained the letters sent by Apple and Google to lawmakers in early March, which were never made public.

These “research” apps relied on willing participants to download the app from outside the App Store and use the Apple-issued developer certificates to install it. The apps would then install a root network certificate, allowing them to collect all of the data leaving the device — like web browsing histories, encrypted messages and mobile app activity, potentially including data from participants’ friends — for competitive analysis.

A response by Facebook about the number of users involved in Project Atlas. (Image: TechCrunch)

In Facebook’s case, the research app — dubbed Project Atlas — was a repackaged version of its Onavo VPN app, which Facebook was forced to remove from Apple’s App Store last year for gathering too much device data.

Just this week, Facebook relaunched its research app as Study, only available on Google Play and for users who have been approved through Facebook’s research partner, Applause. Facebook said it would be more transparent about how it collects user data.

Facebook’s vice-president of public policy Kevin Martin defended the company’s use of enterprise certificates, saying it “was a relatively well-known industry practice.” When asked, a Facebook spokesperson didn’t quantify this further. Later, TechCrunch found dozens of apps that used enterprise certificates to evade the app store.

Facebook previously said it “specifically ignores information shared via financial or health apps.” In its letter to lawmakers, Facebook stuck to its guns, saying its data collection was focused on “analytics,” but confirmed “in some isolated circumstances the app received some limited non-targeted content.”

“We did not review all of the data to determine whether it contained health or financial data,” said a Facebook spokesperson. “We have deleted all user-level market insights data that was collected from the Facebook Research app, which would include any health or financial data that may have existed.”

But Facebook didn’t say what kind of data it received, only that the app didn’t decrypt “the vast majority” of data sent by a device.

Facebook describing the type of data it collected — including “limited, non-targeted content.” (Image: TechCrunch)

Google’s letter, penned by public policy vice-president Karan Bhatia, did not provide a number of devices or users, saying only that its app was a “small scale” program. When reached, a Google spokesperson did not comment by our deadline.

Google also said it found “no other apps that were distributed to consumer end users,” but confirmed several other apps used by the company’s partners and contractors, which no longer rely on enterprise certificates.

Google explaining which of its apps were improperly using Apple-issued enterprise certificates. (Image: TechCrunch)

Apple told TechCrunch that both Facebook and Google “are in compliance” with its rules as of the time of publication. At its annual developer conference last week, the company said it now “reserves the right to review and approve or reject any internal use application.”

Facebook’s willingness to collect this data from teenagers — despite constant scrutiny from press and regulators — demonstrates how valuable the company considers market research on its competitors. With its paid research program restarted, this time with greater transparency, the company continues to leverage its data collection to stay ahead of its rivals.

Facebook and Google came off worse in the enterprise app abuse scandal, but critics said in revoking enterprise certificates Apple retains too much control over what content customers have on their devices.

The Justice Department and the Federal Trade Commission are said to be examining the big four tech giants — Apple, Amazon, Facebook, and Google-owner Alphabet — for potentially falling foul of U.S. antitrust laws.

12 Jun 2019

Drones are making a difference in the world and regulatory agencies are helping

About two months ago, in the middle of the night, a small, specially designed unmanned aircraft system – a drone – carried a precious cargo at an altitude of 300 feet and a speed of 22 miles per hour from West Baltimore to the University of Maryland Medical Center downtown, a trip of about five minutes. They called it, “One small hop for a drone; one major leap for medicine.”

The cargo was a human kidney, and waiting for that kidney at the hospital was a patient whose life would be changed for the better.

“This whole thing is amazing,” the 44-year-old recipient later told the University of Maryland engineering and medical teams that designed the drone and the smart container. The angel flight followed more than two years of research, development and testing by the Maryland aerospace and medical teams and close coordination with the Federal Aviation Administration (FAA).

There were many other ways the kidney could have been delivered to the hospital, but proving that it could be done by drone sets the stage for longer and longer flights that will ultimately lower the cost and speed up the time it takes to deliver an organ. And speed is life in this case – the experts say the length of time it takes to move an organ by traditional means is a major issue today.

This is one example of how small drones are already changing the landscape of our economy and society. Our job at the Department of Transportation (DOT), through the FAA, is to safely integrate these vehicles into the National Airspace System.

Time is of the essence. The Department has been registering drones for less than four years and already there are four times as many drones — 1.5 million — on the books as manned aircraft. This week in Baltimore, more than 1,000 members of the drone community are coming together to discuss the latest issues in this fast-growing sector as part of the fourth annual FAA UAS Symposium, which the Department co-hosts with the Association for Unmanned Vehicle Systems International.

Along with public outreach, the Department is also involved in demonstration projects, including the Integration Pilot Program, or IPP. Created by this Administration in 2017, the IPP allows the FAA to work with state, local and tribal governments across the U.S. to get the experience needed to develop the regulations, policy and guidance for safely integrating drones, including tackling tough topics like security and privacy. The experience gained and the data collected will help ensure the United States remains the global leader in safe UAS integration and fully realizes the economic and societal benefits of this technology.

A couple of IPP examples show the ingenuity of the drone community.

In San Diego, the Chula Vista police department and CAPE, a private UAS teleoperations company, are using drones as first responders to potentially save the lives of officers and make the department more efficient. Since October, they have launched drone first responders on more than 400 calls, in which 59 arrests were made; for half of those calls, the drone was first on the scene, with an average response time of 100 seconds. Equally important are the 60 times that having the drone there first eliminated the need to send officers at all.

Recently as the result of an IPP project, the FAA granted the first airline certification to Alphabet Inc.’s Wing Aviation, a commercial drone operator that will deliver packages in rural Blacksburg, Virginia.

What happens next is that the FAA will gradually implement new rules to expand when and how those operators can conduct their business safely and securely. To manage all the expected traffic, the FAA is working with NASA and industry on a highly automated UAS Traffic Management, or UTM, concept.

At the end of the day, drones will help communities like Baltimore — and others throughout the country — save lives and deliver new services. DOT and the FAA will help ensure it’s all done safely, and that public concerns about privacy and security are addressed.