Year: 2018

14 Jun 2018

NXP-Qualcomm $44B deal to clear China as Trump authorizes $50B tariffs

The U.S.-China trade battle enters an important new phase. The South China Morning Post is reporting that China’s Ministry of Commerce will clear Qualcomm’s pending $44 billion acquisition of NXP Semiconductors. One independent source conveyed the same news to TechCrunch, although there had been no official word from Qualcomm, NXP or China at the time of publication.

That acquisition was expected to close months ago, but the Chinese government repeatedly delayed its assent as part of its ongoing fight with the Trump administration over the future of bilateral trade. China’s ministry was the last competition authority worldwide yet to approve the deal, and presumably the transaction will close quickly now that antitrust review is complete.

The news of the approval broke just as The Wall Street Journal reported that the White House has authorized $50 billion in tariffs on Chinese goods. The final list of goods that will be subject to the tariffs has not been released, although TechCrunch has done a data analysis on the last set of tariffs, which focused on aluminum and steel imports. Direct news from the White House is expected Friday.

There has been a studied response and counter-response between the two countries over trade during the past year, as Presidents Trump and Xi Jinping have sought the high ground in the spat. The most recent set of issues has concerned ZTE, which was offered a reprieve by President Trump only to have its fate brought to Congress for a decision this week.

In my analysis of ZTE’s potential death sentence this afternoon, I wrote:

Ironically — and to be clear on this view, I am not getting this from sources, but rather pointing out a unique strategy vector here — it might well be Qualcomm that uses its DC policy shop to try to save ZTE. Those lobbyists protected Qualcomm from a takeover by Broadcom earlier this year, and it could try to make the case to Congress that it will be irreparably damaged if legislators don’t back off their threats.

The timing of the approval for Qualcomm could come with an understanding that it help ZTE with its congressional woes. Qualcomm has already agreed in the interim to form a strategic partnership with Baidu around AI and deep learning, which one source told me was part of a package of concessions offered to placate Beijing.

Without a doubt, the news will prove a rare bit of relief for Qualcomm, which has been buffeted by challenges over the past year, including its hostile takeover battle with Broadcom and ongoing patent lawsuits with some of its biggest customers, like Apple. Shareholders are likely to be enthusiastic about the outcome, and the stock was up 3 percent in after-hours trading following the news.

The acquisition of NXP is expected to provide a new set of technologies and patents for Qualcomm, particularly in strategic growth spaces like automotive, where Qualcomm has been weak on its product side.

14 Jun 2018

Purdue’s PHADE technology lets cameras ‘talk’ to you

It’s become almost second nature to accept that cameras everywhere — on streets, in museums, in shops — are watching you, but now they may be able to communicate with you, as well. New technology from Purdue University computer science researchers, described in a paper published today, makes this dystopian prospect a reality. But, they argue, it’s safer than you might think.

The system, called PHADE, allows for something called “private human addressing,” in which camera systems and individual cell phones can communicate without transmitting any personal data, like an IP or MAC address. Instead, the technology relies on motion patterns for the address code. That way, even if a hacker intercepts a message, they won’t be able to access the person’s physical location.

Imagine you’re strolling through a museum and an unfamiliar painting catches your eye. The docents are busy with a tour group far across the gallery, and you didn’t pay extra for the clunky recorder and headphones of an audio tour. While pondering the brushwork, you feel your phone buzz, and suddenly a detailed description of the artwork and its painter is in the palm of your hand.

To achieve this effect, the researchers use an approach similar to the kind of directional audio experience you might find at theme parks. By processing the live video data, the technology can identify the individual motion patterns of pedestrians and determine when they are within a pertinent range — say, in front of a painting. From there, the system can broadcast a packet of information linked to the motion address of the pedestrian. When a user’s phone determines that the motion address matches its own, the message is received.
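
To make the addressing scheme concrete, here is a minimal sketch of how a motion-derived address could stand in for an IP or MAC address. Everything in it (the function names, the quantization step, the packet format) is my own illustrative assumption, not the actual encoding from the Purdue paper:

    # Minimal sketch of PHADE-style "private human addressing." All names and
    # the quantization scheme are hypothetical, not the paper's actual encoding.
    import numpy as np

    def motion_address(trajectory, grid=0.5):
        """Quantize a series of (x, y) positions into a coarse, hashable code.

        The camera computes this from video tracking; the phone computes it
        independently from its own sensors. No IP or MAC address is exchanged.
        """
        steps = np.diff(np.asarray(trajectory, dtype=float), axis=0)
        return tuple(map(tuple, np.round(steps / grid).astype(int)))

    def broadcast(payload, observed_trajectory):
        """Camera side: tag a message with the observed pedestrian's motion code."""
        return {"address": motion_address(observed_trajectory), "payload": payload}

    def maybe_receive(packet, own_trajectory):
        """Phone side: accept the message only if the address matches our motion."""
        if packet["address"] == motion_address(own_trajectory):
            return packet["payload"]
        return None

    # Toy demo: the camera and the phone observe the same walk toward a painting.
    walk = [(0.0, 0.0), (0.6, 0.1), (1.2, 0.1), (1.7, 0.6)]
    pkt = broadcast("'The Night Watch,' Rembrandt, 1642...", walk)
    print(maybe_receive(pkt, walk))                              # the museum blurb
    print(maybe_receive(pkt, [(0, 0), (0, 1), (0, 2), (0, 3)]))  # None

The point of the sketch is that the only shared secret is the walk itself: the camera derives the code from video, the phone derives it from its own sensors, and a phone whose recent movement doesn’t match simply ignores the broadcast.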

While this tech can be used to better inform the casual museum-goer, the researchers also believe it has a role in protecting pedestrians from crime in their area.

“Our system serves as a bridge to connect surveillance cameras and people,” He Wang, a co-creator of the technology and assistant professor of computer science, said in a statement. “[It can] be used by government agencies to enhance public safety [by deploying] cameras in high-crime or high-accident areas and warn[ing] specific users about potential threats, such as suspicious followers.”

While the benefits of an increasingly interconnected world are still being debated and critiqued daily, there might just be an upside to knowing a camera’s got its eye on you.

14 Jun 2018

AI edges closer to understanding 3D space the way we do

If I show you a single picture of a room, you can tell me right away that there’s a table with a chair in front of it, they’re probably about the same size, about this far from each other, with the walls this far away — enough to draw a rough map of the room. Computer vision systems don’t have this intuitive understanding of space, but the latest research from DeepMind brings them closer than ever before.

The new paper from the Google-owned research outfit was published today in the journal Science (complete with news item). It details a system whereby a neural network, knowing practically nothing, can look at one or two static 2D images of a scene and reconstruct a reasonably accurate 3D representation of it. We’re not talking about going from snapshots to full 3D images (Facebook’s working on that) but rather replicating the intuitive and space-conscious way that all humans view and analyze the world.

When I say it knows practically nothing, I don’t mean it’s just some standard machine learning system. Most computer vision algorithms work via what’s called supervised learning, in which they ingest a great deal of data that’s been labeled by humans with the correct answers — for example, images with everything in them outlined and named.

This new system, on the other hand, has no such knowledge to draw on. It works entirely without being given any of the ideas we use to see the world, like how objects’ colors change toward their edges, or how they get bigger and smaller as their distance changes, and so on.

It works, roughly speaking, like this. One half of the system is its “representation” part, which can observe a given 3D scene from some angle, encoding it in a complex mathematical form called a vector. Then there’s the “generative” part, which, based only on the vectors created earlier, predicts what a different part of the scene would look like.

(A video showing a bit more of how this works is available online.)
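
For the structurally minded, here is a toy stand-in for that two-part design. It is untrained, with made-up layer sizes and a made-up seven-number camera pose, so it shows only the shape of the idea rather than DeepMind’s actual Generative Query Network:

    # Structural sketch of the representation/generation split (toy stand-in).
    import numpy as np

    rng = np.random.default_rng(0)
    W_repr = rng.normal(size=(64 * 64 + 7, 256))  # encoder weights (untrained)
    W_gen = rng.normal(size=(256 + 7, 64 * 64))   # generator weights (untrained)

    def represent(image, camera_pose):
        """'Representation' half: encode one observed view into a scene vector."""
        x = np.concatenate([image.ravel(), camera_pose])
        return np.tanh(x @ W_repr)

    def generate(scene_vector, query_pose):
        """'Generative' half: predict the view from a new, unseen camera pose."""
        x = np.concatenate([scene_vector, query_pose])
        return (x @ W_gen).reshape(64, 64)

    # Observations are summed into one scene representation; adding views
    # refines it, which is exactly the behavior described in the article.
    observations = [(rng.random((64, 64)), rng.random(7)) for _ in range(2)]
    scene = sum(represent(img, pose) for img, pose in observations)
    predicted_view = generate(scene, query_pose=rng.random(7))
    print(predicted_view.shape)  # (64, 64)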

Think of it like someone handing you a couple of pictures of a room, then asking you to draw what you’d see if you were standing in a specific spot in it. Again, this is simple enough for us, but computers have no natural ability to do it; their sense of sight, if we can call it that, is extremely rudimentary and literal, and of course machines lack imagination.

Yet there are few better words that describe the ability to say what’s behind something when you can’t see it.

“It was not at all clear that a neural network could ever learn to create images in such a precise and controlled manner,” said lead author of the paper, Ali Eslami, in a release accompanying the paper. “However we found that sufficiently deep networks can learn about perspective, occlusion and lighting, without any human engineering. This was a super surprising finding.”

It also allows the system to accurately recreate a 3D object from a single viewpoint, such as the block structures shown in the paper’s examples.

I’m not sure I could do that.

Obviously there’s nothing in any single observation to tell the system that some part of the blocks extends forever away from the camera. But it nonetheless creates a plausible version of the block structure that is accurate in every way. Adding one or two more observations requires the system to rectify multiple views, but results in an even better representation.

This kind of ability is critical for robots, especially because they have to navigate the real world by sensing it and reacting to what they see. With limited information, such as some important clue that’s temporarily hidden from view, they can freeze up or make illogical choices. But with something like this in their robotic brains, they could make reasonable assumptions about, say, the layout of a room without having to ground-truth every inch.

“Although we need more data and faster hardware before we can deploy this new type of system in the real world,” Eslami said, “it takes us one step closer to understanding how we may build agents that learn by themselves.”

14 Jun 2018

Facebook’s VP of communications and public policy leaves after a decade

In a high-level departure, Facebook is saying goodbye to its VP of communications and public policy, Elliot Schrage, who announced today that he is stepping down.

Schrage has been a key player in Facebook’s public persona and, as of late, its rocky dealings with the public for just over a decade. Schrage says this move is one he’s long been thinking about and has discussed with CEO Mark Zuckerberg and COO Sheryl Sandberg. And while the move does come several months after Facebook’s Cambridge Analytica implosion and Zuckerberg’s subsequent Capitol Hill excursion (for which Schrage had courtside seats), it is hard to imagine that these events didn’t expedite the decision, at least a little.

On his Facebook page, where Schrage aptly made his announcement, he said “leading policy and communications for hyper growth technology companies is a joy — but it’s also intense and leaves little room for much else.”

But while he may be stepping down from his official role at the company, Schrage still plans to stay on as an advisor to Zuckerberg and Sandberg for the time being. While Facebook searches externally for Schrage’s replacement, its communications department is being run by Caryn Marooney and Rachel Whetstone, with VP of Global Public Policy Joel Kaplan taking the lead on Facebook’s public policies.

In a statement to Recode, Sandberg reflected on the departure, saying “[Schrage’s] been instrumental in building our policy and communications teams [and] Mark and I look forward to his ongoing advice over the years ahead.”

As Facebook begins its journey to rebrand its platform as, once more, a place for friends, and not the news they consume, Schrage’s replacement may need all the advice they can get.

14 Jun 2018

Facebook’s longtime head of policy and comms steps down

A prominent figure who helped shape Facebook’s public perception over the course of the last decade is on the way out. In a Facebook post today, Elliot Schrage, vice president of communications and public policy, announced his departure.

Schrage joined the company in 2008 after leaving his position in the same role at Google. He had come under fire over the last year at Facebook for his influence in shaping Facebook’s highly criticized public reaction to a series of scandals that began with the platform’s policies during the 2016 U.S. presidential election. In response to questions about Facebook’s potential unwitting role in influencing the outcome of the election, Mark Zuckerberg famously dismissed such concerns as a “pretty crazy idea.”

via Facebook/Elliot Schrage

In a Facebook post, Schrage elaborates:

After more than a decade at Facebook, I’ve decided it’s time to start a new chapter in my life. Leading policy and communications for hyper growth technology companies is a joy — but it’s also intense and leaves little room for much else. Mark, Sheryl and I have been discussing this for a while. I’ll lead the search to identify someone new to oversee our communications and policy teams. We expect to find someone with the same passion, integrity, determination and energy that our teams bring to Facebook every day. Mark and Sheryl have asked me to stay to manage the transition and then to stay on as an advisor to help on particular projects – and I’m happy to help.

Earlier this week, Schrage reportedly apologized for comments made in response to questions from Arjuna Capital Managing Partner Natasha Lamb during an investor meeting. Lamb had inquired about Facebook’s plans to correct its gender pay gap, among other lines of questioning uncomfortable for the company. Schrage reportedly told Lamb that the company would not answer her questions because she was “not nice.” It’s not clear whether that event influenced the timing of Schrage’s departure.

14 Jun 2018

Developers – hack your way to free passes to Disrupt SF 2018

You’ve heard how TechCrunch Disrupt San Francisco 2018 is going to be the biggest, most ambitious Disrupt ever — and we’re serious. So serious, in fact, that we’re super-sizing the Hackathon, taking it online and making it global. Thousands of the world’s most talented developers, programmers, hackers and tech makers can participate and submit their hacks from anywhere in the world. The clock starts now – you have a little less than six weeks to build your team and start creating your projects – so sign up today to get started.

We’re asking you to show us how you’d creatively produce and apply technology to solve various challenges. Judges will review all eligible submitted hacks and rate them on a scale of 1-5 based on the quality of the idea, the technical implementation of the idea and the product’s potential impact. The 100 top-scoring teams will receive up to five Innovator Passes for their team to attend TechCrunch Disrupt SF 2018.

Plus, the 30 highest-scoring teams will advance to the semi-finals, where they get to demo their newly created product at Disrupt SF. From there, we’ll choose 10 of those teams to pitch their hack on The Next Stage in front of thousands of Disrupt SF attendees. One of those 10 teams will win the $10,000 grand prize and be the first-ever TechCrunch Disrupt Virtual Hackathon champ.

But that’s not all! We’ll also have some fantastic sponsor contests already announced from BYTON, TomTom and Viond:

BYTON

What can AI do for you while on the move?

What will people want to do in a car that has a 49” screen and drives autonomously? How can we create an enjoyable time with a vehicle that’s able to communicate with other vehicles on the road or with smart city infrastructure? We challenge you to think creatively and develop unique solutions that give people their “time to be” while on the move. Smart agendas, recommendations and digital assistants are only some of the ways we’re thinking about applying artificial intelligence in the age of autonomy.

At BYTON we define ourselves by giving customers their “time to be” while in our cars through a unique user experience, interior design, and autonomous driving. The BYTON Concept is designed to make technology benefit life, providing an enjoyable time for people on the move. BYTON Life is the core of that experience. It is an open digital cloud platform that connects applications, data and smart devices. When integrated with innovative human-vehicle interaction, it takes the intelligent experience with the vehicle to a whole new level.

Sponsor Prizes: $5,000 and an invitation to a BYTON Co-creation Event to meet the creators of BYTON Life will be awarded to the top team that utilizes existing technology and APIs of their choice to develop a unique and creative solution. Awards for 2nd prize will be $2,000 and 3rd prize will be $1,000.

TomTom

TomTom created the easy-to-use navigation device, one of the most influential inventions of all time. Since then, our software and navigation technologies have been powering hundreds of millions of applications globally, from industry-leading location-based products and mapmaking technologies to innovative apps and connected car services. We continue to shape the future, leading the way with autonomous driving, smart mobility and smarter cities.

Location-Based AR app on GitHub – Build an augmented reality (AR) application that demonstrates how the TomTom Maps APIs can be combined with AR technology to generate custom 3D worlds, enable location annotations or incorporate traffic-enabled directions and travel times into applications. The TomTom Maps APIs already enable developers to create location-aware applications that can display maps and traffic information, search for locations and points of interest, and calculate traffic-aware routes and travel times. With this challenge, we want developers to enhance their solutions by combining TomTom location technology with AR technology in a mobile or web application. Creativity and innovation will be highly valued and rewarded!
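
As a starting point, a hack might begin by pulling a traffic-aware route to overlay in AR. The sketch below reflects my reading of TomTom’s public Routing API documentation; the endpoint shape and response fields are assumptions to verify against the current docs, and the key is a placeholder:

    # Hedged sketch: fetch a traffic-aware route from the TomTom Routing API.
    import requests

    API_KEY = "YOUR_TOMTOM_API_KEY"  # placeholder; register for a real key

    def traffic_aware_route(start, end):
        """Return (travel time in seconds, length in meters) between two
        (lat, lon) pairs, using live traffic."""
        locations = f"{start[0]},{start[1]}:{end[0]},{end[1]}"
        url = f"https://api.tomtom.com/routing/1/calculateRoute/{locations}/json"
        resp = requests.get(url, params={"key": API_KEY, "traffic": "true"})
        resp.raise_for_status()
        summary = resp.json()["routes"][0]["summary"]
        return summary["travelTimeInSeconds"], summary["lengthInMeters"]

    # e.g. Moscone Center to the Ferry Building in San Francisco:
    # secs, meters = traffic_aware_route((37.7843, -122.4010), (37.7955, -122.3937))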

Sponsor Prize: $5,000.

Viond

Viond allows creatives, agencies and businesses to create interactive 360°/VR experiences via drag-and-drop, without the need for VR programming skills. Viond significantly reduces the effort and cost of creating 360°/VR experiences, providing creatives with the tool and platform to explore this new medium.

Viond provides an authoring tool to create interactive 360° experiences. The authoring tool can be downloaded from the Viond Hackathon site and is available for Mac and PC. Experiences can then be published via the Viond Cloud to the Viond player, which is available for iOS, Android and Oculus.

Sponsor Prizes: First place: $1,000 + a 12-month Viond Enterprise license + the chance to win the TechCrunch hackathon. Second place: $500 + a 12-month Viond Professional license. Third place: $250 + a 12-month Viond Professional license.

It’s free to participate in our virtual hackathon and you can participate from anywhere! So what are you waiting for? Gather up a group of your closest developer friends and sign up today!

14 Jun 2018

The problem with ‘explainable AI’

The first consideration when discussing transparency in AI should be data, the fuel that powers the algorithms. Companies should disclose where and how they got the data they used to fuel their AI systems’ decisions. Consumers should own their data and should be privy to the myriad ways that businesses use and sell such information, which is often done without clear and conscious consumer consent. Because data is the foundation for all AI, it is valid to want to know where the data comes from and how it might explain biases and counterintuitive decisions that AI systems make.

On the algorithmic side, grandstanding by IBM and other tech giants around the idea of “explainable AI” is nothing but virtue signaling that has no basis in reality. I am not aware, for instance, of any place where IBM has laid bare the inner workings of Watson — how do those algorithms work? Why do they make the recommendations/predictions they do?

There are two issues with the idea of explainable AI. The first is one of definition: What do we mean by explainability? What do we want to know? The algorithms or statistical models used? How learning has changed parameters over time? What a model looked like for a certain prediction? A cause-and-consequence relationship with human-intelligible concepts?

Each of these entails different levels of complexity. Some are pretty easy — someone had to design the algorithms and data models, so they know what they used and why. What these models are is also pretty transparent. In fact, one of the refreshing facets of the current AI wave is that most of the advancements are made in peer-reviewed papers — open and available to everyone.

What these models mean, however, is a different story. How these models change and how they work for a specific prediction can be checked, but what they mean is unintelligible to most of us. It would be like buying an iPad with a label on the back explaining how a microprocessor and touchscreen work — good luck! And then there’s the layer of addressing human-intelligible causal relationships; well, that’s a whole different problem.
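
A toy model of my own making shows the gap between “can be checked” and “means something”: you can print every number involved in a single prediction and still learn nothing a person would accept as a reason.

    # Full transparency, zero intelligibility: a tiny two-layer network whose
    # every weight and activation is inspectable. (An illustration, not Watson.)
    import numpy as np

    rng = np.random.default_rng(1)
    W1 = rng.normal(size=(4, 8))  # input-to-hidden weights, all visible
    W2 = rng.normal(size=(8, 1))  # hidden-to-output weights, all visible

    def predict(x, trace=False):
        hidden = np.maximum(0, x @ W1)  # ReLU layer
        score = (hidden @ W2).item()
        if trace:                       # "explainability" as a raw dump
            print("inputs :", x)
            print("hidden :", hidden.round(2))
            print("score  :", round(score, 2))
        return score

    predict(np.array([0.2, 1.3, -0.7, 0.4]), trace=True)
    # Every intermediate value is right there, yet none of it answers "why"
    # in human-intelligible terms.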

Part of the advantage of some of the current approaches (most notably deep learning) is that the model identifies (some) relevant variables that are better than the ones we can define ourselves. Part of the reason their performance is better relates to that very complexity, which is hard to explain because the system identifies variables and relationships that humans have not identified or articulated. If we could, we would program it and call it software.

The second overarching factor when considering explainable AI is assessing the trade-offs of “true explainable and transparent AI.” Currently there is a trade-off in some tasks between performance and explainability, in addition to the business ramifications. If all the inner workings of an AI-powered platform were publicly available, then intellectual property as a differentiator would be gone.

Imagine if a startup created a proprietary AI system, for instance, and was compelled to explain exactly how it worked, to the point of laying it all out — it would be akin to asking a company to disclose its source code. If the IP had any value, the company would be finished soon after it hit “send.” That’s why, generally, a push for such requirements favors incumbents that have big budgets and market dominance, and would stifle innovation in the startup ecosystem.

Please don’t misread this to mean that I’m in favor of “black box” AI. Companies should be transparent about their data and offer an explanation of their AI systems to those who are interested, but we need to think about the societal implications of what that entails, both in terms of what we can do and what business environment we create. I am all for open source and transparency, and I see AI as a transformative technology with a positive impact. By putting such a premium on transparency, we are setting a very high burden for what amounts to an infant but high-potential industry.

14 Jun 2018

Google releases first diversity report since the infamous anti-diversity memo

Google has released its first diversity report since the infamous James Damore memo and the fallout that resulted from it. Those are both long stories, but the TL;DR is that Damore said some sexist things in a memo that went viral. He got fired and then sued Google for firing him; his related complaint to the National Labor Relations Board was shot down in February. Then it turned out another employee, Tim Chevalier, alleges he was fired for advocating for diversity, as reported by Gizmodo later that month. Now Chevalier is suing Google.

“I was retaliated against for pointing out white privilege and sexism as they exist in the workplace at Google and I think that’s wrong,” Chevalier told TechCrunch a few months ago about why he decided to sue. “I wanted to be public about it so that the public would know about what’s going on with treatment of minorities at Google.”

In court, Google is trying to move the case into arbitration. Earlier this month, Google’s attorney said Chevalier previously “agreed in writing to arbitrate the claims asserted” in his original complaint, according to court documents filed June 11, 2018.

Now that I’ve briefly laid out the state of diversity and inclusion at Google, here’s the actual report, which is Google’s fifth diversity report to date and by far the most comprehensive. For the first time, Google has provided information around employee retention and intersectionality.

First, here are some high-level numbers:

  • 30.9 percent female globally
  • 2.5 percent black in U.S.
  • 3.6 percent Latinx in U.S.
  • 0.3 percent Native American in U.S.
  • 4.2 percent two or more races in U.S.

Google also recognizes its gender reporting is “not inclusive of our non-binary population” and is looking for the best way to measure gender moving forward. As Google itself notes, representation for women, black and Latinx people has barely increased, and Latinx representation has actually gotten worse. Last year, Google was 31 percent female, 2 percent black and 4 percent Latinx.

At the leadership level, Google has made some progress year over year, but the company’s higher ranks are still 74.5 percent male and 66.9 percent white. So, congrats on the progress but please do better next time because this is not good enough.

Moving forward, Google says its goal is to reach or exceed the available talent pool in terms of underrepresented talent. But what that would actually look like is not clear. In an interview with TechCrunch, Google VP of Diversity and Inclusion Danielle Brown told me Google looks at skills, jobs and census data around underrepresented groups graduating with relevant degrees. Still, she said she’s not sure what the representation numbers would look like if Google achieved that. Asked what a job well done would look like, Brown said:

You know as well as we do that it’s a long game. Do we ever get to good? I don’t know. I’m optimistic we’ll continue to make progress. It’s not a challenge we’ll solve overnight. It’s quite systemic. Despite doing it for a long time, my team and I remain really optimistic that this is possible.

As noted above, Google has also provided data around attrition for the first time. It’s no surprise — to me, at least — that attrition rates for black and Latinx employees were the highest in 2017. To be clear, attrition rates are an indicator of how many people leave a company. When one works at a company that has so few black and brown people in leadership positions, and at the company as a whole, the opportunities to be the unwelcome recipient of othering, micro-aggressions, discrimination and so forth are plentiful.

“A clear low light, obviously, in the data is the attrition for black and Latinx men and women in the U.S.,” Brown told TechCrunch. “That’s an area where we’re going to be laser-focused.”

She added that some of Google’s internal survey data shows employees are more likely to leave when they report feeling like they’re not included. That’s why Google is doing some work around ally training and “what it means to be a good ally,” Brown told me.

“One thing we’ve all learned is that if you stop with unconscious bias training and don’t get to conscious action, you’re not going to get the type of action you need,” she said.

From an attrition standpoint, where Google is doing well is the retention of women versus men. It turns out women are staying at Google at higher rates than men, across both technical and non-technical areas. Meanwhile, Brown has provided bi-weekly attrition numbers to Google CEO Sundar Pichai and his leadership team since January in an attempt to intervene in potential issues before they become bigger problems, she said.

via Google: Attrition figures have been weighted to account for seniority differences across demographic groups to ensure a consistent baseline for comparison.
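
For readers wondering what “weighted to account for seniority differences” means in practice, direct standardization is the usual technique: apply each group’s level-by-level attrition rates to one shared seniority mix so that groups with different seniority profiles become comparable. The numbers below are made up for illustration; Google hasn’t published its exact method:

    # Generic direct standardization of attrition rates (illustrative numbers).
    # attrition[group][level] = share of that group's employees at that
    # seniority level who left during the year.
    attrition = {
        "group_a": {"junior": 0.10, "senior": 0.04},
        "group_b": {"junior": 0.12, "senior": 0.05},
    }
    # One shared baseline mix, e.g. company-wide headcount share per level:
    baseline = {"junior": 0.6, "senior": 0.4}

    for group, rates in attrition.items():
        # Weight each level's rate by the common mix before comparing groups.
        adjusted = sum(rates[level] * share for level, share in baseline.items())
        print(group, f"{adjusted:.1%}")  # group_a 7.6%, group_b 9.2%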

As noted above, Google for the first time broke out information around intersectionality. According to the company’s data, women of all races are less represented than men of the same race. That’s, again, not surprising. While Google is 3 percent black, black women make up just 1.2 percent of its workforce. And Latinx women make up just 1.7 percent of Google’s 5.3 percent Latinx employee base. That means, as Google notes, the company’s gains in the representation of women have “largely been driven by” white and Asian women.

Since joining Google last June from Intel, Brown has had a full plate. Shortly after the Damore memo went viral in August — just a couple of months after she joined — Brown said “part of building an open, inclusive environment means fostering a culture in which those with alternative views, including different political views, feel safe sharing their opinions. But that discourse needs to work alongside the principles of equal employment found in our Code of Conduct, policies, and anti-discrimination laws.”

Brown also said the document is “not a viewpoint that I or this company endorses, promotes or encourages.”

Today, Brown told me the whole anti-diversity memo was “an interesting learning opportunity for me to understand the culture and how some Googlers view this work.”

“I hope what this report underscores is our commitment to this work,” Brown told me. “That we know we have a systemic and persistent challenge to solve at Google and in the tech industry.”

Brown said she learned “not every employee is going to agree with Google’s viewpoint.” Still, she does want employees to feel empowered to discuss either positive or negative views. But “just like any workplace, that does not mean anything goes.”

When someone doesn’t follow Google’s code of conduct, she said, “we have to take it very seriously” and “try to make those decisions without regard to political views.”

Megan Rose Dickey’s PGP fingerprint for email is: 2FA7 6E54 4652 781A B365 BE2E FBD7 9C5F 3DAE 56BD

14 Jun 2018

3,000 journalists covering Kim-Trump this week is WTF is wrong with media

Media businesses are in the dumper. Every week, we hear of new layoffs, budget cuts, diminished editorial quality, and more, way more. And yet, somehow, miraculously, more than 3,000 journalists managed to find the funds to travel to Singapore to cover the Kim-Trump Summit Extraordinaire this week.

How many journalists got to see the summit activity? From Politico: “Most notably, the number of American journalists allowed to witness the meeting between Trump and Kim was limited to seven — a smaller group than would usually be present for such a summit, and one that excluded representatives from the major wire services” (emphasis added).

It’s a huge news story, a major historical moment in the relations between the DPRK and the United States, and one that portends massive changes in that relationship going forward. The event should be fervently covered by the global press. Yet, 3,000 seems a stupendous number of people to cover an event so scripted and managed. Journalists watched from a warehouse and even got so bored, they started interviewing each other rather than, I don’t know, a source.

I notice this same dynamic watching the keynote videos of any of the top tech companies — there are hundreds if not thousands of journalists covering these events from the audience. Exactly how you build a unique story sitting there beats me.

In media, one of the most critical qualities of a great story is salience — how important a story is to a particular audience. Tech readers want to know everything happening at an Apple keynote, just as much as the whole world is curious about what shakes down in Singapore. It makes sense to have a density of journalists to cover these events.

The problem in my mind is the sheer duplication of work, when the increasingly precious time of journalists could be spent on finding more differentiated or unique stories that are under-reported. In Singapore, how many English-language journalists needed to be there? How many Chinese-speaking or Korean-speaking journalists? I’m not suggesting the answer in aggregate is one each, but certainly the number should be fractions of 3,000.

Journalists taking pictures of a TV screen of Kim and Trump. How is this journalism?

I have given a lot of thought to subscription models in media over the past few weeks, arguing that consumers are increasingly facing a “subscription hell” and fighting against the notion that paying for content should be the preserve of only the top 1%.

Yet, if we want readers to pay for our content, it has to be a differentiated product. This makes complete sense to every participant in industries like music, movies or books. Musicians may cover other artists, but they almost invariably try to perform original music of their own. Ultimately, without your own sound, you have no voice and no fan base.

Nonetheless, I feel journalists, and particularly editors, have to be reminded of this on a regular basis. Journalists still cling to the generalist model of our forebears rather than becoming specialists on a beat, where they can offer deeper insights and original reporting. Everyone can’t cover everything.

That’s one reason why people like Ben Thompson at Stratechery and Bill Bishop at Sinocism have grown to be so popular — they do one thing well, and don’t try to offer a bundle of content in the same old way. Instead, they have staked their brands and reputations on their deep focus. Readers can then add and subtract these subscriptions as their interests shift.

The biggest block to reducing this duplication is the lack of cooperation among media companies. Syndication of content happens occasionally, such as a recent deal between Politico and the South China Morning Post to provide more China-focused coverage to Politico’s U.S.-dominated readership. Those deals, though, tend to take months to hash out and are rarely nimble enough to match the news cycle.

Imagine instead a world where specialists are covering focused beats. Kim-Trump could have been covered by people who specialize in Singaporean foreign affairs (as hosts, they had the most knowledge of what was going on), as well as North Korea watchers and U.S.-Asia foreign policy junkies. Clearinghouses for syndication (blockchain or no blockchain) could have ensured that the content from these specialists was distributed to all who had an interest in adding coverage. No generalists need apply.

This isn’t an efficiency argument for further newsroom cutbacks, but rather an argument to use the talent and time of existing journalists to trailblaze unique paths and coverage. Until the media learns that not everyone can become a North Korea or Google expert overnight, we are going to continue to see warehouses and ballrooms filled to the brim with preening writers and camera teams, while the stories that most need telling remain overlooked.

14 Jun 2018

Reflections on E3 2018

After taking a year off, I returned to E3 this week. It’s always a fun show, in spite of the fact that the show floor has come to rival Comic-Con in the sheer mass of people the organizers manage to cram into the aisles of the convention center.

We’ve been filing stories all week, but here is a very much incomplete collection of my thoughts on this year’s show.

Zombies are still very much a thing

I’d have thought we’d have hit peak zombie years ago, but here we are, zombies everywhere. That includes the LA Convention Center lobby, which was swarming with actors decked out as the undead. There’s something fundamentally disturbing about watching gamers get pictures taken with fake, bloody corpses. Or maybe it’s just the perfect allegory for our time.

Nintendo’s back

A slight adjustment in approach certainly played a role, as the company has embraced mobile gaming. But the key to Nintendo’s return was a refocus on what it does best: offering an innovative experience with familiar IP. Oh, and the GameCube controller compatibility for Smash Bros. was a brilliant bit of fan service, even by Nintendo’s standards.

Quantity versus quality?

Microsoft’s event was a sort of video game blitzkrieg. The company showed off 50 titles, a list that included 15 exclusives. Sony, on the other hand, stuck to a handful but presented them in much greater depth. Ultimately, I have to say I preferred the latter approach. Real gameplay footage feels like an extremely finite resource at these events.

Ultra violence in ultra high-def

Certainly not a new trend in gaming, but there’s something about watching someone bite off someone else’s face on the big screen that’s extra upsetting. Sony’s press conference was a strange sort of poetry, with some of the week’s most stunning imagery knee-deep in blood and gore.

Reedus ’n fetus

We saw more footage and somehow we understand the game less?

Checkmate

IndieCade is always a favorite destination at E3. It’s a nice respite from the big three’s packed booths. Interestingly, there were a lot more tabletop games than I remember. You know, the real kind, with physical pieces and no screens.

Death of a Tomb Raider

I played Shadow of the Tomb Raider on a PC in NVIDIA’s meeting space. It’s good, but I’m not good at it. I killed poor Lara A LOT. I can deal with that sort of thing when my character is in full Master Chief regalia or whatever, but those close-up shots of her face when I drowned her for the fifth time kind of bummed me out. Can video games help foster empathy or are we all just destined to desensitize ourselves because we have tombs to raid, damn it?

I saw the light

NVIDIA also promised me that its ray-tracing tech would be the most impressive demo I saw at E3 that day. I think they were probably right, so take that, Sonic Racing. The tech, which was first demoed at GDC, “brings real-time, cinematic-quality rendering to content creators and game developers.”

VR’s still waiting in the wings

At E3 two years ago, gaming felt like an industry on the cusp of a VR breakthrough. In 2018, however, it doesn’t feel any closer. There were a handful of compelling new VR experiences at the event, but it felt like many of the peripheral and other experiences were sitting on the fringes of the event — both literally and metaphorically — waiting for a crack at the big show.

Remote Control

Sony’s Control trailer offered the highest ratio of excitement to actual information I experienced all week. Maybe it’s Inception the video game or the second coming of Quantum Break. I dunno, looks fun.

AR’s a thing, but not, like, an E3 thing

We saw a few interesting examples of this, including the weirdly wonderful TendAR, which requires you to make a bunch of faces so a fake fish doesn’t die. It’s kind of like a version of Seaman that feeds on your own psychic energy. At the end of the day, though, E3 isn’t a mobile show.

Cross-platform

Having said that, there are some interesting examples of cross-platform potential popping up here and there. The $50 Poké Ball Plus for the Switch is a good example I’m surprised hasn’t been talked about more. Along with controlling the new Switch titles, it can be used to capture Pokémon via Pokémon GO. There’s some good brand synergy right there. And then, of course, there’s Fortnite, which is also on the Switch. The game’s battle royale mode is a great example of how cross-platform play can lead to massive success. Though by all accounts, Sony doesn’t really want to play ball.

V-Bucks

Oh, Epic Games has more money than God now.

Moebius strip

Video games are art. You knew that already, blah, blah, blah. But Sable looks like a freaking Moebius comic come to life. I worry that it will be about as playable as Dragon’s Lair, but even that trailer is a remarkable thing.