Year: 2018

02 May 2018

Facebook teases major VR display upgrades with Oculus ‘Half Dome’ prototype

For the last couple of years, the hardware upgrades to VR headsets have been fairly incremental: a resolution bump here, eye-tracking or new controllers there, with most of the advances seeming to come on the software side. Facebook is focusing on more fundamental display issues in its latest internal VR headset prototype, which it showed off on day 2 of F8.

The Oculus “Half Dome” prototype focuses on letting users see more of their environment at once inside the headset, while also making some sophisticated changes that allow them to shift focus between objects at different depths.

The prototype brings the field of view from 100 degrees to 140 degrees, allowing users to see more of the visual world in their periphery. What’s even more impressive is that Facebook has achieved this without creating an even bulkier design: the prototype maintains the size of the existing Rift headset thanks to “continued advances in lenses.” It seems there are some fundamental display issues the company is aiming to tackle before it shifts focus to making the headset smaller.

In terms of depth of field, existing headsets don’t give users multiple focal lengths. That means everything, whether it’s something handed to you to read or another object you need to see clearly, is rendered at a fixed focal distance of roughly 2 meters from the user. This was one of the major issues Magic Leap claimed to have solved with its display technology, though it’s unclear how much of that research is actually making it into the end product.

Oculus says it has been able to achieve variable focus in one of its newest prototypes by physically moving the screens inside the headset to accommodate different depths of field. It works similarly to the autofocus function in cameras but won’t cause any noise or vibrations for users, the company says.
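For a rough sense of the mechanics, a varifocal display can be modeled with the thin-lens equation: moving the panel a millimeter or two relative to the lens shifts where the virtual image appears to sit. The sketch below is purely illustrative; Oculus hasn’t published Half Dome’s optics, and the 40 mm focal length is an assumed placeholder.

```python
# Illustrative sketch of the varifocal principle (not Oculus's implementation).
# Thin-lens equation: 1/f = 1/d_panel + 1/d_image, where the virtual image is
# treated as a negative distance. The 40 mm lens focal length is an assumption.

def panel_distance_for_focus(focus_depth_m: float,
                             lens_focal_length_m: float = 0.040) -> float:
    """Lens-to-panel distance (meters) that places the virtual image at
    focus_depth_m in front of the viewer."""
    d_image = -focus_depth_m  # virtual image sits on the viewer's side of the lens
    return 1.0 / (1.0 / lens_focal_length_m - 1.0 / d_image)

# Shifting focus from the usual fixed ~2 m down to 0.5 m only requires moving
# the panel a couple of millimeters, which a small actuator can do quietly.
for depth_m in (2.0, 1.0, 0.5):
    print(f"focus at {depth_m:.1f} m -> panel at "
          f"{panel_distance_for_focus(depth_m) * 1000:.1f} mm from the lens")
```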

Oculus has talked about a lot of display advances like this on the research side, but this shows how close the company is to the real deal, with a headset that integrates the technology without increasing the bulk of current Oculus hardware. While Oculus has devoted much of its public prototype attention to standalone headsets, “Half Dome” showcases some big changes that will move the dial on the highest-end VR headset displays.

02 May 2018

Google revamps its Google Maps developer platform

Google is launching a major update to its Google Maps API platform for developers today — and it’s also giving it a new name: the Google Maps Platform.

This is one of the biggest changes to the platform in recent years, and it’ll greatly simplify the Google Maps developer offerings and how Google charges for access to those APIs. Starting June 11, though, all Google Maps developers will have to have a valid API key and a Google Cloud Platform billing account, too.

As part of this new initiative, Google is combining the 18 individual Maps APIs the company currently offers into only three core products: Maps, Routes and Places. The good news for developers here is that Google promises their existing code will continue to work without any changes.

As part of this update, Google is also changing how it charges for access to these APIs. It now offers a single pricing plan with access to free support. Currently, Google offers both a Standard and a Premium plan (the Premium plan includes access to support, for example), but going forward it’ll offer only a single plan, which also provides developers with $200 of free monthly usage. As usual, there are also bespoke pricing plans for enterprise customers.
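For developers, the day-to-day calls shouldn’t look much different. As a rough, unofficial sketch, here’s what hitting the Routes and Places products could look like from Python using the open-source googlemaps client library; the key, addresses and query parameters below are placeholders, and under the new platform the key must be tied to a Cloud billing account.

```python
# Hypothetical sketch using the open-source `googlemaps` Python client
# (pip install googlemaps). Key, addresses and parameters are illustrative only.
import googlemaps

gmaps = googlemaps.Client(key="YOUR_API_KEY")  # placeholder key with billing enabled

# Routes: driving directions between two points
directions = gmaps.directions(
    origin="San Francisco, CA",
    destination="Mountain View, CA",
    mode="driving",
)
print(directions[0]["legs"][0]["duration"]["text"])

# Places: cafes near a coordinate
places = gmaps.places_nearby(
    location=(37.7749, -122.4194),
    radius=500,
    type="cafe",
)
for place in places.get("results", [])[:5]:
    print(place["name"])
```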

As Google also announced today, the company plans to continue launching various Maps-centric, industry-specific solutions. Earlier this year, for example, it launched a program for game developers who want to build real-world games on Maps data, and today it announced similar solutions for asset tracking and ridesharing. Lyft already started using the ridesharing product in its app last year.

“Our asset tracking offering helps businesses improve efficiencies by locating vehicles and assets in real-time, visualizing where assets have traveled, and routing vehicles with complex trips,” the Maps team writes in today’s announcement. “We expect to bring new solutions to market in the future, in areas where we’re positioned to offer insights and expertise.”

Overall, the Google Maps team seems to be moving in the right direction here. Google Maps API access has occasionally been a divisive issue, especially during times when Google changed its free usage levels. Today’s change likely won’t create this kind of reaction from the developer community since it’ll likely make life for developers easier in the long run.

02 May 2018

Cambridge Analytica shuts down in light of ‘unfairly negative’ press coverage

Cambridge Analytica is done. In light of the sprawling controversy around its role in improperly obtaining data from Facebook users through a third party, the company will end its U.S. and U.K. operations.

In a press release confirming the decision, the company said that “unfairly negative media coverage” around the Facebook incident has “driven away virtually all of the Company’s customers and suppliers,” making its business no longer financially viable. The same goes for the SCL Group, Cambridge Analytica’s U.K.-based affiliate and parent company:

Earlier today, SCL Elections Ltd., as well as certain of its and Cambridge Analytica LLC’s U.K. affiliates (collectively, the “Company” or “Cambridge Analytica”) filed applications to commence insolvency proceedings in the U.K.  The Company is immediately ceasing all operations…

Additionally, parallel bankruptcy proceedings will soon be commenced on behalf of Cambridge Analytica LLC and certain of the Company’s U.S. affiliates in the United States Bankruptcy Court for the Southern District of New York.

On Wednesday, just before the company went public with its news, Gizmodo reported that employees of Cambridge Analytica’s U.S. offices learned that their jobs were being terminated when they were ordered to hand over their company keycards.

Given its already fairly shadowy business practices, it remains to be seen if this is really the end for Cambridge Analytica or just a strategic rebrand while it waits for the “siege” of negative media coverage to cool off.

02 May 2018

Stories are about to surpass feed sharing. Now what?

We’re at the cusp of the visual communication era. Stories creation and consumption is up 842 percent since early 2016, according to consulting firm Block Party. Nearly a billion accounts across Snapchat, Instagram, WhatsApp, Facebook, and Messenger now create and watch these vertical, ephemeral slideshows. And yesterday, Facebook chief product officer Chris Cox showed a chart detailing how “the Stories format is on a path to surpass feeds as the primary way people share things with their friends sometime next year.”

The repercussions of this medium shift are vast. Users now consider how every moment could be glorified and added to the narrative of their day. Social media platforms are steamrolling their old designs to highlight the camera and people’s Stories. And advertisers must rethink their message not as a headline, body text, and link, but as a background, overlays, and a feeling that lingers even if viewers don’t click through.

WhatsApp’s Stories now have over 450 million daily users. Instagram’s have over 300 million. Facebook Messenger’s had 70 million in September. And Snapchat as a whole just reached 191 million, about 150 million of which use Stories according to Block Party. With 970 million accounts, it’s the format of the future. Block Party calculates that Stories grew 15X faster than feeds from Q2 2016 to Q3 2017. And that doesn’t even count Google’s new AMP Stories for news, Netflix’s Stories for mobile movie previews, and YouTube’s new Stories feature.

Facebook CEO Mark Zuckerberg even admitted on last week’s earnings call that the company is focused on “making sure that ads are as good in Stories as they are in feeds. If we don’t do this well, then as more sharing shifts to Stories, that could hurt our business.” When asked, Facebook confirmed that it’s now working on monetization for Facebook Stories.

From Invention To Standard

“They deserve all the credit,” Instagram CEO Kevin Systrom told me about Snapchat when his own app launched its clone of Stories. But what sprouted as Snapchat CEO Evan Spiegel and his team reimagining the Facebook News Feed through the lens of its 10-second disappearing messages has blossomed into the dominant way to see life from someone else’s perspective. And just as Facebook and Twitter took FriendFeed and refined it with relevancy sorting, character constraints, and all manner of embedded media, the Stories format is still being perfected. “This is about a format, and how you take it to a network and put your own spin on it,” Systrom followed up.

Snapchat is trying to figure out if Stories from friends and professional creators should be separate, and if they should be sorted by relevancy or reverse chronologically. Instagram and Facebook are opening Stories up to posts from third-party apps like Spotify, which makes them a great way to discover music. WhatsApp is pushing the engineering limits of Stories, figuring out ways to make the high-bandwidth videos play on slow networks in the developing world.

Messenger is removing its camera from the navigation menu and settling in as a place to watch Stories shared from Facebook and Instagram. Meanwhile, it’s merging augmented reality, commerce, and Stories so users can preview products in AR and then either share or buy them. Facebook created a Stories carousel ad that lets businesses string together a narrative from a slideshow of three photos or videos. And perhaps most tellingly, Facebook is testing a new post composer for its News Feed that shows an active camera and camera roll preview to coax you into sharing Stories instead of a text status. Companies that resist the trend may be left behind.

Social Media Bedrock

As I wrote two years ago, when Snapchat was the only app with Stories:

“Social media creates a window through which your friends can watch your life. Yet most social networks weren’t designed that way, because phones, screen sizes, cameras, and mobile network connections weren’t good enough to build a crystal-clear portal.

With all its text, Twitter is like peering through a crack in a fence. There are lots of cracks next to each other, but none let you see the full story. Facebook is mostly blank space. It’s like a tiny jail-cell window surrounded by concrete. Instagram was the closest thing we had. Like a quaint living room window, you can only see the clean and pretty part they want you to see.

Snapchat is the floor-to-ceiling window observation deck into someone’s life. It sees every type of communication humans have invented: video, audio, text, symbols, and drawings. Beyond virtual reality and 360 video — both tough to capture or watch on the go — it’s difficult to imagine where social media evolves from here.” It turns out that over the next two years, social media would not evolve, but instead converge on Stories. 

What comes next is a race for more decorations, more augmented reality, more developers, and more extensibility beyond native apps and into the rest of the web. Until we stop using cell phones altogether, we’ll likely see most sharing divided between private messaging and broadcasted Stories.

The medium is a double-edged sword for culture, though. While Stories are a much more vivid way to share and engender empathy, they also threaten to commodify life. When Instagram launched Stories, Systrom said it was because otherwise you “only get to see the highlights”.

But he downplayed how a medium for capturing more than the highlights would pressure people around the world to interrupt any beautiful scene, fit of laughter or quiet pause with their camera phone. We went from people shooting and sharing once or a few times a day to doing it constantly. In fact, people now plan their activities not just around a picture-perfect destination, but around turning their whole journey into success theater.

If Stories are our new favorite tool, we must learn to wield them judiciously. Sometimes a memory is worth more than an audience. When it’s right to record, don’t get in the way of someone else’s experience. And after the Story is shot, return to the moment and save captioning and decoration for down time. Stories are social media bedrock. There’s no richer way to share, so they’re going to be around for a while. We better learn to gracefully coexist.

02 May 2018

Soft Robotics raises $20 million to expand operations

Massachusetts-based Soft Robotics announced this week that it has raised $20 million in funding, courtesy of Scale Venture Partners, Calibrate Ventures, Honeywell Ventures and Tekfen Ventures, along with existing investors like robotics giant ABB. The round follows a $5 million Series A the company closed back in late 2015.

The investment interest is pretty clear on this one. Picking and placing is the de rigueur industrial robotics challenge at the moment, and the company’s soft, air-filled hands offer a novel approach to the issue. The rubbery materials that comprise the company’s robotic grippers make them much more compliant and therefore more capable of picking up a variety of objects with minimal pre-programming and on-board vision systems.

Thus far, Soft has primarily found a spot for itself in the food industry, serving factories with delicate products like produce and pizza dough. It’s also been adopted by Just Born Quality Confections, the people who bring you Peeps.

According to the company, the new round will help push Soft even further into the food and beverage categories, along with a larger presence in retail and logistics. The involvement of Honeywell and Yamaha’s investment wings could also signal interest from those companies’ own warehouses. With the right air pressure applied, the system should be strong enough to pick up more solid objects. 

Warehouse fulfillment has become increasingly strained in recent years, due to expectations set by companies like Amazon, opening a space for robotics companies to address fast-paced but repetitive jobs like moving product onto and off of conveyor belts. Late last month, Soft showed off a low-cost, AI-driven warehouse system designed to retrieve products from bins to sort and fulfill retail orders with little oversight from its human counterparts.

02 May 2018

Facebook animates photo-realistic avatars to mimic VR users’ faces

Facebook wants you to look and move like you in VR, even if you’ve got a headset strapped to your face in the real world. That’s why it’s building a new technology that uses a photo to map someone’s face into VR, and sensors to detect facial expressions and movements to animate that avatar so it looks like you without an Oculus on your head.

CTO Mike Schroepfer previewed the technology during his day 2 keynote at Facebook’s F8 conference. Eventually, this technology could let you bring your real-world identity into VR so you’re recognizable by friends. That’s critical to VR’s potential to let us eradicate the barriers of distance and spend time in the same “room” with someone on the other side of the world. These social VR experiences will fall flat without emotion, which headsets obscure and static avatars leave out. But if Facebook can port your facial expressions alongside your mug, VR could elicit emotions similar to being with someone in person.

Facebook has been making steady progress on the avatar front over the years. What began as a generic blue face eventually gained personalized features, skin tones and life-like details, becoming a polished and evocative digital representation of a real person. Still, these avatars aren’t quite photo-realistic.

Facebook is inching closer, though, by using hand-labeled characteristics on portraits of people’s faces to train its artificial intelligence how to turn a photo into an accurate avatar.

Meanwhile, Facebook has tried to come up with new ways to translate emotion into avatars. Back in late 2016, Facebook showed off its “VR emoji gestures,” which let users shake their fists to turn their avatar’s face mad, or shrug their shoulders to adopt a confused expression.

Still, the biggest problem with Facebook’s avatars is that they’re trapped in its worlds of Oculus and social VR. In October, I called on Facebook to build a competitor to Snapchat’s wildly popular Bitmoji avatars, and we’re still waiting.

VR headsets haven’t seen the explosive user adoption some expected, in large part because they lack enough compelling experiences inside. There are zombie shooters and puzzle rooms and shipwrecks to explore, but most users tire of them quickly. Games and media lose their novelty in a way social networking doesn’t. Think of what you were playing or watching 14 years ago, and then consider that we’re still using Facebook.

That’s why the company needs to nail emotion within VR. It’s the key to making the medium impactful and addictive.

02 May 2018

Facebook engineer and ‘professional stalker’ reportedly fired over creepy Tinder messages

There’s no shortage of Facebook news this week on account of F8, but this creepy Facebook-adjacent story with a good outcome seems worth noting. An engineer whose Tinder messages suggested he was abusing his access to data at the company has been fired, Facebook confirmed to TechCrunch today.

The issue arose over the weekend: Jackie Stokes, founder of Spyglass Security, explained on Twitter that someone she knew had received some rather creepy messages from someone she personally confirmed was a Facebook engineer.

The engineer described themselves as a “professional stalker,” which, however accurate it may be (they attempt to unmask hackers), is probably not the best way to introduce yourself to a potential partner. They then implied that they had been employing their professional acumen in pursuit of identifying their new quarry.

Note that the excerpt Stokes shared isn’t the whole exchange.

Facebook employees contacted Stokes for more information and began investigating. Alex Stamos, Facebook’s chief security officer, offered the following statement:

We are investigating this as a matter of urgency. It’s important that people’s information is kept secure and private when they use Facebook. It’s why we have strict policy controls and technical restrictions so employees only access the data they need to do their jobs – for example to fix bugs, manage customer support issues or respond to valid legal requests. Employees who abuse these controls will be fired.

And fired he was, a Facebook spokesperson confirmed. The company has not yet responded to a question about what those controls were that should ostensibly have prevented the person from accessing the data of a prospective date.

It’s disturbing that someone in such a privileged position would use it for such tawdry and selfish purposes, but not really surprising. It is, however, also heartening that the person was fired promptly for doing so, and while everyone was busy at a major conference, at that.

(Updated with Facebook’s full statement and confirmation.)

02 May 2018

Facebook’s open-source Go bot can now beat professional players

Go is the go-to game for machine learning researchers. It’s what Google’s DeepMind team famously used to show off its algorithms, and Facebook, too, recently announced that it was building a Go bot of its own. As the team announced at the company’s F8 developer conference today, the ELF OpenGo bot has now achieved professional status after winning all 14 games it played against a group of top 30 human Go players recently.

“We salute our friends at DeepMind for doing awesome work,” Facebook CTO Mike Schroepfer said in today’s keynote. “But we wondered: Are there some unanswered questions? What else can you apply these tools to?” As Facebook notes in a blog post today, the DeepMind model itself also remains under wraps. In contrast, Facebook has open-sourced its bot.

“To make this work both reproducible and available to AI researchers around the world, we created an open source Go bot, called ELF OpenGo, that performs well enough to answer some of the key questions unanswered by AlphaGo,” the team writes today.

It’s not just Go that the team is interested in, though. Facebook’s AI Research group has also developed a StarCraft bot that can handle the often chaotic environment of that game. The company plans to open-source this bot, too. So while Facebook isn’t quite at the point where it can launch a bot that can learn any game (with the right amount of training), the team is clearly making quite a bit of progress here.

02 May 2018

Only 24 hours left on Disrupt SF super early-bird ticket prices

The final 24-hour shot clock is ticking, startup fans. That means you have one last day to get the best pricing on passes to Disrupt San Francisco 2018, which takes place September 5-7 at Moscone Center West. Come May 3, prices increase and your chance to save up to $1,800 disappears. Don’t miss your shot. Buy your passes today.

There are so many great reasons to attend TechCrunch Disrupt SF. It’s the essential tech conference for anyone who’s anyone in the startup scene. It’s where founders meet investors, movers meet shakers, ideas are born and partnerships are made. This year, we’ve supersized our flagship event and it promises to be epic.

That begins with more than 10,000 attendees, more than 1,200 startups and exhibitors and a special (but not exclusive) focus on these tech categories: AI, AR/VR, Blockchain, Biotech, Fintech, Gaming, Healthtech, Privacy/Security, Space, Mobility, Retail and Robotics.

What’s more, we’ve doubled the prize money for Startup Battlefield, the top startup pitch competition. This year, 15-30 of the best pre-Series A startups will launch their companies and strut their stuff on the Disrupt SF Main Stage — and compete for a $100,000 non-equity cash prize. We’re still accepting applications, so if you think you’ve got what it takes, you can apply right here.

You can also exhibit in Startup Alley and get your company in front of thousands of prospective investors, partners, collaborators, technologists and media — more than 400 media outlets attend Disrupt SF. Who knows, you might even get to exhibit for free if TechCrunch editors designate your company as a TC Top Pick. Be sure to click on that link and apply.

Some of the leading founders, technologists and industry disruptors will speak at Disrupt SF, including Dr. Joseph DeSimone, co-founder and CEO of Carbon, the 3D printing startup. He’ll join Eric Liedtke, Adidas executive board member, onstage to discuss a range of topics, including upending traditional manufacturing and the relationship between incumbents and disruptive startups.

And speaking of disrupting, we’ve done a bit of that ourselves. For the first time, we’re launching the Virtual Hackathon. And what’s more, we’re offering a $10,000 prize to the top hack team. Of course, you can count on lots of other great sponsor prizes — and score free passes to Disrupt, too. Be sure to sign up for the Virtual Hackathon here for more information and to receive updates on how you can participate.

We’re still just scratching the surface of what Disrupt SF 2018 has to offer. If you’re a founder or an investor, be sure to check out CrunchMatch, our platform designed to simplify the process of vetting and setting up meetings. And you’ll find plenty of opportunity for more networking at the TechCrunch after parties.

Disrupt SF 2018 is three programming-packed days of speakers, workshops, exhibits, networking, demos — and opportunity. You have 24 hours left for your shot to get the best price on passes.

02 May 2018

Facebook is using your Instagram photos to train its image recognition AI

In the race to continue building more sophisticated AI deep learning models, Facebook has a secret weapon: billions of images on Instagram.

In research the company is presenting today at F8, Facebook details how it took billions of public Instagram photos that users had annotated with hashtags and used that data to train its own image recognition models. It relied on hundreds of GPUs running around the clock to parse the data, but was ultimately left with deep learning models that beat industry benchmarks, the best of which achieved 85.4 percent accuracy on ImageNet.

If you’ve ever put a few hashtags onto an Instagram photo, you’ll know doing so isn’t exactly a research-grade process. There is generally some sort of method to why users tag an image with a specific hashtag; the challenge for Facebook was sorting what was relevant across billions of images.

When you’re operating at this scale — the largest of the tests used 3.5 billion Instagram images spanning 17,000 hashtags — even Facebook doesn’t have the resources to closely supervise the data. While other image recognition benchmarks may rely on millions of photos that human beings have pored through and annotated personally, Facebook had to find methods to clean up what users had submitted that they could do at scale.

The “pre-training” research focused on developing systems for finding relevant hashtags; that meant discovering which hashtags were synonymous while also learning to prioritize more specific hashtags over the more general ones. This ultimately led to what the research group called the “large-scale hashtag prediction model.”
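Facebook hasn’t released this training pipeline, but the recipe it describes, treating cleaned-up hashtags as weak multi-label targets for a standard image backbone, can be sketched roughly as follows. The model choice, vocabulary size and training loop here are assumptions for illustration, not Facebook’s actual code.

```python
# Minimal sketch of weakly supervised hashtag pretraining, assuming PyTorch.
# Dataset, hashtag vocabulary size and backbone are illustrative placeholders.
import torch
import torch.nn as nn
from torchvision import models

NUM_HASHTAGS = 17_000  # size of the cleaned, deduplicated hashtag vocabulary

# Standard image backbone with a multi-label hashtag prediction head
model = models.resnet50()
model.fc = nn.Linear(model.fc.in_features, NUM_HASHTAGS)

# Each photo can carry several hashtags, so this is multi-label classification:
# binary cross-entropy over the whole hashtag vocabulary.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

def pretrain_step(images: torch.Tensor, hashtag_targets: torch.Tensor) -> float:
    """One weakly supervised step: images -> hashtag predictions.

    hashtag_targets is a (batch, NUM_HASHTAGS) multi-hot matrix built from the
    cleaned user hashtags attached to each photo.
    """
    optimizer.zero_grad()
    logits = model(images)
    loss = criterion(logits, hashtag_targets)
    loss.backward()
    optimizer.step()
    return loss.item()

# After pretraining, the backbone (minus the hashtag head) would be fine-tuned
# on a labeled target task such as ImageNet classification.
```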

The privacy implications here are interesting. On one hand, Facebook is only using what amounts to public data (no private accounts), but when a user posts an Instagram photo, how aware are they that they’re also contributing to a database that’s training deep learning models for a tech mega-corp? These are the questions of 2018, but they’re also issues that Facebook is undoubtedly growing more sensitive to out of self-preservation.

It’s worth noting that the product of these models centers on more object-focused image recognition. Facebook won’t be able to use this data to predict who your #mancrushmonday is, and it also isn’t using the database to finally understand what makes a photo #lit. It can identify dog breeds, plants, food and plenty of other categories it’s grabbed from WordNet.

The raw accuracy gains from using this data aren’t necessarily the impressive part here; the increases in image recognition accuracy were only a couple of points in many of the tests. What’s fascinating is the pre-training process that turned noisy data this vast into something effective with only weak supervision. The models this data trained will be broadly useful to Facebook, and better image recognition could also bring users improved search and accessibility tools, as well as strengthen Facebook’s efforts to combat abuse on its platform.