Year: 2019

11 Feb 2019

Hands-on with an Alpha build of Google Maps’ Augmented Reality mode

I think most of us have had this experience, especially when you’re in a big city: you step off of public transit, take a peek at Google Maps to figure out which way you’re supposed to go… and then somehow proceed to walk two blocks in the wrong direction.

Maybe the little blue dot wasn’t actually in the right place yet. Maybe your phone’s compass was bugging out and facing the wrong way because you’re surrounded by 30-story buildings full of metal and other things that compasses hate.

Google Maps’ work-in-progress augmented reality mode wants to end that scenario, drawing arrows and signage onto your camera’s view of the real world to make extra, super sure you’re heading the right way. It compares that camera view with its massive collection of Street View imagery to try to figure out exactly where you’re standing and which way you’re facing, even when your GPS and/or compass might be a little off. It’s currently in alpha testing, and I spent some hands-on time with it this morning.


Google first announced AR walking directions about nine months ago at its I/O conference, but has been pretty quiet about it since. Much of that time has been spent figuring out the subtleties of the user interface. If they drew a specific route on the ground, early users tried to stand directly on top of the line when walking, even if it wasn’t necessary or safe. When they tried to use particle effects floating in the air to represent paths and curves, a Google UX designer tells us, one user asked why they were ‘following floating trash’.

The Maps team also learned that no one wants to hold their phone up very long. The whole experience has to be pretty quick, and is designed to be used in short bursts — in fact, if you hold up the camera for too long, the app will tell you to stop.

Firing up AR mode feels like starting up any other Google Maps trip. Pop in your destination, hit the walking directions button… but instead of “Start”, you tap the new “Start AR” button.

A view from your camera appears on screen, and the app asks you to point the camera at buildings across the street. As you do so, a bunch of dots will pop up as it recognizes building features and landmarks that might help it pinpoint your location. Pretty quickly — a few seconds, in our handful of tests — the dots fade away, and a set of arrows and markers appear to guide your way. A small cut-out view at the bottom shows your current location on the map, which does a pretty good job of making the transition from camera mode to map mode a bit less jarring.

When you drop the phone to a more natural position – closer to parallel with the ground, like you might hold it if you’re reading texts while you walk — Google Maps will shift back into the standard 2D map view. Hold up the phone like you’re taking a portrait photo of what’s in front of you, and AR mode comes back in.

In our short test (about 45 minutes in all), the feature worked as promised. It definitely works better in some scenarios than others; if you’re closer to the street and thus have a better view of the buildings across the way, it works out its location pretty quickly and with ridiculous accuracy. If you’re in the middle of a plaza, it might take a few seconds longer.

Google’s decision to build this as something that you’re only meant to use for a few seconds is the right one. Between making yourself an easy target for would-be phone thieves and walking into light poles, no one wants to wander a city primarily through the camera lens of their phone. I can see myself using it for the first step or two of a trek to make sure I’m getting off on the right foot, at which point an occasional glance at the standard map will hopefully suffice. It’s about helping you feel more certain, not about holding your hand the entire way.

Google did a deeper dive on how the tech works here, but in short: it’s taking the view from your camera and sending a compressed version up to the cloud, where it’s analyzed for unique visual features. Google has a good idea of where you are from your phone’s GPS signal, so it can compare the Street View data it has for the surrounding area to look for things it thinks should be nearby — certain building features, statues, or permanent structures — and work backwards to your more precise location and direction. There’s also a bunch of machine learning voodoo going on here to ignore things that might be prominent but not necessarily permanent (like trees, large parked vehicles, and construction).
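The pipeline described above, a coarse GPS shortlist followed by visual feature matching against Street View, can be sketched roughly like this. Everything here is illustrative: the function names, the toy scene database and the feature sets are invented for the example, since Google’s actual system is not public.

```python
import math

def haversine_m(a, b):
    """Approximate great-circle distance in meters between (lat, lon) points."""
    r = 6371000.0
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(h))

def localize(gps_fix, camera_features, streetview_db, radius_m=75):
    """Coarse step: shortlist Street View scenes near the GPS fix.
    Fine step: rank them by shared durable visual features; the winner
    gives a far more precise position and heading than GPS alone."""
    nearby = [s for s in streetview_db
              if haversine_m(gps_fix, s["pos"]) <= radius_m]
    return max(nearby,
               key=lambda s: len(camera_features & s["features"]),
               default=None)

# Toy "database": each scene stores only durable landmarks, mirroring how
# the real system is said to discard trees, parked cars and other
# transient objects.
db = [
    {"pos": (37.7900, -122.4000), "heading": 90,
     "features": {"clock_tower", "granite_facade", "statue"}},
    {"pos": (37.7902, -122.4003), "heading": 270,
     "features": {"glass_curtain_wall", "red_awning"}},
]

seen = {"clock_tower", "statue", "tree"}  # "tree" matches nothing durable
best = localize((37.7901, -122.4001), seen, db)
```

Here the shortlist keeps both scenes (both are within 75 meters of the fix), and the feature overlap picks the first one, so the app would conclude you’re facing roughly east.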

The feature is currently rolling out to “Local Guides” for feedback. Local Guides are an opt-in group of users who contribute reviews, photos, and places while helping Google fact check location information in exchange for early access to features like this.

Alas, Google told us repeatedly that it has no idea when it’ll roll out beyond that group.

11 Feb 2019

Lyft puts $700K toward Oakland’s underserved communities

Lyft is donating $700,000 to TransForm, an organization focused on improving access to transportation in underserved areas throughout California. In partnership with Oakland Mayor Libby Schaaf, Lyft and TransForm will invest in a free bike library and community parklets in Oakland, Calif. Lyft will also offer free rides to low-income residents of Oakland.

“At Lyft, we believe it is our civic responsibility to make cities better places to live,” Lyft co-founder and President John Zimmer said in a statement. “We know that transportation is a major key to unlocking economic mobility. We also know that transportation has been inaccessible for far too many communities for far too long. We have a unique opportunity to set a new standard for transportation equity and inclusiveness.”

Lyft, which bought bike-share behemoth Motivate for about $250 million last July, is working with these organizations to figure out the best places to put bike-share stations. In addition to Lyft offering free rides and a $5 per month pass for bikes and scooters, the money will also enable East Oakland Collective to offer subsidized AC Transit passes for qualifying residents.

The $700,000 is specifically going toward TransForm partner organizations East Oakland Collective and Scraper Bikes. Those two organizations will focus on improving bike-share station placement and launching a community-run bike-lending library.

“The Scraper Bike Team is excited to be a partner on this program to empower black and brown residents and get more butts on bikes!” Scraper Bikes founder and Executive Director Champ Stevenson said in a statement.  “We’ll help make sure the programs and infrastructure reflect the existing community and culture here in deep East Oakland. We plan on employing folks from the community to assist with some of the equity work that will move Oakland in the direction we the people believe it needs to be moving in.”

Update 2:08pm PT: While Lyft is donating $700,000 upfront, it will work with TransForm to raise an additional $300,000 from foundations.

11 Feb 2019

Lenovo Watch X was riddled with security bugs, researcher says

Lenovo’s Watch X was widely panned as “absolutely terrible.” As it turns out, so was its security.

The low-end smartwatch, which sold for about $50, was one of Lenovo’s cheapest. Available only in the China market, it has to be bought directly from the mainland. Lucky for Erez Yalon, head of security research at application security testing company Checkmarx, a friend gave him one. It didn’t take him long to find several vulnerabilities that allowed him to change users’ passwords, hijack accounts and spoof phone calls.

Because the smartwatch wasn’t using any encryption to send data from the app to the server, Yalon said he was able to see his registered email address and password sent in plain text, as well as data about how he was using the watch, like how many steps he was taking.

“The entire API was unencrypted,” said Yalon in an email to TechCrunch. “All data was transferred in plain-text.”

The API that helps power the watch was easily abused, he found, allowing him to reset anyone’s password simply by knowing a person’s username. That could’ve given him access to anyone’s account, he said.
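The reset flaw Yalon describes is a classic pattern: an endpoint that accepts a username and nothing else. A hypothetical sketch of the broken pattern, and a minimal fix (none of this is the watch’s actual server code):

```python
# Hypothetical sketch; not the Watch X's actual server code.

def reset_password_insecure(db, username, new_password):
    # BUG: the only input is a username, so anyone who knows (or guesses)
    # a username can take over that account.
    if username in db:
        db[username] = new_password

def reset_password_safer(db, tokens, username, token, new_password):
    # Minimal fix: require a secret, single-use token that was delivered
    # to the account owner out of band (e.g. by email) before accepting
    # the change. pop() consumes the token so it cannot be replayed.
    if tokens.pop(username, None) != token:
        raise PermissionError("invalid or missing reset token")
    db[username] = new_password
```

In the safer version, knowing a victim’s username is no longer enough; the attacker would also need the one-time secret sent to the real owner.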

Not only that, he found that the watch was sharing his precise geolocation with a server in China. Given the watch’s exclusivity to China, it might not be a red flag to natives. But Yalon said the watch had “already pinpointed my location” before he had even registered his account.

Yalon’s research wasn’t just limited to the leaky API. He found that the Bluetooth-enabled smartwatch could also be manipulated from nearby, by sending crafted Bluetooth requests. Using a small script, he demonstrated how easy it was to spoof a phone call on the watch.

Using a similar malicious Bluetooth command, he could also set the alarm to go off — again and again. “The function allows adding multiple alarms, as often as every minute,” he said.

Lenovo didn’t have much to say about the vulnerabilities, besides confirming their existence.

“The Watch X was designed for the China market and is only available from Lenovo to limited sales channels in China,” said spokesperson Andrew Barron. “Our [security] team has been working with the [original device manufacturer] that makes the watch to address the vulnerabilities identified by a researcher and all fixes are due to be completed this week.”

Yalon said that encrypting the traffic between the watch, the Android app and its web server would prevent snooping and help reduce manipulation.

“Fixing the API permissions eliminates the ability of malicious users to send commands to the watch, spoof calls, and set alarms,” he said.

11 Feb 2019

US iPhone users spent $79 last year, up 36% from 2017

Apple’s push to get developers to build subscription-based apps is now having a notable impact on App Store revenues. According to a new report from Sensor Tower due out later this week, revenue generated per U.S. iPhone grew 36 percent, from $58 in 2017 to $79 last year. As is typical, much of that increase can be attributed to mobile gaming, which accounted for more than half of this per-device average. However, more substantial growth took place in the categories outside of gaming — including those categories where subscription-based apps tend to rule the top charts, the firm found.

According to the report’s findings, per-device app spending in the U.S. grew more over the past year than it did in 2017.

From 2017 to 2018, iPhone users spent an average of $21 more on in-app purchases and paid app downloads — a 36 percent increase compared with the 23 percent increase from 2016 to 2017, when revenue per device grew from $47 to $58.

However, 2018’s figure was slightly lower than the 42 percent increase in average per-device spending seen between 2015 and 2016, when revenue grew from $33 to $47, noted Sensor Tower.

As usual, mobile gaming continued to play a large role in iPhone spending. In 2018, gaming accounted for nearly 56 percent of the average consumer spend — or $44 out of the total $79 spent per iPhone.
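The quoted figures are internally consistent; a quick sanity check of the arithmetic:

```python
# Sanity-check the per-device spending figures quoted in the report.
def pct_growth(prev, cur):
    """Percentage change from prev to cur, rounded to the nearest point."""
    return round(100 * (cur - prev) / prev)

assert pct_growth(58, 79) == 36   # 2017 -> 2018
assert pct_growth(47, 58) == 23   # 2016 -> 2017
assert pct_growth(33, 47) == 42   # 2015 -> 2016
assert round(100 * 44 / 79) == 56 # gaming's share of 2018 per-device spend
```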

But what’s more interesting is how the non-gaming categories fared this past year.

Some categories — including those where subscription-based apps dominate the top charts — saw even higher year-over-year growth in 2018, the firm found.

For example, Entertainment apps grew their per-device spend by 82 percent, reaching $8 of the 2018 total. Lifestyle apps increased by 86 percent to reach $3.90, up from $2.10.

And though it didn’t make the top five, Health & Fitness apps also grew 75 percent year-over-year to account for an average of $2.70, up from $1.60 in 2017.

Other categories in the top five included Music and Social Networking apps, which both grew by 22 percent.

This data indicates that subscription apps are playing a significant role in helping drive iPhone consumer spending higher.

The news comes at a time when Apple has reported slowing iPhone sales, which is pushing the company to lean more on services to continue to boost its revenue. This includes not just App Store subscriptions, but also things like Apple Music, Apple Pay, iCloud, App Store Search ads, AppleCare and more.

As subscriptions become more popular, Apple will need to remain vigilant against those who would abuse the system.

For example, a number of sneaky subscription apps were found plaguing the App Store in recent weeks. They were duping users into paid memberships with tricky buttons, hidden text, instant trials that converted in days and the use of other misleading tactics.

Apple later cracked down by removing some of the apps, and updated its developer guidelines with stricter rules about how subscriptions should both look and operate.

A failure to properly police the App Store or set boundaries to prevent the overuse of subscriptions could end up turning users off from downloading new apps altogether — especially if users begin to think that every app is after a long-term financial commitment.

Developers will need to be clever to convert users and retain subscribers amid this shift away from paid apps to those that come with a monthly bill. App makers will need to properly market their subscription’s benefits, and even consider offering bundles to increase the value.

But in the near-term, the big takeaway for developers is that there is still good money to be made on the App Store, even if iPhone sales are slowing.

11 Feb 2019

LinkedIn debuts LinkedIn Live, a new live video broadcast service

LinkedIn — the social network for the working world with close to 600 million users globally — says that video is the fastest-growing format on its platform alongside original written work, shared news and other content. Now it’s taking its next step in the medium in earnest.

Today, the company is launching live video, giving people and organizations the ability to broadcast real-time video to select groups, or to the LinkedIn world at large.

Launching in beta first in the US, LinkedIn Live (as the product is called) will be invite-only. In coming weeks, LinkedIn will also post a contact form for others who want to get in on the action. It’s not clear when, or if, LinkedIn will make it possible for everyone to create LinkedIn Live videos, but judging by how it rolled out its publishing features for written work, that will come later, too.

Initial live content that LinkedIn hopes to broadcast lines up with the kind of subject matter you might already see in LinkedIn’s news feed: the plan is to cover conferences, product announcements, Q&As and other events led by influencers and mentors, office hours from a big tech company, earnings calls, graduation and awards ceremonies, and more.

And to underscore how LinkedIn is keen to develop this — especially in its first phase — not as rough-and-ready user-generated content, but as streams of video that fit its wider ethos, it has selected several third-party live broadcast streaming services that creators will work with to produce and post more polished live video on LinkedIn.

These include Wirecast, Switcher Studio, Wowza Media Systems, Socialive, and Brandlive, “with more to come in the following weeks,” LinkedIn said.

There is another technical partner for LinkedIn’s live video effort, too: Microsoft, whose Azure Media Services, part of its cloud division, is providing encoding. Although Microsoft acquired LinkedIn in 2016, it’s mostly kept a distance in terms of knitting together product development between the two, so this is a notable exception. Skype, incidentally, is not part of this video effort.

Better late than never?

Compared to its competitors in the social networking sphere, LinkedIn has been a late bloomer when it comes to video.

Amid years of developments from competitors like Twitter and Facebook aimed at driving more engagement on their platforms with moving pictures, the Microsoft-owned LinkedIn introduced its first native video features only in the summer of 2017.

But in the 17 months since launching video features, LinkedIn has seen a big boost in traffic and revenues from (non-live) video on its platform.

“Video is the fastest growing format on our platform right now, and the one most likely to get people talking,” said Pete Davies, the head of consumer products at LinkedIn. He and LinkedIn declined to give specific figures in terms of how many video creators or viewers there are, except to note that “millions” of LinkedIn members have used the feature.

Davies said that live video has been a big request — not least, I’d wager, because it is such a prominent part of how video is being used on other social platforms like YouTube, Facebook and Twitter, putting the functionality front of mind.

“Live has been the most requested feature,” he said. These other social platforms are serving as a template of sorts: as with these other platforms, users can “like” videos as they are being broadcast, with the likes floating along the screen. Viewers can ask questions or make suggestions in the comments in real-time. Hosts can moderate those comments in real-time, too, to remove harassing or other messages, Davies added.

There may be another reason beyond user requests for why LinkedIn is expanding video: it’s proving to be a strong engine for engagement and revenue growth at the company.

So far, the only monetization that LinkedIn has introduced around video is video advertising. While Microsoft does not break out how much LinkedIn brings in in advertising revenues, much less video advertising, Microsoft reported in its last quarterly earnings that revenues at LinkedIn were up 29 percent, crediting its ads business specifically, “with record levels of engagement highlighted by LinkedIn sessions growth of 30 percent.”

That, it seems, is coming directly from its video products: LinkedIn tells me that video ads earn 30 percent more comments per impression than non-video ads, and that LinkedIn members spend almost three times longer watching video ads than they do with static Sponsored Content.

With LinkedIn looking at tapping into unique content with LinkedIn Live, there is a clear opportunity for the company to explore other ways of monetizing the content beyond ads. For example, it could charge viewers for unique experiences like conferences, or make certain Live events part of the company’s paid tier to lure in more premium subscribers. On the part of the broadcasters, it could potentially provide fee-based services to provide a platform to broadcast certain content like video-based earnings reports.

LinkedIn wouldn’t comment on future monetization plans, and for now isn’t even putting video ads into LinkedIn Live videos. “That will come down the road but for right now we are focused on awesome use cases,” said Peter Roybal, head of video product management, in an interview. “This could even be a way to try out some new ideas.”

11 Feb 2019

Russia plans to test a kill switch that disconnects the country from the internet

As a cyber-defensive measure, the Russian government will reportedly perform a trial run of a measure that would effectively cut the country off from the rest of the world’s web.

Last year, Russia introduced its Digital Economy National Program, a plan that would require Russian internet providers to remain functional in the event the country was cut off from worldwide internet. Under this plan, Russian ISPs would redirect web traffic to routing points within the country and rely on its own copy of the Domain Name System (DNS), the directory of domains and addresses that underpins the global internet.
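Conceptually, answering queries from an in-country copy of the DNS instead of the global root looks something like this toy sketch (the names and addresses are placeholders, not anything from the actual plan):

```python
# Toy illustration of resolving names against a national copy of the DNS.
# Addresses use the 192.0.2.0/24 block reserved for documentation.
national_dns = {
    "example.ru": "192.0.2.10",
    "gov.example.ru": "192.0.2.20",
}

def resolve(hostname, mirror=national_dns):
    """Answer only from the in-country copy. With the kill switch engaged
    there is no fallback to the global DNS, so names missing from the
    mirror simply fail to resolve."""
    if hostname in mirror:
        return mirror[hostname]
    raise LookupError(f"{hostname} not in national DNS copy")
```

The point of the sketch is the failure mode: domains hosted and registered inside the country keep working, while everything else becomes unreachable by name.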

The test run could be useful to the country for a few reasons. Primarily, Russia aims to simulate the drastic measures it would take in the case of some kind of cyber threat to its national security. But for a country notorious for its restrictive environment for individual and press freedom, the test may also be a useful way to see how the country could wield a more closely held internet to control its own people and guard against foreign interests.

The extreme measure, if successful, would allow Russia to effectively operate its own state-controlled internet and cut itself off from the world as it sees fit. While the test date is not yet known, it’s expected to happen before April 1 of this year, the last day for lawmakers to propose amendments to the Digital Economy National Program.

11 Feb 2019

Lilt is building a machine translation business with humans at the core

The ability to quickly and automatically translate anything you see using a web service is a powerful one, yet few expect much from it other than a tolerable version of a foreign article, menu, or street sign. Shouldn’t this amazing tool be put to better use? It can be, and a company called Lilt is quietly doing so — but crucially, it isn’t even trying to leave the human element behind.

By combining the expertise of human translators with the speed and versatility of automated ones, you get the best of both worlds — and potentially a major business opportunity.

The problem with machine translation, when you really get down to it, is that it’s bad. Sure, it won’t mistake “tomato” for “potato,” but it can’t be trusted to do anything beyond accurately translating the literal meaning of a series of words. In many cases that’s all you need — for instance, on a menu — but for a huge amount of content it simply isn’t good enough.

This is much more than a convenience problem; for many, language presents serious professional and personal barriers.

“Information on a huge number of topics is only available in English,” said Lilt co-founder and CEO Spence Green; he encountered this while doing graduate work in the Middle East, where he was simultaneously learning Arabic and discovering the limitations placed on those who don’t speak English.

Much of this information is not amenable to machine translation, he explained. Imagine if you were expected to operate heavy machinery using instructions run through Google Translate, or perform work in a country where immigration law is not available in your language.

“Books, legal information, voting materials… when quality is required, you need a human in the loop,” he said.

Working on translation projects there and later at Google, where he interned in 2011, Green found himself concerned with how machine translation could improve access to information without degrading it — as most of the systems do.

His realization, which he pursued with co-founder John DeNero, was that machine learning systems worked well not simply as a tool for translation, but as a tool for translators. Working in concert with a translation system makes translators faster and better at their work, lightening the cognitive load.

The basic idea of Lilt’s tool is that the system provides translations for the next sentence or paragraph, as a reference for structure, tense, idiom, and so on that the translator can consult and, at least potentially, work faster and better. Lilt claims a 5x increase in words per hour translated, and says the results are as good or better than a strictly human translation.

“We published papers — we knew the technology worked. We’d worked with translators and had done some large-scale experiments,” Green said, but the question was how to proceed.

Talk to a big company and get them interested? “We went through this process of realizing that the big companies are really focused on the consumer applications — not anywhere there’s a quality threshold, which is really the entire translation industry,” Green said.

Stay in academic research, get a grant and open-source it? “The money kind of dried up,” Green explained: money was lavishly allocated after 9/11 with the idea of improving intelligence and communication, but a decade later the sense of urgency had departed, and with it much of the grant cash.

Start a company? “We knew the technology was inevitable,” he said. “The question was who would bring it to market.” So they decided it would be them.

Interestingly, a major change in language translation took place around the time they were really getting to work on it. Statistical neural network systems gave way to attention-based ones; these have a natural sort of affinity to efficiently and effectively parsing things like sentences, where each word exists not like a pixel in an image, but is dependent on the words nearby it in a structured way. They basically had to reinvent their core translation system, but it was ultimately for the better.
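For the curious, the core of an attention mechanism, where each output mixes information from every word weighted by relevance, fits in a few lines. This is a bare-bones illustration of the general technique, not Lilt’s production model:

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector.
    Each key is scored against the query, the scores become softmax
    weights, and the output is the weighted mix of the value vectors.
    This is how a word's representation comes to depend on its context
    rather than standing alone like a pixel in an image."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)                           # subtract max for stability
    exp = [math.exp(s - m) for s in scores]
    z = sum(exp)
    weights = [e / z for e in exp]
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# A query aligned with the first key draws most of its output from the
# first value vector, but still blends in a little of the second.
out = attention([1.0, 0.0],
                [[1.0, 0.0], [0.0, 1.0]],
                [[1.0, 0.0], [0.0, 1.0]])
```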

“These systems have much better fluency — they’re just a better model of language. Second, they learn much faster; you need fewer updates to adapt to a domain,” Green said. That is to say that, within a given domain, the system can quickly accommodate the jargon and special rules found in, say, technical writing or real estate law.

Of course, you can’t just sprint into the midst of the translation business, which spans publishing, real-time stuff, technical documents, and a dozen other verticals, and say “here, use AI!”

“There’s enormous structural resistance in the industry to automating in any real way,” Green said. There was no way a major publishing house was going to change the way it worked.

“We tried several business models before we found one that works. There really hasn’t been a company that has decided ‘Okay, this human-in-the-loop method is the fundamental way to solve this problem, let’s just build a company around that.’ So we’re vertically integrated, we work with big enterprises and governments, and we just own the entire translation workflow for them.”

A faster method that doesn’t adversely affect translation quality is basically an efficiency multiplier — catnip for organizations that have a lot of content that needs accurate translation but needs to get the most for their money.

Think about it like this: if you’re a company that puts out products in 20 countries that speak as many languages, translation of packaging, advertising, documentation, and so on is a task that’s essentially never done. The faster and cheaper you can get it done, the better, and if you have a single company that can handle it all, that’s just a cherry on top.

“We work with Zendesk, Snap, Sprinklr… we just take over the whole localization workflow for them. That helps with international go-to-market,” said Green. If a company’s translation budget and process before using Lilt limited it to targeting 5 or 6 new markets in a given period, that could double or triple for the same price and staff, depending on efficiency gains.

Right now the company is working on acquiring customers, naturally. “In Q4 last year we built our first sales team,” Green admitted. But initial work with governments especially has been heartening, since they have “more idiosyncratic language needs” and a large volume of text. The 29 languages Lilt supports right now will be 43 by the end of the year. A proofreading feature is in the works to improve the efficiency of editors as well as translators.

They’re also working hard on connecting with academics and building the translation community around Lilt. Academics are both a crucial source of translators and language experts and a major market. A huge majority of scientific literature is only published in English because it would be onerous to translate this highly technical text for others.

Green’s pet peeve seems to be that brilliant researchers are being put to work on boring consumer stuff: “Tech companies are kind of sucking up all the talent and putting them on Assistant or Alexa or something.” It’s a common refrain in frontier tech like AI and robotics.

Finally, Green said, “it’s my great hope that we can close this circle and get into book translation as we go on. It’s less lucrative work but it’s the third part of the vision. If we’re able to, it’s a choice where we’ll feel like we’ve done something meaningful.”

Although it may start out as support documents for apps and random government contracts, the types of content and markets amenable to Lilt’s type of human-in-the-loop process seem likely to only increase. And a future where AI and people work in cooperation is certainly more reassuring than one where humans are replaced. With translation at least, the human touch is nowhere near ready to be excluded.

11 Feb 2019

Lilt is building a machine translation business with humans at the core

The ability to quickly and automatically translate anything you see using a web service is a powerful one, yet few expect much from it other than a tolerable version of a foreign article, menu, or street sign. Shouldn’t this amazing tool be put to better use? It can be, and a company called Lilt is quietly doing so — but crucially, it isn’t even trying to leave the human element behind.

By combining the expertise of human translators with the speed and versatility of automated ones, you get the best of both worlds — and potentially a major business opportunity.

The problem with machine translation, when you really get down to it, is that it’s bad. Sure, it won’t mistake “tomato” for “potato,” but it can’t be trusted to do anything beyond accurately translate the literal meaning of a series of words. In many cases that’s all you need — for instance, on a menu — but for a huge amount of content it simply isn’t good enough.

This is much more than a convenience problem; for many language provides serious professional and personal barriers.

“Information on a huge number of topics is only available in English,” said Lilt co-founder and CEO Spence Green; he encountered this while doing graduate work in the Middle East, simultaneously learning Arabic and the limitations placed on those who didn’t speak English.

Much of this information is not amenable to machine translation, he explained. Imagine if you were expected to operate heavy machinery using instructions run through Google Translate, or perform work in a country where immigration law is not available in your language.

“Books, legal information, voting materials… when quality is required, you need a human in the loop,” he said.

Working on translation projects there and later at Google, where he interned in 2011, Green found himself concerned with how machine translation could improve access to information without degrading it — as most of the systems do.

His realization, which he pursued with co-founder John DeNero, was that machine learning systems worked well not simply as a tool for translation, but as a tool for translators. Working in concert with a translation system makes them faster and better at their work, lightening the cognitive load.

The basic idea of Lilt’s tool is that the system provides translations for the next sentence or paragraph, offering a reference for structure, tense, idiom, and so on that the translator can consult — letting them, at least potentially, work faster and better. Lilt claims a 5x increase in words per hour translated, and says the results are as good as or better than a strictly human translation.

“We published papers — we knew the technology worked. We’d worked with translators and had done some large-scale experiments,” Green said, but the question was how to proceed.

Talk to a big company and get them interested? “We went through this process of realizing that the big companies are really focused on the consumer applications — not anywhere there’s a quality threshold, which is really the entire translation industry,” Green said.

Stay in academic research, get a grant and open-source it? “The money kind of dried up,” Green explained: money was lavishly allocated after 9/11 with the idea of improving intelligence and communication, but a decade later the sense of urgency had departed, and with it much of the grant cash.

Start a company? “We knew the technology was inevitable,” he said. “The question was who would bring it to market.” So they decided it would be them.

Interestingly, a major change in language translation took place around the time they were really getting to work on it. Earlier statistical and recurrent neural network systems gave way to attention-based ones; these have a natural affinity for efficiently and effectively parsing things like sentences, where each word exists not like a pixel in an image, but depends on the words nearby it in a structured way. They basically had to reinvent their core translation system, but it was ultimately for the better.

“These systems have much better fluency — they’re just a better model of language. Second, they learn much faster; you need fewer updates to adapt to a domain,” Green said. That is to say, as far as domains, that the system can quickly accommodate jargon and special rules found in, say, technical writing or real estate law.
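The attention idea Green alludes to — each word’s representation being a weighted mix of the representations of the words around it — can be illustrated with a minimal self-attention sketch. This is a generic illustration in NumPy, not Lilt’s system; the array sizes and random vectors are purely hypothetical.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a weighted average of the value rows,
    weighted by how strongly each query matches each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # pairwise query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over the keys
    return weights @ V

# Toy example: a "sentence" of 4 words, each an 8-dimensional vector.
x = np.random.default_rng(0).normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)              # self-attention
print(out.shape)  # (4, 8): each word now carries context from its neighbors
```

Because the weights are learned from data, retraining or fine-tuning them on a narrow corpus is what lets such a model adapt quickly to a specialized domain, as Green describes.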

Of course, you can’t just sprint into the midst of the translation business, which spans publishing, real-time stuff, technical documents, and a dozen other verticals, and say “here, use AI!”

“There’s enormous structural resistance in the industry to automating in any real way,” Green said. There was no way a major publishing house was going to change the way it worked.

“We tried several business models before we found one that works. There really hasn’t been a company that has decided ‘Okay, this human-in-the-loop method is the fundamental way to solve this problem, let’s just build a company around that.’ So we’re vertically integrated, we work with big enterprises and governments, and we just own the entire translation workflow for them.”

A faster method that doesn’t adversely affect translation quality is basically an efficiency multiplier — catnip for organizations that have a lot of content needing accurate translation but also need to get the most for their money.

Think about it like this: if you’re a company that puts out products in 20 countries that speak as many languages, translation of packaging, advertising, documentation, and so on is a task that’s essentially never done. The faster and cheaper you can get it done, the better, and if you have a single company that can handle it all, that’s just a cherry on top.

“We work with Zendesk, Snap, Sprinklr… we just take over the whole localization workflow for them. That helps with international go-to-market,” said Green. If a company’s translation budget and process before using Lilt limited it to targeting five or six new markets in a given period, that could double or triple for the same price and staff, depending on efficiency gains.

Right now the focus is on acquiring customers, naturally. “In Q4 last year we built our first sales team,” Green admitted. But initial work with governments especially has been heartening, since they have “more idiosyncratic language needs” and a large volume of text. The 29 languages Lilt supports right now will grow to 43 by the end of the year. A proofreading feature is in the works to improve the efficiency of editors as well as translators.

They’re also working hard on connecting with academics and building a translation community around Lilt. Academics are both a crucial source of translators and language experts and a major market: a huge majority of scientific literature is published only in English, because translating such highly technical text for other audiences would be onerous.

Green’s pet peeve seems to be that brilliant researchers are being put to work on boring consumer stuff: “Tech companies are kind of sucking up all the talent and putting them on Assistant or Alexa or something.” It’s a common refrain in frontier tech like AI and robotics.

Finally, Green said, “it’s my great hope that we can close this circle and get into book translation as we go on. It’s less lucrative work but it’s the third part of the vision. If we’re able to, it’s a choice where we’ll feel like we’ve done something meaningful.”

Although it may start out as support documents for apps and random government contracts, the types of content and markets amenable to Lilt’s type of human-in-the-loop process seem likely to only increase. And a future where AI and people work in cooperation is certainly more reassuring than one where humans are replaced. With translation at least, the human touch is nowhere near ready to be excluded.

11 Feb 2019

Gmail gets a useful right-click menu

Google is giving Gmail a new right-click menu. And it’s about time. While you’ve long been able to right-click on any email in your inbox, your options were always limited. You could archive an email, mark it as read/unread and delete it, but that was about it. Now, as the company announced today, that’s changing and you’re about to get a fully featured right-click menu that lets you do most of the things that Gmail’s top bar menu lets you do, plus a few extra features.

Soon, when you right-click on a message in your inbox view, you’ll see a long list of features with options to reply to messages and forward them, search for all emails from a sender or with the same subject and open multiple emails in multiple windows at the same time. You’ll also be able to add labels to emails, mute conversations and use Gmail’s snooze feature, all from the same menu.

All of this is pretty straightforward stuff and none of it is especially groundbreaking, which makes you wonder why it took Google so long to implement it.

As usual, Google only tells us that it is rolling out this feature to G Suite users now (starting today for those on the rapid release schedule and on February 22 for those who follow the slower scheduled release cycle). But free users typically see these new features pop up somewhere around the same time frame, too.