Year: 2018

24 Jul 2018

Watchmaker Doxa resurrects its most famous dive watch

Doxa is a storied dive watch company, and its most popular watch, the Sub, has just gotten a 2018 overhaul. The watches were made famous by writer Clive Cussler, whose character Dirk Pitt consulted his beefy Doxa on multiple occasions.

This new model is made in collaboration with gear manufacturer Aqua Lung and features a 42mm steel case with 300 meters of water resistance, a Swiss ETA movement, and a unidirectional diving bezel. It will cost $2,190 when it ships in August.

The SUB 300 ‘Silver Lung’ continues the yearlong 50th anniversary celebration for DOXA Watches, whose pioneering SUB would first plumb the ocean depths in 1967 as the first purpose-built dive watch for the emerging recreational scuba diving market. Lauded for its bright orange dial and professional-grade build quality and dependability, the SUB quickly became the benchmark against which all other dive watches were measured, and ultimately won the approval of the pioneers of modern diving. This included those at Aqua Lung, who would soon distribute the watches under the US Divers name before consolidating into the singular name Aqua Lung in 1998.

Why is this important? First, it’s a cool-looking watch, priced low enough for a Swiss movement and case to be interesting. Further, it has real history and provenance, and Doxa remains a little-known brand. If you’re a diver, or just want to pretend to be one, you could do worse than this beefy and very legible piece.

24 Jul 2018

Supermedium scores $1.1M in seed funding for its web-first vision for virtual reality

The world of VR has had its ups and downs, and yet the world’s largest technology companies are still busy building the platforms that will ultimately define the medium.

Supermedium, a recent Y Combinator graduate, wants a more open future for virtual reality, one that’s built on the web. The company launched earlier this year and now it’s raising some seed funding as it looks to keep things moving along.

The company just raised $1.1 million in seed funding from Y Combinator, General Catalyst, Boost VC, Anorak Ventures, Candela Partners, Social Starts, M Ventures, Seraph Group, Taimatsu, Outpost VC, Colopl Next, Shrug VC, Andrew Ogawa and Cantos VC.

Supermedium was founded by a couple of Mozilla alums who were behind some of the company’s foundational work on the WebVR standard. WebVR has admittedly been slow to catch on, as major stakeholders have largely been looking to direct nascent attention toward their sandboxed platforms and stores rather than toward something as democratic as a browser that can work with all major headsets.

Getting up and running in desktop WebVR might just mean navigating to a particular URL and tossing your headset on. It’s really not much more difficult than that, and Mozilla has already introduced some early projects that show just how simple it is. As more and more momentum aligns behind standalone systems, the Supermedium browser exists as a bespoke solution for VR devices, structured around native VR controls inside VR environments.

The startup sees the shift to WebVR as an inevitability. Now, the company says, its challenge is staying lean and keeping its ear to the ground in a bid to gather an early, passionate base of users that can grow exponentially as more substantial hardware comes to market.

Right now, Supermedium is focusing on higher-end systems like the Rift and HTC Vive that support positional tracking and hand controllers. This high-end focus admittedly shrinks the market even further, as headsets like the Oculus Go and Gear VR are unsupported, but the team sees the bet on quality as something that will only be controversial in the near term, as headset makers like Oculus and others progress more steadily toward releasing standalone devices with tracked controllers.

More important than which devices it supports are the experiences and utility it offers users. While Supermedium has been putting a lot of effort into building a hub for “snackable” VR experiences that brings WebVR games and journeys into a tidy interface users can blaze through, the company tells me that the biggest chunk of its usage still comes from people looking for an easy way to navigate VR-optimized adult content. Recently, the company also introduced Supercraft, a set of quick-and-dirty content creation tools that let users build inside the browser and share their work with a URL.

Supermedium certainly has some tough challenges ahead as it looks to work its way into a tight niche and find ways to promote WebVR development, but the cross-platform strengths of the web are moving with it, and those shouldn’t be underestimated.

24 Jul 2018

G Suite now lets businesses choose whether their data is stored in the US or Europe

Data sovereignty is a major issue for many large companies, especially in Europe. Until now, Google’s G Suite, which includes products like Gmail, Google Docs and Sheets, didn’t give users any control over where their data was stored at rest, but that’s changing today. As the company announced at its Cloud Next conference in San Francisco, G Suite users can now choose where the primary data for select G Suite apps is stored at rest: in the U.S. or in Europe.

These new data regions are now available to all G Suite Business and Enterprise customers at no additional cost.

“What this means is that for organizations with data- or geo-control requirements, G Suite will now let them choose where a copy of their data for G Suite apps like Gmail should be stored at rest,” said G Suite VP of product management David Thacker.

Google is also adding a tool that makes it easy to move data to another region as employees move between jobs and organizations.

“Given PwC is a global network with operations in 158 countries, I am very happy to see Google investing in data regions for G Suite and thrilled by how easy and intuitive it will be to set up and manage multi-region policies for our domain,” said Rob Tollerton, director of IT at PricewaterhouseCoopers International Limited, in a canned statement about this new feature.

24 Jul 2018

Outlier raises $6.2M Series A to change how companies use data

Traditionally, companies have gathered data from a variety of sources, then used spreadsheets and dashboards to try and make sense of it all. Outlier wants to change that and deliver a handful of insights that matter most for your job, company and industry right to your inbox. Today the company announced a $6.2 million Series A to further develop that vision.

The round was led by Ridge Ventures with assistance from 11.2 Capital, First Round Capital, Homebrew, Susa Ventures and SV Angel. The company has raised over $8 million.

The startup is trying to solve a difficult problem: delivering meaningful insight without requiring the customer to ask the right questions. With traditional BI tools, you get your data and start asking questions to see if the data can provide some answers. Outlier wants to bring a level of intelligence and automation to the process by surfacing insights without the user having to explicitly ask the right question.

Company founder and CEO Sean Byrnes says his previous company, Flurry, helped deliver mobile analytics to customers, but in his travels meeting those customers, he always came up against the same question: “This is great, but what should I look for in all that data?”

The question was compelling enough that after he sold Flurry to Yahoo in 2014 for more than $200 million, it stuck in the back of his mind, and he decided to start a business to solve it. He contends that the first 15 years of BI were about getting answers to basic questions about company performance, but the next 15 will be about finding a way to get the software to ask good questions for you based on huge amounts of data.

Byrnes admits that when he launched, he didn’t have much sense of how to put this notion into action, and most people he approached didn’t think it was a great idea. He says he heard “No” from a fair number of investors early on because the artificial intelligence required to fuel a solution like this really wasn’t ready in 2015 when he started the company.

He says it took four or five iterations to get to today’s product, which lets you connect to various data sources and, using artificial intelligence and machine learning, delivers four or five relevant insights to the user’s email inbox, pointing out changes you might not have noticed, what he calls “shifts below the surface.” If you’re a retailer, that could be changing market conditions that signal you might want to adjust your production goals.

Outlier email example. Photo: Outlier
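Outlier hasn’t published how its detection works, but the general idea of flagging “shifts below the surface” can be sketched in a few lines. The toy example below is my own illustration, not Outlier’s method: it flags a metric whose latest value falls far outside its recent range.

```python
# Toy illustration of shift detection -- not Outlier's actual algorithm.
# Flag the latest value of a metric if it deviates sharply from the mean
# of its recent history.
from statistics import mean, stdev

def detect_shift(series, threshold=3.0):
    """Return True if the last value is more than `threshold` standard
    deviations away from the mean of the preceding values."""
    history, latest = series[:-1], series[-1]
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(latest - mu) / sigma > threshold

daily_orders = [102, 98, 105, 101, 99, 103, 100, 142]  # made-up retail data
if detect_shift(daily_orders):
    print("Shift detected: daily orders moved well outside the recent range")
```

A production system would of course model seasonality, trends and thousands of metrics at once; the point is only that the user never has to phrase the question.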

The company launched in 2015. It took some time to polish the product, but today it has 14 employees and 14 customers, including Jack Rogers, Celebrity Cruises and Swarovski.

This round should allow them to continue working to grow the company. “We feel like we hit the right product-market fit because we have customers [generating] reproducible results and really changing the way people use the data,” he said.

24 Jul 2018

Google brings support for custom translations and text categorization to AutoML

Pre-trained machine learning models are good enough for many use cases, but to get the most out of this technology, you need custom models. Given that it’s not exactly easy to get started with machine learning, Google (and others) has opted for a hybrid approach that allows users to upload their own data to customize existing models. Google’s version of this is AutoML, which until now only offered this capability for machine vision tasks, under the AutoML Vision moniker.

Starting today, the company is adding two new capabilities to AutoML: AutoML Natural Language, for predicting text categories, and AutoML Translation, which allows users to upload their own language pairs to achieve better translations for texts in highly specialized fields, for example. In addition, Google is moving AutoML Vision out of its private preview and into public beta.

Rajen Sheth, the director of product management for Google Cloud AI, said that this extension of AutoML is yet another step toward the company’s vision of democratizing AI. “What we are trying to do with Cloud AI is to make it possible for everyone in the world to use AI and build models for their purposes,” he said. For most of its customers, though, pre-trained models aren’t good enough, and it’s hard for most businesses to find the machine learning experts who could build custom models for them. Given this demand, it’s maybe no surprise that about 18,000 users have signed up for the preview of AutoML Vision so far.

“Natural language is something that is really the next frontier of this,” Sheth noted when he discussed the new Natural Language API. “It’s something that’s very useful to the customers. Because more than 90 percent of our customers’ information within their enterprise is unstructured and free information. And a lot of this is textual documents or emails or whatever it may be. Many customers are trying to find ways to get meaning and information out of those documents.”

As for AutoML Translation, the benefits of this kind of customization are pretty obvious, given that translating highly specialized texts remains the domain of experts. As an example, Sheth noted that “driver” in a technical document could refer to a device driver for Windows 10, while in another text it could simply mean somebody who is driving a car (until computers take over that task, too).
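Google didn’t walk through code at the briefing, but calling a trained AutoML model is an ordinary API request. Here’s a rough sketch using the v1beta1 Python client available around launch; the project, region and model ID are placeholders, and the client surface has shifted across releases.

```python
# Sketch: classifying a text snippet with a trained AutoML Natural Language
# model. Assumes the v1beta1 Python client; project and model ID below are
# made-up placeholders.
from google.cloud import automl_v1beta1 as automl

client = automl.PredictionServiceClient()
model = client.model_path("my-project", "us-central1", "TCN0000000000")
payload = {"text_snippet": {"content": "My invoice total looks wrong",
                            "mime_type": "text/plain"}}
response = client.predict(model, payload)
for result in response.payload:
    print(result.display_name, result.classification.score)
```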

24 Jul 2018

Google makes it easier for G Suite admins to investigate security breaches

Google is announcing a fair number of updates to G Suite at its Next conference today, most of which focus on the user experience. In addition to those, though, the company also launched a new security investigation tool for admins that augments the existing tools for preventing and detecting potential security issues. The new tool builds on those and adds remediation features to the G Suite security center.

“The overall goal of the security center in G Suite is to provide administrators with the visibility and control they need to prevent, detect and remediate security issues,” said David Thacker, Google’s VP of product management for G Suite. “Earlier this year, we launched the first major components of this security center that help admins prevent and detect issues.”

Now, with this third set of tools in place, G Suite admins can get a better understanding of the threats they face and how to remediate them. To do this, Thacker said, analysts and admins will be able to run advanced queries over many different data sources to identify the users who have been impacted by a breach, and then investigate what exactly happened. The tool also makes it easy for admins to remove access to certain files or to delete malicious emails “without having to worry about analyzing logs, which can be time-consuming or require complex scripting,” as Thacker noted.
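To appreciate what that saves, consider what the scripting route looks like today: pulling events out of the audit logs with the Admin SDK Reports API, roughly like the sketch below. This is my illustration, not Google’s tool; credentials setup is omitted and the event filters are arbitrary.

```python
# Sketch of the manual log analysis the investigation tool aims to replace:
# querying the G Suite Admin SDK Reports API for Drive download events.
# Assumes `creds` already holds admin-scoped credentials (setup omitted).
from googleapiclient.discovery import build

service = build("admin", "reports_v1", credentials=creds)
events = service.activities().list(
    userKey="all",             # all users in the domain
    applicationName="drive",   # the Drive audit log
    eventName="download",      # only file-download events
    maxResults=100,
).execute()

for event in events.get("items", []):
    print(event["actor"]["email"], event["id"]["time"])
```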

This new security tool is now available as an Early Adopter Program for G Suite Enterprise customers.

24 Jul 2018

Uber self-driving cars back on public roads, but in manual mode

Uber is putting its autonomous vehicles back on Pittsburgh’s city streets, four months after a fatal accident involving one of its self-driving cars prompted the ride-hailing company to halt testing on public roads. But for now, Uber’s modified self-driving Volvo XC90 vehicles will only be driven manually by humans and under a new set of safety standards that includes real-time monitoring of its test drivers and efforts to beef up simulation. 

The sensors, including the light detection and ranging unit known as LiDAR, will be operational on these self-driving vehicles. The cars won’t be operated in autonomous mode, however. Uber will use these manually driven self-driving vehicles to update its HD maps of Pittsburgh.

This manual-first rollout is a step toward Uber’s ultimate goal to relaunch its autonomous vehicle testing program in Pittsburgh, according to Eric Meyhofer, head of Uber Advanced Technologies Group, who published a post Tuesday on Medium.

Uber halted all of its autonomous vehicle operations March 19, the day after one of its vehicles struck and killed pedestrian Elaine Herzberg in the Phoenix suburb of Tempe. Uber was testing its self-driving vehicles on public roads in Tempe, Ariz., where the accident occurred, as well as in Pittsburgh, San Francisco and Toronto.

In the days and weeks following the fatal accident, it appeared the company’s self-driving vehicle program might end for good. Arizona Governor Doug Ducey, a proponent of autonomous-vehicle technology who invited Uber to the state, suspended the company from testing its self-driving cars following the accident. Last month, Uber let go all 100 of its self-driving car operators in Pittsburgh and San Francisco.

Those drivers affected by the layoffs, most of whom were in Pittsburgh, are being encouraged to apply for Uber’s new mission specialist positions. Uber is holding off on making these positions public until the laid-off drivers have a chance to apply and go through the interview process.

Even now, with the company beefing up its safety protocols and taking a slower approach to autonomous vehicle testing, the program’s future is still uncertain. Another accident would likely derail it for good.

These new safeguards aim to avoid such a scenario. Uber said Tuesday that all of its self-driving vehicles, whether driven manually or eventually in autonomous mode, will have two Uber employees inside. These “mission specialists” (the new name Uber has given its test drivers) will have specific jobs. The person behind the wheel will be responsible for safely operating the vehicle, while the second mission specialist will ride shotgun and document events.

Uber is also equipping every self-driving vehicle with a driver monitoring system that will remain active whenever the vehicle is in use. The system will track driver behavior in real time. If it detects inattentiveness, an audio alert will cue the driver. An alert is also sent to a remote monitor, who will take appropriate action once they’ve assessed the situation, Uber said.

The driver monitoring system, which an Uber spokesperson declined to share details about, is an off-the-shelf aftermarket product.

Investigators determined that Rafaela Vasquez, who was operating the Uber self-driving vehicle involved in the fatal crash, looked down 204 times at a phone that was streaming The Voice during a 43-minute test drive that ended when Herzberg was struck and killed, according to a 318-page police report released by the Tempe Police Department.

Based on the data, police reported that Vasquez could have avoided hitting Herzberg had her eyes been on the road. The case has been submitted to the Maricopa County Attorney’s office for review; Vasquez could face charges of vehicular manslaughter.

Uber has always had a policy prohibiting mobile device usage for anyone operating its self-driving vehicles, according to a spokesperson. However, without a proper driver monitoring system or another passenger in the vehicle, it was impossible for Uber to really know if that rule was being followed.

Now, the driver monitoring system can spot the behavior immediately. If the system detects the driver looking at a phone, the remote monitor will immediately call the team back, a spokesperson said, adding that such behavior is grounds for dismissal.

Other safeguards include a defensive and distracted driving course conducted on a test track, and a fatigue management program that requires the two mission specialists in each vehicle to periodically switch between the driver and data logger roles, according to Uber.

The National Transportation Safety Board is also investigating the accident. A preliminary report by the NTSB found Uber’s modified Volvo XC90’s LiDAR and radar first spotted an object in its path about six seconds before the crash. The self-driving system first classified the pedestrian as an unknown object, then as a vehicle and then as a bicycle. At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision, according to the NTSB. But to reduce the potential for “erratic behavior,” Uber had disabled Volvo’s emergency braking system so it didn’t work when the vehicle was under computer control.

Uber said it will keep Volvo’s emergency braking and vehicle collision warning systems enabled while the vehicle is in manual mode. Engineers are examining whether Volvo’s safety system can work in conjunction with Uber’s own self-driving technology while the vehicle is in autonomous mode.

24 Jul 2018

Google’s Smart Compose is now ready to write emails for G Suite users

At its Cloud Next conference, Google today announced that Smart Compose, a new feature in Gmail that essentially autocompletes sentences for you, will become available to all G Suite users in the coming weeks.

Smart Compose is part of the new Gmail, where it has been available for the last few months as an experimental feature for those who opt in to using it. In my experience, it can occasionally save you a few keystrokes, though don’t think that it’ll automatically write your emails for you. It’s mostly useful for greetings, addresses and finishing relatively standard phrases for you. To be fair, that’s what most emails consist of, and when it works, it works really well.

Over time, the system trains itself to learn more about how you write and what you write about. “It gets smarter over time by learning your colleagues’ names, your favorite phrases and specific jargon,” David Thacker, Google’s VP of product management for G Suite, explained during a press briefing.

To use Smart Compose, you simply start typing your email, and when the system thinks it can help you complete a sentence, it writes the next few words for you; hit tab to accept them.
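Google hasn’t detailed the model, but the interaction pattern is easy to picture with a deliberately dumb stand-in: learn which continuation most often follows what you’ve typed, and offer it. The sketch below is my illustration only; the real system uses far more sophisticated language models.

```python
# Toy stand-in for Smart Compose -- not Google's model. Suggest the most
# frequent continuation of the typed prefix, learned from past emails.
from collections import Counter

past_emails = [
    "thanks for the update",
    "thanks for the quick turnaround",
    "let me know if you have any questions",
]

def suggest(prefix):
    """Return the most common completion of `prefix`, or None."""
    completions = Counter(
        email[len(prefix):] for email in past_emails
        if email.startswith(prefix) and len(email) > len(prefix)
    )
    return completions.most_common(1)[0][0] if completions else None

print(suggest("thanks for the "))  # one of the learned continuations
print(suggest("see you "))         # None -- nothing to offer
```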

It’s worth noting that the launch of Smart Compose goes against one of Google’s most cherished traditions: announcing features at I/O that won’t launch for another 10 months. It’s only been two months or so since Google first announced this new feature.

24 Jul 2018

Google Docs gets an AI grammar checker

You probably don’t want to make grammar errors in your emails (or blog posts), but every now and then, they do slip in. Your standard spell-checking tool won’t catch them unless you use an extension like Grammarly. Well, Grammarly is getting some competition today in the form of a new machine learning-based grammar checker from Google that’s soon going live in Google Docs.

These new grammar suggestions in Docs, which are now available through Google’s Early Adopter Program, are powered by what is essentially a machine translation algorithm that can recognize errors and suggest corrections as you type. Google says it can catch anything from wrongly used articles (“an” instead of “a”) to more complicated issues like incorrectly used subordinate clauses.

“We’ve adopted a highly effective approach to grammar correction that is machine translation-based,” Google’s VP for G Suite product management David Thacker said in a press briefing ahead of the announcement. “For example, in language translation, you take a language like French and translate it into English. Our approach to grammar is similar. We take improper English and use our technology to correct or translate it into proper English. What’s nice about this is that language translation is a technology that we have a long history of doing well.”
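That framing, parallel pairs of improper and proper English standing in for the two languages, is simple to illustrate. The sketch below is a lookup-table caricature of my own, with made-up pairs; Google’s system would instead train a neural translation model on vast numbers of such pairs.

```python
# Caricature of grammar correction framed as translation: "improper
# English" is the source language, "proper English" the target. A real
# system trains a neural MT model on parallel pairs; here we just look up
# a handful of made-up pairs to show the framing.
parallel_pairs = [
    ("an user", "a user"),          # wrong article
    ("he go to", "he goes to"),     # agreement error
    ("could of", "could have"),     # common confusion
]

def translate_to_proper_english(text):
    for improper, proper in parallel_pairs:
        text = text.replace(improper, proper)
    return text

print(translate_to_proper_english("I think he go to the office"))
# -> "I think he goes to the office"
```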

Because we haven’t seen this new tool in person, it’s impossible to know how well it will do in the real world, of course. It’s not clear to me whether Google’s service will find issues with punctuation or odd word choices, something that tools like Grammarly can check for.

It’s interesting that Google is opting for this translation-based approach, though, which once again shows the company’s bet on artificial intelligence and how it plans to bring these techniques to virtually all of its products over time.

It’d be nice if Google also made this new grammar checker available as an API for other developers, though it doesn’t seem to have any plans to do so for the time being.

24 Jul 2018

Google Cloud goes all-in on hybrid with its new Cloud Services Platform

The cloud isn’t right for every business, be it because of latency constraints at the edge or regulatory requirements, or because it’s simply cheaper to own and operate their own data centers for their specific workloads. Given this, it’s maybe no surprise that the vast majority of enterprises today use both public and private clouds in parallel. That’s something Microsoft has long been betting on as part of the strategy for its Azure cloud, and Google, too, is now taking a number of steps in this direction.

With the open-source Kubernetes project, Google launched one of the fundamental building blocks that make running and managing applications in hybrid environments easier for large enterprises. What Google hadn’t done until today, though, is launch a comprehensive solution that includes all of the necessary parts for this kind of deployment. With its new Cloud Services Platform, though, the company is now offering businesses an integrated set of cloud services that can be deployed on both the Google Cloud Platform and in on-premise environments.

As Google Cloud engineering director Chen Goldberg noted in a press briefing ahead of today’s announcement, many businesses also simply want to be able to manage their own workloads on-premise but still be able to access new machine learning tools in the cloud, for example. “Today, to achieve this, use cases involve a compromise between cost, consistency, control and flexibility,” she said. “And this all negatively impacts the desired result.”

Goldberg stressed that the idea behind the Cloud Services Platform is to meet businesses where they are and then allow them to modernize their stack at their own pace. But she also noted that businesses want more than just the ability to move workloads between environments. “Portability isn’t enough,” she said. “Users want consistent experiences so that they can train their team once and run anywhere — and have a single playbook for all environments.”

The two services at the core of this new offering are the Kubernetes container orchestration tool and Istio, a relatively new but quickly growing tool for connecting, managing and securing microservices. Istio is about to hit its 1.0 release.

We’re not simply talking about a collection of open-source tools here. The core of the Cloud Services Platform, Goldberg noted, is “custom configured and battle-tested for enterprises by Google.” In addition, it is deeply integrated with other services in the Google Cloud, including the company’s machine learning tools.

GKE On-Prem

These custom-configured tools comprise a number of new offerings, all part of the larger platform. Maybe the most interesting of these is GKE On-Prem. GKE, the Google Kubernetes Engine, is the core Google Cloud service for managing containers in the cloud, and now Google is essentially bringing this service to the enterprise data center, too.

The service includes access to all of the usual features of GKE in the cloud, including the ability to register and manage clusters and monitor them with Stackdriver, as well as identity and access management. It also includes a direct line to the GCP Marketplace, which recently launched support for Kubernetes-based applications.

Using the GCP Console, enterprises can manage both their on-premise and cloud GKE clusters without having to switch between different environments. GKE On-Prem connects seamlessly to a Google Cloud Platform environment and looks and behaves exactly like the cloud version.
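That consistency claim is concrete: because both ends speak the Kubernetes API, the same tooling targets either environment. Here’s a sketch with the Kubernetes Python client; the context names are hypothetical and would each come from an entry in your kubeconfig.

```python
# Sketch: the same client code managing a cloud GKE cluster and an
# on-prem cluster -- only the kubeconfig context differs. Context names
# below are hypothetical.
from kubernetes import client, config

for context in ("gke_my-project_us-central1-a_prod", "on-prem-datacenter"):
    api = client.CoreV1Api(
        api_client=config.new_client_from_config(context=context))
    nodes = api.list_node().items
    print(f"{context}: {len(nodes)} nodes")
```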

Enterprise users can also get access to professional services and enterprise-grade support for help with managing the service.

“Google Cloud is the first and only major cloud vendor to deliver managed Kubernetes on-prem,” Goldberg argued.

GKE Policy Management

Related to this, Google also today announced GKE Policy Management, which is meant to provide Kubernetes administrators with a single tool for managing all of their security policies across clusters. It’s agnostic as to where the Kubernetes cluster is running, but you can use it to port your existing Google Cloud identity-based policies to these clusters. This new feature will soon launch in alpha.
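Google hasn’t shown what the tool itself looks like, but the policies in question are ordinary Kubernetes RBAC objects with Google identities as subjects, which is what makes them portable across clusters in the first place. A hypothetical example, applied with the Kubernetes Python client:

```python
# Sketch: a Kubernetes RBAC RoleBinding granting a Google identity
# read-only access -- the kind of identity-based policy GKE Policy
# Management would sync across clusters. All names are hypothetical.
from kubernetes import client, config

config.load_kube_config()
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "analysts-view", "namespace": "default"},
    "subjects": [{"kind": "User", "name": "alice@example.com",
                  "apiGroup": "rbac.authorization.k8s.io"}],
    "roleRef": {"kind": "ClusterRole", "name": "view",
                "apiGroup": "rbac.authorization.k8s.io"},
}
client.RbacAuthorizationV1Api().create_namespaced_role_binding(
    namespace="default", body=binding)
```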

Managed Istio

The other major new service Google is launching is Managed Istio (together with Apigee API Management for Istio) to help businesses manage and secure their microservices. The open source Istio service mesh gives admins and operators the tools to manage these services and, with this new managed offering, Google is taking the core of Istio and making it available as a managed service for GKE users.

With this, users get access to Istio’s service discovery mechanisms and its traffic management tools for load balancing and routing traffic to containers and VMs, as well as its tools for getting telemetry back from the workloads that run on these clusters.
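Istio’s traffic management is configured declaratively. As a flavor of what a managed Istio user works with, here’s a VirtualService (v1alpha3, the API current at Istio 1.0) that splits traffic between two versions of a service, applied as a custom resource with the Kubernetes Python client; the service and subset names are hypothetical.

```python
# Sketch: an Istio VirtualService routing 90% of traffic to v1 of a
# service and 10% to v2 (a canary). Service and subset names are
# hypothetical; applied as a custom resource via the Kubernetes client.
from kubernetes import client, config

config.load_kube_config()
virtual_service = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "VirtualService",
    "metadata": {"name": "reviews"},
    "spec": {
        "hosts": ["reviews"],
        "http": [{"route": [
            {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
            {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
        ]}],
    },
}
client.CustomObjectsApi().create_namespaced_custom_object(
    group="networking.istio.io", version="v1alpha3",
    namespace="default", plural="virtualservices", body=virtual_service)
```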

In addition to these three main new services, Google is also launching a couple of auxiliary tools around GKE and the serverless computing paradigm today. The first of these is the GKE serverless add-on, which makes it easy to run serverless workloads on GKE with a single-step deploy process. This, Google says, will allow developers to go from source code to container “instantaneously.” The tool is currently available as a preview, and Google is making parts of this technology available under the umbrella of its new Knative open-source components. These are the same components that make the serverless add-on possible.

And to wrap it all up, Google also today mentioned a new fully managed continuous integration and delivery service, Google Cloud Build, though the details around this service remain under wraps.

So there you have it. By themselves, all of these announcements may seem a bit esoteric. As a whole, though, they show how Google’s bet on Kubernetes is starting to pay off. As businesses opt for containers to deploy and run their new workloads (and maybe even bring older applications into the cloud), GKE has put Google Cloud on the map as a place to run them in a hosted environment. Now it makes sense for Google to extend this to its users’ data centers, too. With managed Kubernetes offerings from companies large and small, including SUSE, Platform9 and Containership, this is starting to become a big business. It’s no surprise that the company that started it all wants a piece of this pie, too.