Year: 2018

11 Oct 2018

CrunchMatch connects attendees at Disrupt Berlin 2018

On 29-30 November, thousands of early-stage startups across Europe and beyond will attend Disrupt Berlin 2018 and spend two program-packed days exhibiting and exploring the very latest in tech innovations. In a crowd that size, it helps to have a tool to find and connect with the right people.

That’s why we’re making our CrunchMatch platform available to all Disrupt Berlin attendees. Last year, our free business match-making service connected investors and founders to discuss potential funding opportunities based on similar goals and interests. Now CrunchMatch can help everyone network more efficiently.

We’re talking founders and investors looking to connect, developers in search of employment, founders hunting for collaborators or startups recruiting tech talent — the list goes on. CrunchMatch can save you valuable time and help you make valuable connections.

Luke Heron, CEO of TestCard, has first-hand experience with the power of CrunchMatch, which he used to secure meetings with multiple VCs at Disrupt Berlin 2017. Those connections, and the relationships he built, paid off.

In a recent email, Heron told us that TestCard “just closed $1.7 million in funding (which is thanks to you and your team, bless you!) You guys are fantastic — the lifeblood of the startup scene.”

And several founders who attended Disrupt San Francisco this past September used CrunchMatch and walked away from their meetings with term sheets.

Representing the investment point of view, here’s what Michael Kocan, managing partner at Trend Discovery, said about his CrunchMatch experience.

“It makes vetting deals extremely efficient. I scheduled more than 35 meetings with startups using CrunchMatch, and we made a significant investment in one, who came to our attention through Startup Battlefield.”

Ready to simplify your networking at Disrupt Berlin? Here’s what you need to know. When we open CrunchMatch, all registered attendees will receive an email explaining how to access the platform and fill out their profiles. Your profile spells out your role and the type of connections you want to make. CrunchMatch kicks into gear and makes suggested connections and then — subject to your approval — the platform handles all the scheduling details.

Disrupt Berlin 2018 takes place 29-30 November. Still need a ticket? Buy your pass right here. We can’t wait to see you in Berlin! And be sure to use the CrunchMatch advantage — it’s the most efficient way to find your people and fuel your dream.

11 Oct 2018

Google+ for G Suite lives on and gets new features

You thought Google+ was dead, didn’t you? And it is — if you’re a consumer. But the business version of Google’s social network will live on for the foreseeable future — and it’s getting a bunch of new features today.

Google+ for G Suite isn’t all that different from Google+ for consumers, but its focus is very much on allowing users inside a company to easily share information. Current users include the likes of Nielsen and French retailer Auchan.

The new features that Google is announcing today give admins more tools for managing and reviewing posts, allow employees to tag content and provide better engagement metrics to posters.

Recently, Google introduced the ability for admins to bulk-add groups of users to a Google+ community, for example. Soon, those admins will also be able to better review and moderate posts made by their employees, and to define custom streams so that employees can, say, get access to a stream with all of the posts from a company’s leadership team.

But what’s maybe more important in this context is that tags now make it easy for employees to route content to everybody in the company, no matter which group they work in. “Even if you don’t know all employees across an organization, tags makes it easier to route content to the right folks,” the company explains in today’s blog post. “Soon you’ll be able to draft posts and see suggested tags, like #research or #customer-insights when posting customer survey results.”

As far as the new metrics go, there’s nothing all that exciting going on here, but G Suite customers who keep their reporting structure in the service will be able to provide analytics to employees so they can see how their posts are being viewed across the company and which teams engage most with them.

At the end of the day, none of these are revolutionary features. But the timing of today’s announcement surely isn’t a coincidence, given that Google announced the death of the consumer version of Google+ — and the data breach that went along with that — only a few days ago. Today’s announcement is clearly meant to be a reminder that Google+ for the enterprise isn’t going away and remains in active development. I don’t think all that many businesses currently use Google+, though, and with Hangouts Chat and other tools, they now have plenty of options for sharing content across groups.

11 Oct 2018

Google’s Apigee officially launches its API monitoring service

It’s been about two years since Google acquired API management service Apigee. Today, the company is announcing new extensions that make it easier to integrate the service with a number of Google Cloud services, as well as the general availability of the company’s API monitoring solution.

Apigee API monitoring allows operations teams to get more insight into how their APIs are performing. The idea here is to make it easy for these teams to figure out when there’s an issue and what the root cause is by giving them very granular data. “APIs are now part of how a lot of companies are doing business,” Ed Anuff, Apigee’s former SVP of product strategy and now Google’s product and strategy lead for the service, told me. “So that tees up the need for API monitoring.”

Anuff also told me that he believes that it’s still early days for enterprise API adoption — but that also means that Apigee is currently growing fast as enterprise developers now start adopting modern development techniques. “I think we’re actually still pretty early in enterprise adoption of APIs,” he said. “So what we’re seeing is a lot more customers going into full production usage of their APIs. A lot of what we had seen before was people using it for maybe an experiment or something that they were doing with a couple of partners.” He also attributed part of the recent growth to customers launching more mobile applications where APIs obviously form the backbone of much of the logic that drives those apps.

API Monitoring was already available as a beta, but it’s now generally available to all Apigee customers.

Given that it’s now owned by Google, it’s no surprise that Apigee is also launching deeper integrations with Google’s cloud services now — specifically services like BigQuery, Cloud Firestore, Pub/Sub, Cloud Storage and Spanner. Some Apigee customers are already using this to store every message passed through their APIs to create extensive logs, often for compliance reasons. Others use Cloud Firestore to personalize content delivery for their web users or to collect data from their APIs and then send that to BigQuery for analysis.

Anuff stressed that Apigee remains just as open to third-party integrations as it always was. That is part of the core promise of APIs, after all.

11 Oct 2018

Google introduces dual-region storage buckets to simplify data redundancy

Google is playing catch-up in the cloud, and as such it wants to provide flexibility to differentiate itself from AWS and Microsoft. Today, the company announced a couple of new options to help separate it from the cloud storage pack.

Storage may seem stodgy, but it’s a primary building block for many cloud applications. Before you can build an application you need the data that will drive it, and that’s where the storage component comes into play.

One of the issues companies have as they move data to the cloud is making sure it stays close to the application when it’s needed, to reduce latency. Customers also require redundancy in the event of a catastrophic failure, but still need access with low latency. The latter has been a hard problem to solve until today, when Google introduced a new dual-regional storage option.

As Google described it in the blog post announcing the new feature, “With this new option, you write to a single dual-regional bucket without having to manually copy data between primary and secondary locations. No replication tool is needed to do this and there are no network charges associated with replicating the data, which means less overhead for you storage administrators out there. In the event of a region failure, we transparently handle the failover and ensure continuity for your users and applications accessing data in Cloud Storage.”

This allows companies to get redundancy with low latency, while controlling where their data lives, without having to manually move it should the need arise.

Knowing what you’re paying

Companies don’t always require instant access to data, and Google (and other cloud vendors) offer a variety of storage options, making it cheaper to store and retrieve archived data. As of today, Google is offering a clear way to determine costs based on the type of storage a customer chooses. While it might not seem revolutionary to let customers know what they are paying, Dominic Preuss, Google’s director of product management, says it hasn’t always been a simple matter to calculate these kinds of costs in the cloud. Google decided to simplify it by clearly outlining the costs for medium-term (Nearline) and long-term (Coldline) storage across multiple regions.

As Google describes it, “With multi-regional Nearline and Coldline storage, you can access your data with millisecond latency, it’s distributed redundantly across a multi-region (U.S., EU or Asia), and you pay archival prices. This is helpful when you have data that won’t be accessed very often, but still needs to be protected with geographically dispersed copies, like media archives or regulated content. It also simplifies management.”

Under the new plan, you can select the type of storage you need, the kind of regional coverage you want and you can see exactly what you are paying.

Google Cloud storage pricing options. Chart: Google

Each of these new storage services has been designed to provide additional options for Google Cloud customers, giving them more transparency around pricing and flexibility and control over storage types, regions and the way they deal with redundancy across data stores.

11 Oct 2018

Google expands its identity management portfolio for businesses and developers

Over the course of the last year, Google has launched a number of services that bring to other companies the same BeyondCorp model it uses internally for managing access to a company’s apps and data without a VPN. Google’s flagship product for this is Cloud Identity, which is essentially Google’s BeyondCorp, but packaged for other businesses.

Today, at its Cloud Next event in London, it’s expanding this portfolio of Cloud Identity services with three new products and features that enable developers to adopt this way of thinking about identity and access for their own apps and that make it easier for enterprises to adopt Cloud Identity and make it work with their existing solutions.

The highlight of today’s announcements, though, is Cloud Identity for Customers and Partners, which is now in beta. While Cloud Identity is very much meant for employees at a larger company, this new product allows developers to build into their own applications the same kind of identity and access management services.

“Cloud Identity is how we protect our employees and you protect your workforce,” Karthik Lakshminarayanan, Google’s product management director for Cloud Identity, said in a press briefing ahead of the announcement. “But what we’re increasingly finding is that developers are building applications and are also having to deal with identity and access management. So if you’re building an application, you might be thinking about accepting usernames and passwords, or you might be thinking about accepting social media as an authentication mechanism.”

This new service allows developers to build in multiple ways of authenticating the user, including through email and password, Twitter, Facebook, their phones, SAML, OIDC and others. Google then handles all of that authentication work. Google will offer both client-side (web, iOS and Android) and server-side SDKs (with support for Node.js, Java, Python and other languages).
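The product’s actual SDKs expose their own client- and server-side APIs, so to make the idea concrete, here is only a rough conceptual sketch of the provider-dispatch pattern the paragraph describes. Every name and credential below is illustrative, not part of Google’s API:

```python
# Conceptual sketch only: the real Cloud Identity for Customers and Partners
# SDKs expose their own APIs. All names and values here are illustrative.

def verify_password(credential):
    # Placeholder check; a real implementation compares salted password hashes.
    return credential == {"email": "a@example.com", "password": "hunter2"}

def verify_oidc(credential):
    # Placeholder check; a real implementation validates the ID token's signature.
    return credential.get("id_token") == "valid-token"

# One verifier per supported authentication method.
PROVIDERS = {"password": verify_password, "oidc": verify_oidc}

def sign_in(provider, credential):
    """Dispatch a sign-in attempt to the verifier for the chosen provider."""
    verifier = PROVIDERS.get(provider)
    if verifier is None:
        raise ValueError("unsupported provider: " + provider)
    return verifier(credential)

print(sign_in("password", {"email": "a@example.com", "password": "hunter2"}))  # True
```

The point of the pattern is simply that the application picks a provider and hands the credential off; in the real service, the verification work on the right-hand side is what Google takes over.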

“They no longer have to worry about getting hacked and their passwords and their user credentials getting compromised,” added Lakshminarayanan. “They can now leave that to Google and the exact same scale that we have, the security that we have, the reliability that we have — that we are using to protect employees in the cloud — can now be used to protect that developer’s applications.”

In addition to Cloud Identity for Customers and Partners, Google is also launching a new feature for the existing Cloud Identity service, which brings support for traditional LDAP-based applications and IT services like VPNs to Cloud Identity. This feature is, in many ways, an acknowledgment that most enterprises can’t simply turn on a new security paradigm like BeyondCorp/Cloud Identity. With support for secure LDAP, these companies can still make it easy for their employees to connect to these legacy applications while still using Cloud Identity.

“As much as Google loves the cloud, a mantra that Google has is ‘let’s meet customers where they are.’ We know that customers are embracing the cloud, but we also know that they have a massive, massive footprint of traditional applications,” Lakshminarayanan explained. He noted that most enterprises today run two solutions: one that provides access to their on-premise applications and another that provides the same services for their cloud applications. Cloud Identity now natively supports access to many of these legacy applications, including Aruba Networks (HPE), Itopia, JAMF, Jenkins (Cloudbees), OpenVPN, Papercut, pfSense (Netgate), Puppet, Sophos and Splunk. Indeed, as Google notes, virtually any application that supports LDAP over SSL can work with this new service.

Finally, the third new feature Google is launching today is context-aware access for those enterprises that already use its Cloud Identity-Aware Proxy (yes, those names are all a mouthful). The idea here is to help enterprises provide access to cloud resources based on the identity of the user and the context of the request — all without using a VPN. That’s pretty much the promise of BeyondCorp in a nutshell, and this implementation, which is now in beta, allows businesses to manage access based on the user’s identity and a device’s location and its security status, for example. Using this new service, IT managers could restrict access to one of their apps to users in a specific country, for example.

 

11 Oct 2018

Google Cloud expands its networking feature with Cloud NAT

It’s a busy week for news from Google Cloud, which is hosting its Next event in London. Today, the company used the event to launch a number of new networking features. The marquee launch today is Cloud NAT, a new service that makes it easier for developers to build cloud-based services that don’t have public IP addresses and can only be accessed from applications within a company’s virtual private cloud.

As Google notes, building this kind of setup was already possible, but it wasn’t easy. Obviously, this is a pretty common use case, though, so with Cloud NAT, Google now offers a fully managed service that handles all the network address translation (hence the NAT) and provides access to these private instances behind the Cloud NAT gateway.

Cloud NAT supports Google Compute Engine virtual machines as well as Google Kubernetes Engine containers, and offers both a manual mode where developers can specify their IPs and an automatic mode where IPs are automatically allocated.
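For readers unfamiliar with network address translation itself, a toy sketch of the idea follows: a gateway maps private source endpoints onto ports of a shared public IP, so instances without public addresses can still originate outbound traffic. This illustrates the concept only, not Cloud NAT’s implementation or API; all addresses are made up:

```python
# Toy illustration of NAT, not Google's Cloud NAT API. A gateway allocates
# a public port per private (ip, port) endpoint on a single shared public IP.

class ToyNatGateway:
    def __init__(self, public_ip, first_port=1024):
        self.public_ip = public_ip
        self.next_port = first_port
        self.table = {}  # (private_ip, private_port) -> allocated public port

    def translate_outbound(self, private_ip, private_port):
        """Return the (public_ip, public_port) used for this private endpoint."""
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = self.next_port  # allocate a fresh public port
            self.next_port += 1
        return (self.public_ip, self.table[key])

nat = ToyNatGateway("203.0.113.7")
print(nat.translate_outbound("10.0.0.5", 40000))  # ('203.0.113.7', 1024)
print(nat.translate_outbound("10.0.0.6", 40000))  # ('203.0.113.7', 1025)
print(nat.translate_outbound("10.0.0.5", 40000))  # reuses ('203.0.113.7', 1024)
```

Cloud NAT’s manual and automatic modes, roughly speaking, differ in who picks the public IPs this mapping draws from.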

Also new in today’s release is Firewall Rules Logging, which is now in beta. Using this feature, admins can audit, verify and analyze the effects of their firewall rules. That means when there are repeated connection attempts that the firewall blocked, you can now analyze those and see whether somebody was up to no good or whether somebody misconfigured the firewall. Because entries arrive with a delay of only about five seconds, the service provides near real-time access to this data — and you can obviously tie this in with other services like Stackdriver Logging, Cloud Pub/Sub and BigQuery to create alerts and further analyze the data.
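As a rough illustration of the kind of analysis this enables, the snippet below counts repeated blocked attempts per source across a batch of log entries. The record format here is invented for the example; real Firewall Rules Logging entries have a much richer schema:

```python
from collections import Counter

# Hypothetical blocked-connection records; the field names are made up and do
# not match the real Firewall Rules Logging schema.
blocked = [
    {"src_ip": "198.51.100.4", "rule": "deny-ssh"},
    {"src_ip": "198.51.100.4", "rule": "deny-ssh"},
    {"src_ip": "192.0.2.9",    "rule": "deny-ssh"},
    {"src_ip": "198.51.100.4", "rule": "deny-ssh"},
]

def repeat_offenders(entries, threshold=3):
    """Return source IPs with at least `threshold` blocked attempts."""
    counts = Counter(e["src_ip"] for e in entries)
    return {ip for ip, n in counts.items() if n >= threshold}

print(repeat_offenders(blocked))  # {'198.51.100.4'}
```

In practice you would run this sort of aggregation in BigQuery or drive alerts from it via Cloud Pub/Sub, per the integrations mentioned above.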

Also new today are managed TLS certificates for HTTPS load balancers. The idea here is to take the hassle out of managing TLS certificates (the kind of certificates that ensure your users’ browsers create a secure connection to your app) when there is a load balancer in play. This feature, too, is now in beta.

11 Oct 2018

Apple is paying $300M in cash to buy a part of Dialog Semiconductor and expand its chipmaking in Europe

Apple has quietly been putting considerable effort into building faster and more efficient chips that can help differentiate its hardware from the rest of the consumer electronics pack, and today it’s taking its next (and possibly largest) step in that strategy. Apple is paying $300 million in cash to purchase a portion of Dialog Semiconductor, a chipmaker based out of Europe that it has been working with since the first iPhone. On top of the main acquisition, Apple is also committing $300 million to make further purchases from the remaining part of Dialog’s business.

This will be Apple’s biggest acquisition by far in terms of people: 300 people will be joining Apple as part of the deal, or about 16 percent of Dialog’s total workforce. From what we understand, those who are joining have already been working closely with Apple up to now. The teams joining are based across Livorno in Italy, Swindon in England, and Nabern and Neuaubing in Germany, near Munich, where Apple already has an operation.

In some cases, Apple will be taking over entire buildings that had been owned by Dialog, and in others they will be colocating in buildings where Dialog will continue to develop its own business — another sign of how closely the two have and will continue to work together. The Dialog employees Apple is picking up in this acquisition will report to Apple’s SVP of hardware technologies, Johny Srouji. 

“Dialog has deep expertise in chip development, and we are thrilled to have this talented group of engineers who’ve long supported our products now working directly for Apple,” said Srouji, in a statement. “Our relationship with Dialog goes all the way back to the early iPhones, and we look forward to continuing this long-standing relationship with them.”

Apple’s acquisition will also include IP and licenses for further IP, we understand.

The deal — which is expected to close in the first half of 2019, pending regulatory approvals — comes at a time when many expect Apple to eventually release a VR headset. Our sources haven’t told us anything specific on that front, but what we do know is that one big, more general focus for the company is to continue working on power management and more power-efficient chips, particularly considering the newest devices Apple has added to its range: AirPods headphones and the Watch, both wireless, high-performing hardware.

In September, at the same time that it announced its latest generation of iPhone devices, Apple announced a new chip of its own design, the A12 Bionic. Apple claims the A12 Bionic is the industry’s first 7nm chip (although, as we’ve said before, different companies measure these differently).

With 6.96 billion transistors, the A12 Bionic features a 6-core CPU and a 4-core GPU, along with Apple’s Neural Engine for running machine learning workloads. The CPU pairs two high-performance cores with four efficiency cores; the high-performance cores are up to 15 percent faster and 40 percent more power efficient than the previous generation’s, and the efficiency cores use up to 50 percent less power.

Apple also says that the Neural Engine is capable of processing 5 trillion operations per second, up from 600 billion for its predecessor, the A11.

“This transaction reaffirms our long-standing relationship with Apple, and demonstrates the value of the strong business and technologies we have built at Dialog,” said Jalal Bagherli, CEO of Dialog, in a statement. “Going forward, we will have a clear strategic focus, building on our custom and configurable mixed-signal IC expertise and world-class power-efficient design. Our execution track record, deep customer relationships, and talented employees give us great confidence in our future growth prospects… We believe that this transaction is in the best interests of our employees and shareholders who will benefit from a business with enhanced focus, strong growth prospects and additional financial flexibility to invest in strategic growth initiatives.”

Interestingly, you might recall that Apple once eyed another chipmaker acquisition in Europe: Imagination Technologies, which had been a close partner of the company. That deal ultimately did not come to pass; Apple started work on its own graphics chips and has more recently even been in some disputes with Imagination.

It also comes at a time when Apple has been in the spotlight for another kind of chip story: the company was named in a controversial Bloomberg report alleging that “spy chips” were secretly implanted on Apple hardware by way of Supermicro motherboards — a report that Apple and others have strongly denied and that hasn’t been corroborated so far. This deal should shift what people talk about when they think of Apple and chips.

Dialog is holding a conference call later this morning to talk more about the deal and we will update this story as we learn more.

More to come.

11 Oct 2018

ServiceNow to acquire FriendlyData for its natural language search technology

Enterprise cloud service management company ServiceNow announced today that it will acquire FriendlyData and integrate the startup’s natural language search technology into apps on its Now platform. Founded in 2016, FriendlyData’s natural language query (NLQ) technology enables enterprise customers to build search tools that allow users to ask technical questions even if they don’t know the right jargon.

FriendlyData’s NLQ tech figures out what users are trying to ask and then answers with text responses or easy-to-understand data visualizations. ServiceNow said it will integrate FriendlyData’s tech into the Now Platform, which includes apps for IT, human resources, security operations, and customer service management. It will also be available in products for developers and ServiceNow’s partners.
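To make the idea concrete, here is a deliberately tiny sketch of the question-to-SQL mapping the article describes. FriendlyData’s actual NLQ technology is of course far more sophisticated; the table, question and phrasing rules below are all invented:

```python
# Toy natural-language-query translator: map a plain-English question to SQL.
# Invented example only; not FriendlyData's technology.

def to_sql(question):
    """Translate a narrow set of English questions into SQL."""
    q = question.lower()
    if "how many" in q and "open tickets" in q:
        return "SELECT COUNT(*) FROM tickets WHERE status = 'open'"
    raise ValueError("question not understood")

print(to_sql("How many open tickets do we have?"))
# SELECT COUNT(*) FROM tickets WHERE status = 'open'
```

The value of a real NLQ system is precisely that users don’t need to know the table names, column names or jargon that this toy hard-codes.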

In a statement, Pat Casey, senior vice president of development and operations at ServiceNow, said “ServiceNow is bringing NLQ capabilities to the Now Platform, enabling companies to ask technical questions in plain English and receive direct answers. With this technical enhancement, our goal is to allow anyone to easily make data driven decisions, increasing productivity and driving businesses forward faster.”

The acquisition of FriendlyData is the latest in ServiceNow’s initiative to reduce the friction of support requests within organizations with AI-based tools. For example, it launched a chatbot-building tool called Virtual Agent in May, which enables companies to create custom chatbots for services like Slack or Microsoft Teams to automatically handle routine inquiries such as equipment requests. It also announced the acquisition of Parlo, a chatbot startup, around the same time.

11 Oct 2018

Razer soups up its gaming smartphone

Razer is quick to refute any suggestions that its second phone is little more than an iterative update. Sure, the thing looks remarkably identical to its predecessor from the front, but the innards are certainly souped up — and there’s a snazzy new back to match.

As the company puts it in the Razer Phone 2 press materials, “we wanted to keep the core Razer industrial design intact with a CNC aluminum frame flanked by powerful dual front-firing stereo speakers.”

Fair enough. The first Razer Phone wasn’t the prettiest handset on the market, but that was never the point. The gaming peripheral company entered the mobile market with one very clear motive in mind: helping usher in a new age of serious smartphone gaming. It follows, then, that the Razer Phone 2 sports some beefy specs to match.

Razer’s not quite at the point in its mobile story where custom silicon makes sense, so the company’s relying on the latest Snapdragon (845), instead. What is custom, however, is the vapor-chamber cooling system inside, which dissipates surface heat during intense game play. In all, the company says it’s able to eke out a 30 percent bump in performance over gen one.

The battery is the same size, at a still impressive 4,000mAh — though this time coupled with Qi for fast wireless charging. It’s a beefy battery in a beefy phone. It’s not the slickest design out there, compared to flagships from Apple and Samsung, but it’s built like a damn tank. It’s also IP67 rated for water and dust resistance.

As mentioned above, the front-facing speakers are still intact from the first generation, and they can get plenty loud, as evidenced by the demo Razer gave us ahead of today’s event. Those are tuned with Dolby Atmos. 

At 5.7 inches, the screen is the same size as the first generation. I’m a bit surprised the company didn’t go a bit larger this generation — gaming is one of the stronger arguments for large screens on mobile devices. That said, Razer’s increased the brightness by half and improved color accuracy.

While, as expected, the front looks pretty much exactly like the first gen’s, the back’s been souped up a bit. The familiar tri-headed snake logo lights up now, with 16.8 million color options. There are different settings for the light, including the ability to have it light up with notifications based on different apps — so, light blue for Twitter, red for Gmail. You get the picture.

Of course, hiding that light-up logo under a case would be silly, so the company’s created a case with a cutout, specifically to showcase the new lighting rig.

Razer’s managed to maintain a decent price point here. At $799, it’s not cheap, but it’s a couple hundred bucks below the latest from Apple and Samsung. Preorders start tomorrow.

11 Oct 2018

Elon Musk hits back: James Murdoch is not the lead candidate for Tesla chairman spot

It’s 4:20 p.m., which means Tesla CEO Elon Musk might be tweeting.

This time, the billionaire entrepreneur’s tweet debunked a Financial Times article from Wednesday that reported Twenty-First Century Fox CEO and Tesla board member James Murdoch was the lead candidate to take Musk’s chairman spot. Musk agreed in a September 29 settlement with the U.S. Securities and Exchange Commission to step down as chairman within 45 days. Musk, who didn’t admit wrongdoing under the settlement, was also fined $20 million. Tesla was also fined, and the company agreed to other conditions, including adding two independent board members.

Musk tweeted that the report is “incorrect.”

The FT report, which cited unnamed people who were briefed on the chairman discussions, also noted Musk favored Antonio Gracias, Tesla’s lead independent director. Musk was reportedly advised that Gracias may not be considered independent enough.

The Musk tweet was sent at 4:20 p.m. PT, which is obviously a complete coincidence and has absolutely nothing to do with marijuana or a “wink wink nudge nudge” reference to what kicked off the SEC investigation and securities fraud complaint.


The SEC complaint alleged that Musk lied when he tweeted on August 7 that he had “funding secured” for a private takeover of the company at $420 per share. Federal securities regulators reportedly served Tesla with a subpoena just a week after the tweet. Investigations can take years before any action is taken, if at all. In this case, charges were filed just six weeks later.

The complaint contains a number of eyebrow-raising details, including that Musk had talked to the board about an offer to take Tesla private as early as August 2, when he sent Tesla’s board of directors, chief financial officer and general counsel an email with the subject, “Offer to Take Tesla Private at $420.”

According to the complaint, Musk calculated the $420 price per share based on a 20 percent premium over that day’s closing share price because he thought 20 percent was a “standard premium” in going-private transactions.

This calculation resulted in a price of $419. Musk stated that he rounded the price up to $420 because he had recently learned about the number’s significance in marijuana culture and thought his girlfriend “would find it funny, which admittedly is not a great reason to pick a price,” according to the complaint.
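A quick check of the arithmetic, using a hypothetical closing price (the article doesn’t state the actual figure) chosen so the numbers line up with the complaint:

```python
# Sanity check on the complaint's math. The $349.15 closing price is
# hypothetical; the article gives only the 20 percent premium and the
# $419 result.
close = 349.15
premium_price = round(close * 1.20, 2)  # 20 percent "standard premium"
print(premium_price)          # 418.98
print(round(premium_price))   # 419, which Musk then rounded up to 420
```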

The judge presiding over the agreement has asked the SEC and Musk to submit a letter by Oct. 11 before approving the settlement.