Year: 2018

07 May 2018

Microsoft wants developers to embrace Microsoft 365 and the Microsoft Graph

The Microsoft Graph is an interesting but also somewhat amorphous idea. It’s core to the company’s strategy, but I’m not sure most developers understand its potential just yet. Maybe it’s no surprise that Microsoft is putting quite a bit of emphasis on the Graph during its Build developer conference this week. Unless developers make use of the Graph, which is the API that provides the connectivity between everything from Windows 10 to Office 365, it won’t reach its potential, after all.

Microsoft describes the Graph as “the API for Microsoft 365.” And indeed, Microsoft 365 is the second topic the company is really hammering home during its event. It’s a combined subscription service for Office 365, Windows 10 and the company’s enterprise mobility services.

“Microsoft 365 is where the world gets its best work done,” said Microsoft corporate vice president Joe Belfiore. “With 135 million commercial monthly active users of Office 365 and nearly 700 million Windows 10 connected devices, Microsoft 365 helps developers reach people how and where they work.”

Leaving the standard keynote hyperbole aside, that is indeed how Microsoft sees this service — and the connective tissue here is the Microsoft Graph.

The Graph is what powers features like the Windows 10 Timeline, which desperately needs buy-in from developers to succeed, but it also allows developers to send notifications when a file is added to a OneDrive folder or to kick off an onboarding workflow when a new person is added to a team in Azure Active Directory.
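
To make that concrete, here is a minimal sketch of the kind of change-notification subscription described above, using the Graph's /subscriptions endpoint. The access token, webhook URL and expiration date are placeholders; a real app would obtain the token through Azure AD and host its own HTTPS notification endpoint.

```python
# A hedged sketch, not an official sample: create a Microsoft Graph webhook
# subscription so your service is notified when files in a user's OneDrive change.
# The token, webhook URL and expiration below are placeholders.
import requests

ACCESS_TOKEN = "<azure-ad-access-token>"  # assumed to carry Files.Read permission

subscription = {
    "changeType": "updated",                 # OneDrive reports new/changed files as 'updated'
    "notificationUrl": "https://example.com/graph/notifications",  # your HTTPS webhook
    "resource": "me/drive/root",             # watch the signed-in user's OneDrive
    "expirationDateTime": "2018-05-10T00:00:00Z",  # subscriptions expire and must be renewed
    "clientState": "some-secret-value",      # echoed back so you can validate notifications
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/subscriptions",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=subscription,
)
resp.raise_for_status()
print(resp.json()["id"])  # the subscription id you would later renew or delete
```

Graph then POSTs change notifications to that URL, and the clientState value lets your endpoint check that a notification really belongs to its own subscription.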

At Build, Microsoft is talking about the Graph quite a bit, but it’s not announcing all that many new features for it, beyond a new and updated Teams API in the Microsoft Graph. As Microsoft’s Director for Office 365 Ecosystem Marketing Rob Howard told me, though, the company now believes that all the engagement surfaces to highlight Graph data are in place.

“Developers now have a reason to put their data into the graph,” he said. Specifically, he expects developers to make use of the Windows 10 Timeline feature, which has now rolled out with the latest Windows 10 release. He also expects that the deep integration into the Office 365 apps will provide a bit of inspiration to third-party developers.

As for Microsoft 365, the company is emphasizing the developer opportunity here. Besides more consumer-facing features like ‘Your Phone’ for sending text messages and responding to notifications from your phone, and the integration of Timeline into the company’s Android launcher, Microsoft is also announcing things like .NET Core 3.0 today, an update to the MSIX packaging format for shipping large applications, Sets in Windows 10 and support for Adaptive Cards within Microsoft 365.

This last one may just be the most interesting of these (except for maybe Sets) because users will soon see these cards pop up across Microsoft’s applications. Adaptive Cards is a standard that Microsoft has proposed for allowing developers to describe their content and user interface in a simple JSON file. The general idea here is to enable developers to show these cards in applications like Teams and Outlook to allow their users to take actions right within those applications. That could mean paying a bill right inside of Outlook, for example, or accepting a pull request from GitHub in Teams.
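
To give a sense of how small these definitions are, here is a hedged sketch of an Adaptive Card for the bill-paying scenario above, built as a Python dict that serializes to the card JSON; the invoice details and the payment URL are invented for illustration.

```python
# A minimal Adaptive Card, expressed as a Python dict that serializes to the
# JSON a host app (Teams, Outlook, a bot) would render. The invoice number,
# amount and payment URL are made-up placeholders.
import json

card = {
    "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
    "type": "AdaptiveCard",
    "version": "1.0",
    "body": [
        {"type": "TextBlock", "text": "Invoice #1234 is due", "weight": "bolder"},
        {"type": "TextBlock", "text": "Amount due: $120.00", "isSubtle": True},
    ],
    "actions": [
        # The host renders this as a button; tapping it opens the payment page.
        {"type": "Action.OpenUrl", "title": "Pay now", "url": "https://example.com/pay/1234"},
    ],
}

print(json.dumps(card, indent=2))  # this JSON is what actually travels to the host app
```

Because the card only describes content and intent, each host application decides how to render it, which is what lets the same payload show up in Teams, Outlook and other surfaces.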

07 May 2018

Microsoft and DJI team up to bring smarter drones to the enterprise

At the Microsoft Build developer conference today, Microsoft and Chinese drone manufacturer DJI announced a new partnership that aims to bring more of Microsoft’s machine learning smarts to commercial drones. Given Microsoft’s current focus on bringing intelligence to the edge, this is a logical partnership: drones are essentially semi-autonomous edge computing devices.

DJI also today announced that Azure is now its preferred cloud computing partner and that it will use the platform to analyze video data, for example. The two companies also plan to offer new commercial drone solutions using Azure IoT Edge and related AI technologies for verticals like agriculture, construction and public safety. Indeed, the companies are already working together on Microsoft’s FarmBeats solution, an AI and IoT platform for farmers.

As part of this partnership, DJI is launching a software development kit (SDK) for Windows that will allow Windows developers to build native apps to control DJI drones. Using the SDK, developers can also integrate third-party tools for managing payloads or accessing sensors and robotics components on their drones. DJI already offers a Windows-based ground station.

“DJI is excited to form this unique partnership with Microsoft to bring the power of DJI aerial platforms to the Microsoft developer ecosystem,” said Roger Luo, DJI president, in today’s announcement. “Using our new SDK, Windows developers will soon be able to employ drones, AI and machine learning technologies to create intelligent flying robots that will save businesses time and money and help make drone technology a mainstay in the workplace.”

Interestingly, Microsoft also stresses that this partnership gives DJI access to its Azure IP Advantage program. “For Microsoft, the partnership is an example of the important role IP plays in ensuring a healthy and vibrant technology ecosystem and builds upon existing partnerships in emerging sectors such as connected cars and personal wearables,” the company notes in today’s announcement.

07 May 2018

Microsoft launches a unified API for all of its AI speech services

At its Build developer conference in Seattle this week, Microsoft is putting a lot of emphasis on its AI and machine learning services. For the most part, that means launching new services and bringing some existing services to new places. Given the plethora of AI services the company now offers, it’s perhaps no surprise that we’re also seeing some consolidation: today, Microsoft announced that it is bringing its four speech-related Cognitive Services AI tools under a single umbrella (and API).

The unified speech service will combine Microsoft’s speech recognition service, text-to-speech API, customized voice models and translation service. Currently, these are all available as separate services (the Bing Speech API, the Speaker Recognition API, the Custom Speech Service and the Translator Speech API), all with their own pricing models.
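
As a rough illustration of what a single-key workflow could look like, the sketch below transcribes a WAV file over REST. The endpoint shapes are assumptions modeled on the existing Cognitive Services speech APIs rather than confirmed details of the unified service, so treat the URLs as placeholders.

```python
# A hedged sketch of a single-key speech-to-text call over REST. The endpoints
# are assumptions modeled on the existing Cognitive Services speech APIs, not
# confirmed URLs for the new unified service.
import requests

REGION = "westus"                       # assumption: the service stays region-scoped
SUBSCRIPTION_KEY = "<speech-service-key>"

# Standard Cognitive Services pattern: trade the key for a short-lived token.
token = requests.post(
    f"https://{REGION}.api.cognitive.microsoft.com/sts/v1.0/issueToken",
    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
).text

# Send 16 kHz mono PCM audio and read back the transcription.
with open("meeting.wav", "rb") as audio:
    resp = requests.post(
        f"https://{REGION}.stt.speech.microsoft.com/speech/recognition/"
        "conversation/cognitiveservices/v1?language=en-US",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
        },
        data=audio,
    )
print(resp.json().get("DisplayText"))
```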

This update joins a number of other Cognitive Services announcements at Build, including the launch of the company’s new handwriting recognition service and support for the Custom Vision service in Azure IoT Edge.

07 May 2018

Microsoft’s new IntelliCode is a smarter IntelliSense

Microsoft today announced IntelliCode, a new experimental developer tool for Visual Studio users that brings more of the company’s artificial intelligence smarts to software development.

With IntelliSense, Visual Studio has long featured a smart code completion feature that makes writing code faster and less error-prone. With IntelliCode, Microsoft is going a step further. This new feature intelligently suggests the next API for a developer to use, for example, instead of simply providing an alphabetical list of code-completion suggestions. It’s basically the next generation of IntelliSense.

For now, this feature is only available for C# code in Visual Studio 2017. Over time, Microsoft plans to expand support to other languages, as well. Since the feature is still experimental, it’s not quite ready for developers yet. You can sign up for the upcoming private preview here, though.

Microsoft tells me that the feature works by understanding the context of the code you are currently working on. Thanks to a machine learning model that has been trained by looking at lots of code, the tool can understand what you’re trying to achieve and then (hopefully) make the right suggestions based on the context and patterns in your own code.

07 May 2018

Microsoft Kinect lives on as a new sensor package for Azure

Microsoft’s Kinect motion-sensing camera for its Xbox consoles was walking dead for the longest time. Last October, it finally passed away peacefully — or so we thought. At its Build developer conference today, Microsoft announced that it is bringing back the Kinect brand and its standout time-of-flight camera tech, but not for a game console. Instead, the company announced Project Kinect for Azure, a new package of sensors that combines the Kinect camera with an onboard computer in a small package that developers can integrate into their own projects.

The company says that Project Kinect for Azure can handle fully articulated hand tracking and that it can be used for high-fidelity spatial mapping. Based on these capabilities, it’s easy to imagine the use of Project Kinect for many robotics and surveillance applications.

“Project Kinect for Azure unlocks countless new opportunities to take advantage of Machine Learning, Cognitive Services and IoT Edge,” Microsoft technical fellow — and father of the HoloLens — Alex Kipman writes today. “We envision that Project Kinect for Azure will result in new AI solutions from Microsoft and our ecosystem of partners, built on the growing range of sensors integrating with Azure AI services.”

The camera will have a 1024×1024 resolution, the company says, and Microsoft will also use this same camera in the next generation of its HoloLens headset.

“Project Kinect for Azure brings together this leading hardware technology with Azure AI to empower developers with new scenarios for working with ambient intelligence,” Microsoft explains in today’s announcement. And indeed, it looks like the main idea here is to combine the company’s camera tech with its cloud-based machine learning tools — the pre-built and customized models from the Microsoft Cognitive Services suite and the IoT Edge platform for edge computing workloads.

07 May 2018

Microsoft launches Project Brainwave, its deep learning acceleration platform

Microsoft today announced at its Build conference the preview launch of Project Brainwave, its platform for running deep learning models in its Azure cloud and on the edge in real time.

While some of Microsoft’s competitors, including Google, are betting on custom chips, Microsoft continues to bet on FPGAs to accelerate its models, and Brainwave is no exception. Microsoft argues that FPGAs give it more flexibility than designing custom chips and that the performance it achieves on standard Intel Stratix FPGAs is at least comparable to that of custom chips.

Last August, the company first detailed some aspects of Brainwave, which consists of three distinct layers: a high-performance distributed architecture; a hardware deep neural network engine that has been synthesized onto the FPGAs; and a compiler and runtime for deploying the pre-trained models.

Microsoft is attaching the FPGAs directly to its overall data center network, which allows them to become something akin to hardware microservices. The advantage here is high throughput and a large latency reduction, because this architecture allows Microsoft to bypass the CPU of a traditional server and talk directly to the FPGAs. Indeed, Microsoft argues that Brainwave offers one-fifth the latency of Google’s TPUs.

When Microsoft first announced Brainwave, the software stack supported both the Microsoft Cognitive Toolkit and Google’s TensorFlow frameworks.

Brainwave is now in preview on Azure and Microsoft also promises to bring support for it to Azure Stack and the Azure Data Box appliance.

07 May 2018

Microsoft brings its Visual Studio App Center lifecycle management tool to GitHub

Late last year, Microsoft launched Visual Studio App Center, its new unified mobile app development lifecycle solution for developers who want to write iOS, Android, Windows and macOS apps. The service allows developers to automate the building, testing, distribution and monitoring of their Objective-C, Swift, Java, C#, Xamarin and React Native apps through a single service.

As the company announced today, it is now partnering with GitHub to make Visual Studio App Center natively available in GitHub through the GitHub marketplace.

Microsoft notes that this new partnership will allow GitHub developers to automate many of their DevOps processes right from within the GitHub experience.

App Center already allows developers to connect the service to their GitHub repositories (as well as Visual Studio Team Services and Bitbucket). Developers can use the service to build their apps for the supported platforms and then use the service’s testing tools to run automated UI tests of their iOS and Android apps on actual physical devices, similar to Microsoft’s existing Xamarin Test Cloud service. Once tested, developers can use the service to distribute their apps to the various app stores or to their internal and external beta testers using HockeyApp, which Microsoft acquired back in 2014.

In addition to these test and build services, App Center also gives developers some insight into how users interact with their apps, as well as crash reports for when all that testing still missed an issue.

Like similar services, App Center also includes support for sending push notifications to users.

The integration with GitHub is now live in the GitHub marketplace.

07 May 2018

Microsoft brings more AI smarts to the edge

At its Build developer conference this week, Microsoft is putting a lot of emphasis on artificial intelligence and edge computing. To a large degree, that means bringing many of the existing Azure services to machines that sit at the edge, no matter whether that’s a large industrial machine in a warehouse or a remote oil-drilling platform. The service that brings all of this together is Azure IoT Edge, which is getting quite a few updates today. IoT Edge is a collection of tools that brings AI, Azure services and custom apps to IoT devices.

As Microsoft announced today, Azure IoT Edge, which sits on top of Microsoft’s IoT Hub service, is now getting support for Microsoft’s Cognitive Services APIs, for example, as well as support for Event Grid and Kubernetes containers. In addition, Microsoft is also open sourcing the Azure IoT Edge runtime, which will allow developers to customize their edge deployments as needed.

The highlight here is support for Cognitive Services for edge deployments. Right now, this is a bit of a limited service as it actually only supports the Custom Vision service, but over time, the company plans to bring other Cognitive Services to the edge as well. The appeal of this service is pretty obvious, too, as it will allow industrial equipment or even drones to use these machine learning models without internet connectivity so they can take action even when they are offline.
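
As a rough sketch of what that looks like in practice, the snippet below sends a camera frame to a vision model running as a local IoT Edge module. The port and /image route are assumptions modeled on the exported Custom Vision container; the point is simply that the scoring call never leaves the device.

```python
# A hedged sketch: score an image against a Custom Vision model running as a
# local Azure IoT Edge module. The port and /image route are assumptions based
# on the exported Custom Vision container; adjust them to your deployment.
import requests

LOCAL_SCORING_URL = "http://localhost:80/image"  # hypothetical endpoint of the local module

with open("camera_frame.jpg", "rb") as img:
    resp = requests.post(
        LOCAL_SCORING_URL,
        headers={"Content-Type": "application/octet-stream"},
        data=img,
    )

# The call never leaves the device, so it keeps working without connectivity.
for prediction in resp.json().get("predictions", []):
    print(prediction["tagName"], prediction["probability"])
```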

As far as AI goes, Microsoft also today announced that it will bring its new Brainwave deep neural network acceleration platform for real-time AI to the edge.

The company has also teamed up with Qualcomm to launch an AI developer kit for on-device inferencing on the edge. The focus of the first version of this kit will be on camera-based solutions, which doesn’t come as a major surprise given that Qualcomm recently launched its own vision intelligence platform.

IoT Edge is also getting a number of other updates that don’t directly involve machine learning. Kubernetes support is an obvious one and a smart addition, given that it will allow developers to build Kubernetes clusters that can span both the edge and a more centralized cloud.

The appeal of running Event Grid, Microsoft’s event routing service, at the edge is also pretty obvious, given that it’ll allow developers to connect services with far lower latency than if all the data had to run through a remote data center.

Other IoT Edge updates include the planned launch of a marketplace that will allow Microsoft partners and developers to share and monetize their edge modules, as well as a new certification program for hardware manufacturers to ensure that their devices are compatible with Microsoft’s platform. IoT Edge, as well as Windows 10 IoT and Azure Machine Learning, will also soon support hardware-accelerated model evaluation with DirectX 12 GPUs, which are available in virtually every modern Windows PC.

07 May 2018

Microsoft says 700M devices now run Windows 10

At its Build developer conference, Microsoft today announced that over 700 million devices now run Windows 10. Almost exactly a year ago, that number stood at 500 million. In addition, the company also today noted that Office 365 now has 135 million monthly active commercial users, up from 120 million last October.

Back in 2015, when Windows 10 launched, Microsoft’s original goal was to hit a billion devices by 2018. It quickly became clear that this was a bit too optimistic. While Windows 10 usage clearly continues to grow at a decent speed, we’re not likely to see it hit a billion devices anytime soon.

What Microsoft is probably more excited about anyway is the fact that its Office 365 and related Microsoft 365 subscription plans seem to be doing quite well. Over the course of the last couple of financial quarters, Office 365 commercial seats typically grew around 30 percent year over year, with revenue growth well outpacing that number.

We don’t yet have numbers for Microsoft 365, a relatively new subscription service that combines access to Windows 10, Office 365 and a number of mobile device management and security tools for businesses. But at this year’s Build conference, Microsoft is strongly emphasizing this new service and we’ll likely hear some subscriber numbers soon.

07 May 2018

Microsoft commits $25M to its AI for Accessibility program

If you’ve read Microsoft CEO Satya Nadella’s recent book, you know that making technology more accessible to people with disabilities is personal for him. Over the course of the last few years, large parts of the company have dedicated themselves to building tools that enable people with disabilities and mental health conditions to do more. Today, the company announced that it is putting more money behind these efforts through its $25 million, five-year AI for Accessibility project.

“Around the world, only one in 10 people with disabilities has access to assistive technologies and products,” Microsoft president Brad Smith writes in today’s announcement. “By making AI solutions more widely available, we believe technology can have a broad impact on this important community.”

Like the company’s AI for Earth project, which launched last year, AI for Accessibility aims to provide seed grants to developers, universities, inventors and NGOs. The focus of their projects needs to be on “creating solutions that will create new opportunities and assist people with disabilities with work, life, and human connections.” Then, the company will take the projects that show the most promise and connect their teams with its own experts to help them scale.

In addition to all of this, Microsoft also pledges to bring inclusive design to its products.

“Disabilities can be permanent, temporary or situational. By innovating for people with disabilities, we are innovating for us all,” Smith writes. “By ensuring that technology fulfills its promise to address the broadest societal needs, we can empower everyone – not just individuals with disabilities – to achieve more.”