With only two days to go until its annual Build developer conference, Microsoft is giving a face-lift to Azure Machine Learning, its publicly available cloud service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models.

The company announced a slew of new ML products as well as tweaks to some of its existing services, all of which highlight Microsoft’s continuing commitment to democratizing access to AI by building the most productive AI platform.

Here is a brief snapshot of the ML enhancements announced:

Bringing AI to Azure Search

With the general availability of its cognitive search capability, Microsoft is bringing AI to Azure Search. This will allow customers to apply Cognitive Services algorithms to extract new insights from their structured and unstructured content. Microsoft will also preview a new feature that lets developers store the AI insights gained from cognitive search and use them to deliver knowledge-rich experiences through Power BI visualizations or machine learning models.
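To make the enrichment idea concrete, here is a minimal sketch of defining a cognitive skillset against the Azure Search REST API. The service URL, admin key, field names, and skillset name are all made-up placeholders, and the request is only constructed, not sent:

```python
# Hypothetical sketch: define an Azure Search skillset that runs a
# Cognitive Services entity-recognition skill over indexed documents.
# The endpoint, key, and field paths below are illustrative placeholders.
import json
import urllib.request

SEARCH_SERVICE = "https://my-search-service.search.windows.net"  # placeholder
API_KEY = "<admin-key>"  # placeholder

# A skillset tells the indexer which AI enrichments to apply to each document.
skillset = {
    "name": "demo-skillset",
    "description": "Extract organization names from document text",
    "skills": [
        {
            "@odata.type": "#Microsoft.Skills.Text.EntityRecognitionSkill",
            "categories": ["Organization"],
            "inputs": [{"name": "text", "source": "/document/content"}],
            "outputs": [{"name": "organizations", "targetName": "orgs"}],
        }
    ],
}

def build_request():
    """Prepare (but do not send) the PUT request that creates the skillset."""
    url = f"{SEARCH_SERVICE}/skillsets/{skillset['name']}?api-version=2019-05-06"
    return urllib.request.Request(
        url,
        data=json.dumps(skillset).encode("utf-8"),
        headers={"Content-Type": "application/json", "api-key": API_KEY},
        method="PUT",
    )

req = build_request()
```

Once such a skillset is attached to an indexer, each document passes through the skill during indexing, and the extracted entities become searchable fields.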

Integrating MLOps capabilities with Azure DevOps

A suite of MLOps capabilities will tie into Azure DevOps. This integration will promote reproducibility, auditability and automation of the end-to-end machine learning lifecycle.
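As a rough illustration of what that lifecycle automation could look like, here is a hypothetical `azure-pipelines.yml` sketch that maps train, register, and deploy steps onto the Azure ML CLI extension. The command names and arguments are illustrative assumptions, not verbatim:

```yaml
# Hypothetical Azure DevOps pipeline sketch: each step is one stage of the
# ML lifecycle (the exact az ml commands shown are illustrative).
trigger:
  branches:
    include: [ main ]

pool:
  vmImage: 'ubuntu-latest'

steps:
  - script: az extension add -n azure-cli-ml
    displayName: 'Install the Azure ML CLI extension'

  - script: az ml run submit-script -e my-experiment train.py
    displayName: 'Train model (reproducible, logged run)'

  - script: az ml model register -n my-model -p outputs/model.pkl
    displayName: 'Register model (auditable version history)'

  - script: az ml model deploy -n my-service -m my-model:1 --ic inference_config.yml --dc deployment_config.yml
    displayName: 'Deploy as a web service'
```

Because every training run, model version, and deployment goes through the pipeline, the history Azure DevOps already keeps for code changes extends to models as well.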

Automated and intuitive visual ML interface

Azure Machine Learning already supports leading AI frameworks such as Facebook’s PyTorch, Google’s TensorFlow, and scikit-learn. It will now add a more intuitive, automated visual machine learning interface that provides a simple, zero-code model creation and deployment experience with drag-and-drop capabilities, simplifying the development of high-quality models.

Hardware-accelerated models

Another major announcement from Microsoft is the general availability of hardware-accelerated machine learning models that run on specialized FPGAs (field-programmable gate arrays). This will enable extremely low-latency, cost-effective inferencing.

ONNX Runtime, the inference engine for models in the ONNX (Open Neural Network Exchange) format, will now support Nvidia’s TensorRT and Intel’s nGraph for high-speed, hardware-accelerated inferencing on Nvidia and Intel hardware. ONNX is an open-source AI ecosystem developed by Microsoft in collaboration with Facebook, IBM, Huawei, Intel, AMD, ARM, and Qualcomm that allows developers to move models seamlessly between frameworks.
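For a sense of what consuming such a model looks like, here is a minimal Python sketch using the `onnxruntime` package. The import is guarded so the sketch degrades gracefully where the package is not installed, and `"model.onnx"` and the input names are placeholders:

```python
# Hypothetical sketch of scoring a model with ONNX Runtime.
# The model path and input names are placeholders; onnxruntime
# may be absent, so its import is guarded.
try:
    import onnxruntime as ort
except ImportError:  # onnxruntime not installed
    ort = None

def score(model_path, feed):
    """Run one inference pass; `feed` maps input names to arrays."""
    if ort is None:
        raise RuntimeError("onnxruntime is required to score ONNX models")
    sess = ort.InferenceSession(model_path)
    # None => return every output the model declares
    return sess.run(None, feed)
```

The appeal of the format is that the same `score` call works regardless of whether the model was exported from PyTorch, TensorFlow, or scikit-learn, and ONNX Runtime picks the fastest execution path available on the machine.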

Scott Guthrie, Executive Vice President of Microsoft’s Cloud and AI Group, says that the key new innovations in Azure Machine Learning will simplify building, training, and deploying machine learning models at scale.

Besides simplifying model creation in Azure Machine Learning, Microsoft is introducing several new products and solutions to meet the increasing demands of hybrid cloud and edge computing, which you can read about on Microsoft’s blog.

We can be sure that Azure will get generous stage time at the Build 2019 conference.

Related read: Microsoft’s AI Business School: Prepping Business Leaders to Take on AI