

You train a model and register it in your Azure Machine Learning workspace. You are ready to deploy the model as a real-time web service.

You deploy the model to an Azure Kubernetes Service (AKS) inference cluster, but the deployment fails because an error occurs when the service runs the entry script that is associated with the model deployment.

You need to debug the error by iteratively modifying the code and reloading the service, without requiring a re-deployment of the service for each code update.

What should you do?
A. Modify the AKS service deployment configuration to enable application insights and re-deploy to AKS.
B. Create an Azure Container Instances (ACI) web service deployment configuration and deploy the model on ACI.
C. Add a breakpoint to the first line of the entry script and redeploy the service to AKS.
D. Create a local web service deployment configuration and deploy the model to a local Docker container.
E. Register a new version of the model and update the entry script to load the new version of the model from its registered path.

Answer: B

Explanation:

This explanation covers how to work around or solve common Docker deployment errors with Azure Container Instances (ACI) and Azure Kubernetes Service (AKS) when using Azure Machine Learning.

The recommended and most up-to-date approach to model deployment is the Model.deploy() API with an Environment object as an input parameter. In that case, the service creates a base Docker image for you during the deployment stage and mounts the required models, all in one call.
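
As a rough illustration of that pattern, the sketch below deploys a registered model to ACI with Model.deploy(), passing an Environment through an InferenceConfig. The model name "my-model", the files score.py and environment.yml, and the service name are placeholders chosen for this example, not values taken from the question.

```python
from azureml.core import Environment, Model, Workspace
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()

# Registered model to deploy (name is a placeholder).
model = Model(ws, name="my-model")

# Environment object passed to Model.deploy() via the inference config;
# the service builds the base Docker image from it during deployment.
env = Environment.from_conda_specification(
    name="inference-env", file_path="environment.yml"
)
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# ACI web service deployment configuration.
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(
    workspace=ws,
    name="my-aci-service",
    models=[model],
    inference_config=inference_config,
    deployment_config=deployment_config,
)
service.wait_for_deployment(show_output=True)
print(service.get_logs())
```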

The basic deployment tasks are:

1. Register the model in the workspace model registry.
2. Define an inference configuration, consisting of the entry script and the environment.
3. Deploy the model to a compute target using the inference configuration and a deployment configuration.

A sketch of these tasks for local debugging follows below.
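For the iterative-debugging requirement in the question (modify the entry script and reload the service without redeploying), the same Model.deploy() call can target a local Docker container through LocalWebservice. This is a minimal sketch under assumed names; the port, model name, service name, and input payload are illustrative only and depend on what score.py expects.

```python
from azureml.core import Environment, Model, Workspace
from azureml.core.model import InferenceConfig
from azureml.core.webservice import LocalWebservice

ws = Workspace.from_config()
model = Model(ws, name="my-model")  # placeholder model name

env = Environment.from_conda_specification(
    name="inference-env", file_path="environment.yml"
)
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# Deploy to a local Docker container instead of a remote cluster.
deployment_config = LocalWebservice.deploy_configuration(port=6789)
service = Model.deploy(ws, "local-debug", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)

# After editing score.py, reload the service in place -- no redeployment needed.
service.reload()
print(service.run(input_data='{"data": [[1, 2, 3, 4]]}'))
```

LocalWebservice.reload() restarts the local container with the updated entry script, which is what makes the local deployment route convenient for iterating on entry-script errors before deploying back to AKS.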
