Categories
Azure Internet of Things IoT Hub Microsoft

Azure IoT Hub deployment with ARM template and how to retrieve the primary connection string

In this article we will create an ARM template that will deploy an IoT Hub and output the primary connection string along with the Event Hub compatible connection string.

If you are building an IoT solution and need reliable and secure communication between your IoT devices and your cloud-hosted solution backend, then you are probably looking at the following Azure product: Azure IoT Hub.

Today we will focus on creating an Azure Resource Manager template to easily set up and deploy an IoT Hub to a resource group. Our ARM template will be created in a new Azure Resource Group deployment project in Visual Studio.

 

Creation

Let’s declare the parameters of the ARM template:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "iotHubName": {
      "type": "string",
      "defaultValue": "[concat('myIoTHub', uniqueString(resourceGroup().id))]"
    },
    "iotHubSkuName": {
      "type": "string",
      "defaultValue": "F1",
      "allowedValues": [
        "F1",
        "B1",
        "B2",
        "B3",
        "S1",
        "S2",
        "S3"
      ]
    },
    "iotHubSkuCapacity": {
      "type": "int",
      "defaultValue": 1,
      "minValue": 1
    },
    "iotHubMessageRetentionInDays": {
      "type": "int",
      "defaultValue": 1,
      "allowedValues": [ 1, 2, 3, 4, 5, 6, 7 ]
    },
    "iotHubPartitionCount": {
      "type": "int",
      "defaultValue": 2,
      "minValue": 2
    }
  }
  ...
}
  • iotHubName: the name of the IoT Hub. If no parameter is provided a default name such as myIoTHubesf732tk64rr6 is generated.
  • iotHubSkuName: the pricing tier of the IoT Hub. If no parameter is provided the pricing tier will be free.
  • iotHubSkuCapacity: the pricing tier and the capacity determine the maximum daily quota of messages that you can send. If no parameter is provided the capacity is set to 1.
  • iotHubMessageRetentionInDays: specifies how long the IoT hub will maintain device-to-cloud events.
  • iotHubPartitionCount: the number of partitions relates the device-to-cloud messages to the number of simultaneous readers of these messages. If no parameter is provided the number is set to 2.
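For reference, these parameters can also be supplied through a parameters file at deployment time. A minimal example with illustrative values (any parameter left out falls back to its default):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "iotHubName": { "value": "myIoTHubDemo" },
    "iotHubSkuName": { "value": "S1" },
    "iotHubSkuCapacity": { "value": 1 }
  }
}
```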

 

Now we will declare the resources for the IoT Hub:

{
  ...
  "resources": [
    {
      "apiVersion": "2018-04-01",
      "name": "[parameters('iotHubName')]",
      "type": "Microsoft.Devices/IotHubs",
      "location": "[resourceGroup().location]",
      "sku": {
        "name": "[parameters('iotHubSkuName')]",
        "capacity": "[parameters('iotHubSkuCapacity')]"
      },
      "properties": {
        "eventHubEndpoints": {
          "events": {
            "retentionTimeInDays": "[parameters('iotHubMessageRetentionInDays')]",
            "partitionCount": "[parameters('iotHubPartitionCount')]"
          },
          "operationsMonitoringEvents": {
            "retentionTimeInDays": "[parameters('iotHubMessageRetentionInDays')]",
            "partitionCount": "[parameters('iotHubPartitionCount')]"
          }
        }
      }
    }
  ]
  ...
}

There are a few things to note here:

  • The hub is declared with the following type: Microsoft.Devices/IotHubs.
  • The sku is composed of the SkuName and the SkuCapacity.

 

And to finish, we will output the connection string and the Event Hub compatible connection string of the IoT hub:

{
  ...
  "outputs": {
    "IoTHubConnectionString": {
      "type": "string",
      "value": "[concat('HostName=', reference(resourceId('Microsoft.Devices/IoTHubs', parameters('iotHubName')), providers('Microsoft.Devices', 'IoTHubs').apiVersions[0]).hostName, ';SharedAccessKeyName=iothubowner;SharedAccessKey=', listKeys(resourceId('Microsoft.Devices/IotHubs', parameters('iotHubName')), providers('Microsoft.Devices', 'IoTHubs').apiVersions[0]).value[0].primaryKey)]"
    },
    "IoTHubEventHubCompatibleConnectionString": {
      "type": "string",
      "value": "[concat('Endpoint=', reference(resourceId('Microsoft.Devices/IoTHubs', parameters('iotHubName')), providers('Microsoft.Devices', 'IoTHubs').apiVersions[0]).eventHubEndpoints.events.endpoint, ';SharedAccessKeyName=iothubowner;SharedAccessKey=', listKeys(resourceId('Microsoft.Devices/IotHubs', parameters('iotHubName')), providers('Microsoft.Devices', 'IoTHubs').apiVersions[0]).value[0].primaryKey, ';EntityPath=', reference(resourceId('Microsoft.Devices/IoTHubs', parameters('iotHubName')), providers('Microsoft.Devices', 'IoTHubs').apiVersions[0]).eventHubEndpoints.events.path)]"
    }
  }
}

As you can see, we take advantage of the ARM template functions listKeys and providers.

The providers function is useful to get the latest API version for a specific resource provider namespace, whereas listKeys is the function that allows us to retrieve the access keys of the IoT hub, from which we read the primary key.

By default, when a new IoT hub is created, five access policies are created: iothubowner, service, device, registryRead and registryReadWrite. In our template we build the primary connection string from the first one, iothubowner.
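To make the shape of those two outputs concrete, here is a quick Python sketch (illustrative only: the host name, key and entity path are made up) that assembles the same connection-string formats the template builds with concat:

```python
def iot_hub_connection_string(host_name, key_name, key):
    # Matches the template's first output:
    # HostName=...;SharedAccessKeyName=...;SharedAccessKey=...
    return f"HostName={host_name};SharedAccessKeyName={key_name};SharedAccessKey={key}"


def event_hub_compatible_connection_string(endpoint, key_name, key, entity_path):
    # Matches the template's second output, which uses the Event Hub
    # endpoint and the EntityPath of the built-in events endpoint.
    return (
        f"Endpoint={endpoint};SharedAccessKeyName={key_name};"
        f"SharedAccessKey={key};EntityPath={entity_path}"
    )


# Hypothetical values, for illustration only.
print(iot_hub_connection_string("myhub.azure-devices.net", "iothubowner", "abc123="))
# HostName=myhub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=abc123=
```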

 

Example of use

The ARM template is now ready, so let’s open Windows PowerShell and try it:

.\Deploy-AzureResourceGroup.ps1 -ResourceGroupName 'MyResourceGroupName' -ResourceGroupLocation 'canadaeast' -TemplateFile '.\azuredeploy.json'

...

OutputsString      :
                     Name                                      Type                       Value
                     ===============                           =========================  ==========
                     ioTHubConnectionString                    String                     HostName=myIoTHubfkr54oehqy5pa.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=Vs38UdjTt5Wj1cV3Wji8e5bOx6nSsYJ/JA6ZrO6WOiA=
                     ioTHubEventHubCompatibleConnectionString  String                     Endpoint=sb://ihsuprodblres036dednamespace.servicebus.windows.net/;SharedAccessKeyName=iothubowner;SharedAccessKey=Vs38UdjTt5Wj1cV3Wji8e5bOx6nSsYJ/JA6ZrO6WOiA=;EntityPath=iothub-ehub-myiothubfk-1163803-9a6e8b6b6c

If everything goes well, you should see the same kind of output as above.

 

To go further

In the template we are outputting the connection string as an example. But in a more advanced scenario with Azure Functions, you could directly set the Function App application settings that require the connection string, like the following:

{
  ...
  "resources": [
    {
      "apiVersion": "2016-08-01",
      "name": "[parameters('functionAppName')]",
      "type": "Microsoft.Web/sites",
      "location": "[resourceGroup().location]",
      "kind": "functionapp",
      "dependsOn": [
        ...
        "[concat('Microsoft.Devices/IotHubs/', parameters('iotHubName'))]"
      ],
      "properties": {
        "siteConfig": {
          "appSettings": [
            {
              "name": "IoTHubConnectionString",
              "value": "[concat('Endpoint=', reference(resourceId('Microsoft.Devices/IoTHubs', parameters('iotHubName')), providers('Microsoft.Devices', 'IoTHubs').apiVersions[0]).eventHubEndpoints.events.endpoint, ';SharedAccessKeyName=iothubowner;SharedAccessKey=', listKeys(resourceId('Microsoft.Devices/IotHubs', parameters('iotHubName')), providers('Microsoft.Devices', 'IoTHubs').apiVersions[0]).value[0].primaryKey, ';EntityPath=', reference(resourceId('Microsoft.Devices/IoTHubs', parameters('iotHubName')), providers('Microsoft.Devices', 'IoTHubs').apiVersions[0]).eventHubEndpoints.events.path)]"
            }
          ]
        }
      }
    }
  ]
  ...
}

 

Summary

We have seen how to create an ARM template that will deploy an Azure IoT Hub and output the connection string and the Event Hub compatible connection string.

 

You can get the project source code here:

Browse the GitHub repository

 

Please feel free to comment or contact me if you have any question about this article.

Categories
Azure Cognitive Services Microsoft Vision

Use Azure Cognitive Services to recognize karting images with Custom Vision service

For this we will use the public preview of Custom Vision Service Prediction API.

In the previous article about Custom Vision Service we saw how to train Azure Cognitive Services to recognize karting images thanks to Custom Vision Service Training API in a .NET console application.

 

Today we will see how to actually recognize karting images using Custom Vision Service Prediction API relying on the model we trained with our own data.

 

To start we will create a project with the .NET Console App template in Visual Studio.

Before starting you will need a Prediction Key and a Training Key. You can get them from www.customvision.ai. If you have an Azure account, you can also get them by creating a Custom Vision Service in the Azure Portal or, as seen in a previous article, via an ARM template.

 

Creation

The goal here is to build a console application that will allow us to predict karting and other racing sports images using a previously trained Custom Vision Service project.

The first thing you will need to do is add the following NuGet packages to your project: Microsoft.Cognitive.CustomVision.Prediction and Microsoft.Cognitive.CustomVision.Training.

 

Let’s create the main method of our console application and a start method:

using Microsoft.Cognitive.CustomVision.Prediction;
using Microsoft.Cognitive.CustomVision.Training;
...
using Microsoft.Rest;
using System;
using System.Linq;
using System.Threading.Tasks;
...

namespace VisionRacing.PredictingRacingImages
{
    class Program
    {
        static void Main(string[] args)
        {
            var trainingKey = "Your Custom Vision training key.";
            var predictionKey = "Your Custom Vision prediction key.";

            Start(trainingKey, predictionKey).Wait();
        }

        private static async Task Start(string trainingKey, string predictionKey)
        {
            var projectName = " ";
            var trainingApi = GetTrainingApi(trainingKey);
            var predictionEndpoint = GetPredictionEndpoint(predictionKey);

            while (!string.IsNullOrEmpty(projectName))
            {
                try
                {
                    Console.Clear();
                    await ListProjects(trainingApi);

                    Console.WriteLine("Please enter a project name or press enter to exit:");
                    projectName = Console.ReadLine();

                    if (!string.IsNullOrEmpty(projectName))
                    {
                        await WorkOnProject(trainingApi, predictionEndpoint, projectName);
                    }
                }
                catch (Exception ex)
                {
                    Console.ForegroundColor = ConsoleColor.Red;
                    Console.WriteLine($"An error occurred: {Environment.NewLine}{ex.Message}");

                    if (ex is HttpOperationException)
                    {
                        Console.WriteLine(((HttpOperationException)ex).Response.Content);
                    }

                    Console.ResetColor();
                    Console.WriteLine();
                    Console.WriteLine();
                    Console.WriteLine("Press any key to continue");
                    Console.ReadLine();
                }
            }
        }

        ...

        private static PredictionEndpoint GetPredictionEndpoint(string predictionKey)
        {
            return new PredictionEndpoint
            {
                ApiKey = predictionKey
            };
        }

        private static TrainingApi GetTrainingApi(string trainingKey)
        {
            return new TrainingApi
            {
                ApiKey = trainingKey
            };
        }

        private static async Task ListProjects(TrainingApi trainingApi)
        {
            var projects = await trainingApi.GetProjectsAsync();

            if (projects.Any())
            {
                Console.WriteLine($"Existing projects: {Environment.NewLine}{string.Join(Environment.NewLine, projects.Select(p => p.Name))}{Environment.NewLine}");
            }
        }

        ...
    }
}

The Start method will allow us to list all the projects associated with our account. You should enter the name of an existing trained project.

We also create three other methods:

  • GetTrainingApi: this will return a TrainingApi object using the provided Training Key.
  • GetPredictionEndpoint: this will return a PredictionEndpoint object using the provided Prediction Key.
  • ListProjects: it will call the TrainingApi method GetProjectsAsync to list all the projects.

 

When a project name is entered, the WorkOnProject method is called:

using Microsoft.Cognitive.CustomVision.Prediction;
using Microsoft.Cognitive.CustomVision.Training;
using Microsoft.Cognitive.CustomVision.Training.Models;
...
using System;
using System.Linq;
using System.Threading.Tasks;
...

namespace VisionRacing.PredictingRacingImages
{
    class Program
    {
        ...
        private static async Task WorkOnProject(TrainingApi trainingApi, PredictionEndpoint predictionEndpoint, string name)
        {
            var option = " ";

            while (!string.IsNullOrEmpty(option))
            {
                Console.Clear();

                var project = await GetOrCreateProject(trainingApi, name);
                Console.WriteLine($"  --- Project {project.Name} ---");
                Console.WriteLine();

                await ListProjectTags(trainingApi, project.Id);

                Console.WriteLine("Type an option number:");
                Console.WriteLine("  1: Predict Karting images");
                Console.WriteLine("  2: Predict F1 images");
                Console.WriteLine("  3: Predict MotoGP images");
                Console.WriteLine("  4: Predict Rally images");
                Console.WriteLine("  5: Predict Test images");
                Console.WriteLine();
                Console.WriteLine($"Press any other key to exit project {name}");
                option = Console.ReadLine();

                switch (option)
                {
                    case "1":
                        await StartPrediction(predictionEndpoint, project.Id, ImageType.Karting);
                        break;
                    case "2":
                        await StartPrediction(predictionEndpoint, project.Id, ImageType.F1);
                        break;
                    case "3":
                        await StartPrediction(predictionEndpoint, project.Id, ImageType.MotoGP);
                        break;
                    case "4":
                        await StartPrediction(predictionEndpoint, project.Id, ImageType.Rally);
                        break;
                    case "5":
                        await StartPrediction(predictionEndpoint, project.Id, ImageType.Test);
                        break;
                    default:
                        option = string.Empty;
                        break;
                }
            }
        }
        
        ...

        private static async Task<Project> GetOrCreateProject(TrainingApi trainingApi, string name)
        {
            var projects = await trainingApi.GetProjectsAsync();

            var project = projects.Where(p => p.Name.ToUpper() == name.ToUpper()).SingleOrDefault();

            if (project == null)
            {
                project = await trainingApi.CreateProjectAsync(name);
            }

            return project;
        }

        ...

        private static async Task ListProjectTags(TrainingApi trainingApi, Guid projectId)
        {
            var tagList = await trainingApi.GetTagsAsync(projectId);

            if (tagList.Tags.Any())
            {
                Console.WriteLine($"Tags: {Environment.NewLine}{string.Join(Environment.NewLine, tagList.Tags.Select(t => $"  {t.Name} (Image count: {t.ImageCount})"))}{Environment.NewLine}");
            }
            else
            {
                Console.WriteLine($"No tags yet...{Environment.NewLine}");
            }
        }

        ...

        private enum ImageType
        {
            F1,
            Karting,
            MotoGP,
            Rally,
            Test
        }
    }
}

We will also use two other methods:

  • GetOrCreateProject: will try to retrieve an existing project, or create it if it doesn’t exist.
  • ListProjectTags: lists all the existing tags of a project.

The WorkOnProject method will get a project using the name provided, list the project tags and then show several options.

 

The five different options will allow us to predict different kinds of images: Karting, F1, MotoGP, Rally and Test images.

Here is the code that shows how to predict a type of image:

using Microsoft.Cognitive.CustomVision.Prediction;
...
using System;
using System.Linq;
using System.Threading.Tasks;
using ImageUrl = Microsoft.Cognitive.CustomVision.Prediction.Models.ImageUrl;

namespace VisionRacing.PredictingRacingImages
{
    class Program
    {
        ...
        private static int GetImageCountPerImageType(ImageType imageType)
        {
            switch (imageType)
            {
                case ImageType.F1:
                    return 7;
                case ImageType.Karting:
                    return 35;
                case ImageType.MotoGP:
                    return 7;
                case ImageType.Rally:
                    return 6;
                case ImageType.Test:
                    return 10;
                default:
                    return 0;
            }
        }

        private static string GetImageDescriptionPerImageTypeAndNumber(ImageType imageType, int imageNumber)
        {
            switch (imageType)
            {
                case ImageType.Test:
                    switch (imageNumber)
                    {
                        case 1:
                        case 2:
                            return "Solo kart racer on track";
                        case 3:
                        case 7:
                        case 10:
                            return "Multiple kart racers on track";
                        case 4:
                        case 9:
                            return "Solo kart racer on pre-grid";
                        case 5:
                            return "Kart racers on a podium";
                        case 6:
                            return "A tent in a karting paddock";
                        case 8:
                            return "A racing helmet";
                        default:
                            return string.Empty;
                    }
                case ImageType.F1:
                case ImageType.Karting:
                case ImageType.MotoGP:
                case ImageType.Rally:
                default:
                    return string.Empty;
            }
        }

        ...

        private static async Task StartPrediction(PredictionEndpoint predictionEndpoint, Guid projectId, ImageType imageType)
        {
            var imageTypeCount = GetImageCountPerImageType(imageType);

            for (int i = 1; i <= imageTypeCount; i++)
            {
                Console.Clear();
                Console.WriteLine($"Image {imageType} {i}/{imageTypeCount} prediction in progress...");

                var imageDescription = GetImageDescriptionPerImageTypeAndNumber(imageType, i);

                if (!string.IsNullOrEmpty(imageDescription))
                {
                    Console.WriteLine();
                    Console.WriteLine($"Description: {imageDescription}");
                }

                var imagePredictionResult = await predictionEndpoint.PredictImageUrlWithNoStoreAsync(projectId, new ImageUrl($"https://github.com/vivienchevallier/Article-AzureCognitive.Vision-Racing/raw/master/Images/{imageType}/{imageType}%20({i}).jpg"));

                Console.Clear();

                if (imagePredictionResult.Predictions.Any())
                {
                    Console.WriteLine($"Image {imageType} {i}/{imageTypeCount}: {imageDescription}{Environment.NewLine}{string.Join(Environment.NewLine, imagePredictionResult.Predictions.Select(p => $"  {p.Tag}: {p.Probability:P1}"))}{Environment.NewLine}");
                }
                else
                {
                    Console.WriteLine($"Image {imageType} {i}/{imageTypeCount}: no predictions yet...{Environment.NewLine}");
                }

                if (i < imageTypeCount)
                {
                    Console.WriteLine("Press enter to predict next image or any other key to stop predictions");

                    if (Console.ReadKey().Key != ConsoleKey.Enter)
                    {
                        break;
                    }
                }
                else
                {
                    Console.WriteLine("All images predicted... Press any key to continue");
                    Console.ReadLine();
                }
            }
        }
        ...
    }
}

Here are more details about the methods used:

  • GetImageCountPerImageType: returns the number of images available on GitHub for the specified ImageType.
  • GetImageDescriptionPerImageTypeAndNumber: given an image type and an image number, returns a description of the image.
  • StartPrediction: predicts all the images of an image type by sending a request to the service for each image.
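As a side note, the image URLs requested by StartPrediction follow a simple pattern. A small Python sketch of that pattern (the helper name is ours, for illustration only):

```python
# Base URL of the sample image repository used in this article
# (taken from the StartPrediction code above).
BASE_URL = "https://github.com/vivienchevallier/Article-AzureCognitive.Vision-Racing/raw/master/Images"


def image_url(image_type: str, number: int) -> str:
    # File names look like "Karting (1).jpg"; "%20" is the URL-encoded space.
    return f"{BASE_URL}/{image_type}/{image_type}%20({number}).jpg"


print(image_url("Karting", 1))
# https://github.com/vivienchevallier/Article-AzureCognitive.Vision-Racing/raw/master/Images/Karting/Karting%20(1).jpg
```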

 

Example of use

The console application is now ready to run, let’s execute it:

Existing projects:
Test Project

Please enter a project name or press enter to exit:

When starting, it will list the existing projects and allow you to enter a project name, new or existing. If you followed the previous article’s steps, your trained project should appear in the list.

 

  --- Project Test Project ---


Tags:
  Karting (Image count: 35)
  Racing (Image count: 35)

Type an option number:
  1: Predict Karting images
  2: Predict F1 images
  3: Predict MotoGP images
  4: Predict Rally images
  5: Predict Test images


Press any other key to exit project Test Project

Select option 5 to try to predict the karting images among the test images.

 

Image Test 1/10 prediction in progress...

Description: Solo kart racer on track

...

Image Test 1/10: Solo kart racer on track
  Racing: 99.9%
  Karting: 99.9%

Press enter to predict next image or any other key to stop predictions

The first image is a solo kart racer on track and is unknown to our previously trained project. The service is able to predict, with a probability of 99.9%, that this image can be recognized with the tags Racing and Karting. Amazing!

The same goes for image 2; for images 3, 7 and 10, showing multiple kart racers on track; and for images 4 and 9, showing a solo kart racer on pre-grid.

However, image 5 showing kart racers on a podium and image 6 showing a tent in a karting paddock are predicted with a probability of 0%. Image 8 showing a racing helmet gets the same result.

Considering that only 35 images were used to train our project we can say that the results are really good.
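Note that the per-tag probabilities are independent (Racing and Karting can both score 99.9% on the same image), so a natural way to consume a result is to keep every tag above a chosen threshold. A minimal Python sketch, where the helper name and the 0.5 threshold are our own illustrative choices:

```python
def tags_above_threshold(predictions, threshold=0.5):
    """predictions: list of (tag, probability) pairs for one image."""
    # Sort by descending probability, keep tags at or above the threshold.
    ranked = sorted(predictions, key=lambda tp: tp[1], reverse=True)
    return [tag for tag, probability in ranked if probability >= threshold]


print(tags_above_threshold([("Racing", 0.999), ("Karting", 0.999), ("F1", 0.02)]))
# ['Racing', 'Karting']
```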

 

Once all the test images are predicted you’ll get the following output:

Image Test 10/10: Multiple kart racers on track
  Racing: 99.9%
  Karting: 99.9%

All images predicted... Press any key to continue

 

To go further

Now that you have predicted test images you could try to predict one of the four other types like F1.

You are then going to discover unexpected results. Karting and Formula 1 being visually very close, the service will sometimes predict an image of a Formula 1 car as a karting image.

Image F1 1/7:
  Karting: 15.1%
  Racing: 13.9%

Press enter to predict next image or any other key to stop predictions

The first F1 image is not predicted as a karting image, but images 2 to 7 are, with a probability close to 100%.

 

If this happens, you’ll have to train your project with new images. In our case we added F1 images and then retrained our project to refine it.

Once that was done using the console application built in the previous article, here are the results we get:

Image F1 1/7:
  F1: 99.6%
  Racing: 99.5%
  Karting: 0.0%

Press enter to predict next image or any other key to stop predictions
Image F1 2/7:
  Racing: 100.0%
  F1: 99.9%
  Karting: 0.2%

Press enter to predict next image or any other key to stop predictions

This is pretty impressive! We trained our project with only seven new images tagged Racing and F1, and we are now able to predict that all seven F1 images can be recognized with the tags Racing and F1 with a probability of almost 100%. Moreover, the probability of those images being recognized with the tag Karting is close to 0%.

Then imagine how accurate a project can be when trained with thousands of images and multiple tags!

 

Summary

In this article and the previous one, we have built two .NET console applications allowing us to train Azure Cognitive Services to recognize racing/karting images with the Custom Vision Service Training API and to accurately predict racing/karting images with the Custom Vision Service Prediction API.

Now don’t forget that the Custom Vision service is still in preview: new features like object detection are being added, and the APIs and SDKs will continue to be updated.

Custom Vision Service Prediction is a really interesting and powerful service, I’ll definitely keep an eye on it to see how it evolves!

 

You can get the project source code here:

Browse the GitHub repository

(Note that the project uses .NET Framework 4.7)

   

Please feel free to comment or contact me if you have any question about this article.

Categories
Azure Microsoft

Global Azure Bootcamp Montréal and Ottawa 2018

All around the world user groups and communities want to learn about Azure and Cloud Computing!

On April 21, 2018, all communities will come together once again in the fifth great Global Azure Bootcamp event!

This free event is a great way to learn more about Azure, and I’ll be at Lixar HQ in Ottawa to present and assist you all day.

Categories
Azure Cognitive Services Microsoft Vision

Training Azure Cognitive Services to recognize karting images with Custom Vision service

For this we will use the public preview of Custom Vision Service Training API.

We previously saw how to use the Azure Cognitive Services Vision API named Computer Vision to analyze a karting image.

Today, thanks to the public preview of Custom Vision Service, we will train Azure Cognitive Services to recognize karting images. This means we take a different approach: here we use our own data to recognize images, as opposed to the Computer Vision API, which relies on Microsoft’s data.

 

To start we will create a project with the .NET Console App template in Visual Studio.

Before starting you will need a Training Key. You can get it from www.customvision.ai. If you have an Azure account, you can also get it by creating a Custom Vision Service in the Azure Portal or, as seen in the previous article, via an ARM template.

 

Creation

The goal here is to build a console application allowing us to create, edit and delete Custom Vision Service projects. Then for each project we want to be able to add karting images and train the project.

The first thing you will need to do is add the following NuGet package to your project: Microsoft.Cognitive.CustomVision.Training.

 

Let’s create the main method of our console application and a start method:

...
using Microsoft.Cognitive.CustomVision.Training;
using Microsoft.Rest;
using System;
using System.Linq;
using System.Threading.Tasks;
...

namespace VisionRacing.TrainingRacingImages
{
    class Program
    {
        static void Main(string[] args)
        {
            var trainingKey = "Your Custom Vision training key.";

            Start(trainingKey).Wait();
        }

        private static async Task Start(string trainingKey)
        {
            var projectName = " ";
            var trainingApi = GetTrainingApi(trainingKey);

            while (!string.IsNullOrEmpty(projectName))
            {
                try
                {
                    Console.Clear();
                    await ListProjects(trainingApi);

                    Console.WriteLine("Please enter a project name or press enter to exit:");
                    projectName = Console.ReadLine();

                    if (!string.IsNullOrEmpty(projectName))
                    {
                        await WorkOnProject(trainingApi, projectName);
                    }
                }
                catch (Exception ex)
                {
                    Console.ForegroundColor = ConsoleColor.Red;
                    Console.WriteLine($"An error occurred: {Environment.NewLine}{ex.Message}");

                    if (ex is HttpOperationException)
                    {
                        Console.WriteLine(((HttpOperationException)ex).Response.Content);
                    }

                    Console.ResetColor();
                    Console.WriteLine();
                    Console.WriteLine();
                    Console.WriteLine("Press any key to continue");
                    Console.ReadLine();
                }
            }
        }

        ...

        private static TrainingApi GetTrainingApi(string trainingKey)
        {
            return new TrainingApi
            {
                ApiKey = trainingKey
            };
        }

        private static async Task ListProjects(TrainingApi trainingApi)
        {
            var projects = await trainingApi.GetProjectsAsync();

            if (projects.Any())
            {
                Console.WriteLine($"Existing projects: {Environment.NewLine}{string.Join(Environment.NewLine, projects.Select(p => p.Name))}{Environment.NewLine}");
            }
        }
        ...
    }
}

The Start method will allow us to list all the projects associated with our account. You can then enter the name of an existing project or a new one.

We also create two other methods:

  • GetTrainingApi: this will return a TrainingApi object using the provided Training Key.
  • ListProjects: it will call the TrainingApi method GetProjectsAsync to list all the projects.

 

When a project name is entered, the WorkOnProject method is called:

...
using Microsoft.Cognitive.CustomVision.Training;
using Microsoft.Cognitive.CustomVision.Training.Models;
using System;
using System.Linq;
using System.Threading.Tasks;
...

namespace VisionRacing.TrainingRacingImages
{
    class Program
    {
        ...
        private static async Task WorkOnProject(TrainingApi trainingApi, string name)
        {
            var option = " ";

            while (!string.IsNullOrEmpty(option))
            {
                Console.Clear();

                var project = await GetOrCreateProject(trainingApi, name);
                Console.WriteLine($"  --- Project {project.Name} ---");
                Console.WriteLine();

                await ListProjectTags(trainingApi, project.Id);

                Console.WriteLine("Type an option number:");
                Console.WriteLine("  1: Create Karting images");
                Console.WriteLine("  2: Create F1 images");
                Console.WriteLine("  3: Create MotoGP images");
                Console.WriteLine("  4: Create Rally images");
                Console.WriteLine("  5: Train project");
                Console.WriteLine("  6: Delete project");
                Console.WriteLine();
                Console.WriteLine($"Press any other key to exit project {name}");
                option = Console.ReadLine();

                switch (option)
                {
                    case "1":
                        await CreateTagImages(trainingApi, project.Id, ImageType.Karting);
                        break;
                    case "2":
                        await CreateTagImages(trainingApi, project.Id, ImageType.F1);
                        break;
                    case "3":
                        await CreateTagImages(trainingApi, project.Id, ImageType.MotoGP);
                        break;
                    case "4":
                        await CreateTagImages(trainingApi, project.Id, ImageType.Rally);
                        break;
                    case "5":
                        await TrainProject(trainingApi, project.Id);
                        break;
                    case "6":
                        await DeleteProject(trainingApi, project.Id);
                        option = string.Empty;
                        break;
                    default:
                        option = string.Empty;
                        break;
                }
            }
        }

        ...

        private static async Task<Project> GetOrCreateProject(TrainingApi trainingApi, string name)
        {
            var projects = await trainingApi.GetProjectsAsync();

            var project = projects.Where(p => p.Name.ToUpper() == name.ToUpper()).SingleOrDefault();

            if (project == null)
            {
                project = await trainingApi.CreateProjectAsync(name);
            }

            return project;
        }

        ...

        private static async Task ListProjectTags(TrainingApi trainingApi, Guid projectId)
        {
            var tagList = await trainingApi.GetTagsAsync(projectId);

            if (tagList.Tags.Any())
            {
                Console.WriteLine($"Tags: {Environment.NewLine}{string.Join(Environment.NewLine, tagList.Tags.Select(t => $"  {t.Name} (Image count: {t.ImageCount})"))}{Environment.NewLine}");
            }
            else
            {
                Console.WriteLine($"No tags yet...{Environment.NewLine}");
            }
        }
        
        ...

        private enum ImageType
        {
            F1,
            Karting,
            MotoGP,
            Rally
        }
    }
}

We will also use two other methods:

  • GetOrCreateProject: tries to retrieve an existing project, and creates it if it doesn’t exist.
  • ListProjectTags: lists all the existing tags of a project.

The WorkOnProject method gets a project using the provided name, lists the project tags and then shows several options.

 

The first four options allow you to add different kinds of images to the project (Karting, F1, Moto GP, and Rally images). Option five trains the project and option six deletes it.

Here is the code that shows how to add images with tags to a project:

...
using Microsoft.Cognitive.CustomVision.Training;
using Microsoft.Cognitive.CustomVision.Training.Models;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
...

namespace VisionRacing.TrainingRacingImages
{
    class Program
    {
        ...
        private static async Task CreateTagImages(TrainingApi trainingApi, Guid projectId, ImageType imageType)
        {
            Console.Clear();

            var imageTag = await GetOrCreateTag(trainingApi, projectId, imageType.ToString());
            var racingTag = await GetOrCreateTag(trainingApi, projectId, "Racing");
            var imageTypeCount = GetImageCountPerImageType(imageType);

            if (imageTag.ImageCount != imageTypeCount)
            {
                Console.WriteLine($"{imageType} images creation in progress...");

                var images = new List<ImageUrlCreateEntry>();

                for (int i = 1; i <= imageTypeCount; i++)
                {
                    images.Add(new ImageUrlCreateEntry($"https://github.com/vivienchevallier/Article-AzureCognitive.Vision-Racing/raw/master/Images/{imageType}/{imageType}%20({i}).jpg"));
                }

                var tags = new List<Guid>() { imageTag.Id, racingTag.Id };
                var response = await trainingApi.CreateImagesFromUrlsAsync(projectId, new ImageUrlCreateBatch(images, tags));

                Console.Clear();
                Console.WriteLine($"{imageType} images successfully created.");
            }
            else
            {
                Console.WriteLine($"{imageType} images already created.");
            }

            Console.WriteLine();
            Console.WriteLine("Press any key to continue");
            Console.ReadLine();
        }

        private static int GetImageCountPerImageType(ImageType imageType)
        {
            switch (imageType)
            {
                case ImageType.F1:
                    return 7;
                case ImageType.Karting:
                    return 35;
                case ImageType.MotoGP:
                    return 7;
                case ImageType.Rally:
                    return 6;
                default:
                    return 0;
            }
        }

        ...

        private static async Task<Tag> GetOrCreateTag(TrainingApi trainingApi, Guid projectId, string name)
        {
            var tagList = await trainingApi.GetTagsAsync(projectId);

            var tag = tagList.Tags.Where(t => t.Name.ToUpper() == name.ToUpper()).SingleOrDefault();

            if (tag == null)
            {
                tag = await trainingApi.CreateTagAsync(projectId, name);
            }

            return tag;
        }
        ...
    }
}

Here are more details about the methods used:

  • CreateTagImages: adds images to the project based on the provided ImageType. The images are publicly hosted on GitHub. Each image is associated with two tags: Racing and the value of the ImageType.
  • GetImageCountPerImageType: returns the number of images available on GitHub for the specified ImageType.
  • GetOrCreateTag: tries to retrieve an existing tag, and creates it if it doesn’t exist.

We associate two tags with each image; two tags is the minimum required by the service to be able to train a project.

 

And to finish, here the code to delete and train a project:

...
using Microsoft.Cognitive.CustomVision.Training;
using System;
using System.Threading;
using System.Threading.Tasks;
...

namespace VisionRacing.TrainingRacingImages
{
    class Program
    {
        ...
        private static async Task DeleteProject(TrainingApi trainingApi, Guid projectId)
        {
            Console.Clear();

            await trainingApi.DeleteProjectAsync(projectId);

            Console.WriteLine("Project deleted... Press any key to continue");
            Console.ReadLine();
        }

        ...

        private static async Task TrainProject(TrainingApi trainingApi, Guid projectId)
        {
            var iteration = await trainingApi.TrainProjectAsync(projectId);

            while (iteration.Status == "Training")
            {
                Console.Clear();
                Console.WriteLine("Training in progress...");

                Thread.Sleep(1000);

                iteration = await trainingApi.GetIterationAsync(projectId, iteration.Id);
            }

            iteration.IsDefault = true;
            trainingApi.UpdateIteration(projectId, iteration.Id, iteration);

            Console.WriteLine();
            Console.WriteLine("Project successfully trained... Press any key to continue");
            Console.ReadLine();
        }
        ...
    }
}

  • DeleteProject: a straightforward method to delete a project.
  • TrainProject: sends a request to the service to start training the project. Each training run of a project is called an iteration. Training a project is not immediate, which is why we loop to check the status of the iteration every second.

 

Example of use

The console application is now ready to run, let’s execute it:

Existing projects:
Test Project

Please enter a project name or press enter to exit:

When starting, it lists the existing projects and allows you to enter a project name, either new or existing.

 

  --- Project Test Project ---

No tags yet...

Type an option number:
  1: Create Karting images
  2: Create F1 images
  3: Create MotoGP images
  4: Create Rally images
  5: Train project
  6: Delete project

Press any other key to exit project test project

At first no images have been created. Select option one to create karting images.

 

Karting images creation in progress...

Karting images successfully created.

Press any key to continue

 

Once the images are created, the project will display the following:

  --- Project Test Project ---

Tags:
  Karting (Image count: 35)
  Racing (Image count: 35)

Type an option number:

...

 

Now you can train the project by selecting option five.

Training in progress...

Project successfully trained... Press any key to continue

 

Now that your project is trained with the Training API, you could start using the Prediction API.

 

To go further

If you try to train a project that doesn’t have at least two tags, here is the error message you’ll get:

An error occurred:
Operation returned an invalid status code 'BadRequest'
{"Code":"BadRequestTrainingValidationFailed","Message":"Not enough tags for training"}

 

If you train your project without having added anything since the previous training, you’ll get the following error:

An error occurred:
Operation returned an invalid status code 'BadRequest'
{"Code":"BadRequestTrainingNotNeeded","Message":"Nothing changed since last training"}

 

Summary

We have seen how to train Azure Cognitive Services to recognize karting images thanks to the Custom Vision Service Training API in a .NET console application.

Now we want to use our trained service: how do we do that?

Well in my next article about Azure Cognitive Services we will discover how to use the Custom Vision Service Prediction API!

 

You can get the project source code here:

Browse the GitHub repository

(Note that the project uses .NET Framework 4.7)

   

Please feel free to comment or contact me if you have any question about this article.

Categories
Azure Microsoft

In Ottawa presenting Azure Notification Hubs to the Ottawa Cloud Tech group on Wednesday, March 28, 2018

At Lixar HQ (373 Coventry Road, Ottawa, ON)

Thank you for attending this presentation!

You can find the source code of the demo project on GitHub:

I couldn’t present it to you, but if you want to take a look at it the project is built around the following technologies:

  • Azure Notification Hubs
  • Azure WebJobs
  • ASP.NET Web API
  • Xamarin Android

Download the PowerPoint presentation below:

Please feel free to contact me if you have any question about the presentation or the project.


Guys,

I’m glad to announce that I will play at home March 28, 2018 7:00 PM at Lixar HQ for a presentation of Azure Notification Hubs!

Together we’ll see how Azure Notification Hubs can make your life easier by giving you the ability to easily send millions of messages across multiple platforms.

In this 40 minutes session, we will discover Azure Notification Hubs through a presentation which will show you the main lines of the product, how to set it up with the Azure portal but also via ARM template.

Please note that in the same evening Eric Leonard will start the Meetup by giving a presentation about ARM Templates!

Doors open at 7 PM, and food and snacks will be served. Presentations will start around 7:30 PM.

If you are in Ottawa that night and want to learn more about Azure, Lixar will be the place to be, I hope to see you there!

Categories
Azure Cognitive Services Microsoft Vision

Custom Vision service deployment on Azure with ARM template

In this article we will create an ARM template that will deploy a Custom Vision Service and output the Training and Prediction API Keys.

Recently Custom Vision Service has been made available on the Azure Portal in public preview. Previously, it was only available via www.customvision.ai.

Today we will focus on creating an Azure Resource Manager template to easily setup and deploy a Custom Vision service to a resource group. Our ARM template will be created in a new Azure Resource Group deployment project in Visual Studio.

 

Creation

Let’s declare the parameters of the ARM template:

{
  "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "nameTraining": {
      "type": "string",
      "defaultValue": "myCustomVisionService"
    },
    "location": {
      "type": "string",
      "defaultValue": "southcentralus",
      "allowedValues": [
        "southcentralus"
      ]
    },
    "skuTraining": {
      "type": "string",
      "defaultValue": "F0",
      "allowedValues": [
        "F0",
        "S0"
      ],
      "metadata": {
        "description": "Describes the pricing tier for Training"
      }
    },
    "skuPrediction": {
      "type": "string",
      "defaultValue": "F0",
      "allowedValues": [
        "F0",
        "S0"
      ],
      "metadata": {
        "description": "Describes the pricing tier for Prediction"
      }
    }
  }
  ...
}
  • nameTraining: the name of the training service. If no parameter is provided the default name myCustomVisionService is used.
  • location: the location where the service will be deployed. Currently in the preview only southcentralus is available.
  • skuTraining: the pricing tier of the training service. If no parameter is provided the pricing tier will be F0.
  • skuPrediction: the pricing tier of the prediction service. If no parameter is provided the pricing tier will be F0.
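
When deploying, these defaults can be overridden with a standard ARM parameters file. Here is a minimal sketch (the file name azuredeploy.parameters.json is an assumption, matching the Visual Studio project template default):

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "nameTraining": { "value": "myCustomVisionService" },
    "skuTraining": { "value": "F0" },
    "skuPrediction": { "value": "F0" }
  }
}

Any parameter omitted from this file falls back to the defaultValue declared in the template.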

 

We will also need one variable in this ARM template:

{
  ...
  "variables": {
    "namePrediction": "[concat(take(replace(parameters('nameTraining'),'-',''), 18), '_Prediction')]"
  }
  ...
}
  • namePrediction: here we generate the name of the prediction service from the name of the training service: dashes are removed, the result is truncated to 18 characters, and the suffix _Prediction is appended. For example, the default name myCustomVisionService becomes myCustomVisionServ_Prediction.

 

Now we will declare the resources necessary to deploy a Custom Vision Service:

{
  ...
  "resources": [
    {
      "apiVersion": "2016-02-01-preview",
      "name": "[parameters('nameTraining')]",
      "type": "Microsoft.CognitiveServices/accounts",
      "location": "[parameters('location')]",
      "kind": "CustomVision.Training",
      "sku": {
        "name": "[parameters('skuTraining')]"
      },
      "properties": {}
    },
    {
      "apiVersion": "2016-02-01-preview",
      "name": "[variables('namePrediction')]",
      "type": "Microsoft.CognitiveServices/accounts",
      "location": "[parameters('location')]",
      "kind": "CustomVision.Prediction",
      "sku": {
        "name": "[parameters('skuPrediction')]"
      },
      "properties": {}
    }
  ]
  ...
}

Two things are worth noting here:

  • The training and prediction services are declared with the following type: Microsoft.CognitiveServices/accounts.
  • The two services are differentiated with the kind attribute: CustomVision.Training and CustomVision.Prediction.

 

And to finish, we will output the API keys for the training service and prediction service:

{
  ...
  "outputs": {
    "Training API Key": {
      "type": "string",
      "value": "[listKeys(resourceId('Microsoft.CognitiveServices/accounts', parameters('nameTraining')), providers('Microsoft.CognitiveServices', 'accounts').apiVersions[0]).key1]"
    },
    "Prediction API Key": {
      "type": "string",
      "value": "[listKeys(resourceId('Microsoft.CognitiveServices/accounts', variables('namePrediction')), providers('Microsoft.CognitiveServices', 'accounts').apiVersions[0]).key1]"
    }
  }
}

 

Example of use

The ARM template is now ready, let’s open a Windows PowerShell and try it:

.\Deploy-AzureResourceGroup.ps1 -ResourceGroupName 'MyResourceGroupName' -ResourceGroupLocation 'southcentralus' -TemplateFile '.\azuredeploy.json'

...

OutputsString      :
                     Name                Type    Value
                     ==================  ======  ================================
                     Training API Key    String  bcxx2e598139477e975d71d688549c7c
                     Prediction API Key  String  90b1f3b84xxx4dfca342fd56d42df1dc

If everything goes well, you should see the same kind of output as above.

 

To go further

If you try to deploy the service again you will get the following error message:

Resource Microsoft.CognitiveServices/accounts 'myCustomVisionService' failed with message '{
   "error": {
     "code": "CanNotCreateMultipleFreeAccounts",
     "message": "Operation failed. Only one free account is allowed for account type 'CustomVision.Training'."
   }
}'

As stated in the message, you can’t deploy multiple free Custom Vision services.

 

Summary

Today we have learned how to create an ARM template that will deploy a Custom Vision service and output the API keys for the training service and prediction service.

  

You can get the project source code here:

Browse the GitHub repository

 

Please feel free to comment or contact me if you have any question about this article.

Categories
Azure Cognitive Services Microsoft Vision

Custom Vision service now available on the Azure Portal in public preview

The announcement was made March 1, 2018, great timing!

Great news guys!

And great timing as I’m building a series of articles about Cognitive Services vision as you saw in my previous article.

 

As part of the announcement made last Thursday by Joseph Sirosh (Corporate Vice President, Artificial Intelligence & Research at Microsoft), we learned that Custom Vision Service has been made available on the Azure Portal in public preview. See http://azure.microsoft.com/en-us/blog/announcing-new-milestones-for-microsoft-cognitive-services-vision-and-search-services-in-azure/

 

But what is Custom Vision service exactly?

Custom Vision service allows you to use your own data to recognize what matters in your scenario thanks to machine learning.

Previously, it was only available via www.customvision.ai. Now we can deploy it directly via the Azure Portal and/or an ARM template, which is great.

Don’t miss my next articles as we will work with it in a concrete example!

Categories
Azure Cognitive Services Microsoft Vision

Analyzing a karting image with Azure Cognitive Services

And getting an unexpected analysis result.

Do you know Azure Cognitive Services?

 

Azure Cognitive Services is a set of APIs (Vision, Speech, Language, Knowledge, Search) to make your applications more intelligent.

Today we will create a console application that will use the Vision API named Computer Vision to analyze a karting image. As a base, we will create a project with the .NET Core Console App template in Visual Studio.

Before starting, please check the following documentation to obtain a Computer Vision API subscription key:

https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/vision-api-how-to-topics/howtosubscribe#obtaining-subscription-keys

 

Creation

The first thing you will need is to add the following NuGet package to your project: Microsoft.ProjectOxford.Vision.DotNetCore

 

Let’s now declare the AnalyzeImage method. This method will call the AnalyzeImage method of the Computer Vision API:

...
using Microsoft.ProjectOxford.Vision;
using System.Threading.Tasks;
...

namespace VisionRacing.AnalyzeKartingImage
{
    class Program
    {
        ...
        private static async Task AnalyzeImage(string apiKey, string apiUrl, string imageUrl)
        {
            var vsc = new VisionServiceClient(apiKey, apiUrl);

            var visualFeatures = new VisualFeature[]
            {
                VisualFeature.Description, VisualFeature.Tags
            };

            var analysisResult = await vsc.AnalyzeImageAsync(imageUrl, visualFeatures);

            ShowAnalysisResult(analysisResult);
        }
        ...
    }
}

We provide three parameters:

  • apiKey: the Computer Vision API subscription key.
  • apiUrl: the Computer Vision API URL.
  • imageUrl: the URL of the image to analyze.

As you can see, the code to call the AnalyzeImage method is pretty simple. Once our VisionServiceClient is created, we call AnalyzeImageAsync, providing the image URL and the VisualFeatures we want an analysis for. In our case we use Description to get a description of the image and Tags to get the tags associated with the image. The other visual features available are ImageType, Color, Faces, and Categories.

 

Once the image has been analyzed by the API, we show the analysis result via the following method:

...
using Microsoft.ProjectOxford.Vision.Contract;
using System;
using System.Linq;
...

namespace VisionRacing.AnalyzeKartingImage
{
    class Program
    {
        ...
        private static void ShowAnalysisResult(AnalysisResult result)
        {
            Console.ForegroundColor = ConsoleColor.White;
            Log("Image analysis result");
            Console.WriteLine();

            if (result.Description != null)
            {
                Console.ForegroundColor = ConsoleColor.Green;
                Log("1. Image description");

                Console.ForegroundColor = ConsoleColor.Gray;

                if (result.Description.Captions.Any())
                {
                    foreach (var caption in result.Description.Captions)
                    {
                        Log($"  Caption: {caption.Text} (Confidence {caption.Confidence.ToString("P0")})");
                    }
                }
                else
                {
                    Log("  No image caption");
                }

                Console.WriteLine();

                if (result.Description.Tags.Any())
                {
                    Log($"  Tags: {string.Join(", ", result.Description.Tags)}");

                    Console.WriteLine();
                }
            }

            if (result.Tags != null && result.Tags.Any())
            {
                Console.ForegroundColor = ConsoleColor.Green;
                Log("2. Image tags");

                foreach (var tag in result.Tags)
                {
                    Console.ForegroundColor = ConsoleColor.Gray;
                    Log($"  Name: {tag.Name} (Confidence {tag.Confidence.ToString("P0")}{(string.IsNullOrEmpty(tag.Hint) ? string.Empty : $" | Hint: {tag.Hint}")})");
                }

                Console.WriteLine();
            }
        }

        private static void Log(string message)
        {
            Console.WriteLine(message);
        }
    }
}

Here we go through the results to properly display the Description of the image and the Tags.

 

And finally, here is the main entry point of the console application:

...
using System;
...

namespace VisionRacing.AnalyzeKartingImage
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                var apiKey = "Your Cognitive Services Vision API Key.";
                var apiUrl = "Cognitive Services Vision API URL.";

                var kartingImageUrl = "https://github.com/vivienchevallier/Article-AzureCognitive.Vision-Racing/raw/master/Images/Karting/Karting%20(9).jpg";

                AnalyzeImage(apiKey, apiUrl, kartingImageUrl).Wait();
            }
            catch (Exception ex)
            {
                Console.ForegroundColor = ConsoleColor.Red;
                Console.WriteLine(ex.Message);
                Console.WriteLine();
            }

            Console.ForegroundColor = ConsoleColor.White;
            Console.WriteLine();
            Console.WriteLine("Press any key to exit...");
            Console.ReadLine();
        }
        ...
    }
}

You will need to provide:

  • apiKey: your Cognitive Services Vision API key.
  • apiUrl: the Cognitive Services Vision API URL.

As you can see we call the AnalyzeImage method previously created.

 

Example of use

The console application is now ready to run, let’s execute it:

Image analysis result

1. Image description
  Caption: a person riding a motorcycle down the road (Confidence 71 %)

  Tags: grass, road, outdoor, riding, racing, small, red, sitting, motorcycle, wearing, man, driving, track, runway, street, traffic, white, plane, people

2. Image tags
  Name: grass (Confidence 100 %)
  Name: road (Confidence 100 %)
  Name: outdoor (Confidence 100 %)
  Name: racing (Confidence 82 %)
  Name: turn (Confidence 22 %)


Press any key to exit...

If everything goes well, you should see the same kind of logs as above.

 

To go further

Now here is the karting image we analyzed with the Computer Vision API. If we check, the image tags are pretty accurate. However, the caption “a person riding a motorcycle down the road” is not accurate; I know, because it is me racing my kart!

So, what’s the problem here? The analysis result is not what we expected, as I’m obviously not riding a motorcycle. I took this specific image on purpose, because the Computer Vision API may not return accurate results depending on the kind of images you’re analyzing. In our case it’s an auto racing sport, karting, and I guess Computer Vision hasn’t been trained enough with this kind of image yet. So, what to do in that case?

Well in my next article about Azure Cognitive Services we will discover how Custom Vision Service can help by building our own Vision service!

 

Summary

We have seen how to analyze an image with Azure Cognitive Services Computer Vision API in a .NET Core console application.

 

You can get the project source code here:

Browse the GitHub repository

(Note that the project uses .NET Core 2.0)

 

Please feel free to comment or contact me if you have any question about this article.

Categories
Azure Microsoft

Presenting Azure Notification Hubs to the Montreal Mobile .NET Developers on Wednesday, February 7, 2018

At Microsoft Montreal (2000 McGill College Ave, Suite 550)

Thank you for attending this presentation!

You can find the source code of the demo project on GitHub:

 

As a reminder, the project is built around the following technologies:

  • Azure Notification Hubs
  • Azure WebJobs
  • ASP.NET Web API
  • Xamarin Android

 

Download the PowerPoint presentation below:

 

Please feel free to contact me if you have any question about the presentation or the project.


Join me February 7, 2018 6:30 PM at Microsoft Montreal for a presentation of Azure Notification Hubs!

 

Together we’ll see how Azure Notification Hubs can make your life easier by giving you the ability to easily send millions of messages across multiple platforms.

In this 60 minutes session, we will discover Azure Notification Hubs through a presentation which will show you the main lines of the product, how to set it up with the Azure portal but also via ARM template, then ending with a practical part and a concrete demonstration of use.

Categories
Azure Microsoft Visual Studio Team Services

Building a Xamarin Android App with Bundle assemblies into native code option enabled on an Azure Visual Studio Team Services (VSTS) hosted agent

And getting the following error: “Error : Could not find a part of the path ‘d:\platforms’.”?

When working with Azure Visual Studio Team Services it is common and convenient to use Continuous Integration & Delivery Build and Release service with Hosted agents.

Here we want to build a Xamarin Android application with the VSTS task Xamarin.Android Build task. To do so we will create a build definition using this task.

 

In most cases everything should work smoothly and you’ll be able to build your Xamarin Android project. However, if you need to use the Bundle assemblies into native code option, also named BundleAssemblies in a Xamarin Android application csproj, you’ll probably run into some issues that make it impossible to successfully build the project.

The error you’ll probably get when building a project with a Hosted agent using the queue named Hosted VS2017 is the following:

C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\Xamarin\Android\Xamarin.Android.Common.targets (2176, 3)
C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\Xamarin\Android\Xamarin.Android.Common.targets(2176,3): Error : Could not find a part of the path 'd:\platforms'.
Process 'msbuild.exe' exited with code '1'.

 

Source of the errors

By checking in the build logs, we can see that the error occurs when the following build task is triggered: _BuildApkEmbed

This happens because the Android NDK path is not found and not properly set at an earlier stage:

2017-11-08T20:26:51.8616357Z   Android NDK: \
2017-11-08T20:26:51.8616357Z   Android SDK: C:\Program Files (x86)\Android\android-sdk\

However, the Android NDK is the essential element in this build task.

 

Solution

In order to resolve this issue, we will have to manually set the Android NDK path. In your build definition, in the VSTS task Xamarin.Android Build task, go to the section named MSBuild Options.

Here we will use Additional Arguments to pass the following additional arguments to MSBuild:

/p:AndroidNdkDirectory="C:\ProgramData\Microsoft\AndroidNDK64\android-ndk-r13b"

As you can see we are providing the proper Android NDK path for the VS2017 Hosted agent.

 

Save your build definition and queue a new build. You should now have a successful build, with the build task _BuildApkEmbed looking like the following in the logs:

2017-11-09T02:12:14.7934177Z   ...
2017-11-09T02:12:15.5271165Z _BuildApkEmbed:
2017-11-09T02:12:15.5281163Z   [mkbundle stderr] 
2017-11-09T02:12:16.6499807Z   [cc stderr] 
2017-11-09T02:12:16.6559798Z   [LD] C:\ProgramData\Microsoft\AndroidNDK64\android-ndk-r13b\toolchains\arm-linux-androideabi-4.9\prebuilt\windows-x86_64\bin\arm-linux-androideabi-ld.exe --shared obj\Release\bundles\armeabi-v7a\temp.o obj\Release\bundles\armeabi-v7a\assemblies.o -o obj\Release\bundles\armeabi-v7a\libmonodroid_bundle_app.so -L C:\ProgramData\Microsoft\AndroidNDK64\android-ndk-r13b\platforms\android-9\arch-arm\usr\lib -lc -lm -ldl -llog -lz
2017-11-09T02:12:16.9183355Z   [ld stderr] 
2017-11-09T02:12:17.8782862Z _CopyPackage:
2017-11-09T02:12:17.8782862Z   ...

 

To go further

If you try to build with the Hosted agent with VS2015, you’ll run into the same kind of issues but with different errors:

C:\Program Files (x86)\MSBuild\Xamarin\Android\Xamarin.Android.Common.targets (1873, 3)
C:\Program Files (x86)\MSBuild\Xamarin\Android\Xamarin.Android.Common.targets(1873,3): Error : java\lang\Object.class(java\lang : Object.class)
cannot access java.lang.Object
[object Object]
Process 'msbuild.exe' exited with code '1'.

 

Earlier in the year, the build worked properly by using JDK 8 and setting the proper Android NDK path with version r10d:

/p:AndroidNdkDirectory="C:\java\androidsdk\android-ndk-r10d"

 

Later in the year, the Android NDK version was updated on the Hosted agent to use version r13b:

/p:AndroidNdkDirectory="C:\java\androidsdk\android-ndk-r13b"

 

However, after this update it became impossible to generate an APK; worse, the build succeeds and the error is swallowed. If you check the logs, you’ll see something like this:

2017-11-09T03:24:02.1347738Z   ...
2017-11-09T03:24:03.0720360Z _BuildApkEmbed:
2017-11-09T03:24:03.0720360Z   [mkbundle stderr] 
2017-11-09T03:24:03.1359376Z   Error: System.InvalidOperationException: Platform header files for target Arm and API Level 4 was not found. Expected path is "C:\java\androidsdk\android-ndk-r13b\platforms\android-4\arch-arm\usr\include"
2017-11-09T03:24:03.1359376Z      at Xamarin.Android.Tasks.NdkUtil.GetNdkPlatformIncludePath(String androidNdkPath, AndroidTargetArch arch, Int32 level)
2017-11-09T03:24:03.1359376Z      at Xamarin.Android.Tasks.MakeBundleNativeCodeExternal.DoExecute()
2017-11-09T03:24:03.1359376Z      at Xamarin.Android.Tasks.MakeBundleNativeCodeExternal.Execute()
2017-11-09T03:24:03.9240521Z _CopyPackage:
2017-11-09T03:24:03.9240521Z   ...

This error appears because of a bug in the Xamarin Android version on the Hosted agent when using the NDK r13b.

 

Summary

We have seen how to build a Xamarin Android App with the Bundle assemblies into native code option enabled on the Azure VSTS hosted agent VS2017.

We also tried to cover the main errors you can encounter when trying to build with the VS2015 Hosted agent.

 

Please feel free to comment or contact me if you have any question about this article.