As promised, I've converted the quick training code I created in the CodeIT 2018 Hackathon into a console application. The code can be found here.

There are three key parts to the training process:

* Creating a people group
* Adding people and faces to the group
* Training

Then of course, after the training is complete, there is the identification process. Let's have a look at the application and each process.

Application configuration

After grabbing the code, the first thing to do is to configure the code to your machine and Azure settings. Update the subscription key and the location of the root folder in the code below.

    const string SubscriptionKey = "insert subscription key";
    const string UriBase = "https://northeurope.api.cognitive.microsoft.com/face/v1.0";
    private static readonly string RootFolder = "Update This Path/MicrosftFaceApiTest-Console";
    private static readonly string TrainingFolderPath = RootFolder + "/Data/PersonGroupTraining";

You will also need the Microsoft.ProjectOxford.Face NuGet package. This gives us the FaceServiceClient class, which makes using the API service pretty simple:

var faceServiceClient = new FaceServiceClient(SubscriptionKey, UriBase);

Creating a People Group

A person group is exactly what it says it is: a collection of people. The first step is to create one using console option 2. All we need to do is assign an ID to the group and create it. I've also provided console option 1 to point the application at an existing person group.

var personGroupId = "";

static async Task<string> CreatePersonGroup(FaceServiceClient faceServiceClient, string id, string description)
{
    try
    {
        await faceServiceClient.CreatePersonGroupAsync(id, description);
    }
    catch (Exception ex)
    {
        return "Error: " + ex.Message;
    }

    return id;
}

Adding People and Faces

This is where your project directories become important. The Data folder is designed to hold a folder named after each person. Inside each person's folder should be a number of .jpg images of that individual and only that individual: no images with multiple faces. Have a look at the example Data folder.
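As an illustration, the layout might look something like this (the person and file names here are just placeholders):

```
Data/PersonGroupTraining/
├── Alice/
│   ├── alice-01.jpg
│   └── alice-02.jpg
└── Bob/
    ├── bob-01.jpg
    └── bob-02.jpg
```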

We then need to scan the Data directory to find each person to add:

var peopleFolders = GetDirectories(TrainingFolderPath);
        
// Let's add the people
foreach(var potentialPersonDir in peopleFolders)
{
    ....
}
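As a rough sketch of what happens inside that loop, using the two helper functions described below (the exact body in the repository may differ slightly):

```csharp
foreach (var potentialPersonDir in peopleFolders)
{
    // Use the folder name as the person's name
    var personName = new DirectoryInfo(potentialPersonDir).Name;

    // Create the person in the group (userData could hold e.g. a local database ID)
    var person = await CreatePerson(faceServiceClient, personGroupId, personName, null);

    if (person != null)
    {
        // Add every .jpg in the person's folder as a face
        await AddFacesToPerson(faceServiceClient, potentialPersonDir, personGroupId, person.PersonId, limit: true);
    }
}
```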

There are two functions being called within the process above: CreatePerson, which adds a person with the name specified (the folder name), and AddFacesToPerson, which adds faces to the person using the GUID returned by CreatePerson. It is also possible to attach additional data to a person through the userData parameter, such as a user ID from a local database. The two functions themselves really only call the relevant FaceServiceClient methods:

static async Task<CreatePersonResult> CreatePerson(FaceServiceClient faceServiceClient, string groupId, string name,
    string userData)
{
    try
    {
        CreatePersonResult person = await faceServiceClient.CreatePersonAsync(groupId, name, userData);
        return person;
    }
    catch (Exception ex)
    {
        Console.WriteLine("Failed to create person " + name);
        Console.WriteLine("Error: " + ex.Message);
    }

    return null;
}

static async Task<bool> AddFacesToPerson(FaceServiceClient faceServiceClient, string imagesDirectory,
    string personGroupId, Guid personId, bool limit = false)
{
    var files = GetFiles(imagesDirectory, "*.JPG").Union(GetFiles(imagesDirectory, "*.jpg")).ToArray();
    foreach (var imagePath in files)
    {
        using (Stream s = File.OpenRead(imagePath))
        {
            // Detect faces in the image and add to person
            try
            {
                await faceServiceClient.AddPersonFaceAsync(personGroupId, personId, s);
            }
            catch (Exception ex)
            {
                Console.WriteLine("An error has occurred with image " + imagePath + " and it has not been added to the person");
                Console.WriteLine("Error: " + ex.Message);
            }
        }

        // Optionally pause between calls (e.g. to avoid hitting API rate limits)
        if (limit)
        {
            System.Threading.Thread.Sleep(15000);
        }
    }

    return true;
}

With that, you now have your facial images ready for training.

Training

Training is the easy part, just make a call to train and wait for it to complete:

static async Task TrainPersonGroup(FaceServiceClient faceServiceClient, string personGroupId)
{
    await faceServiceClient.TrainPersonGroupAsync(personGroupId);
}

static async Task<bool> WaitForPersonGroupTraining(FaceServiceClient faceServiceClient, string personGroupId,
    bool limit = false)
{
    while (true)
    {
        var trainingStatus = await faceServiceClient.GetPersonGroupTrainingStatusAsync(personGroupId);

        if (trainingStatus.Status != Status.Running)
        {
            return true;
        }

        // Optionally pause between polls
        if (limit)
        {
            System.Threading.Thread.Sleep(15000);
        }
    }
}
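Putting the two steps together, a minimal sketch of kicking off a training run and blocking until it completes (calling the SDK directly for the train step) might look like:

```csharp
// Start training, then poll until the status is no longer Running
await faceServiceClient.TrainPersonGroupAsync(personGroupId);
await WaitForPersonGroupTraining(faceServiceClient, personGroupId, limit: true);
Console.WriteLine("Training finished for group " + personGroupId);
```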

Identification

The final part is to test your training through the identification process. First, we pass in an image and check whether any faces are present:

var faces = await faceServiceClient.DetectAsync(s);
var faceIds = faces.Select(face => face.FaceId).ToArray();

We then pass the detected faces into the identify call:

var results = await faceServiceClient.IdentifyAsync(personGroupId, faceIds);

We will get back candidate matches for each face, each identified by a GUID. If we require more information, such as the name of the individual and the userData we passed into the creation process, we just need to call GetPerson with the GUID:

var person = await faceServiceClient.GetPersonAsync(personGroupId, candidateId);
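Tying the identification steps together, a minimal sketch (assuming a local test image path and the person group trained above) could look like this:

```csharp
using (Stream s = File.OpenRead("path/to/test-image.jpg"))
{
    // Detect faces, then identify them against the trained group
    var faces = await faceServiceClient.DetectAsync(s);
    var faceIds = faces.Select(face => face.FaceId).ToArray();
    var results = await faceServiceClient.IdentifyAsync(personGroupId, faceIds);

    foreach (var result in results)
    {
        if (result.Candidates.Length == 0)
        {
            Console.WriteLine("Face not recognised");
            continue;
        }

        // Take the top candidate and look up their details
        var candidateId = result.Candidates[0].PersonId;
        var person = await faceServiceClient.GetPersonAsync(personGroupId, candidateId);
        Console.WriteLine("Identified: " + person.Name);
    }
}
```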

The identification process is the only part that you will need to build into any application that uses your newly trained API.

Conclusion

The Azure Face API is a powerful piece of technology that is surprisingly easy to implement. At its price point, it's hard to imagine why you'd go about developing your own implementation. Have fun playing with it!