Azure Cognitive Services

By Dr. SubraMANI Paramasivam

Microsoft Flow with Face API

This article explains how to use the Azure Cognitive Services APIs within Microsoft Flow. The Microsoft Flow team has released new connectors for the Azure Cognitive Services APIs, which are currently in preview. They include Computer Vision and the Face API.

Each connector has a different set of actions. We can use those actions by passing the proper input to them.

To make it clear, I am explaining a scenario with the Face API in Microsoft Flow. I will explain how you can process the “Detect Faces” action and store the result in an on-premises SQL Server table.

Requirements

  1. Face API URL & Key
  2. On-Premises Data Gateway – SQL Server
  3. Microsoft Flow – Free subscription or O365 subscription

Creating the Face API

To create a Face API resource, you need an Azure subscription. If you don’t have one, you can get a free Azure subscription from here.

Visit portal.azure.com and click “Create a Resource”.

Under New, choose “AI + Machine Learning” -> Face.

Create a new face resource by providing the required details.

Once the resource is created, you need to get the key and URL (EndPoint).

Note down the endpoint and key; we will use them in Microsoft Flow.
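Before wiring the key into Flow, you can sanity-check it by calling the detect endpoint directly. The sketch below is illustrative only: the key is a placeholder, and the westus endpoint is an assumption to be replaced with your own region.

```python
import requests

# Placeholder values for illustration -- substitute your own key and region endpoint
subscription_key = "your_face_api_key"
endpoint = "https://westus.api.cognitive.microsoft.com/face/v1.0/"
detect_url = endpoint + "detect"

def detect_faces(image_url):
    """POST a public image URL to the Face API detect action and return the parsed JSON."""
    headers = {"Ocp-Apim-Subscription-Key": subscription_key,
               "Content-Type": "application/json"}
    params = {"returnFaceAttributes": "gender,glasses,smile"}
    response = requests.post(detect_url, headers=headers,
                             params=params, json={"url": image_url})
    response.raise_for_status()
    return response.json()
```

A 200 response with a JSON array of faces confirms that the key and endpoint are valid.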

On-Premises Data Gateway

As you know, Power BI can connect to on-premises data using the on-premises data gateway. This gateway is not only for Power BI; it also serves Logic Apps, Azure Analysis Services, Microsoft Flow, and Power Apps. You can use the same data gateway to connect to on-premises data within Microsoft Flow.

On-premises SQL Server

You need to create two tables for this scenario.

Table 1 – It should hold the Image Path column. Example – https://www.sitename.com/image1.jpg

Table 2 – To store the API result. Use the below structure.

CREATE TABLE [dbo].[APIFaces](
       [id] [INT] IDENTITY(1,1) NOT NULL,
       [ImagePath] [NVARCHAR](MAX) NULL,
       [Gender] [NCHAR](10) NULL,
       [Glasses] [NVARCHAR](50) NULL,
       [Smile] [FLOAT] NULL,
 CONSTRAINT [PK_APIFaces] PRIMARY KEY CLUSTERED
(
       [id] ASC
) ON [PRIMARY]
)
GO
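The Detect Faces action returns a JSON array, and the Flow mapping later in this article effectively pulls the Gender, Glasses, and Smile values out of each face entry into the columns above. As a rough illustration of that flattening, using a hand-written sample shaped like the API response (not real output):

```python
import json

# Hand-written sample shaped like a Face API "Detect Faces" response (not real output)
sample_response = json.loads("""
[{"faceId": "abc123",
  "faceAttributes": {"gender": "male", "glasses": "NoGlasses", "smile": 0.9}}]
""")

def extract_row(face, image_path):
    """Flatten one detected face into the columns of dbo.APIFaces."""
    attrs = face["faceAttributes"]
    return {"ImagePath": image_path,
            "Gender": attrs["gender"],
            "Glasses": attrs["glasses"],
            "Smile": attrs["smile"]}

row = extract_row(sample_response[0], "https://www.sitename.com/image1.jpg")
```

Flow performs this mapping for you through dynamic content; the function above only shows which response fields land in which columns.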

Microsoft Flow

You can create a free account on Microsoft Flow, or if you have an O365 subscription, then you get Flow by default as one of its features.

You can learn more about Microsoft Flow here.

Follow the below steps.

As I mentioned, we are going to use SQL Server with Face API.

To create any flow, we need to set up a trigger. Here, I am using SQL Server as the trigger. SQL Server has two different trigger options; of those, I am using the trigger called “When an item is created”.

Once that trigger is added, you need to create and map the connection. When you click the “…” option in the right corner, you will get a form to fill in the details to create a connection to your on-premises SQL Server.

Fill the required details and make sure the connection is created successfully.

If the connection is created successfully, you can see the table list as shown below; otherwise, you will get an error message.

In the next step, add the Face API and choose the “Detect Faces” action.

There, too, you need to create a connection with the Face API key and URL. You can provide any name in the connection name field.

Face API will ask you to provide the image URL.

You can easily choose the ImagePath from the dynamic content.

Next, add SQL Server and choose “Insert Row” action.

This time, you can use the same connection which you created above.

Select the table name. It will load the columns from the table. You need to map the dynamic content on each field.

Once all the fields are mapped, you can see the flow as shown below. Sometimes, an Apply to each loop will be added automatically.

The final flow should look like the one below. Save and test the flow.

You can check the flow run history for the flow status and check the result in the SQL Server table.

By Dr. SubraMANI Paramasivam

Embed Face API Results in Power BI

As you know, the result of any API from Azure Cognitive Services is JSON. The structure of that JSON is not easy to handle effectively inside Power BI.

In this article, I am explaining the easiest way to get the result into a proper shape inside Power BI.

To accomplish this, I am using a Python script. Since Power BI now supports Python as one of its data sources, we can easily run the Python script and get the API result.

Azure Cognitive Services has a bunch of APIs, with documentation and an API reference for each. Since I am using a Python script, I can easily pick up the Python Face API reference and use it directly.

Requirements

  1. Python 3
  2. Power BI Desktop

Use the below Python code. Update your Face API subscription key and URL.

from urllib.request import urlopen
import io, json, requests
import pandas as pd

subscription_key = "your_subscription_key"
base_url = "https://your_region.api.cognitive.microsoft.com/face/v1.0/"
detect_url = base_url + "detect"

headers = {'Ocp-Apim-Subscription-Key': subscription_key,
           'Content-Type': 'application/octet-stream'}
params = {'returnFaceId': 'true',
          'returnFaceLandmarks': 'false',
          'returnFaceAttributes': 'age,gender,smile,facialHair,headPose,glasses,emotion,hair,makeup,accessories,blur,exposure,noise'}

Image_Path = "https://img.etimg.com/thumb/msid-61020784,width-643,imgsize-228069,resizemode-4/3-lessons-that-satya-nadella-took-from-the-cricket-field-to-the-ceos-office.jpg"

with urlopen(Image_Path) as url:
    # Download the image and post the raw bytes to the detect endpoint
    image_data = io.BytesIO(url.read())
    response = requests.post(detect_url, headers=headers,
                             params=params, data=image_data)
    face = json.loads(response.content)

    # Pull the attributes of the first detected face into single-element lists
    smile = [face[0]['faceAttributes']['smile']]
    gender = [str(face[0]['faceAttributes']['gender'])]
    age = [face[0]['faceAttributes']['age']]
    glass = [str(face[0]['faceAttributes']['glasses'])]
    anger = [face[0]['faceAttributes']['emotion']['anger']]
    contempt = [face[0]['faceAttributes']['emotion']['contempt']]
    disgust = [face[0]['faceAttributes']['emotion']['disgust']]
    fear = [face[0]['faceAttributes']['emotion']['fear']]
    happy = [face[0]['faceAttributes']['emotion']['happiness']]
    neutral = [face[0]['faceAttributes']['emotion']['neutral']]
    sad = [face[0]['faceAttributes']['emotion']['sadness']]
    surprise = [face[0]['faceAttributes']['emotion']['surprise']]
    eyemakeup = [face[0]['faceAttributes']['makeup']['eyeMakeup']]
    lipmakeup = [face[0]['faceAttributes']['makeup']['lipMakeup']]
    bald = [face[0]['faceAttributes']['hair']['bald']]
    haircolor = [face[0]['faceAttributes']['hair']['hairColor']]

    # Power BI picks up this DataFrame as the query result
    face_ds = pd.DataFrame({
        "smile": smile,
        "gender": gender,
        "age": age,
        "glass": glass,
        "anger": anger,
        "contempt": contempt,
        "disgust": disgust,
        "fear": fear,
        "happy": happy,
        "neutral": neutral,
        "sad": sad,
        "surprise": surprise,
        "eyemakeup": eyemakeup,
        "lipmakeup": lipmakeup,
        "bald": bald,
        "haircolor": haircolor
    })

You can test the above code in your Python IDE and see the result, which will be in a table format.

Power BI Desktop Report

Follow the below steps.

Open Power BI Desktop and choose “Python script” as a data source.

Copy and paste the above code on the editor window.

Click OK and it will load and display the table as shown below.

Load the data and you can use those fields on your report.

As of now, the image path is hardcoded, but you can pass it dynamically by using parameters.
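The same idea applies inside the script itself: wrapping the attribute-flattening step in a function turns the face result (and, upstream, the image path) into a parameter instead of a hard-coded value. A minimal sketch of that flattening step, shown here with a hand-written sample face rather than a real API response:

```python
import pandas as pd

def face_to_dataframe(face):
    """Flatten one face's attributes (one element of the detect result) into a one-row DataFrame."""
    attrs = face["faceAttributes"]
    record = {"smile": attrs["smile"],
              "gender": attrs["gender"],
              "age": attrs["age"]}
    record.update(attrs["emotion"])  # adds anger, happiness, neutral, ...
    return pd.DataFrame([record])

# Hand-written sample shaped like one element of the API result (not real output)
sample_face = {"faceAttributes": {
    "smile": 1.0, "gender": "male", "age": 50.0,
    "emotion": {"anger": 0.0, "happiness": 1.0, "neutral": 0.0}}}

face_ds = face_to_dataframe(sample_face)
```

With the detection call factored out the same way, a Power BI parameter can feed the image URL into the query instead of the literal string.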

Below is a sample look and feel of the report.

By Dr. SubraMANI Paramasivam

Custom Vision API – Train and Test (No Coding is Required)

Custom Vision is one of the APIs from Azure Cognitive Services, and it comes under the Vision category. We have a bunch of APIs under Vision which are pre-built, and we can use them inside our applications without modifying the algorithms.

In case we want to create our own vision API, we can start using the Custom Vision API. It has the capability to train your model and publish it. Like the other APIs, you can easily integrate it with your application. In simple terms, you have control over the API, from training through testing and publishing.

Follow the below steps to create a custom vision API.

Visit, https://www.customvision.ai/

If you don’t have an account, you can easily sign up and get one.

Once you have logged in, you can create a new project.

Provide the name and category of the project that you want to start. Fill in the details and click “Create project”.

Once the project is created, you can see a window like the one below.

Scenario

As we are dealing with a vision API, we need to upload images and tag (group) them. For example, if you upload some dog images and then test a new image, the system will say whether or not it is a dog. To achieve this, you need to upload different dog images and train the system.

Follow the below steps.

Click the “Add images” button and upload all the image files as shown below.

Once the upload completes, click Done.

While adding the images, you can tag them immediately or tag them later.

Select all the images and click “Tag Images” and tag them.

Once they are tagged, you can see the images under the tagged section.

Now click the “Train” button to train the model. It will take a few seconds to train, and then you can see the results. You also have the option to set the probability threshold.
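The probability threshold simply sets the minimum confidence at which a prediction counts as a positive tag. Conceptually, with made-up scores for illustration:

```python
def tags_above_threshold(predictions, threshold=0.5):
    """Keep only the tags whose predicted probability meets the threshold."""
    return [tag for tag, probability in predictions if probability >= threshold]

# Illustrative prediction scores (not real Custom Vision output)
predictions = [("dog", 0.92), ("cat", 0.07)]
```

Raising the threshold makes the model stricter: fewer tags are reported, but with higher confidence.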

Now, click the quick test button, upload some other image, and see the outcome.

Embed Custom Vision API

Check the settings page, where you can see the training and prediction keys. Refer to the documentation below to proceed further.

Ref: https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/python-tutorial
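Alongside the Python SDK route shown in the tutorial above, the published model can also be called over plain REST with the prediction key. This sketch assumes the v3.0 prediction URL shape and uses placeholder endpoint, project, and iteration values, so treat it as illustrative rather than definitive:

```python
import requests

# Placeholder values for illustration -- take the real ones from the Settings page
prediction_key = "your_prediction_key"
endpoint = "https://southcentralus.api.cognitive.microsoft.com"
project_id = "your-project-id"
iteration_name = "Iteration1"

classify_url = (f"{endpoint}/customvision/v3.0/Prediction/"
                f"{project_id}/classify/iterations/{iteration_name}/url")

def classify_image(image_url):
    """Send an image URL to the published Custom Vision iteration and return its predictions."""
    headers = {"Prediction-Key": prediction_key,
               "Content-Type": "application/json"}
    response = requests.post(classify_url, headers=headers, json={"url": image_url})
    response.raise_for_status()
    return response.json()["predictions"]
```

Each returned prediction carries a tag name and a probability, which you can filter with the probability threshold discussed earlier.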

 

 
