Delpin Susai Raj Wednesday, 5 December 2018

Xamarin.Forms - Emotion Recognition Using Cognitive Service

In this blog post, you will learn how to recognise emotions in images using Cognitive Services in Xamarin.Forms.


Introduction



Microsoft Cognitive Services expose machine-learning capabilities such as vision, speech, and language understanding through simple REST APIs. In this post, you will use the emotion attributes of the Face API to analyse the facial expressions in a photo picked from the device and display a confidence score for each emotion.

Cognitive Services




Infuse your apps, websites, and bots with intelligent algorithms to see, hear, speak, understand and interpret your user needs through natural methods of communication. Transform your business with AI today.

Use AI to solve business problems
  • Vision
  • Speech
  • Knowledge
  • Search
  • Language
Emotion API
  1. The Emotion API takes a facial expression in an image as input and returns the confidence across a set of emotions for each face in the image, as well as a bounding box for each face, using the Face API. If a user has already called the Face API, they can submit the face rectangle as an optional input.
  2. The emotions detected are anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise. These emotions are understood to be cross-culturally and universally communicated with particular facial expressions.
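To make the response concrete, a successful detect call returns a JSON array with one entry per face, along these lines (the `faceId` and scores below are purely illustrative values):

```json
[
  {
    "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
    "faceRectangle": { "top": 131, "left": 177, "width": 162, "height": 162 },
    "faceAttributes": {
      "emotion": {
        "anger": 0.0,
        "contempt": 0.0,
        "disgust": 0.0,
        "fear": 0.0,
        "happiness": 0.98,
        "neutral": 0.02,
        "sadness": 0.0,
        "surprise": 0.0
      }
    }
  }
]
```

The eight scores sum to roughly 1.0, so the highest value is the most likely emotion; this is the shape the response model later in this post deserializes.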
Prerequisites
  • Visual Studio 2017 (Windows or Mac)
  • Emotion API Key
Setting up a Xamarin.Forms Project

Start by creating a new Xamarin.Forms project. You’ll learn more by going through the steps yourself.
Choose the Xamarin.Forms App Project type under Cross-platform/App in the New Project dialog.


Name your app, select “Use .NET Standard” for shared code, and target both Android and iOS.


You probably want your project and solution to use the same name as your app. Put it in your preferred folder for projects and click Create.


You now have a basic Xamarin.Forms app. Click the play button to try it out.



Get Emotion API Key

In this step, get an Emotion API key. Go to the following link.

https://azure.microsoft.com/en-in/services/cognitive-services/

Click "Try Cognitive Services for free".



Now, you can choose Face under Vision APIs. Afterward, click "Get API Key".



Read the terms, and select your country/region. Afterward, click "Next".



Now, log in using your preferred account.



Now, the API Key is activated. You can use it now.



The trial key is available only for 7 days. If you want a permanent key, refer to the following article.

Getting Started With Microsoft Azure Cognitive Services Face APIs


Setting up the User Interface

Go to MainPage.xaml and write the following code.

MainPage.xaml

<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:local="clr-namespace:XamarinCognitive"
             x:Class="XamarinCognitive.MainPage">
    <StackLayout>
        <StackLayout>
            <StackLayout HorizontalOptions="Center" VerticalOptions="Start">
                <Image x:Name="imgBanner" Source="banner.png" />
                <Image Margin="0,0,0,10" x:Name="imgEmail" HeightRequest="100" Source="cognitiveservice.png" />
                <Label Margin="0,0,0,10" Text="Emotion Recognition" FontAttributes="Bold" FontSize="Large" TextColor="Gray" HorizontalTextAlignment="Center" />
                <Image Margin="0,0,0,10" x:Name="imgSelected" HeightRequest="150" Source="defaultimage.png" />
                <Button x:Name="btnPick" Text="Pick" Clicked="btnPick_Clicked" />
                <StackLayout HorizontalOptions="CenterAndExpand" Margin="10,0,0,10">
                    <Label x:Name="lblHappiness" />
                    <Label x:Name="lblAnger" />
                    <Label x:Name="lblFear" />
                    <Label x:Name="lblNeutral" />
                    <Label x:Name="lblSadness" />
                    <Label x:Name="lblSurprise" />
                    <Label x:Name="lblDisgust" />
                    <Label x:Name="lblContempt" />
                </StackLayout>
            </StackLayout>
        </StackLayout>
    </StackLayout>
</ContentPage>
Click the play button to try it out.



NuGet Packages

Now, add the following NuGet Packages.
  1. Xam.Plugin.Media
  2. Newtonsoft.Json
Add Xam.Plugin.Media NuGet

In this step, add Xam.Plugin.Media to your project. You can install Xam.Plugin.Media via NuGet, or you can browse the source code on GitHub.

Go to Solution Explorer and select your solution. Right-click and select "Manage NuGet Packages for Solution". Search for "Xam.Plugin.Media" and add the package. Remember to install it for each project (PCL, Android, iOS, and UWP).



Permissions

In this step, give the following required permissions to your app:

Permissions - for Android
  1. CAMERA
  2. READ_EXTERNAL_STORAGE
  3. WRITE_EXTERNAL_STORAGE
Permissions - for iOS
  1. NSCameraUsageDescription
  2. NSPhotoLibraryUsageDescription
  3. NSMicrophoneUsageDescription
  4. NSPhotoLibraryAddUsageDescription
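On Android, these go into AndroidManifest.xml as `uses-permission` entries:

```xml
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
```

On iOS, each key goes into Info.plist with a usage string shown to the user (the descriptions below are placeholders; write your own):

```xml
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to take photos.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>This app picks photos from your library.</string>
<key>NSMicrophoneUsageDescription</key>
<string>This app may record audio with videos.</string>
<key>NSPhotoLibraryAddUsageDescription</key>
<string>This app saves photos to your library.</string>
```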
Create a Model

In this step, create a model to deserialize the response.

ResponseModel.cs

using System;
using System.Collections.Generic;

namespace XamarinCognitive.Models
{
    public class ResponseModel
    {
        public string faceId { get; set; }
        public FaceRectangle faceRectangle { get; set; }
        public FaceAttributes faceAttributes { get; set; }
    }

    public class FaceRectangle
    {
        public int top { get; set; }
        public int left { get; set; }
        public int width { get; set; }
        public int height { get; set; }
    }

    public class HeadPose
    {
        public double pitch { get; set; }
        public double roll { get; set; }
        public double yaw { get; set; }
    }

    public class FacialHair
    {
        public double moustache { get; set; }
        public double beard { get; set; }
        public double sideburns { get; set; }
    }

    public class Emotion
    {
        public double anger { get; set; }
        public double contempt { get; set; }
        public double disgust { get; set; }
        public double fear { get; set; }
        public double happiness { get; set; }
        public double neutral { get; set; }
        public double sadness { get; set; }
        public double surprise { get; set; }
    }

    public class Blur
    {
        public string blurLevel { get; set; }
        public double value { get; set; }
    }

    public class Exposure
    {
        public string exposureLevel { get; set; }
        public double value { get; set; }
    }

    public class Noise
    {
        public string noiseLevel { get; set; }
        public double value { get; set; }
    }

    public class Makeup
    {
        public bool eyeMakeup { get; set; }
        public bool lipMakeup { get; set; }
    }

    public class Occlusion
    {
        public bool foreheadOccluded { get; set; }
        public bool eyeOccluded { get; set; }
        public bool mouthOccluded { get; set; }
    }

    public class HairColor
    {
        public string color { get; set; }
        public double confidence { get; set; }
    }

    public class Hair
    {
        public double bald { get; set; }
        public bool invisible { get; set; }
        public List<HairColor> hairColor { get; set; }
    }

    public class FaceAttributes
    {
        public double smile { get; set; }
        public HeadPose headPose { get; set; }
        public string gender { get; set; }
        public double age { get; set; }
        public FacialHair facialHair { get; set; }
        public string glasses { get; set; }
        public Emotion emotion { get; set; }
        public Blur blur { get; set; }
        public Exposure exposure { get; set; }
        public Noise noise { get; set; }
        public Makeup makeup { get; set; }
        public List<object> accessories { get; set; }
        public Occlusion occlusion { get; set; }
        public Hair hair { get; set; }
    }
}
Emotion Recognition

In this step, write the following code for Emotion Recognition.

MainPage.xaml.cs

using System;
using System.Collections.Generic;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using Newtonsoft.Json;
using Plugin.Media;
using Xamarin.Forms;
using XamarinCognitive.Models;

namespace XamarinCognitive
{
    public partial class MainPage : ContentPage
    {
        public string subscriptionKey = "26d1b6941e3a422c880639fdcdcf069b";
        public string uriBase = "https://southeastasia.api.cognitive.microsoft.com/face/v1.0/detect";

        public MainPage()
        {
            InitializeComponent();
        }

        async void btnPick_Clicked(object sender, System.EventArgs e)
        {
            await CrossMedia.Current.Initialize();
            try
            {
                var file = await CrossMedia.Current.PickPhotoAsync(new Plugin.Media.Abstractions.PickMediaOptions
                {
                    PhotoSize = Plugin.Media.Abstractions.PhotoSize.Medium
                });
                if (file == null) return;
                imgSelected.Source = ImageSource.FromStream(() => file.GetStream());
                MakeAnalysisRequest(file.Path);
            }
            catch (Exception ex)
            {
                string test = ex.Message;
            }
        }

        public async void MakeAnalysisRequest(string imageFilePath)
        {
            HttpClient client = new HttpClient();
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
            string requestParameters = "returnFaceId=true&returnFaceLandmarks=false" +
                "&returnFaceAttributes=age,gender,headPose,smile,facialHair,glasses," +
                "emotion,hair,makeup,occlusion,accessories,blur,exposure,noise";
            string uri = uriBase + "?" + requestParameters;
            HttpResponseMessage response;
            byte[] byteData = GetImageAsByteArray(imageFilePath);
            using (ByteArrayContent content = new ByteArrayContent(byteData))
            {
                content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
                response = await client.PostAsync(uri, content);
                string contentString = await response.Content.ReadAsStringAsync();
                List<ResponseModel> faceDetails = JsonConvert.DeserializeObject<List<ResponseModel>>(contentString);
                if (faceDetails.Count != 0)
                {
                    lblHappiness.Text = "Happiness : " + faceDetails[0].faceAttributes.emotion.happiness;
                    lblSadness.Text = "Sadness : " + faceDetails[0].faceAttributes.emotion.sadness;
                    lblAnger.Text = "Anger : " + faceDetails[0].faceAttributes.emotion.anger;
                    lblFear.Text = "Fear : " + faceDetails[0].faceAttributes.emotion.fear;
                    lblNeutral.Text = "Neutral : " + faceDetails[0].faceAttributes.emotion.neutral;
                    lblSurprise.Text = "Surprise : " + faceDetails[0].faceAttributes.emotion.surprise;
                    lblDisgust.Text = "Disgust : " + faceDetails[0].faceAttributes.emotion.disgust;
                    lblContempt.Text = "Contempt : " + faceDetails[0].faceAttributes.emotion.contempt;
                }
            }
        }

        public byte[] GetImageAsByteArray(string imageFilePath)
        {
            using (FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read))
            using (BinaryReader binaryReader = new BinaryReader(fileStream))
            {
                return binaryReader.ReadBytes((int)fileStream.Length);
            }
        }
    }
}
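The request that MakeAnalysisRequest builds can also be exercised from the command line before wiring up the app, which is handy for checking that your key and region are correct. The sketch below uses a placeholder key and a trimmed attribute list; it prints the request it would send, and the commented curl line shows the actual call.

```shell
# Placeholder key and region-specific endpoint -- substitute your own values.
SUBSCRIPTION_KEY="00000000000000000000000000000000"
URI_BASE="https://southeastasia.api.cognitive.microsoft.com/face/v1.0/detect"
PARAMS="returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes=age,gender,emotion"

# Show the request that would be sent.
echo "POST ${URI_BASE}?${PARAMS}"

# The real call (needs a valid key and a local photo.jpg):
# curl -X POST "${URI_BASE}?${PARAMS}" \
#   -H "Ocp-Apim-Subscription-Key: ${SUBSCRIPTION_KEY}" \
#   -H "Content-Type: application/octet-stream" \
#   --data-binary @photo.jpg
```

Note that the image is posted as a raw `application/octet-stream` body, exactly as the C# code does with ByteArrayContent.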
Click the play button to try it out.



Happy Coding...!😊

I hope you have understood how to recognise emotions in images using Cognitive Services in Xamarin.Forms.

Thanks for reading. Please share comments and feedback.
Delpin Susai Raj Tuesday, 4 December 2018

Xamarin.Forms - Face Detection Using Cognitive Service

In this blog post, you will learn how to detect human faces using Cognitive Services in Xamarin.Forms.



Introduction


Microsoft Cognitive Services expose machine-learning capabilities such as vision, speech, and language understanding through simple REST APIs. In this post, you will use the Face API to detect human faces in a photo picked from the device and display the number of faces found along with the estimated gender and age.


Cognitive Services




Infuse your apps, websites, and bots with intelligent algorithms to see, hear, speak, understand and interpret your user needs through natural methods of communication. Transform your business with AI today.

Use AI to solve business problems
  • Vision
  • Speech
  • Knowledge
  • Search
  • Language
Face API
  1. Face API provides developers with access to advanced face algorithms. Microsoft Face algorithms enable face attribute detection and face recognition. 
  2. Face API is a cognitive service that provides algorithms for detecting, recognizing, and analyzing human faces in images. 
  3. Face API can detect human faces in an image and return the rectangle coordinates of their locations. Optionally, face detection can extract a series of face-related attributes such as pose, gender, age, head pose, facial hair, and glasses.
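Concretely, a detect call returns a JSON array with one entry per detected face; a minimal response with the attributes this post displays looks roughly like this (the `faceId`, coordinates, and values are illustrative):

```json
[
  {
    "faceId": "9d6b7a5e-1c2f-4a3b-8d4e-5f6a7b8c9d0e",
    "faceRectangle": { "top": 131, "left": 177, "width": 162, "height": 162 },
    "faceAttributes": {
      "gender": "female",
      "age": 27.0
    }
  }
]
```

The length of the array gives the total face count, and `faceRectangle` gives the pixel coordinates of each face within the image.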
Prerequisites
  • Visual Studio 2017 (Windows or Mac)
  • Face API Key
Setting up a Xamarin.Forms Project

Start by creating a new Xamarin.Forms project. You’ll learn more by going through the steps yourself.
Choose the Xamarin.Forms App Project type under Cross-platform/App in the New Project dialog.



Name your app, select “Use .NET Standard” for shared code, and target both Android and iOS.



You probably want your project and solution to use the same name as your app. Put it in your preferred folder for projects and click Create.


You now have a basic Xamarin.Forms app. Click the play button to try it out.


Get Face API Key

In this step, get a Face API key. Go to the following link.

https://azure.microsoft.com/en-in/services/cognitive-services/

Click "Try Cognitive Services for free".


Now, you can choose Face under Vision APIs. Afterward, click "Get API Key".



Read the terms, and select your country/region. Afterward, click "Next".



Now, log in using your preferred account.



Now, the API Key is activated. You can use it now.



The trial key is available only for 7 days. If you want a permanent key, refer to the following article.

Getting Started With Microsoft Azure Cognitive Services Face APIs


Setting up the User Interface

Go to MainPage.xaml and write the following code.

MainPage.xaml

<?xml version="1.0" encoding="utf-8"?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:local="clr-namespace:XamarinCognitive"
             x:Class="XamarinCognitive.MainPage">
    <StackLayout>
        <StackLayout>
            <StackLayout HorizontalOptions="Center" VerticalOptions="Start">
                <Image Margin="0,50,0,0" x:Name="imgBanner" Source="banner.png" />
                <Image Margin="0,0,0,10" x:Name="imgEmail" HeightRequest="100" Source="cognitiveservice.png" />
                <Label Margin="0,0,0,10" Text="Face Detection" FontAttributes="Bold" FontSize="Large" TextColor="Gray" HorizontalTextAlignment="Center" />
                <Image Margin="0,0,0,10" x:Name="imgSelected" HeightRequest="150" Source="defaultimage.png" />
                <Button x:Name="btnPick" Text="Pick" Clicked="btnPick_Clicked" />
                <StackLayout HorizontalOptions="CenterAndExpand" Margin="10,0,0,10">
                    <Label x:Name="lblTotalFace" />
                    <Label x:Name="lblGender" />
                    <Label x:Name="lblAge" />
                </StackLayout>
            </StackLayout>
        </StackLayout>
    </StackLayout>
</ContentPage>
Click the play button to try it out.



NuGet Packages

Now, add the following NuGet Packages.

  1. Xam.Plugin.Media
  2. Newtonsoft.Json

Add Xam.Plugin.Media NuGet

In this step, add Xam.Plugin.Media to your project. You can install Xam.Plugin.Media via NuGet, or you can browse the source code on GitHub.

Go to Solution Explorer and select your solution. Right-click and select "Manage NuGet Packages for Solution". Search for "Xam.Plugin.Media" and add the package. Remember to install it for each project (PCL, Android, iOS, and UWP).



Permissions

In this step, give the following required permissions to your app:

Permissions - for Android
  1. CAMERA
  2. READ_EXTERNAL_STORAGE
  3. WRITE_EXTERNAL_STORAGE
Permissions - for iOS
  1. NSCameraUsageDescription
  2. NSPhotoLibraryUsageDescription
  3. NSMicrophoneUsageDescription
  4. NSPhotoLibraryAddUsageDescription
Create a Model

In this step, create a model to deserialize the response.

ResponseModel.cs

using System;
using System.Collections.Generic;

namespace XamarinCognitive.Models
{
    public class ResponseModel
    {
        public string faceId { get; set; }
        public FaceRectangle faceRectangle { get; set; }
        public FaceAttributes faceAttributes { get; set; }
    }

    public class FaceRectangle
    {
        public int top { get; set; }
        public int left { get; set; }
        public int width { get; set; }
        public int height { get; set; }
    }

    public class HeadPose
    {
        public double pitch { get; set; }
        public double roll { get; set; }
        public double yaw { get; set; }
    }

    public class FacialHair
    {
        public double moustache { get; set; }
        public double beard { get; set; }
        public double sideburns { get; set; }
    }

    public class Emotion
    {
        public double anger { get; set; }
        public double contempt { get; set; }
        public double disgust { get; set; }
        public double fear { get; set; }
        public double happiness { get; set; }
        public double neutral { get; set; }
        public double sadness { get; set; }
        public double surprise { get; set; }
    }

    public class Blur
    {
        public string blurLevel { get; set; }
        public double value { get; set; }
    }

    public class Exposure
    {
        public string exposureLevel { get; set; }
        public double value { get; set; }
    }

    public class Noise
    {
        public string noiseLevel { get; set; }
        public double value { get; set; }
    }

    public class Makeup
    {
        public bool eyeMakeup { get; set; }
        public bool lipMakeup { get; set; }
    }

    public class Occlusion
    {
        public bool foreheadOccluded { get; set; }
        public bool eyeOccluded { get; set; }
        public bool mouthOccluded { get; set; }
    }

    public class HairColor
    {
        public string color { get; set; }
        public double confidence { get; set; }
    }

    public class Hair
    {
        public double bald { get; set; }
        public bool invisible { get; set; }
        public List<HairColor> hairColor { get; set; }
    }

    public class FaceAttributes
    {
        public double smile { get; set; }
        public HeadPose headPose { get; set; }
        public string gender { get; set; }
        public double age { get; set; }
        public FacialHair facialHair { get; set; }
        public string glasses { get; set; }
        public Emotion emotion { get; set; }
        public Blur blur { get; set; }
        public Exposure exposure { get; set; }
        public Noise noise { get; set; }
        public Makeup makeup { get; set; }
        public List<object> accessories { get; set; }
        public Occlusion occlusion { get; set; }
        public Hair hair { get; set; }
    }
}
Face Detection

In this step, write the following code for face detection.

MainPage.xaml.cs

using System;
using System.Collections.Generic;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using Newtonsoft.Json;
using Plugin.Media;
using Xamarin.Forms;
using XamarinCognitive.Models;

namespace XamarinCognitive
{
    public partial class MainPage : ContentPage
    {
        public string subscriptionKey = "7c85c822a**********4886209ccbb3fb";
        public string uriBase = "https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect";

        public MainPage()
        {
            InitializeComponent();
        }

        async void btnPick_Clicked(object sender, System.EventArgs e)
        {
            await CrossMedia.Current.Initialize();
            try
            {
                var file = await CrossMedia.Current.PickPhotoAsync(new Plugin.Media.Abstractions.PickMediaOptions
                {
                    PhotoSize = Plugin.Media.Abstractions.PhotoSize.Medium
                });
                if (file == null) return;
                imgSelected.Source = ImageSource.FromStream(() => file.GetStream());
                MakeAnalysisRequest(file.Path);
            }
            catch (Exception ex)
            {
                string test = ex.Message;
            }
        }

        public async void MakeAnalysisRequest(string imageFilePath)
        {
            HttpClient client = new HttpClient();
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
            string requestParameters = "returnFaceId=true&returnFaceLandmarks=false" +
                "&returnFaceAttributes=age,gender,headPose,smile,facialHair,glasses," +
                "emotion,hair,makeup,occlusion,accessories,blur,exposure,noise";
            string uri = uriBase + "?" + requestParameters;
            HttpResponseMessage response;
            byte[] byteData = GetImageAsByteArray(imageFilePath);
            using (ByteArrayContent content = new ByteArrayContent(byteData))
            {
                content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
                response = await client.PostAsync(uri, content);
                string contentString = await response.Content.ReadAsStringAsync();
                List<ResponseModel> faceDetails = JsonConvert.DeserializeObject<List<ResponseModel>>(contentString);
                if (faceDetails.Count != 0)
                {
                    lblTotalFace.Text = "Total Faces : " + faceDetails.Count;
                    lblGender.Text = "Gender : " + faceDetails[0].faceAttributes.gender;
                    lblAge.Text = "Age : " + faceDetails[0].faceAttributes.age;
                }
            }
        }

        public byte[] GetImageAsByteArray(string imageFilePath)
        {
            using (FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read))
            using (BinaryReader binaryReader = new BinaryReader(fileStream))
            {
                return binaryReader.ReadBytes((int)fileStream.Length);
            }
        }
    }
}
Click the play button to try it out.



Happy Coding...!😍


I hope you have understood how to detect human faces using Cognitive Services in Xamarin.Forms.

Thanks for reading. Please share comments and feedback.