Face It - Week 6 Update

Face It is a mobile application that detects a person's facial structure, gathers information about the person's lifestyle and current trends, and uses that data to recommend a hair or beard style.

For this Early Innovation Project, our goal is to have the user scan their face to determine its shape, then combine that face shape with other personal information, such as details about the person's hair and lifestyle, to produce personalized hair and beard style recommendations.

During the first two weeks, we identified the various stages of development for the project: creating a user interface design, a preference selection algorithm, a facial detection algorithm, and a trained convolutional neural network. If you are interested, you can find our first two-week update here: https://software.intel.com/en-us/blogs/2017/06/06/face-it-week-2-update

During the third and fourth weeks of this Early Innovation Project, we focused on building the actual product and putting together a simple demonstration. Much of our back-end work had already been taken care of, so we aimed to focus on the front-end parts of our application. If you are interested, you can find our third and fourth week update here: https://software.intel.com/en-us/blogs/2017/06/19/face-it-week-4-update

For the fifth and sixth weeks of our project, we focused on integrating our convolutional neural network (CNN) model with our user interface. We started by improving our model and making it more accurate. We then fixed our algorithm so that it activates the front-facing camera of a smartphone rather than the rear-facing camera. Once that was done, we began incorporating all of this into our user interface so that we could start testing it on a smartphone.

To make our model more accurate, we trained it with a much larger dataset this time: we doubled the number of training images and increased the number of training steps the model runs. We did see some improvements over our previous accuracies of about 55% and 86%.

We are still working on gathering a cleaner, simpler dataset and on incorporating our facial detection algorithm into the program, so we hope to see even more accurate and consistent results once that is done.

To activate the front-facing camera of a smartphone, we had to tweak our algorithm so that when it detects the various cameras within a device, it always connects to the front-facing one. Here are before and after images of the camera view:
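
As for the tweak itself, conceptually it looks something like the sketch below, which uses the classic android.hardware.Camera API. This is a minimal illustration rather than our exact code, and the class and method names (CameraSelector, openFrontFacingCamera) are ours:

import android.hardware.Camera;

public class CameraSelector {
    // Iterate over the device's cameras and connect to the first
    // front-facing one; returns null if none is found.
    public static Camera openFrontFacingCamera() {
        Camera.CameraInfo info = new Camera.CameraInfo();
        for (int i = 0; i < Camera.getNumberOfCameras(); i++) {
            Camera.getCameraInfo(i, info);
            if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
                return Camera.open(i);
            }
        }
        return null;
    }
}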

We started integrating our user interface by first creating a proper launch logo to be displayed on the user's app selection page. Here is how the launch logo looks on a smartphone screen:

After this was taken care of, we started to integrate our actual user interface and ran into a few problems along the way. The main problem was coding our preferences screen and adding all of its options properly. We used a spinner button for every preference, but overlaying a .png image on each button proved difficult, so we instead placed the title of each preference to the left of its spinner button and options.

We also edited the camera screen to report your top face shape match and added a button that takes you directly to the preferences page, along with a button on the preferences page that takes you to the hairstyle recommendations page. With all the pages connected, we had a smoothly flowing user interface design tied into the code of our CNN model. Below are screenshots of each page of the user interface, taken from a smartphone.
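
For illustration, here is a minimal sketch of how one such preference spinner might be populated; the resource names (activity_preferences, hair_length_spinner) and the option values are placeholders, not our exact resources:

import android.app.Activity;
import android.os.Bundle;
import android.widget.ArrayAdapter;
import android.widget.Spinner;

public class PreferencesActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_preferences);

        // Each preference gets its own spinner; its title sits to the
        // left in the layout, since overlaying images proved difficult.
        Spinner hairLength = (Spinner) findViewById(R.id.hair_length_spinner);
        ArrayAdapter<String> adapter = new ArrayAdapter<>(
                this,
                android.R.layout.simple_spinner_item,
                new String[] {"Short", "Medium", "Long"});
        adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
        hairLength.setAdapter(adapter);
    }
}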

One of the last things we have to do to have a fully functioning prototype is to integrate the code that gathers all of the user's preferences and outputs hairstyle recommendations. These outputs will appear on the last page of our application and will list various personalized hairstyles for the user.

To accomplish this task, we have already written the code that needs to be integrated. We used Java to write a set of nested for-loops that compare the hairstyles compatible with each selected preference. We will use this algorithm until we can create a more efficient one. Here is a screenshot of the code:
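
In spirit, the logic of that code resembles the following sketch; the class and method names here are ours for illustration, not the exact code in the screenshot:

import java.util.ArrayList;
import java.util.List;

public class RecommendationEngine {
    // Start from the hairstyles that suit the detected face shape, then
    // filter by the hairstyles compatible with each selected preference.
    public static List<String> recommend(List<String> faceShapeStyles,
                                         List<List<String>> preferenceStyles) {
        List<String> results = new ArrayList<>(faceShapeStyles);
        for (List<String> compatible : preferenceStyles) {
            List<String> filtered = new ArrayList<>();
            for (String style : results) {
                for (String candidate : compatible) {
                    if (style.equals(candidate)) {
                        filtered.add(style);
                        break;
                    }
                }
            }
            results = filtered; // keep only styles found in every list so far
        }
        return results; // hairstyles common to every selected preference
    }
}

This is effectively a repeated list intersection; the nested loops make it quadratic per preference, which is fine for our small lists but is why we plan to replace it with something more efficient.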

Testing this code gives us an array of the hairstyles common to every preference the user has selected. An example of this is given below:
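
To make the idea concrete, here is a hypothetical run of the sketch above; the hairstyle names are placeholders, not entries from our real dataset:

import java.util.Arrays;
import java.util.List;

public class RecommendDemo {
    public static void main(String[] args) {
        // Styles suited to the detected face shape (hypothetical).
        List<String> faceShape = Arrays.asList("Pompadour", "Side Part", "Quiff");
        // Styles compatible with two selected preferences (hypothetical).
        List<List<String>> prefs = Arrays.asList(
                Arrays.asList("Pompadour", "Quiff", "Crew Cut"),
                Arrays.asList("Quiff", "Pompadour", "Slick Back"));

        // Prints [Pompadour, Quiff] -- the styles common to all three lists.
        System.out.println(RecommendationEngine.recommend(faceShape, prefs));
    }
}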

These hairstyles were gathered through copious research into articles about which hairstyles suit certain face shapes. We used all of this information to build a large dataset for our application. Here is a screenshot of some of the data we have gathered:
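
In code, such data might be organized along the lines of the sketch below; the face shape keys are standard categories, but the style lists shown are illustrative placeholders rather than our full dataset:

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class HairstyleData {
    // Maps each face shape to the hairstyles our research recommends for it.
    public static final Map<String, List<String>> STYLES_BY_FACE_SHAPE = new HashMap<>();
    static {
        STYLES_BY_FACE_SHAPE.put("Oval",   Arrays.asList("Quiff", "Side Part", "Buzz Cut"));
        STYLES_BY_FACE_SHAPE.put("Round",  Arrays.asList("Pompadour", "Faux Hawk"));
        STYLES_BY_FACE_SHAPE.put("Square", Arrays.asList("Crew Cut", "Slick Back"));
    }
}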

After integrating this into our application, we will finally have a functioning prototype. There are still various things we would like to add and improve, though: integrating our facial detection algorithm, trying other methods to increase the accuracy of our CNN, and making the application more automatic so that the detected face shape is saved and input into the preference selection page on its own. I am very happy with how this application is coming along so far, and I am excited to keep working on it and improving it.

Read the previous Week 4 Update or continue with the Week 8 Technical Article: Face It - The Artificially Intelligent Hairstylist
