Implementing a Custom Cloud AI Solution
To enhance our kitchen intelligence solution, we created a custom cloud AI service that a tablet application can consume, and that can be further developed for future projects.
Custom AI solutions have made their way into the Smartbridge repertoire. Recently, during the companywide hackathon, an Artificial Neural Network was created and hosted on a cloud service to enhance KitchIntel, our solution accelerator that brings modern intelligence and automation to the back-of-house in restaurants. This new development has laid the groundwork for applying AI to varying use cases across different industries.
How the Solution Began
For the last few months, we have been developing a Neural Network (NN) that can predict food consumption of specific items at the store level of a brand. The network was built in Python using Keras, a neural network library that sits on top of the machine learning framework TensorFlow. Using these libraries and some sample data, we were able to create a NN that can accurately predict future consumption!
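A minimal Keras sketch of this kind of regression network is shown below. The four input features, layer sizes, and random training data are illustrative stand-ins, not the production model:

```python
# Sketch of a consumption-forecasting network in Keras.
# Features (e.g. day of week, hour, recent sales, item id) are hypothetical.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy training data: 4 input features -> predicted units consumed
X = np.random.rand(256, 4).astype("float32")
y = np.random.rand(256, 1).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(4,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),  # single regression output: predicted consumption
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

prediction = model.predict(X[:1], verbose=0)
```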
With initial test runs looking promising, it was time to move the solution into KitchIntel. However, KitchIntel is a tablet application and doesn’t have the power to run sophisticated AI solutions. What we needed was a cloud-based solution that would leverage the power of a backend server to do the heavy lifting while freeing up the tablet to simply request information from the server.
Making the Solution a Reality
When the quarterly hackathon came around, it was the perfect time to pool all the fantastic resources here at Smartbridge together to make our AI dreams a reality. We assembled a team of our finest engineers and set out with a task: Put AI in the cloud. There were just two questions to answer before we could start hacking:
- What cloud service should we use?
- How should our API function?
Smartbridge has an extensive history of using Microsoft Azure as a cloud service, which made Azure the easy choice for our cloud platform. As for the API design, we decided we wanted to be able to ask our AI for predictions based on a given timestamp.
With the planning complete, we set to work creating a Python web service on Azure. We created a Linux server on Azure and deployed a simple Flask app to it using Visual Studio Code’s Azure plugin. After some confirmation testing, we were live on the cloud with a simple “hello world” web app. Next, we framed up an API that would accept a datetime as a parameter. At this point we had a live server running on Azure’s cloud that could be accessed via HTTPS requests. It was finally time to add the AI into our server.
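A sketch of what that Flask service might look like, with a hypothetical `/predict` endpoint that validates the datetime parameter. The route names and JSON shape are assumptions for illustration, not the exact production API:

```python
# Minimal Flask service sketch: a hello-world route plus a /predict
# endpoint that parses and validates a timestamp query parameter.
from datetime import datetime
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/")
def hello():
    return "hello world"

@app.route("/predict")
def predict():
    # e.g. GET /predict?timestamp=2019-11-13T12:00:00
    raw = request.args.get("timestamp")
    try:
        ts = datetime.fromisoformat(raw)
    except (TypeError, ValueError):
        return jsonify(error="timestamp must be ISO 8601"), 400
    # Placeholder response until the network is wired in
    return jsonify(timestamp=ts.isoformat(), prediction=None)

if __name__ == "__main__":
    app.run()
```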
Since the Neural Network was already created, adding it to the server was straightforward. We moved the files that contained the saved NN into our server and added the code to load the network into memory when the API was accessed. The final piece of the puzzle to get a cloud NN fully functioning was to connect to a database to supply input data to our network. Again, using Azure’s cloud service, we spun up a test database and imported data to work with. Back in our app, we added the logic to connect to an Azure database, formatted the data correctly for Neural Network consumption, and connected the data to the Network. With a cloud-hosted NN, accessing powerful AI was as simple as sending an HTTP request (seriously, you could do it in a web browser).
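The load-on-first-access pattern described above can be sketched as follows. Here `fetch_rows()` is a hypothetical stand-in for the Azure database query (in production this would use a real SQL driver), and the model path is illustrative:

```python
# Sketch: lazily load the saved network the first time the API is hit,
# cache it in memory, then feed it rows fetched for the requested timestamp.
import numpy as np
from tensorflow import keras

_MODEL = None  # cached after first load so later requests skip disk I/O

def get_model(path):
    global _MODEL
    if _MODEL is None:
        _MODEL = keras.models.load_model(path)
    return _MODEL

def fetch_rows(timestamp):
    # Hypothetical stand-in for the Azure database query keyed on the
    # requested timestamp; returns rows already ordered as NN features.
    return [[0.1, 0.5, 0.3, 0.7]]

def predict_for(timestamp, path="model.h5"):
    # Format the database rows for Neural Network consumption and predict
    X = np.asarray(fetch_rows(timestamp), dtype="float32")
    return get_model(path).predict(X, verbose=0)
```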
The beauty of an external, database-driven solution is that, while it gives us the option to update the current network on the fly, it can also load a completely different network using the same exact methods. In the future, when we want to add AI into a new project, instead of going through all the work covered in this blog, we can simply reuse the existing code and make some minor changes for the new project. AI implementations are now easier, faster, and most importantly, cheaper than ever before.
Extending the Solution
Having a cloud-based AI solution is great, but at this point it could only be used for this one specific use case. What if we created a more efficient version of the existing network and wanted to load it into the app? In the current solution, we would have to alter the network files and redeploy the app service to Azure. That means taking the server offline, potentially hindering the performance of KitchIntel.
We needed a simple way to swap out the networks without having to redeploy the entire web service. Due to the time constraints of a hackathon, we went with the simplest option: BLOB (Binary Large Object) storage in a database. Again, we spun up a database on Azure to hold our network data. The only change left was to update the web service to retrieve the NN data from the database instead of loading it in from a file.
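The BLOB approach can be sketched like this, with `sqlite3` standing in for the Azure database. The table schema and function names are illustrative; the key idea is that updating a database row swaps the live network without redeploying the service:

```python
# Sketch: serialize a Keras model to bytes, store it as a BLOB in a
# database row, and rehydrate it back into a live model on demand.
import os
import sqlite3
import tempfile
from tensorflow import keras

def _model_bytes(model):
    # Serialize the model to raw bytes via a temporary .h5 file
    with tempfile.NamedTemporaryFile(suffix=".h5", delete=False) as f:
        path = f.name
    model.save(path)
    with open(path, "rb") as f:
        data = f.read()
    os.remove(path)
    return data

def store_model(conn, name, model):
    # Upserting a row is all it takes to publish a new network
    conn.execute(
        "CREATE TABLE IF NOT EXISTS models (name TEXT PRIMARY KEY, data BLOB)")
    conn.execute(
        "INSERT OR REPLACE INTO models (name, data) VALUES (?, ?)",
        (name, _model_bytes(model)))
    conn.commit()

def load_model_from_db(conn, name):
    # Fetch the BLOB and rehydrate it into a live Keras model
    (data,) = conn.execute(
        "SELECT data FROM models WHERE name = ?", (name,)).fetchone()
    with tempfile.NamedTemporaryFile(suffix=".h5", delete=False) as f:
        f.write(data)
        path = f.name
    model = keras.models.load_model(path)
    os.remove(path)
    return model
```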
Looking for more on Restaurant Automation?
Explore more insights and expertise at smartbridge.com/restaurant
Keep Reading: Restaurant Group Uses RPA for Inventory Auditing
Originally published at https://smartbridge.com on November 13, 2019.