“Alexa, what’s for lunch?” — an Alexa skill for lazy students of the FAU (pt. 1)

At my university, there are two dining halls that serve food around noon. Their daily menus can be viewed online. Many of my friends and I check the menu every day, so I decided to automate the process and save some precious time.

Voice-controlled applications consist of two parts:

  • the voice user interface, which defines the interaction dialog between Alexa and the user. This is done by defining so-called intents, which are invoked by a set of specified utterances.
  • a backend service, which contains the application logic. As developers, we need to provide this ourselves. It can do anything from reading out information about a topic to switching on a light bulb.

Setting up the dialogue interface (Voice UI)

The first step is to register as an Alexa developer in the Developer Console. There you create a new skill and build its interaction model, i.e. the intents together with the sample utterances that trigger them.

The intent getTodaysMenu with some sample utterances
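
In the console, the interaction model can also be edited directly as JSON. A minimal sketch for our skill might look like this (the invocation name and the sample utterances here are my own assumptions; adapt them to your skill):

    {
      "interactionModel": {
        "languageModel": {
          "invocationName": "lunch menu",
          "intents": [
            {
              "name": "getTodaysMenu",
              "slots": [],
              "samples": [
                "what is for lunch today",
                "what is on the menu today",
                "what does the mensa serve"
              ]
            }
          ]
        }
      }
    }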

Designing the Backend (using AWS Lambda)

The backend service is where the Alexa skill gets its intelligence from. When an intent is triggered by one of the defined voice patterns, a request is sent to the backend. This can be implemented in many ways. If you want full control, you can even use a Raspberry Pi to host an HTTP server that listens for incoming requests; you just have to specify the address of your server under the sidebar tab called “Endpoint”. The easiest way, however, is to use the AWS Lambda function generated by the template.
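
To make this concrete, here is a minimal sketch of such a Lambda backend in Python using the ASK SDK (ask-sdk-core). The fetch_menu helper and the menu text are hypothetical stand-ins; the real skill would fetch and parse the dining halls’ online menu:

    from ask_sdk_core.skill_builder import SkillBuilder
    from ask_sdk_core.dispatch_components import AbstractRequestHandler
    from ask_sdk_core.utils import is_request_type, is_intent_name

    def fetch_menu():
        # Hypothetical helper: the real skill would fetch and parse
        # the dining halls' online menu here.
        return "Today the Mensa serves pasta with tomato sauce."

    class LaunchRequestHandler(AbstractRequestHandler):
        """Handles the skill being opened by its invocation name."""
        def can_handle(self, handler_input):
            return is_request_type("LaunchRequest")(handler_input)

        def handle(self, handler_input):
            speech = "Welcome! Ask me what's for lunch today."
            return handler_input.response_builder.speak(speech).response

    class GetTodaysMenuHandler(AbstractRequestHandler):
        """Handles the getTodaysMenu intent defined in the voice UI."""
        def can_handle(self, handler_input):
            return is_intent_name("getTodaysMenu")(handler_input)

        def handle(self, handler_input):
            return handler_input.response_builder.speak(fetch_menu()).response

    sb = SkillBuilder()
    sb.add_request_handler(LaunchRequestHandler())
    sb.add_request_handler(GetTodaysMenuHandler())

    # AWS Lambda entry point
    lambda_handler = sb.lambda_handler()

Each intent from the voice UI gets its own handler class; the SkillBuilder dispatches every incoming request to the first handler whose can_handle method returns True.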

Testing the skill

After building the interaction model and deploying the Lambda backend, let’s test the skill to verify that it works. To do so, click on the “Test” tab in the Alexa console. Alternatively, if you have linked your Echo to the developer account, you can just go ahead and test from your device.

After activating the skill by its invocation name, the getTodaysMenu intent is triggered.
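
With the sketch backend from above, a test session might look like this (the exact wording depends on your invocation name and menu data):

    User:  Alexa, open lunch menu.
    Alexa: Welcome! Ask me what's for lunch today.
    User:  What's for lunch today?
    Alexa: Today the Mensa serves pasta with tomato sauce.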

Conclusion

Alexa skills consist of two components: the voice-based user interface and the backend. We design the user interface by defining intents with corresponding utterances. For every intent we define, we create a handler in the backend, which processes the request and returns a spoken response.

