NASA Space Apps 2019 Hackathon
Using VUI to help people prepare for bushfires
“NASA Space Apps is an international hackathon where teams engage with NASA’s free and open data to address real-world problems on Earth and in space.”
Well, I signed up.
There were lots of challenges to choose from. The hardest part was that they were all so interesting: it was easy to see how society could benefit from us solving those issues. It took us some time to go through the categories, and every second someone popped up with a new idea. Among those ideas was Spot That Fire V2 – NASA was looking for an app that helped people with the wildfire and bushfire mitigation process.
We grouped into teams of up to 6 people to try to build a relevant product in 2 days. Our team was made up of 3 developers and 3 UX designers, with people from Brazil, Russia and Australia – countries ravaged by wildfires. Not long before, both the Amazon and the Siberian Taiga had been burning uncontrollably, with thousands of people affected. Australia, too, had already lost too many people to bushfires.
After some quick desk research, we found out that the majority of Australians didn't have a fire plan. Furthermore, although they did keep an eye on the news, they didn't know what to do when a fire actually hit. Our group aimed to turn this information into action.
Get your priorities right.
Requirements Gathering and Feature Prioritisation
In the guidelines there was a list of possible focus areas. They were all valid, and we could see how each of them could help users prepare and take action when needed. We spent quite some time talking, debating and deciding amongst ourselves. In the end, we had a list based on what could be done as quickly and as well as possible:
- Design rescue paths
- Build mashups: integrate geospatial data from various sources to provide innovative services to the public (e.g., local weather and local traffic), typically through their published APIs.
- Personalised support
- Notify related people
- Voice support
- Real-time fire status monitoring and reporting
When the fire hits, it’s all about surviving
Personas
Once we knew what to focus on, the devs went off to research APIs, JSON files, NASA satellites and how to get the actual data out of them. We designers also did research, but on fire survivors and anything we could get our hands on about fire plans.
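For context, one way to get at the satellite fire detections the devs were looking into is NASA's FIRMS (Fire Information for Resource Management System) service. The sketch below is only illustrative: the endpoint format, product name and CSV column names are assumptions about the FIRMS area API rather than the exact code our devs wrote, and the map key is a placeholder.

```python
# Rough sketch: pull recent active-fire detections near a home location
# from NASA FIRMS. Endpoint format, source name and column names are
# assumptions about the public FIRMS area API; MAP_KEY is a placeholder
# you would request from NASA.
import csv
import io

import requests

MAP_KEY = "YOUR_FIRMS_MAP_KEY"            # placeholder
SOURCE = "VIIRS_SNPP_NRT"                 # assumed near-real-time VIIRS product
HOME_LAT, HOME_LON = -33.87, 151.21       # Sydney, for illustration
RADIUS_DEG = 0.5                          # rough bounding-box half-width

# Assumed area endpoint: west,south,east,north bounding box, last 1 day.
bbox = (f"{HOME_LON - RADIUS_DEG},{HOME_LAT - RADIUS_DEG},"
        f"{HOME_LON + RADIUS_DEG},{HOME_LAT + RADIUS_DEG}")
url = f"https://firms.modaps.eosdis.nasa.gov/api/area/csv/{MAP_KEY}/{SOURCE}/{bbox}/1"

resp = requests.get(url, timeout=30)
resp.raise_for_status()

detections = list(csv.DictReader(io.StringIO(resp.text)))
print(f"{len(detections)} fire detections in the last 24h near home")
for d in detections[:5]:
    # Column names assumed from the FIRMS CSV output.
    print(d.get("latitude"), d.get("longitude"), d.get("acq_date"), d.get("confidence"))
```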
On the design side, we found article upon article about the experiences of fire survivors. People described how agonising it was to wait for the fire to cease, only to return to the house they had lived in for 50 years and find it completely gone. Not only that, a bushfire had claimed victims again in the past few weeks.
From the interviews and the fire plans published by Australian states, we could see that people fell into three groups:
- People who had a fire plan and were prepared to leave early
- People who did not have a fire plan
- People who wanted to stay and defend their house
In this project we focused on the first two.
The majority of bushfire-prone land is on the outskirts of major cities such as Sydney and Brisbane. Most bushland suburbs have the bulk of their population in the 50-65 age range. People in this age group are considered late adopters of technology compared to their younger compatriots: although they might own smartphones, they tend to shy away from anything other than phone calls and, sometimes, text messages.
The challenge here was: how do you design for a less technologically skilled audience?
At this point, we started thinking of VUI as a strong candidate, because it is arguably a much more natural interaction pattern for users who have not grown up with tablets, smartphones and other touch-sensitive devices.
A study conducted in 2017 suggested that “Voice User Interfaces, VUIs, may hold potential for increasing usability for seniors. Many voice systems are efficient, intuitive and do not require the fine motor skills that older users may find challenging.” It made sense for us to consider that growing older affects things such as movement and memory, making VUI a compelling alternative to traditional interactions.
After finding that first study, we felt we were on the right path. A bit more research revealed that VUIs present a vital advantage for senior citizens: they are quite intuitive and rely on speech interactions that are already known and practised every day. They can also be more efficient – up to three times faster at completing a task than input from common devices such as keyboards.
Well, here we go.
Market research
We looked into actual fire plans. Although they varied slightly from source to source, their primary goal was to empower users to act in case of a bushfire.
All of the fire plan templates we analysed were based on risk management levels. There were essentially two of them:
- Alert levels: you need to keep track of the current alert level so you know what you should do.
- Fire danger ratings: the higher the fire danger rating, the more dangerous a fire is likely to be. This is when the user should act.
Another thing the fire plans had in common was the number of open questions. Guidelines around what a good answer would look like, and around when to act or to leave, were missing or unclear. Relying on the user to find these answers would certainly teach them something new, but it would not necessarily be efficient or accurate enough. After analysing four fire plan templates, we observed the following pattern:
Having central words around the questions made them simpler to remember. However, the user would need to recall the answers to these questions while panicking (or something close to it) around the house. There was room for improvement here.
Sculpting Davi
As a Product/UX designer, I was beside myself with excitement at the rise of voice-driven experience design, and I was thrilled with the idea of designing an Alexa skill. Voice user interfaces are helping to improve all kinds of user experiences, and some believe that voice will power 50% of all searches by 2020.
Whether we're talking about VUIs (Voice User Interfaces) for mobile apps or for smart home speakers, voice interactions are becoming more common in today's technology, especially as screen fatigue becomes a concern. For me it was a challenge, because I thought that designing for voice meant designing without an interface. With nothing to click on or type into, I'd be going into the unknown. Later on, though, the realisation dawned on me: an interface is simply the means by which the user and a computer interact. The interface was there all along; it was just different.
I was a bit skeptical at first, because I'd been seeing a trend of new technologies being pushed as solutions just because they're cool. For instance, while it is nice to see VR and AR in use, sometimes I can't help but think: were they really necessary here?
So before we settled on how to proceed with the project, we asked ourselves the following questions:
- Is the VUI really necessary here?
- Will it be better than a traditional screen-based interface?
We also had to validate the need for a voice interface. An Alexa skill is not a “faceless” version of a previously developed web or mobile app: VUI has a purpose of its own, and it fit well within our goals for this app.
We then considered the intents, the utterances and the slots involved in designing an Alexa skill conversation.
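To make that concrete, here is a rough sketch of what a slice of the interaction model could look like for a skill like ours. The invocation name, intent names, sample utterances and slot types are illustrative assumptions, not the model our devs actually shipped; in practice this lives as JSON in the Alexa developer console.

```python
# Illustrative only: a slice of an Alexa interaction model for a fire-plan
# skill, written out as a Python dict (in the developer console this is JSON).
# Invocation name, intents, utterances and slot types are assumptions.
interaction_model = {
    "interactionModel": {
        "languageModel": {
            "invocationName": "fire plan",  # hypothetical invocation name
            "intents": [
                {
                    # Set-up: capture where the user will go when they leave.
                    "name": "SetSafePlaceIntent",
                    "slots": [{"name": "SafePlace", "type": "AMAZON.SearchQuery"}],
                    "samples": [
                        "my safe place is {SafePlace}",
                        "we will evacuate to {SafePlace}",
                    ],
                },
                {
                    # Activation: the user triggers the escape plan.
                    "name": "ActivatePlanIntent",
                    "slots": [],
                    "samples": [
                        "activate my fire plan",
                        "start my escape plan",
                        "the fire is coming",
                    ],
                },
            ],
        }
    }
}
```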
What would we like Alexa to assist us with?
Conversation Set Up
Alexa is the AI assistant for voice-enabled Amazon devices like the Echo smart speaker and Kindle Fire tablet — Amazon is currently leading the way with voice technology (in terms of sales).
On the Alexa store, some of the trendiest apps (called “skills”) are focused on entertainment, translation, and news, although users can also perform actions like request a ride via the Uber skill, play some music via the Spotify skill, or even order a pizza via the Domino’s skill.
The existing activation plans relied entirely on the person filling them in – both to fill them in correctly and to remember what to do when the time came. This time, we had a rather clever tool that could store the information and call on other applications to help you on your journey.
After determining the tasks we’d like Alexa to help us with, we needed to determine the essential information flow to complete these tasks.
Also, the future is not just us talking to (or giving orders to) machines, but machines being able to talk to us. Based on this, we designed both reactive and proactive behaviours. In the reactive one, Alexa would reply when prompted. In the proactive one, Alexa would let you know when your fire risk reached Medium or High.
This way, you would have enough time to act and not be caught off guard by the fire.
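A toy sketch of that proactive check, using the simplified Low / Medium / High scale from our flow – how the rating is fetched and how the message actually reaches the user are placeholders here:

```python
# Toy sketch of the proactive behaviour: speak up once the local fire risk
# reaches Medium or High. Fetching the rating and delivering the message are
# placeholders (print stands in for a real notification).
RISK_ORDER = ["Low", "Medium", "High"]

def should_notify(current_risk: str, threshold: str = "Medium") -> bool:
    """True once the risk level reaches the notification threshold."""
    return RISK_ORDER.index(current_risk) >= RISK_ORDER.index(threshold)

def proactive_message(current_risk: str) -> str:
    if current_risk == "High":
        return ("Fire danger in your area is now High. "
                "It is time to activate your fire escape plan.")
    return ("Fire danger in your area has risen to Medium. "
            "Review your fire plan and be ready to leave early.")

latest_rating = "Medium"                     # would come from the fire-data feed
if should_notify(latest_rating):
    print(proactive_message(latest_rating))  # placeholder for the real alert
```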
So, based on our list of feature priorities and on the existing fire plans, we came up with a shorter version that would rely on apps like Google Maps to get you where you need to go and on NASA satellite data to tell you when to leave.
Our goal here was to make the flow feel as genuine as possible. For every sentence that came out of Alexa, we looked for the most natural way that piece of information could be exchanged.
Once the user gave us this information, it would be stored in the database and then repeated back to the user when it was time to activate the fire plan. Although set-up and activation used the same questions, the order in which they were played back to the user was reversed: we ordered them by importance during set-up, and by what was easiest to do during activation.
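Roughly, the asymmetry looks like this. The questions and the in-memory storage below are stand-ins for the skill's real questions and database:

```python
# Sketch of the set-up / activation asymmetry: answers are captured in
# set-up order (by importance) and read back in reverse (by what is easiest
# to act on first). A dict stands in for the skill's database.
SETUP_QUESTIONS = [
    ("safe_place", "Where is your safe place?"),
    ("contacts", "Who should I notify when you activate the plan?"),
    ("items", "What will you take with you?"),
]

answers = {}

def run_setup(get_answer):
    """Ask questions in order of importance and store the answers."""
    for key, question in SETUP_QUESTIONS:
        answers[key] = get_answer(question)

def run_activation():
    """Play the stored answers back in reverse order."""
    for key, question in reversed(SETUP_QUESTIONS):
        yield f"{question} You told me: {answers[key]}"
```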
When activated, the Fire Escape Plan would inform your contact list that you had activated the plan, tell you what to take, and then send you the best route to your safe place via Google Maps. This way, we could account for any roadblocks and offer the user the best and quickest way to safety.
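Rather than computing routes ourselves, the idea was to hand the routing to Google Maps. A minimal sketch, assuming Google's documented Maps URLs format for directions links (the addresses are made up):

```python
# Sketch of the "best route to your safe place" step: hand the user a
# Google Maps directions link and let Maps handle live traffic and roadblocks.
from urllib.parse import urlencode

def route_to_safety(origin: str, safe_place: str) -> str:
    params = {
        "api": "1",
        "origin": origin,
        "destination": safe_place,
        "travelmode": "driving",
    }
    return "https://www.google.com/maps/dir/?" + urlencode(params)

# Example (made-up addresses):
print(route_to_safety("12 Example St, Springwood NSW", "Katoomba Evacuation Centre"))
```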
Something else we had to consider in this information architecture was human error. It only occurred to us a few iterations later, when we asked ourselves “how can we edit this part or that one?” The IA for a VUI runs in one direction only: with a voice interface, users can't simply hit a “back” button if they missed a step or want to edit some information. We had to find a way to loop the conversation back to leave room for human error.
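That loop can be sketched as a tiny state machine, where phrases like “go back” step the conversation to the previous question. The step names and trigger phrases below are illustrative, standing in for the dialog management inside the skill:

```python
# Sketch of the "loop back" behaviour: since there is no back button in a
# voice flow, the conversation itself offers a way to re-answer the last step.
STEPS = ["safe_place", "contacts", "items", "confirm"]

def next_step(current: str, user_said: str) -> str:
    idx = STEPS.index(current)
    if user_said in ("go back", "change that") and idx > 0:
        return STEPS[idx - 1]          # loop back to re-collect the last answer
    if idx < len(STEPS) - 1:
        return STEPS[idx + 1]          # otherwise move forward
    return current                     # stay on the final confirmation step
```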
Go over the notes before you start playing
Testing and model interaction
We aimed to build a clear system flow with intents and sentences. Since we didn't have the visual feedback of a screen, nothing would pop up to let the user know whether the action they were trying to perform had succeeded. And even though it was a conversation, we didn't have the visual cues of the other person's body language. We had to remember to design the confirmation of the previous message into Alexa's next turn.
It was almost like going over the notes of a song before playing it. You reiterate, think and say it out loud until you get as far away from a recorded customer-service phone call as you can.
It was around 11pm on Saturday, after we made the final adjustments, that we were ready to build our prototype.
Prototyping
We had different deadlines on the last day of the hackathon: the presentation was about 10 hours before the final deadline to hand in all the deliverables. Based on that, we decided to let the devs improve the Alexa skill as much as they could while we worked on the prototype.
At the beginning of this year I attended an Adobe XD workshop and it was mentioned that one of its upcoming features was VUI prototyping. Although my teammates and I had played around with XD before, none of us had used the voice features.
Translating the user journey map into artboards was smooth, as we were all Adobe users. Learning how to get the prototype to perform all the tasks we wanted, however, was a long, exciting and sleepless journey.
Even though we had tested the prototype by reading it out loud to ourselves, nothing prepared us for this stage. Alexa has its own voice tone and style of delivering a message, which we had not accounted for. Things such as pauses and emphasis on certain words were necessary for our medium and high risk levels – something we had done naturally when reading to our teammates.
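Since Alexa responses can carry SSML, that pacing has to be written in explicitly. A sketch of the kind of markup involved – the wording is illustrative, not the exact copy from our skill:

```python
# Sketch of the pauses and emphasis needed for the high risk prompt.
# The <break> and <emphasis> tags are standard Alexa SSML; the copy is made up.
HIGH_RISK_SSML = (
    "<speak>"
    "Fire danger in your area is now <emphasis level='strong'>high</emphasis>. "
    "<break time='500ms'/>"
    "Your fire escape plan has been activated. "
    "<break time='300ms'/>"
    "I have let your contacts know, and I am sending the route to your safe place to your phone."
    "</speak>"
)
```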
Also, we came up with a cheat sheet on how to set up the prototype so that Alexa would always have the last word.
Once we arrived at this magical formula, the prototype came together much more easily and we were able to set up the other tasks more quickly as well.
Where we left it
- Although the Alexa skill is still under development, the devs on our team did awesome work on it. We plan on getting the band together again and finalising the skill so it can go to market.
- A mobile app would allow our users to keep track of things, better visualise maps, and record information as well. We want to collaborate with other designers and developers to create a seamless experience.
The Fire Wombats won “Alexa Winner: Best Use of Voice” at the NASA Space Apps 2019 Hackathon!
Client: NASA Space Apps 2019 Hackathon
Location: Sydney
Skills: Usability research, personas, user journey, prototyping, testing and iterating.
Team: Anthony Doueihi
Date: January 23, 2023