Human-computer interactions in machine learning applications Part II

Part 1 of Human-computer interactions in machine-learning applications talked about how we might structure model outputs; this post discusses the reverse: how we might process inputs from the user. Together, inputs and outputs (as shown in the chart below) make human-computer interaction a two-way street, not a one-way one. It’s important to consider both sides of the dialogue.

Humans and machines act not in isolation, but in concert

A big reason why we might care about how a user responds to our model’s output is that we care about how our model is performing. If our predictions successfully anticipate a user’s intent, then we know we have built a good model. However, user feedback is useful not only for measuring our model’s performance. Feedback is also crucial for improving our model.

User feedback improves accuracy of models

For example, object detection models in the Google Photos app on the Pixel not only identify elements within a photo, but also classify the event or activity that may be taking place — a summer barbecue, wedding, birthday party or family gathering.

Google Photos curates photos by grouping them. Image source: <a href='https://www.freepik.com/vectors/woman'>Woman vector created by freepik — www.freepik.com</a>

Crucially, these groupings appear in the Assistant tab, where you, as the user, are able to approve or dismiss them.

Not every grouping the Assistant comes up with will be a hit. Sometimes you may not want to create an album out of a group of photos. At other times, these albums act as an automatic curator that saves you time. Regardless, by taking into account the groupings that you approve or dismiss, the Assistant is able to improve its model of user preferences. It becomes a photo curator that more accurately reflects your tastes in photo organization.
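To make the feedback loop concrete, here is a minimal sketch of how approve/dismiss signals might be folded into a running preference score per grouping type. Everything here — the class, its names, and the smoothing scheme — is a hypothetical illustration, not Google Photos’ actual algorithm:

```python
from collections import defaultdict

class GroupingPreferences:
    """Toy preference model: tracks approve/dismiss feedback per
    grouping type (e.g. 'barbecue', 'wedding') and keeps suggesting
    only the types the user tends to approve.

    Hypothetical sketch -- not Google Photos' real implementation.
    """

    def __init__(self, threshold=0.5):
        self.approvals = defaultdict(int)
        self.dismissals = defaultdict(int)
        self.threshold = threshold

    def record_feedback(self, grouping_type, approved):
        # Each approve or dismiss action is one more labeled example.
        if approved:
            self.approvals[grouping_type] += 1
        else:
            self.dismissals[grouping_type] += 1

    def approval_rate(self, grouping_type):
        # Laplace smoothing: unseen types start at 0.5 ("unknown"),
        # so one dismissal doesn't permanently bury a grouping type.
        a = self.approvals[grouping_type] + 1
        d = self.dismissals[grouping_type] + 1
        return a / (a + d)

    def should_suggest(self, grouping_type):
        return self.approval_rate(grouping_type) >= self.threshold


prefs = GroupingPreferences()
prefs.record_feedback("wedding", approved=True)
prefs.record_feedback("barbecue", approved=False)
prefs.record_feedback("barbecue", approved=False)

print(prefs.should_suggest("wedding"))   # True: approvals dominate
print(prefs.should_suggest("barbecue"))  # False: repeatedly dismissed
```

The point of the sketch is not the arithmetic but the shape of the loop: each approve/dismiss is a free training label, and the suggestion surface adapts to it.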

Another example of using feedback to calibrate a model comes from Apple Photos. To fine-tune its facial recognition capabilities, Apple Photos asks you to give the model more examples of your face from different angles and in different lighting and settings, so its facial recognition algorithm becomes more accurate at recognizing you.

Allowing users to give feedback puts control back in their hands

Improved model accuracy is a direct and tangible benefit of user feedback. However, feedback mechanisms are also powerful because of the implicit effects they have on users. Specifically, these mechanisms create a positive experience in which users feel in control of their devices and their preferences.

Memories, or collections of shots from a specific time inside Google Photos, can trigger powerful feelings of nostalgia. This feature moves the app from being a utility (storing photos) to being a tool that is able to elicit emotions.

Memories can make you relive the past. <a href='https://www.freepik.com/vectors/people'>People vector created by freepik — www.freepik.com</a>

Nostalgia, happiness, grief, embarrassment, joy, wistfulness — these are all emotions that arise in us when we browse photos that evoke phases and events from the past.

Not all these emotions are pleasant; user control is needed to hide memories that may be unwelcome. Hence, Memories gives you the option to hide selected dates and certain people. By taking your preferences as a form of feedback/filter, the Photos app gives you a sense of agency over the algorithm that curates collections of memories. This gift of agency is the first step of many that builds trust between a user and an application. It is this trust that leads to long-term user satisfaction and retention.
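Mechanically, the hide preference is just a filter applied to candidate memories before they are surfaced. The sketch below illustrates that idea with hypothetical data structures; it is not the Photos app’s actual implementation:

```python
from datetime import date

def filter_memories(memories, hidden_dates=(), hidden_people=()):
    """Drop any candidate memory that falls on a hidden date or
    features a hidden person. Hypothetical sketch of the 'hide'
    preference acting as a filter over suggestions."""
    hidden_dates = set(hidden_dates)
    hidden_people = set(hidden_people)
    return [
        m for m in memories
        if m["date"] not in hidden_dates
        and hidden_people.isdisjoint(m["people"])  # no hidden person appears
    ]

memories = [
    {"title": "Beach trip",    "date": date(2020, 7, 4), "people": {"alice"}},
    {"title": "Old workplace", "date": date(2019, 3, 1), "people": {"bob"}},
]

kept = filter_memories(memories, hidden_people={"bob"})
print([m["title"] for m in kept])  # ['Beach trip']
```

Simple as it is, a filter like this is what converts a stated preference into visible restraint by the algorithm — the user asks, and the suggestions change.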

Managing user expectations through feedback

The expectations that people bring to smart applications are varied. Some people are more suspicious, approaching algorithms with a sense of “algorithm aversion”. They are more likely to trust a human forecaster compared to a statistical model. But there are just as many cases of people being too trusting. Ideally, an application would be able to calibrate a user’s sense of when it works and when it fails.

Stitchfix is a clothing subscription company that uses algorithms to help human stylists select clothing ensembles for its clients. Clients are sent a box of clothes based on their stated preferences and/or special requests. They can keep what they like and send the rest back. The cyclical process of deliveries and returns serves a dual purpose. Not only does Stitchfix learn about its customers’ preferences, the customers themselves also learn what to expect from the clothing recommendation service. Over time, they can share Pinterest boards, give their stylist written feedback, and make special requests for an occasion (for example, an upcoming job interview). The feedback process actively recruits customers into personalizing the service for themselves; this participation plays a role in educating customers on what to expect from the styling algorithms and the humans in the loop.

The human-machine interaction interface is crucial for turning technology, whether machine learning applications or traditional software, into tools that encourage healthy feedback loops and that connect well with users. It was Imran Chaudhri, a designer who worked on the Apple Watch and iPad, who said,

digital touch was originally called E.T. for electronic touch. i called it that for its potential as a new form of emotional connection.

Ultimately, what good is technology if it does not connect meaningfully?

I work with data in the little red dot