Gamification: Are we really enhancing or just manipulating?

Yes, we, as a society, are talking about ethics, but, like many things, we need to be talking about it more. This rings especially true in this era of mass data collection and the increasing power of machine learning to process, draw conclusions from, and act on this raw data. And, as designers, like it or not, we are on the front line of these discussions. We make design decisions every day about how we're allowed to convince, lead, or, in some cases, manipulate humans into acting a certain way.

It’s “not that product designers don’t care about the ethical ramifications of their work — far from it. It’s that, too often, they assume that such considerations fall outside of their job description.” (Sgarro, 2018). We care; it's just easier to pass the responsibility to the futurists, technologists, scientists, or others who are seen as directly pushing these boundaries by developing the technology and visions for our future.

We need to take seriously that all of these 'tiny' decisions, the quick choices we make on a daily basis, pile up and create a larger philosophy, and we cannot be blind to our influence over it. Small design decisions impact big design philosophies.

With great power comes great responsibility.

We need to step up and take — yes, even more — responsibility for our role in being ethical in the designs we create. Part of this is defining the boundaries for using — and developing — technology, but it's also in the day-to-day design decisions we make when creating our users' experiences.

It's simpler to shrug and use 'proven' or 'mainstream' functionality under the guise of bettering the user's experience. It's easier to justify it with the infamous "everyone is doing it" argument.

But ask yourself why these mechanics are being used. Are they really there to help your users achieve their goals, or to achieve yours?

It is not the technology itself that's the problem; possibility is never the issue. It is the way we allow ourselves to use and apply this technology to ourselves, our environment, and each other that is the true risk.

Technology has no impact, and remains inaccessible, until we build the interfaces and experiences that let people reach out and interact with it.

Technology has changed the ways in which we can design immersive experiences.

And, don't get me wrong, this is an incredible thing. These advancements have allowed us, as designers, to make ‘impossible’ experiences possible for our users. They have supercharged our ability to do things like create individually tailored experiences, build imaginative layers on top of reality, connect people on opposite sides of the world, and increase access to knowledge, content, and services.

“There are only two ways to influence human behavior: you can manipulate it or you can inspire it.” — Simon Sinek

However, often accidentally, but sometimes on purpose, we end up using these powers to prioritize our own goals and manipulate, instead of inspire, our users’ behaviour, based on what we think is best, without much thought for the wellbeing or desires of our end users.

We can't get lost in “how do we make people do X, Y, and Z”. We should instead ask “how do we create a meaningful experience” or “how do we tell our story” in a way that “people understand the impact of doing X, Y, and Z”.

Let's take designing with urgency as an example. Many hotel booking sites use urgency and scarcity to add pressure to their UX. Think of phrases like “3 other people looking at this hotel” or “1 room left”.

Now, is this helpful feedback, informing the user that they are at risk of losing this option, or is this manipulative behaviour, scaring the user into believing that if they don’t act now they will miss out? It depends on a few things (the sketch after this list tries to make the distinction concrete):

  1. The presentation of the message. How is it presented? Is the implied outcome (threat) as immediate or severe as communicated?
  2. The accuracy of the message being communicated. How true is it? Is it based on real-time data, or on estimations?
  3. The intention of the designer or company behind the feature. Why is it presented? Who is this feature really for? Who really benefits?
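
To ground those three questions, here is a minimal sketch, in TypeScript, of an urgency banner that only appears when it is backed by fresh, real-time inventory data, and stays silent otherwise. Everything in it is hypothetical (RoomAvailability, scarcityMessage, the thresholds); it illustrates the principle, not any real booking platform's implementation.

```typescript
// A minimal "honest urgency" sketch. All names and thresholds here are
// hypothetical illustrations, not taken from a real booking system.

interface RoomAvailability {
  roomsLeft: number; // live inventory count reported by the backend
  fetchedAt: Date;   // when that count was last refreshed
}

const MAX_STALENESS_MS = 60_000; // never present stale data as a live fact
const SCARCITY_THRESHOLD = 3;    // only warn when scarcity is genuinely low

function scarcityMessage(availability: RoomAvailability): string | null {
  const ageMs = Date.now() - availability.fetchedAt.getTime();

  // Accuracy: if the data is stale or estimated, say nothing at all
  // rather than presenting a guess as a real-time fact.
  if (ageMs > MAX_STALENESS_MS) return null;

  // Presentation: state the fact plainly, with no countdown timers or
  // implied threat beyond what the data itself supports.
  if (availability.roomsLeft > 0 && availability.roomsLeft <= SCARCITY_THRESHOLD) {
    return `${availability.roomsLeft} room(s) left`;
  }

  // Intention: when there is no genuine risk to inform the user about,
  // the honest banner is no banner.
  return null;
}
```

The point is the gate, not the code: the message is derived from data the user could, in principle, verify, and the default behaviour is silence rather than pressure.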

We’re given an ever-expanding toolbox, allowing us to design experiences that are more immersive, personal, and realistic than ever before. Users can't look behind the screen to assess whether system feedback is "real" or not; they have to trust that we aren't providing false information. This trust is a vulnerability we should be sensitive to as designers.

We need to start designing by our ethics and values.

In order to stay true to our real mission, designing for the user, we cannot step away from these ethical discussions, even if it means our design processes become harder or more time-consuming as we forgo many of the shortcuts to engagement we have come to depend on.

How can we start to be more mindful of our impact?

Stop assuming you always have your user's best interest in mind.

We like to act as though what we find valuable automatically aligns with what our users find valuable. Always ask why we're designing the experience this way. This actually helps you too: too many designers get stuck designing for KPIs rather than objectives. Why design to get users to log in daily purely for the metric if it isn't necessary to provide value? When you design for untethered metrics, you have lost track of the why and have stopped trying to create real value.

Stop calling lack of consent “seamless integration” or “making life easier”.

We need to commit to always asking for consent. We should always make it obvious when reality is artificially influenced or altered. We've got to always disclose how we use the data we collect or are granted access to. We must never use this data to manipulate. This is the bare minimum we need to do as designers as we enter an era of ever more technology that allows us to further blur the line between truth and fiction.
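
As a sketch of what committing to consent can look like at the code level, consider making every collection path opt-in and purpose-specific. The names below (ConsentStore, trackEvent, the purpose labels) are hypothetical, not a real SDK; the point is that the default is silence, and data only flows after an explicit, revocable grant.

```typescript
// Hypothetical consent-gated collection: nothing is recorded unless the
// user has explicitly opted in for that specific purpose.

type Purpose = 'analytics' | 'personalization' | 'marketing';

class ConsentStore {
  private granted = new Set<Purpose>();

  grant(purpose: Purpose): void { this.granted.add(purpose); }
  revoke(purpose: Purpose): void { this.granted.delete(purpose); }
  has(purpose: Purpose): boolean { return this.granted.has(purpose); }
}

const consent = new ConsentStore();

function trackEvent(purpose: Purpose, name: string, data: object): void {
  if (!consent.has(purpose)) return; // no grant, no collection: the default is silence
  console.log(`[${purpose}] ${name}`, data); // stand-in for a real data sink
}

// Usage: nothing is logged until consent is granted, and revoking it
// immediately stops collection again.
trackEvent('analytics', 'page_view', { page: '/home' }); // silently dropped
consent.grant('analytics');
trackEvent('analytics', 'page_view', { page: '/home' }); // now recorded
```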

Stop using engagement and fun as an excuse to manipulate people.

'Gamification' is not meant to take away people’s autonomy and manipulate them into serving your own interests. Just because you provide a 'fun' experience to your users does not make the approach ethical.

Games are engaging because they are well-designed systems. Want to improve engagement and retention? Design a good system.

Stop ignoring diversity and be inclusive.

The world is only not inclusive because it's not designed to be. So, let's design it to be. When we don't, we're essentially saying, by being passive, that whoever we're excluding doesn't matter. This is not a 'nice to have'; it's an absolute 'must have'. We've got to stop skipping it by assuming it falls under 'fringe cases'.

Taking on this responsibility definitely increases the workload, as adding responsibility to do right by others often does, but isn’t it our basic duty to consider the psychological wellbeing of our audiences? To make sure technology is accessible for everyone?

Keeping the conversation open.

Of course, there are many more aspects to consider; my list above is just a few points. We need to continue talking about this. It's our responsibility to contribute to ongoing discussions, share ideas, observations, and perspectives, and, most importantly, act in accordance with our conclusions. Technology no longer affords us the margin for error of poor micro-decisions or biases going undetected. We have to be vigilant that we are not designing against our values and ethics.

Yes, there are laws and policies like GDPR in place to help guide us, but there are still many overlooked issues in their practical application. There are also many voluntary ethics codes written by designers, futurists, technologists, politicians, and more. For a few examples, you can check out the Open Gamification Ethics Code or the Ethics for Designers website, as well as company guidelines like IBM’s Everyday Ethics for AI or Atlassian’s New Rules of Ethical Design in Tech.

Let’s do a small thought experiment. Take all of the design micro-decisions we make each day. These decisions, taken in isolation, seem small, insignificant. However, when brought together, we can see patterns. We can see that these decisions actually create and influence trends.

With every little design decision we make, we continue to create and update unspoken guidelines for design. See it as though we are constantly voting, through the decisions we take, on how it is — or is not — acceptable to treat our users.

And what do you think these conclusions would show if we did this right now? Would you trust an AI to modify or design your experience if it based its own choices on this data?

Yes, maybe this is a little futuristic thinking, but we will have no one to blame but ourselves if technology that learns from our own decisions and actions looks at the data and concludes that it is acceptable to manipulate, bribe, and obscure reality, as we have in the past.

It will be an ongoing journey to grow and learn how to embed values and ethics into our design culture, as well as to ensure that we commit not just to considering, but to accommodating, our (neuro)diverse, multicultural world. We need to be mindful that when we encourage the use of manipulation as a tool, we are adding it to our culture of design.

Yes, it’s hard to fight against our own privileges, against what we may be used to as the norm, and learn how to compromise — voluntarily — some of our interests in favour of the interests of others. But we must. Our future depends on the decisions we make right now.