Less than a decade ago, the word ‘design’ was mainly associated with aesthetics, color palettes, and the infamous Comic Sans font. Yet, over the course of a few years, design has gone from something most people don’t really care about to a powerful influence on our daily lives. More and more books, design conferences, TV shows, and documentaries explore the destructive effects of design or its capacity to tackle social, ecological, or economic issues. Still, it isn’t always clear which issues we can address with design methodologies, or what designers can contribute to solving social problems and undoing some of the issues that design itself helped create.
The Netflix documentary The Social Dilemma reveals why you go from Googling healthy fast food one minute to finding food ads on your Instagram feed the next. The British dystopian series Black Mirror envisions our worst nightmares about technology. The concern about how everyday digital experiences influence our lives has been growing louder than ever.
Relations between technology and design
The concepts of user experience and human-centered design have taken the business world by storm, but this power has come at a cost. The Social Dilemma concludes that behind most digital products there were good intentions: to improve the quality of human lives. However, these well-intentioned products, created by brilliant programmers and designers, have been weaponized by ever-expanding algorithms and corporate profit maximization. Undoubtedly, as data-driven systems become a bigger part of our lives, we also notice more when they fail. Yet this determinist narrative ignores the fact that technology is a product and a reflection of society, part of a complex socio-economic system. Technology, design, and society continuously shape each other in a three-way process.
Many of us assume that tech is neutral, free of the racism, sexism, and other “isms” plaguing human decision-making. But the truth is that social hierarchies, like racism, are embedded in the logical layers of most products and technologies. Data-driven systems are at work all over social media: Facebook auto-tags your photos with friends’ names, and feed algorithms decide which news and advertisements to show you to maximize the chance that you’ll click.
Technology is biased
The Social Dilemma minimizes the reality that human-designed systems can not only reproduce but also amplify existing prejudices, since the designers who create them have biases of their own.
A disturbing example of this is the case of Beauty AI, the first-ever beauty contest judged by algorithms, built on the most advanced machine-learning technology available in 2016. It was designed to assess over 6,000 contestants from more than 100 countries based on wrinkles, face symmetry, skin color, gender, age group, and ethnicity. People trained the deep-learning software to code beauty using pre-labeled images, and the photos of contestants were then judged against the algorithm’s embedded preferences. The results were troubling: although the finalists spanned various age groups, nearly all of them were Caucasian, and only one had a darker skin tone.
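To make the mechanism concrete, here is a deliberately oversimplified Python sketch; the data and logic are invented for illustration and bear no relation to Beauty AI’s actual model. The point is only that a model “trained” on a skewed set of past winners learns the majority group’s features as the norm:

```python
from collections import Counter

def train(labeled_faces):
    """'Learn' an ideal skin tone: the most common tone among past winners."""
    winners = [tone for tone, won in labeled_faces if won]
    return Counter(winners).most_common(1)[0][0]

def judge(ideal_tone, contestants):
    """Rank contestants by how close their tone is to the learned 'ideal'."""
    return sorted(contestants, key=lambda tone: abs(tone - ideal_tone))

# Hypothetical skewed training set: winners are overwhelmingly
# light-skinned (tone 1), with few darker-skinned winners (tone 5).
training = [(1, True)] * 90 + [(5, True)] * 10
ideal = train(training)            # the model learns tone 1 as the norm
ranking = judge(ideal, [5, 1, 3])  # darker-toned contestants rank last
```

No one wrote a rule saying “prefer lighter skin”; the bias lives entirely in the training data, which is exactly what makes it so easy to ship unnoticed.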
Another famous case comes from the early days of Airbnb. Because hosts were inviting strangers into their homes, something very new at the time, designers wanted to make them as comfortable as possible.
Design decisions were mainly about giving hosts as much information as possible so they would feel safe and happy about allowing guests in. Guests’ names and photos were placed front and center on the platform. In addition, hosts at that time could reject guests whenever they wanted, without having to explain why. This led to many reports of discrimination: people with foreign-sounding names and Black people were more likely to be rejected. The design team never foresaw this. The example shows how a design can be used in ways its creators don’t anticipate.
Good design isn’t naturally ethical
Good design can meet user needs, but it isn’t naturally ethical. The tricky thing is that good products and services — just like in Airbnb’s case — can be very convenient and pleasing, and we might not feel any of the negative side effects until they get out of control or we’re personally affected.
Good design can also have unintended negative consequences, such as the underrepresentation of marginalized or vulnerable groups in user research, or data that is collected and used unethically.
Race After Technology by Ruha Benjamin is essential reading on the topic: it decodes how digital tools predictably replicate and deepen racial hierarchies, all too often strengthening rather than undermining pervasive systems of racial and social control. In The Ethical Algorithm, Michael Kearns and Aaron Roth introduce some promising methods based on socially aware algorithm design. The book argues that ethical constraints must be incorporated when designing models, and looks at how we can nudge away from bad equilibria in game-theoretic scenarios such as news filtering and dating apps.
The many layers of data
As social awareness grows about the unforeseen consequences these new technologies have on the social, political, and emotional arenas of our lives, more and more people become skeptical of big tech companies. The Social Dilemma does a good job of highlighting how targeted advertising manipulates us by exploiting our psychological vulnerabilities. However, the documentary focuses exclusively on the psychological aspects of social media and its addictive effects. It fails to explore the form of capitalism that has been built on these new technologies, and the many other products that run on similarly unethical business models.
Technology and privacy
The Age of Surveillance Capitalism, Shoshana Zuboff’s sociological analysis of the digital era, describes how global tech companies such as Google and Facebook persuaded us to give up our privacy. The collected information is not only used to predict our behavior but also to influence and modify it. Needless to say, this has had disastrous consequences for democracy and freedom. We like to think that the only information tech companies have about us is what we’ve given them. What’s really happening is that the information we provide them intentionally is the least important information they collect about us.
Tech companies that work in unethical ways retrieve a lot of information from the digital traces we leave behind unknowingly. But they use only some of the data collected about us to improve the services or the user experience. Companies use most of our data to analyze patterns of human behavior in order to create predictive data models and see how people with certain characteristics typically behave and think.
At the simplest level, they can predict what you’re likely to do now or later and sell these predictions to businesses. What’s the problem with that? You might say, “But I like those targeted ads,” or “I have nothing to hide, so I don’t care what they take; besides, they already know everything about me.” Each of these statements reflects a profound misconception of what’s actually happening. The root of the problem is not targeted ads or a personalized user experience. It’s that many of these algorithms are tainted with racial and gender biases, and, as we now know, predictive models can also influence offline, real-world emotions and behavior in ways that bypass our awareness.
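How can a company predict the behavior of someone who never disclosed anything? A minimal sketch of the idea, with invented profiles and a toy nearest-neighbor vote (no real company’s pipeline looks this simple): a new visitor is matched against “lookalikes” whose behavior is already known, and their recorded choices become the prediction.

```python
def predict_click(profiles, target, k=3):
    """Predict whether `target` will click, by majority vote among
    the k known profiles whose behavioral traits are most similar."""
    def distance(profile):
        return sum(abs(a - b) for a, b in zip(profile["traits"], target))
    nearest = sorted(profiles, key=distance)[:k]
    votes = sum(1 for p in nearest if p["clicked_ad"])
    return votes > k // 2

# Hypothetical behavioral traits (e.g. age, late-night sessions, scroll habit)
profiles = [
    {"traits": (30, 5, 1), "clicked_ad": True},
    {"traits": (31, 4, 1), "clicked_ad": True},
    {"traits": (29, 6, 1), "clicked_ad": True},
    {"traits": (60, 1, 0), "clicked_ad": False},
    {"traits": (58, 0, 0), "clicked_ad": False},
]
# A new visitor, described only by traces they left behind unknowingly
print(predict_click(profiles, (30, 5, 1)))  # → True
```

The visitor volunteered nothing; the prediction comes entirely from how people with similar traces behaved, which is why “they only know what I gave them” is a misconception.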
Ethics in UX
What do we mean when we say we’re ethical? Ethics, in general, is a broad discipline: in a philosophical sense, it studies what is right and what is wrong. Applied ethics puts these concepts of morality into practice in specific fields, such as design and research.
We can look at design ethics and research ethics as distinct but intertwined disciplines. Design ethics considers the welfare of the people who use and interact with a design, as well as the long-term effects of how that design is used or abused. Research ethics, on the other hand, focuses on the welfare of the people involved in research activities.
Tunnel vision on success metrics
In an era dominated by the “big four” (Google, Amazon, Facebook, and Apple), we see first-hand how some companies cut corners or follow unethical practices that do real harm to people.
Many product teams follow in their footsteps. In some cases, participants are willfully or unintentionally deceived about how their data will be used. These tendencies usually appear when the team lacks a solid foundation in design and research ethics, or when designers and researchers have to make decisions that serve business goals exclusively.
Often as designers or researchers, we don’t have power over these unethical decisions. And we can unintentionally inflict harm even if we want to design a good experience. Some of the benefits of good and enjoyable design might come with unforeseen trade-offs.
We can make users addicted to the products and services we create. We can marginalize them through our own biases, or overload and stress them with notifications. Bullying and hate speech can also be consequences of our design.
The reality is that in fast-paced environments, we don’t usually have time to think about design ethics. It’s very easy to develop tunnel vision on success metrics: we become short-sighted, focusing on the short-term success of our design instead of the long-term perspective. In a rapidly changing environment, we can fall into the trap of relying only on quantitative data and metrics, and designing solely to improve them. So what can you do to not only design good experiences but also ensure ethical decision-making?
Exploring the designer’s dilemma
As designers, we solve problems to make people’s lives easier. However, it’s important to understand that what we design is not neutral; it’s inherently biased. Referring to people as users creates both a comfort zone and a contradiction for designers. We can’t actually care about every single person who interacts with our designs on an individual level, yet empathizing with their problems is part of our job. Still, I would argue that the word user itself inevitably creates ignorance and detachment from the very people we should care about. It strips them of their complexity by reducing them to traces of digital data that we use to improve products or services, reinforcing the narrative that people are first and foremost a source of profit.
By acknowledging that even our design vocabulary affects our way of thinking, we can move one step closer to designing ethically. It makes us feel better to think that our design decisions are external to our flaws, but in reality, they will always be extensions of us.
In order to truly improve people’s lives through design, we need to ask difficult questions, and we need to ask them within the very system in which these problems are created and reinforced. This shouldn’t be the UX team’s responsibility alone; it’s everybody’s responsibility, so these conversations need to happen at the leadership level as well. That way, we can avoid situations where we have to reverse decisions or spend a lot of time on rework.
Steps towards designing ethically
Planning is the most important part, so try starting the conversation in advance: How can our design be abused or exploited? What’s the worst possible outcome? What are the potential trade-offs that could negatively affect the people who use our product or service? Asking difficult questions will help prepare you for the real work that begins after:
- Do qualitative research. Go out and meet real people.
- Practice participatory design or co-design. Bring other people into the design process, especially if it’s a sensitive topic.
- Think about the people in society who could be the most vulnerable. They’re often marginalized people with limited options for services and products, and for this reason they’re often the ones negatively impacted by design. Think about disability and accessibility needs, and advocate for iterating on designs that were created with only a specific demographic group in mind and that left marginalized people out.
- Take a moment to educate yourself about systemic problems. Even though systemic level issues are larger than design itself, we’re still part of the system that creates and reinforces them. Design processes most often replicate the processes and values that exist outside of the design world.
- Recruit a diverse team for well-balanced representation during the design process. If your designers all come from the same background, they share similar experiences and values and will unconsciously transfer their biases into the design. Include a diverse group of people in your research process as well.
Can we solve social problems with design?
At UX studio, we decided to consciously channel our knowledge and creativity toward positive impact. When we reexamined our mission as a company, we made a collective decision to focus on helping people holistically (and ethically) through our design decisions.
As more people question the status quo of the tech industry, we can also sense a shift within the design community. A group of East London designers developed “Every One Every Day,” a network that is rebuilding connections for people in a low socio-economic community. The Social Design Cookbook initiative uncovers case studies of a broad, international selection of socially cooperative formats that have been successful in their local communities. The Center for Urban Pedagogy collaborates with teachers, students, policy experts, and community advocates, along with artists and designers, to visually communicate complex urban processes and policy decisions. The Curry Stone Foundation uses design as a tool for social change, especially in marginalized communities. Airbnb launched its Project Lighthouse program to uncover, measure, and overcome discrimination, and created illustration guidelines for a more inclusive visual identity.
What we can learn from this
We’ve started to question the consequences of the products we’re designing and how we should wield our influence. New frameworks and ways of applying design methodologies have emerged to solve complex social problems, as well as to undo some of the problems that unethical design itself created. This shift goes by many names: “Circular Design,” “Social Design,” “Impactful Design,” “Social Innovation Design,” and “Ethical Design,” among others. Each has its own definition, but the phrasing is the least important part here. What matters is the possibility, and the space, to use design to counteract social issues. It’s the thinking and exploring, looking at examples, and making sense of things. It’s going beyond the current trend of applying design thinking to social problems. And it’s dismantling the systemic mechanisms that harm people, to create alternative, equal, inclusive, and ethical futures.
I aspire to be one of those “Ethical, Impactful, Circular, Social Designers.” I’m not sure about the how yet, but I know that I have a choice. More than ever, we have a seat at the table, and now it’s our turn to create new principles.