Modeling for the Humans on the Other Side of the Screen

Omelas
5 min read · Dec 27, 2018

(This is Part 1 of our four-part series on the practical intersection of ethics and technology, particularly AI.)

What do dogs and machine learning models have in common? Both are easy to train and manage in controlled settings, but harder to handle in public, where other environmental factors, such as people, come into play.

Building a high-performing machine learning model is difficult but doable. It may take substantial time and resources to attain the desired performance level, but a wealth of research exists to point data scientists in the right direction. Embedding such a model into a live technology platform, however, presents an entirely different challenge, because data scientists must relinquish control of the model to an independent user base. With no opportunity to directly communicate the model’s intricacies, data scientists must accept that users will interpret the model’s outputs in whatever way makes the most sense to them. Users may then apply the model to satisfy an individual need, regardless of whether that need aligns with the original intentions of the model’s creators. For example, a hiring manager might base hiring decisions on the outputs of a recruitment tool that was originally designed to complement a thorough interview process. As such, data scientists face the risk of misinterpretation or unintentional misuse of their work, and must grapple with the ethical implications of these outcomes.

The misinterpretation or misuse of high-performing machine learning models is risky and can jeopardize individual livelihoods. At the 2018 Artificial Intelligence, Ethics, and Society (AIES) conference, for example, researchers from the University of California, Los Angeles (UCLA) presented a contentious model that predicted whether a crime was gang-related. The researchers drew criticism from audience members for not considering the model’s potential side effects. Though the researchers view their work as early-stage, future commercialized versions of such models could systematically mislabel crime suspects as gang members. The researchers did not have harmful intentions when building the model, but they failed to assess the ethical consequences of it being applied or misinterpreted in a real-world setting.

Data scientists can try to mitigate the misuse of such models by clearly outlining the assumptions and caveats that lead to a prediction. However, this involves both technical and design challenges. Data scientists must explain statistical concepts to a diverse audience, and also ensure these explanations fit into a smooth user experience. Typically, users do not like to be bombarded with detailed information regarding potential biases in the source data or the assumed score distribution among a population. However, hiding these details in fine print almost guarantees that they will not be read. So what is a data scientist to do?
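
One lightweight pattern that can help is to make the assumptions and caveats part of the model’s return value rather than an afterthought, so the interface layer can decide how and when to surface them instead of burying them in fine print. The Python sketch below is purely illustrative, not a description of any particular system; the class, fields, and hard-coded values are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AnnotatedPrediction:
    # A model output bundled with the context a user needs to interpret it.
    score: float       # raw model output
    confidence: float  # e.g., a calibrated probability
    assumptions: list  # modeling assumptions behind the score
    caveats: list      # known limitations and failure modes

def predict_with_context(features):
    # Placeholder for a real model call; the score here is hard-coded.
    score = 0.87
    return AnnotatedPrediction(
        score=score,
        confidence=0.72,
        assumptions=["Training data covers English-language sources only"],
        caveats=["Scores rank items within a population; they are not absolute risk"],
    )

result = predict_with_context({"source": "example"})
print(result.score, result.caveats)
```

Because the caveats travel with the score, a front end can render them as a tooltip for one audience and a full methods page for another, rather than forcing a single level of detail on everyone.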

Data scientists should spend more time learning about end users. Leaders in the field have begun to emphasize the importance of the end user in machine-learning endeavors, invoking concepts from the design and UX disciplines. One useful framework is human-centered design, which centers the problem-solving process on the individuals being affected. IDEO, a design consultancy that promotes this framework, acquired a data science company in order to bring human-centered design concepts to the data science space. Similarly, the Google UX community is working towards human-centered machine learning to help its products better serve human needs. These practitioners recognize that machine learning models do not exist in a vacuum; they must be designed to integrate with the people and processes they serve.

At Omelas, we embrace these principles and take steps to understand the humans on the other side of the screen when building machine learning models. Our data scientists work closely with our business team to better understand end users and seek continuous user feedback. Here are some of the questions we ask ourselves during this process:

  • Who are our users? In order to answer this question, we create archetypes for each category of anticipated platform users. This allows us to define exactly who our product is meant to support. Security analysts and business executives within the same organization have very different backgrounds and priorities; if our model is meant to serve both roles, we must ensure that the results are accessible to both groups. (A sketch of how such archetypes might be captured follows this list.)
  • Why are they using our platform, and what will they do with the information? In order to understand how our platform fits into each user’s current workflow, we outline the job activities and work deliverables of each user archetype. For example, an analyst may use our platform to identify a list of online sources to investigate, whereas an executive may use the platform to obtain a snapshot of the online information environment. Understanding these use cases is pivotal, as it enables us to design models that serve specific user needs.
  • How are our users going to interpret the results? In order to understand how our users will likely respond to different representations of model results, we confer with UX experts. These experts help us evaluate the tradeoffs between too much versus too little background information, the merits of weighted versus unweighted scores, and the pros and cons of different data visualization techniques. This, combined with feedback from actual users, allows us to choose the presentation option that best aligns with our intended interpretation of the model.
  • How can we supplement users’ understanding? We work closely with clients to train their personnel to use our platform effectively. During this process, we are transparent about each model’s purpose, use cases, assumptions, and shortcomings. This helps us manage expectations about how much of the user decision-making process can be offloaded to the model. We particularly emphasize that the models are tools whose results should be critically evaluated; they should not replace human judgment.
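
To make the first two questions concrete, archetypes can be written down as structured records that the whole team, data scientists included, can review and revise as user feedback comes in. The sketch below is a hypothetical illustration, not our production schema; the fields simply restate the analyst and executive roles described above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UserArchetype:
    # Hypothetical record describing one category of anticipated user.
    name: str
    goal: str                  # what they use the platform for
    statistical_fluency: str   # how much methodological detail they can absorb
    preferred_view: str        # the presentation that fits their workflow

ARCHETYPES = [
    UserArchetype(
        name="security analyst",
        goal="identify a list of online sources to investigate",
        statistical_fluency="high",
        preferred_view="ranked source list with per-score caveats",
    ),
    UserArchetype(
        name="business executive",
        goal="obtain a snapshot of the online information environment",
        statistical_fluency="low",
        preferred_view="high-level trend summary in plain language",
    ),
]
```

Writing archetypes down this way forces the team to be explicit about differences, such as statistical fluency, that otherwise surface only after a confusing result reaches a real user.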

Building technology ethically is a difficult task, and we have encountered frequent tradeoffs between short-term achievements and long-term responsibility and reliability. We cannot anticipate every possible case of model misinterpretation or misuse. Nevertheless, investing time in understanding the platform’s end users is a necessary first step towards establishing an ethical approach to data science.
