(Image from the Guardian's live coverage of Pope Francis's visit to the U.S., Sep. 27)
Designerly Mind
"He knows not where he's going. For the ocean will decide. Its not the destination. It's the glory of the ride.” ― Edward Monkton, Zen Dog
Empathy & Leadership
I'm not a Catholic, but I love Pope Francis for his deep empathy and true leadership.
Meanings
There are just a few stories I’d like to share, along with some thoughts and questions related to what we as designers do.
Story 1, 9/25/15
A woman approached the hospitality desk to ask if anyone had turned in her new TracFone, which she had lost while waiting for the bus with several other Shalom Community Center (a local non-profit organization serving the homeless and extremely poor) patrons. I turned to Bob (a pseudonym, as are the other names below), an experienced fellow volunteer. He told me it's rare for lost cell phones to make their way to Shalom's lost-and-found bin because they are highly desirable. I asked the woman to describe her phone. She referred to it as an “Obama phone,” which I later learned refers to a government-subsidized cell phone program for low-income Americans. I then left a note in the lost-and-found bin in case her phone ended up getting turned in. She was quite upset when she realized her phone might not be returned to her.
Image source: http://www.obamaphone.com/
Mike, another Shalom patron who often appears around the hospitality desk area, was also there today. He, too, lost his cell phone recently. He didn’t seem as upset, but I recall seeing him use his phone quite a bit in my previous shifts.
Story 2, 9/8/15
When I arrived at Shalom, I found several volunteers at the hospitality desk - more than the work there required - so when I saw Mario, the volunteer who oriented me during my first shift, working with someone else in Shalom's family room, I asked if I could help. He welcomed the extra helping hands.
It turned out they were clearing out stuff from several long-term storage bins that had not been checked by their owners “for months and months” (in Mario’s words). Given how tight the space at Shalom is (see picture below), it's necessary to do this every so often.

The bins on the left-hand side are long-term storage bins. Image taken by author on 9/25/15.
I was told to set aside paperwork in a separate bag for each bin, and put everything else in large clean trash bags. The useful things would then be sorted and used by other patrons.
Almost right away, questions popped into my mind. Where are the owners of these storage bins? What happened to them? What if they come back to retrieve their belongings? What do these things mean to them?
I thought of a “Visual Thinking Meaning and Form” course assignment from last semester. It asked us to use photos and short descriptions to inventory personal belongings that hold special meaning for us. The things we chose to include ranged from books, notebooks, and clothes to electronics, paintings, and crafts. Such were the things I cleared out from those storage bins, except that the cell phones I emptied all appeared broken.
What does it mean to be homeless? How does it feel to have a roughly 2’x1.5’x1.5’ bin for perhaps all of one’s “long-term” belongings? What does “long-term” mean for these people? Why are broken cell phones kept in these bins?
A woman’s bin I emptied had a craft that appeared to have been made by a child, plus some paperwork and personal items that seemed to belong to the woman’s kid. What is it like to be homeless or in extreme poverty with a child? How might the child grow up?
“What’s in my bag”
Unlike the previous two stories, this one is not something I personally experienced. It is a blog post by the International Rescue Committee about what refugees carry with them. Again, it reminds me of the “personal inventory” assignment mentioned above, only this one is much more poignant.
Among the six bags presented in this post, we could see cell phones in four of them (top four bags below). One of them (top left) even includes a cell phone charger without a phone. One can only imagine the painful stories these refugees must have experienced.
(Six images of refugees' bags, captioned: A mother, A teenager, A pharmacist, An artist, A child, A family)
Source of images: https://goo.gl/acZdkQ
What do these stories mean for designers?
In design research, extreme cases often afford us insights in ways that other cases don't. For refugees and people living in extreme poverty, cell phones seem to be one of the things many of them hang on to. What is the meaning of cell phones to them? Is this meaning shared by more well-to-do people? If not, why not, and what's the difference? If so, what is the meaning beyond cell phones' trendy, glamorous, and fun facade? And does it mean the cell phone as a “thing” has permeated modern life and the human psyche so profoundly that it has become an inseparable part of human existence? What is it about cell phones that gives them such power? How do cell phones influence people's emotional experience? What are the implications of the answers to these questions for designers?
Interview Connoisseurship
Before our team conducted a contextual inquiry on our participant's (pseudonym “Sam”) use of the Nike+ SportWatch GPS exercise wristband, we worked on the study procedure and expected outcomes. From there we prepared a list of questions to ask Sam before and after we observed him run with the wristband.
Most of what we observed and learned from Sam was within what we expected based on our research and the information on the wristband's website. However, something he said did stand out. I didn't expect to hear it.
Toward the end of our post-observation interview, I asked Sam if using his exercise band had changed his habits or running routine in any way. He paused briefly before telling a short story. He said that running used to hurt because he had some flat-foot issues. He then talked about how “the watch” - as he calls his exercise wristband - quickly became the companion he needed to motivate him to run. Compared to similar products he has tried, he favors the wristband he showed us. He demonstrated the messages he sees on the band's interface and how its conversational tone works for him. He said the wristband would show a message like “Are we going to run?” When he answers “Okay,” the band would respond “That sounds great!” Sam said, “It got me into running,” “I really really like this watch. It's really useful,” and “The interface is the best. It's almost like it has a personality.”
Coming into the contextual inquiry, I didn't expect the emotional attachment Sam elaborated on. It was only through probing based on what he had displayed and expressed that we were able to uncover this dimension of his engagement with the wristband, beyond the behavioral and functional aspects of the product.
Nike may or may not have categorized this level of user attachment as an “intended use” of this product. Its website does not mention any of it, so it is fair to say that Nike does not sell this product by appealing to potential buyers beyond its functional value.
I really enjoyed the process of eliciting the deep personal meanings unique to our participant. Hearing from Sam about the companionship his exercise band offers him and how important it was as a motivator for his running routine made all the time and effort put into preparing the contextual inquiry worthwhile.
As laid out in the syllabus of our Experience design course, one of the course objectives is to “develop a critical and creative practical sensibility with regard to ‘experience’ in relation to interactivity” (Stolterman 2015: 1). I believe this sensitivity includes being able to elicit and discern the unique meanings users develop over things.
However, researching to uncover the meanings someone has embedded in something is challenging. It requires synthesizing and interpreting what the participant says on the fly, and framing follow-up questions accordingly. Distinguished journalists such as Bill Moyers, Margaret Warner, Terry Gross, and Charlie Rose have demonstrated this skill superbly time and again.
Even with their career success, I do not believe these accomplished journalists would conduct an interview without doing their homework. Even though they do not seem to read their questions from their notes, I am sure they always prepare a list of questions or at least an outline for any interview, in addition to researching their interview subjects' backgrounds and setting a goal for each interview. To undiscerning eyes, their interviews seem effortless because they flow like excellent conversations. However, I have no doubt that their not having to read questions from their notes shows they have internalized the background information and what they would like to ask, which is essential for probing deeply into their subjects' inner worlds.
Professor Erik Stolterman elaborated in class this Tuesday (March 10) on Elliot W. Eisner's notion of connoisseurship – “the ability to make fine-grained discrimination among complex and subtle qualities.” I believe interviewing as a technique also involves connoisseurship. Being able to do it well demands an appreciation that there is more to the interview technique than “just going out there and asking questions.” Like developing a discriminating taste for good wine, cuisine, architecture, and other designs, it also requires lots of practice.
Additionally, learning from good interviews, such as those conducted by reputable journalists, and being able to recognize good interviews are conducive to improving our interview skills over time, as with everything else concerning connoisseurship. Moreover, I believe critiquing our own interviews is also essential, because seeing ourselves in action can be an extremely effective heuristic experience. Many people feel surprised when they see themselves in video recordings, because it is inherently difficult to see ourselves in action. That's why I think getting the participant's approval to videotape interviews (which need not show the participant's face) is important, at least at this stage of our careers.
When I am in charge of preparing the participant consent form, I try to prepare different versions for the contingencies where participants may or may not agree to be video-recorded. I also try to work with the person(s) who recruit our participants to find out before the study whether the participants would agree to let us videotape it. The goal is to maximize the chance of getting their consent to videotape the process so that we can review how we did in the interview and improve based on the experience. The recording also has the benefit of allowing us to conduct a more thorough analysis of what the participants displayed or expressed.
In all, learning to draw unexpected threads of meaning from our interview participants is not trivial in terms of either the skills it involves or the practice it takes. More importantly, it is not trivial in terms of the profound understanding we can acquire about our participants and their experience. Personally, I feel it is really worth it.
Reference
Erik Stolterman. 2015. Syllabus for I544 Experience Design. School of Informatics and Computing, Indiana University Bloomington (unpublished).
Video "Under the Dome"
Chai Jing's "Under the Dome" is inspiring. She paces her narration superbly; it is in the style of a TED talk, but much longer - 104 minutes. There are no fancy visual effects, but the audience obviously resonated with the storyline and arguments Chai presented. The entire presentation flows beautifully, so even though it is very substantive and contains lots of statistics, it is engaging.
She uses a series of very clear and sensible questions to guide her investigation and walk viewers through her narration. She most likely did not know exactly what she would find, so her investigation plan - including where to go, whom to interview, etc. - must have been adjusted to the circumstances. This process is similar to what good design research requires.
Her interview skill is very impressive, too. She obviously did her homework, which made her probing questions really sharp and to the point. Her demeanor effectively mitigates the critical nature of her questions.
The video of her talk became so popular that within a few weeks more than 20 million people had watched it before the Chinese government censors banned it from public access.
Until this video was released on Feb. 28, the middle class in China generally seemed to have bought into the Chinese authorities' economic development argument - the country is making steady progress in political reform, sustainability, and other quality-of-life measures while making sure, first and foremost, that living standards are improving. Chai's video effectively challenges this myth, even though she didn't say so explicitly in the video. After listening to Chai, it's not hard to imagine the Chinese middle class asking questions such as: If the air is so toxic, does it matter what the GDP growth rate is? Why hasn't the government taken more action to protect the health of the general public? Why have the environmental protection authorities and the laws been idle instead of doing their job?
This video may have created another watershed. Many Chinese have hitherto believed corruption is a problem of isolated individuals, case by case. Banning the video may have made people realize that it is more systemic than they previously thought; otherwise, what does the government have to fear from the video?
Image Source: https://www.youtube.com/watch?v=T6X2uwlQGQM (1:00:22)
On Subjectivity
Boehner et al. (2007) claim that designers have a subjective role in creating probes and that they "reveal themselves through the design proposals or through speculative design inspired by the probe results". Is this subjectivity problematic?
Whether we are researchers or designers, we inevitably wear certain lenses in filtering information and focusing on specific aspects of “facts.” As Werner Herzog says, "Facts do not constitute truth." (Theodore 2014). As much as we may try to be objective, our perspectives are colored by past experiences and the resulting inner emotional world, sometimes without us even realizing it.
As designers, we are taught to reflect in action and reflect on action (Schön 1987). Reflecting on how our viewpoint affects what we design is a part of it. It’s not that we necessarily have an agenda in mind and deliberately try to design based on it. The subjectivity can be on autopilot, especially if we have little self-awareness about what we bring to the table. That’s part of the reason why reflection is so important for designers.
Perspectives can vary widely from one person to another. Therefore, the same study purpose can result in drastically different interviews or even methods. For example, when we conduct an interview, whom we decide to interview determines whom we give a voice, and what questions we ask determines what our subject can articulate. All these decisions depend on our judgments, which are influenced by our perspectives.
Additionally, as Boehner et al. (ibid.) argue, cultural probes can also have very different configurations from one researcher to another. Even with the same configuration, the interpretation of the results is often wide open, too.
Even quantitative analysis involves perspectives. For example, when we remove outliers from our dataset, we choose to ignore data points that do not conform to the majority pattern. This practice marginalizes what is not “normal.” There are ethical and possibly other issues in this practice, and the consequences can be profound and far-reaching.
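To make the point concrete, here is a minimal sketch (with made-up ratings; the data and variable names are mine, not from any study) of how a common outlier rule - dropping points beyond 1.5 times the interquartile range - quietly changes what the data "say":

```python
# A minimal sketch (hypothetical data) of how a conventional outlier rule changes a result.
# Dropping points outside 1.5 * IQR is a choice, not a neutral act: the "outliers" here
# are exactly the responses a different analyst might care about most.
import numpy as np

ratings = np.array([2, 3, 3, 4, 4, 4, 5, 5, 21, 25])  # two extreme responses

q1, q3 = np.percentile(ratings, [25, 75])
iqr = q3 - q1
mask = (ratings >= q1 - 1.5 * iqr) & (ratings <= q3 + 1.5 * iqr)

print("mean with all points:   ", ratings.mean())        # pulled up by the extremes
print("mean after removal:     ", ratings[mask].mean())  # the "majority" picture
print("points treated as noise:", ratings[~mask])        # whose voice is dropped?
```

The numbers themselves are arbitrary; the point is that the choice of rule determines which experiences count as "normal."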
Is it possible to be objective? Should we try to be objective? In my view, we can strive to be objective, but we need to be aware of the danger in claiming to be objective by keeping a critical eye on such claims. Even claiming to be objective represents a certain perspective. I think the most important thing is to hold ourselves accountable for the judgments we make in the form of rationale, and not to be afraid of making judgments, which according to Nelson and Stolterman (2014) is an essential part of what designers do.
Perhaps when we are not ashamed of saying we have our perspectives, we can articulate our rationale better, and we can better appreciate the connection between people’s viewpoints and their past experiences.
References
Kirsten Boehner, Janet Vertesi, Phoebe Sengers, and Paul Dourish. 2007. How HCI Interprets the Probes. Paper presented at CHI 2007, April 28-May 3, 2007 in San Jose, CA, USA.
Harold G. Nelson and Erik Stolterman. 2014. The Design Way: Intentional Change in an Unpredictable World. 2nd edition. Cambridge, MA: The MIT Press.
Donald Schön. 1987. Educating the Reflective Practitioner. San Francisco, CA: Jossey-Bass.
Marie-Françoise Theodore. 2014. 12 Things I Learned at Werner Herzog's Rogue Film School. Retrieved from http://www.indiewire.com/article/12-things-i-learned-at-werner-herzogs-rogue-film-school-20140924 on March 4, 2015.
Note: This post was originally composed for a weekly blog post assignment for the course “Interaction Design Methods” at the School of Informatics and Computing at Indiana University Bloomington.
Reflecting on the Card Sort Focus Group Interview Process
For the card sort focus group assignment, the first challenge was writing the script for it. As the person in charge of the script, I consulted several sources and received feedback from the team. It was particularly helpful to hear input from Jianping, who recruited four of our five participants and checked with them to see if any of them would mind being photographed or videotaped. Since we didn't hear back from all our participants before the card sort focus group activities took place, I prepared two versions of the participant consent form in case any participant objected to our photographing or videotaping the process.
Putting together the script and consent form with no example or model to lean on took time and thought, but it was a good learning experience, as it required thinking through what we were trying to find out and how we planned to approach it using the structure provided in the assignment. It's exactly what Erik talked about in the Experience Design class, where he presented a schema that ties together the questions (what & why), lens selection, tool selection, and data analysis elements of the design research process. These elements have to fit together in a coherent way, and as design researchers we need to constantly reflect on whether we are losing sight of certain elements in the process.
The second challenge I faced was being the facilitator of the card sort and focus group interview activities. Even though I prepared a fairly detailed script that laid out the specific activities for participants to do and the starting questions to ask them during the focus group interview, I was aware that my role as facilitator would also include addressing whatever contingencies might arise. I was also aware that a good interview is almost never one that just reads from a list of pre-scripted questions.
One of the contingencies I had to deal with was some confusion about how the cards should be prepared and what should go on them. We literally finished preparing the cards for the card sort activity at the last second. Lesson learned: always check whether everything is in place, with enough time left to fix potential problems.
I had been a participant in a card sort before, but I had never facilitated one prior to this experience. Even though I gave our participants instructions on how to sort the cards, they initially seemed confused. Fortunately, thanks to a good division of labor (the rest of the team was photographing, videotaping, or note-taking) and the team's observations, I was able to clarify and clear up the problem on the spot.
I had conducted many interviews before, and I believe interviewing is a technique that gets better with practice. Still, every interview is different, with different participants and different background or domain knowledge involved. I've learned that listening very carefully during the interview and asking good follow-up questions are essential parts of good interviews.
As Werner Herzog argues, "Facts do not constitute truth … construct a reality that illuminates the truth." (Theodore 2014) People have their own vantage points based on past experiences and their inner emotional world. Therefore, even though our participants played the same game and engaged in the same card sort activity together, they had different viewpoints. It was my job as the facilitator to probe with good questions.
The questions not only had to dig into what we were trying to find out and why we conducted these activities; they also had to be considerate of participants' psychological perspectives. For example, as Gregory (2015) suggests, “why” questions are generally bad, and so is making assumptions about participants or using language that participants might not know. Probing without making participants feel uncomfortable requires sensitivity. Additionally, techniques such as validation, “tentafiers” (“Do you mind if I ask you . . . ”), strength identification, and empathetic responses can be useful in interviews (Gregory, ibid.). The challenge is how to use these techniques skillfully so that the interview flows like a good conversation, as opposed to a cross-examination or robotic Q&A.
The effort I put into preparing for potential breakdowns paid off. Even though we didn't encounter any serious breakdowns, having thought them through primed my mindset for dealing with any potential glitches. For example, a few participants were not as vocal as the others. I was able to notice that and solicit their opinions on the spot. Fortunately, they responded well and articulated their thoughts.
The other challenge I see in this card sort focus group exercise is connecting our activities in a meaningful way. Initially I had a hard time seeing why we were doing the card sort. Seeing how enthusiastically our participants talked to each other during the focus group interview convinced me that the eight minutes we had them play our selected game, “Dumb Ways to Die,” served as an excellent conversation piece for the discussion, just as a usability test can be a good way to start an interview, as Galt (2015) suggests. The card sort activity was arguably as relevant as the game play; to some extent it refreshed participants' memory of what they saw in the game.
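As an aside on the method itself: although our own analysis was not computational, card sort data are often aggregated into a pairwise co-occurrence matrix counting how many participants placed two cards in the same group. The sketch below uses hypothetical card labels and sorts of my own invention, purely to illustrate the idea:

```python
# A minimal sketch (made-up card names and sorts) of one common way to aggregate
# open card sort data: count how often each pair of cards lands in the same group.
from itertools import combinations
from collections import defaultdict

# Each participant's sort is a list of groups; each group is a list of card labels.
sorts = [
    [["score", "timer"], ["characters", "music"]],
    [["score", "timer", "characters"], ["music"]],
    [["score", "timer"], ["characters"], ["music"]],
]

co_occurrence = defaultdict(int)
for participant_sort in sorts:
    for group in participant_sort:
        for a, b in combinations(sorted(group), 2):
            co_occurrence[(a, b)] += 1

for pair, count in sorted(co_occurrence.items(), key=lambda item: -item[1]):
    print(pair, count)  # pairs sorted together most often suggest shared categories
```

Pairs with high counts hint at the mental categories participants share, which can then feed back into the focus group discussion.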
In the end, it is up to us how we mix and match methods. We can use various methods or techniques to triangulate data; we can deploy one method as a starting point for another method; we can use techniques that draw insights that complement each other; etc. The crucial thing is that it is done in a methodologically sound way that serves a clear purpose, and I believe reflecting on our experience of using these methods is critical for learning to use the right methods in the right contexts for the right purpose.
References
Malcolm Galt. 2015. Conduct Usability Testing To Create Killer Online Marketing Campaigns. Retrieved from http://blog.uxeria.co.za/conduct-usability-testing-to-create-killer-online-marketing-campaigns/ on Feb. 27, 2015.
Alice Gregory. 2015. R U There? Retrieved from http://www.newyorker.com/magazine/2015/02/09/r-u on Feb. 27, 2015.
Marie-Françoise Theodore. 2014. 12 Things I Learned at Werner Herzog's Rogue Film School. Retrieved from http://www.indiewire.com/article/12-things-i-learned-at-werner-herzogs-rogue-film-school-20140924 on Feb. 27, 2015.
Note: This post was originally composed for a weekly blog post assignment for the course “Interaction Design Methods” at the School of Informatics and Computing at Indiana University Bloomington.
Considerations in Applying the Semantic Differential Technique
Semantic differential (SD) has been widely used to measure attitudes since Osgood et al. (1957) proposed the method in the 1950s. Its broad application is attributable to its relatively low entry barrier in terms of requisite training and costs. However, as with most methods, there are considerations and issues in using it properly and deploying it where it is useful. This post discusses some of them.
One of the most important things to consider is culture, especially when SD involves emotive evaluations. The empirical research Scherer and Wallbott (1994) conducted with data from 37 countries does not focus on SD per se, but it finds a “high degree of universality of differential emotion patterning and important cultural differences in emotion elicitation, regulation, symbolic representation, and social sharing” (326). This suggests that culture plays a role in some dimensions of emotion. However, the authors are also careful about postulating whether some emotions might be unique to a certain culture. On the other hand, Al-Hindawe (1996) cites a study by Furuya-Nakajima and Vogt (1990), which finds significant cultural differences in how “ambition” and “self-confidence” are regarded in Japan compared to Western cultures. Al-Hindawe (1996) also indicates gender as a potential variable influencing how traits are evaluated.
Additionally, Bradley and Lang (1994) caution that relying on a verbal rating system makes the technique difficult to use in non-English-speaking countries and in populations that are not linguistically sophisticated (e.g., children or people with linguistic disorders). Other studies in the literature concur with this critique (e.g., Heise 1970). Bradley and Lang (ibid.) study a picture-oriented instrument called the Self-Assessment Manikin (SAM) and find it easier to administer than the traditional SD while overcoming some of these pitfalls.
Are emotions suitable traits to use in SD? There are several sub-issues under the “emotion” umbrella. For example, how do we define “emotion”? Morgan & Heise (1988: 19-20) distinguish pure emotion (“internal, mental feeling whose focus is solely on affect”) from traits (e.g., trustworthy, warmhearted), physical states (e.g., sleepy, droopy), and cognitive states (e.g., alert, confused). However, in some cultures and contexts, some of these traits, physical states, or cognitive states may imply an emotional state. For example, sleepiness might indicate emotional detachment. In other words, cultural and emotional considerations can be intertwined.
Another sub-issue related to emotion is when to use emotional traits in SD. I find two lines of research with bearing on this topic: Vergara et al. (2007) argue that commercial products are “bearers of users’ feelings” (9), so tools as mundane as hammers can be considered “emotional designs” as well and be potential candidates for studying with the SD. The other relevant study I find is by Allen et al. (1992), who find emotive reports and attitudinal judgments can be complementary in their study on behavioral predictors. For example, they think products that evoke nostalgic experience are good candidates for benefiting from taking emotions into account.
Other than cultural considerations and the applicability of emotional traits, there are other methodological issues in using SD as well. For example, Bradley and Lang (ibid.) note that the SD technique involves analyzing a large dataset, which requires statistical expertise such as factor analysis. Moreover, how do we know what people mean by choosing zero on the scale? It seems to me it could mean indifference, not applicable, don't know, or refusal to answer (for whatever reason). Furthermore, we also need to consider potential threats to the validity of what we measure with the SD instrument. For example, Heise (ibid.) suggests social desirability might plague participants' responses.
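To illustrate the kind of statistical work Bradley and Lang allude to, here is a minimal sketch - hypothetical ratings on a handful of bipolar scales, not real data - of a principal component analysis looking for underlying dimensions such as Osgood's evaluation, potency, and activity:

```python
# A minimal sketch (hypothetical -3..+3 ratings of one concept on bipolar adjective
# scales) of the factor-analytic style of SD analysis. Real SD studies use many more
# scales, concepts, and respondents; PCA stands in here for a full factor analysis.
import numpy as np

scales = ["good-bad", "strong-weak", "active-passive", "pleasant-unpleasant"]
# rows = respondents, columns = scales
ratings = np.array([
    [ 2,  1,  2,  3],
    [ 3,  0,  1,  2],
    [-1, -2,  0, -1],
    [ 2,  2,  3,  3],
    [ 0, -1, -2,  1],
], dtype=float)

centered = ratings - ratings.mean(axis=0)        # remove each scale's mean
cov = np.cov(centered, rowvar=False)             # covariance between scales
eigenvalues, eigenvectors = np.linalg.eigh(cov)  # eigh returns ascending order
order = np.argsort(eigenvalues)[::-1]

explained = eigenvalues[order] / eigenvalues.sum()
print("variance explained per component:", np.round(explained, 2))
print("loadings of the first component:")
for scale, loading in zip(scales, eigenvectors[:, order[0]]):
    print(f"  {scale:20s} {loading:+.2f}")
```

Even this toy example shows why the zero point matters: a respondent's 0 on "good-bad" enters the analysis as a substantive rating, whatever they actually meant by it.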
In conclusion, SD is applicable in many contexts and situations, but we need to be aware of the potential issues that might compromise its usefulness. To deploy the method wisely, we also need to keep up with its methodological developments as the method becomes more sophisticated.
References
- Al-Hindawe, J. 1996. Considerations when constructing a Semantic Differential Scale. Bundoora Victoria, AU: LaTrobe University.
- Allen, C.T., K.A. Machleit, and S.S. Kleine. 1992. A Comparison of Attitudes and Emotions as Predictors of Behavior at Diverse Levels of Behavioral Experience. Journal of Consumer Research 18: 493-504.
- Bradley, M.M. and P.J. Lang. 1994. Measuring Emotion: The Self-Assessment Manikin and the Semantic Differential. Journal of Behavioral Therapy & Experimental Psychiatrics 25(1): 49-59.
- Heise, D.R. 1970. The Semantic Differential and Attitude Research. In F. Summers (Ed.) Attitude Measurement, Chapter 14, pp. 235-253. Chicago, IL: Rand McNally.
- Morgan, R.L. and D.R. Heise. 1988. Structure of Emotions. Social Psychology Quarterly 51(1): 19-31.
- Osgood, C. E., P. H. Tannenbaum, and G. J. Suci. 1957. The Measurement of Meaning. Urbana: University of Illinois Press.
- Scherer, K.R. and H.G. Wallbott. 1994. Evidence for Universality and Cultural Variation of Differential Emotion Response Patterning. Journal of Personality and Social Psychology 66(2): 310-328.
- Vergara, M., S. Mondragón, J.L. Sancho-Bru, P. Company, and A. Pérez-González. 2007. User Profile Differences in Semantic Design - Application to Hand Tools. Presented at the International Conference on Engineering Design, ICED'07, 28-31 August 2007, Cité des Sciences et de l'Industrie, Paris, France.
Artifacts and Meanings
Akama et al. suggest using artifacts in interviews as “triggers for reflection and imagination, tools for the articulation and communication of ideas and experience, and facilitators for participation and generative meaning-making” (2007: 173). Their discussion about artifacts reminds me of the research Csikszentmihalyi (1993) does on the meanings of household objects. Considering the rich underlying meanings and social relationships, artifacts could be a powerful tool to facilitate interaction, retrieve memories, and encourage insights when coupled with traditional interview techniques.
Artifacts provide “a form of comfort” (Akama et al. 2007: 177) especially when they are indigenous to the participant’s native environment, such as their home. The participant’s familiarity with the artifacts makes the interview process feel more relaxed and less formal. Therefore, artifacts can be good interview ice-breakers that invite participation and bridge the psychological gap between the participant and the interviewer.
Artifacts also provide focal points in the interview process. Their material dimension allows for sensory touch, which can aid participants in processing thoughts and trigger memories of their experience with the artifacts. They enable the recall of histories and stories associated with the artifacts, as well as with other objects related to them. Other people can also be involved in these histories, stories, memories, and networks of objects. With proper follow-up questions, artifacts serve as starting points that help the interviewer snowball questions, thereby catalyzing generative meaning-making, nuances, and insights that might not otherwise be uncovered.
Focusing attention on artifacts creates opportunities for interview participants to reexamine how they relate to the objects at issue. In so doing, they reveal their sense of who they are. As Csikszentmihalyi (1993) argues,
[Objects help] both focus attention, reducing entropy in consciousness, and vividly brings back old memories and experiences, thus adding a sense of depth and wholeness to the self of its owner… the most meaningful symbol of his private self … had the power to put him back in touch with himself. (25)
Therefore, artifacts “embody the values and tastes as well as the accomplishments of the owner” and they serve as “repositories of meanings about the self” (Csikszentmihalyi 1993: 25-26).
As repositories of meanings, artifacts are often capable of drawing out rich narratives complete with sounds, smells, images, tastes, feelings, emotions, and the contexts in which these sensory dimensions exist. Therefore, engaging interview participants with artifacts opens a window into the holistic experience the owner of the artifacts has had.
To me, the most important take-away of studying the use of artifacts in interviews is developing sensitivity toward the interaction between artifacts and people, and toward the experience surrounding such interaction. Artifacts are not just objects. The acts of acquiring, keeping, and using an artifact, as well as giving it a spot in their private homes all have meanings. Why and how an artifact becomes a part of someone’s life is intriguing.
References:
Akama, Y., R. Cooper, L. Vaughan, and S. Viller. 2007. Show and Tell: Accessing and Communicating Implicit Knowledge Through Artefacts. Artifact I (3):172–181.
Csikszentmihalyi, M. 1993. Why We Need Things. In S. Lubar and W.D. Kingery (eds.) History from Things: Essays on Material Culture, pp. 20–29. London: Smithsonian Institution Press.
From Zero to Five, to Whatever Number Ensures Sample Representativeness
Goodman et al. (2012: 304) assert that “… usability is a means of directed product evaluation, not scientific inquiry.” What do they mean?
A reason why usability tests are not scientific inquiry can be found just two pages after that assertion: “Usability tests are not statistically representative” (authors' emphasis). In science, statistical representativeness requires strict sampling procedures to ensure the sample is representative of a well-defined target population. Random sampling and an appropriate sample size are among the ways to ensure sample representativeness.
The meaning of “random” here is not what lay people usually use the word for. Only when everyone in the target population has an equal chance of being included in the sample through the sampling process can we say the sample is randomly selected. Therefore, sitting at a coffee shop and testing willing customers does not give us a random sample.
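A minimal sketch (with a hypothetical sampling frame and user IDs I invented) of what a simple random sample looks like in practice; the point is that every member of the defined target population has the same chance of selection, which a coffee-shop convenience sample cannot guarantee:

```python
# A minimal sketch (hypothetical population) of simple random sampling: every member
# of the defined target population has an equal chance of being selected.
import random

target_population = [f"user_{i:04d}" for i in range(5000)]  # a defined sampling frame

random.seed(42)  # seeded only so the illustration is reproducible
simple_random_sample = random.sample(target_population, k=100)

print(simple_random_sample[:5])
# Every user_0000..user_4999 had the same 100/5000 chance of appearing here.
```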
“Being scientific” may be the gold standard of scientific research, but in the context of design and usability tests, this standard is not feasible in most design cases. Even in the rare cases where it is, it is most likely not desirable.
On the feasibility front, the sample size required to achieve representativeness may be beyond what the design project can afford, and the time it takes to complete the tests may be too long for the timeline of the product development the tests are meant to serve.
Regarding the desirability of adhering to scientific methods in conducting usability tests, the resources these methods require can be better spent on not-so-scientific usability tests. For example, Nielsen (2000) finds that the marginal benefit of testing more than five users drops significantly, as the graph below shows. Later and further tests (Nielsen 2012) confirm this finding.
Source: Nielsen (2000)
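The curve in Nielsen's graph follows a simple formula: the proportion of problems found by n users is 1 - (1 - L)^n, where L is the share of problems a single user exposes - about 31% on average in the projects Nielsen and Landauer studied, though it varies by project. A minimal sketch of the diminishing returns:

```python
# A minimal sketch of the diminishing-returns curve behind Nielsen's five-user argument.
# L is the average share of problems one user exposes (about 31% in Nielsen and
# Landauer's data); treat it as an illustration, not a universal constant.
L = 0.31

for n in range(0, 16):
    found = 1 - (1 - L) ** n
    print(f"{n:2d} users -> about {found:5.1%} of problems found")
# 0 users find 0%; 5 users already find roughly 85%, the "sweet spot" discussed below.
```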
Another non-scientific aspect of usability tests is the difficulty of replicating findings. Studies adhering to scientific methods are supposed to produce replicable findings when the same methods are used, but exact replication is not usability tests' forte. However, does it make sense to try to make usability test findings replicable? Nielsen (2011) argues against attempts to find all usability issues. The marginal benefit relative to cost, as demonstrated in the graph above, is one reason behind this argument. The other is that he finds most websites, applications, and mobile apps have serious usability issues. By focusing on the big issues, usability tests can significantly improve the key performance of the website or application. This is the 80/20 argument, which makes good sense from a pragmatist viewpoint.
Heuristic evaluation is arguably an even more cost-effective way to evaluate usability than usability testing, because it has proven able to find the majority of the major and even minor problems that usability tests can (Nielsen 1995). However, sometimes a heuristic evaluation may not find some of the problems that usability tests on the same design uncover. One such scenario is when the experts who conduct the heuristic evaluation lack the specific domain knowledge (Nielsen 1995). In that case, it would be appropriate to make design decisions based on usability tests.
In all, usability tests are not scientific inquiry, but they have an important place in the design process. As Nielsen (2000) indicates, a critical take-away from the graph above is that when the number of tested users is zero, we find zero usability problems. Therefore, one is much better than zero, and five can hit the sweet spot. That doesn't seem nearly as daunting as the kind of sample size required to qualify as scientific inquiry, does it?
References
- Goodman, E., M. Kuniavsky, and A. Moed. 2012. Chapter 11: Usability Tests. In Goodman, E., M. Kuniavsky, and A. Moed, Observing the User Experience: A Practitioner's Guide to User Research (2nd edition), pp. 273-326. Saint Louis, MO: Morgan Kaufmann. Retrieved from http://www.ebrary.com.
- Nielsen, J. 1995. Characteristics of Usability Problems Found by Heuristic Evaluation. Accessed via http://www.nngroup.com/articles/usability-problems-found-by-heuristic-evaluation/ on January 30, 2015.
- Nielsen, J. 2000. Why You Only Need to Test with 5 Users. Accessed via http://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/ on January 30, 2015.
- Nielsen, J. 2011. Accuracy vs. Insights in Quantitative Usability. Accessed via http://www.nngroup.com/articles/accuracy-vs-insights-quantitative-ux/ on January 30, 2015.
- Nielsen, J. 2012. How Many Test Users in a Usability Study? Accessed via http://www.nngroup.com/articles/how-many-test-users/ on January 30, 2015.