Our goal as team FourSense was to develop a versatile payment system for all users by understanding how to build confidence in visually impaired users during person-to-person transactions.
From our varied backgrounds, we have all been sensitized to effective visual communication as a mode of design. Each of us chose this project to practice design thinking beyond ordinary visuals and to make a real difference in a very important problem. Our design solution should present an experience that prioritizes all modes of communication, not simply visual communication.
We chose the name FourSense to represent our own learning goals in this project: understanding how to utilize the other four senses while de-emphasizing visual design in favor of a more holistic approach.
Meet Our Team
Nastasha Tan is the team's Project Lead, and leverages her multidisciplinary background in user-centered design, cognitive science, and psychology to ensure that the team not only functions smoothly, but gains a deep understanding of the data.
Jooyong Lee is the team's Communications Director and draws upon his experience in working in the information technology industry to bring the team together and coordinate technical solutions.
Brendan Kiu is the Software Architect of the team, and combines his enthusiasm for good interfaces and training in computer science to engineer user-friendly software for the team.
Zhenshuo (Daisy) Fang is the team's Research Director, coordinating research and ensuring that studies are focused on user needs. She also uses her design skills to assist with information design and sketching.
James Mulholland is the team's Design Lead, and combines his experience in graphic design with his passion for design thinking to ensure that the team's work is both compelling and deeply insightful.
Our team was asked to understand life with visual impairment and learn about challenges during point-of-sale transactions. Solutions addressing these challenges will abstract the experiences of the visually impaired to improve the payment experience for all users. We formed insights around our data and determined opportunity areas for new technology. As students of Human-Computer Interaction, we prioritize empathy to better understand users, knowing that our experiences and approaches differ from those of the people for whom we design. Our team was motivated by the desire to improve quality of life for all users.
Our research focuses on three areas:
- Management methods the visually impaired use for monetary transactions
- How caregivers and the social and educational communities support the visually impaired in approaching transactions
- Aspects of a transaction that make a visually impaired person more comfortable
The mission of the Human-Computer Interaction Institute (HCII) at Carnegie Mellon University includes the analysis of human behavior, which guides the development of technological systems. User research guides design solutions that best support tasks while improving the user experience, leveraging cutting-edge technology to solve real-world problems.
The Master of Human-Computer Interaction program is a 12-month program that includes a capstone project, in which teams partner with an industry client for a nine-month project. The research for that project is presented in this report.
Through our research, we carry on our school's motto: "Our heart is in the work."
About Our Client
Bank of America has been at the forefront of the credit and debit card industry and competes with many of the major banking corporations around the world. Most major banking corporations provide consumers with credit and debit card services by working with Visa, MasterCard, and American Express. Although American Express and Discover also provide credit card services directly, most banks use Visa or MasterCard as their primary credit provider.
As the largest banking organization in the country, Bank of America also remains at the top of the satisfaction rankings among financial institutions, beating out Wells Fargo, HSBC, Citi, and Chase. Bank of America remains the number one provider of debit cards and the number two provider of credit cards.
We used five methods during our research, each of which is detailed below.
- Contextual Inquiry (CI)
Contextual inquiry is a specific type of interview that calls for one-on-one observation of work practice in its naturally occurring context. Our team performed ten contextual inquiries with both blind and low-vision users and their helpers at the point-of-sale terminal in grocery stores, banks, and local shops. We created different models to understand the information flow, task sequence, and cultural aspects of monetary transactions, emphasizing the breakdowns that occurred during the process.
- Fly-on-the-Wall
Fly-on-the-wall is a method that allows the researcher to freely observe a situation without interrupting the workflow or being noticed. Our team used the fly-on-the-wall method during a conference on technology for people with disabilities in San Diego (CSUN).
- Guerilla-Style CI
In guerilla-style contextual inquiries, researchers make direct contact with the users without previous arrangements to acquire real-time data. We performed guerilla-style contextual inquiries during the CSUN conference where we asked several visually impaired users to show us how they use technology on their smartphones.
- Interviews
Our team conducted several expert interviews with Carnegie Mellon professors, Microsoft researchers, Bank of America employees, and IPPLEX researchers to understand existing research related to visually impaired users and payment technology. We also interviewed four visually impaired users referred to us by Bank of America to gain insights into their lifestyles as well as an understanding of the visually impaired community.
- Artifact Walkthrough
During an artifact walkthrough, users show artifacts related to the process being studied and explain how they use them. During our contextual inquiries, we asked users to show us their wallets (and other money-related devices) so we could better understand how the blind and visually impaired manage their money and interact with the devices they carry around every day.
The visually impaired community includes a large spectrum of individuals with varying visual impairments, from low vision to blind. In order to discover as much as we could about the visually impaired community, we recruited users of different ages, races, economic statuses, occupations, and levels of technology use.
We recruited 23 users in total, conducting 10 contextual inquiries, 4 user interviews, 7 guerilla-style CIs, and 2 fly-on-the-wall studies.
These themes were derived to encompass the insights we generated from the data collected through observation and interviews. The meaning of each theme is simple, but the implications resonate deeply in the direction of future designs.
Users are not defined by their visual impairment, but prefer to be seen "as a person first." Solutions should be comprehensive to encompass experiences for both blind and sighted users.
Users appreciate tools that support their needs and reduce impact on others. Technology should be affordable and encourage greater efficiency for users and their helpers.
Users are hopeful for a more independent life through technology. Solutions should empower users by providing feedback to make effective choices and ensure opportunities to maintain their privacy.
Under each of these categories, we've articulated several key insights from our research.
- Users have personal and unique reasons to choose payment methods.
- Each user wants to be treated like a normal person.
- Independence is essential for the user to be self-confident and comfortable.
- Payment problems have a significant impact on helpers.
- Users adopt technology that is cheap and has minimum obligation.
- Transaction errors are critical because users have difficulty detecting and correcting them.
- Users seek efficiency in all shopping activities.
- Feedback is necessary for users to learn from errors.
- Feedback is necessary for users to understand the status of the system.
- Privacy makes users feel comfortable and secure.
- Having choice in tools, methods, and technology empowers users.
- Users need predictability and consistency to allow for preparation.
- A user's understanding of current status guides expectations.
Based on our key findings, we have identified four directions for future development.
The authorization process requires new ways of thinking about feedback and error prevention. Visually impaired participants often required guidance to the point-of-sale device for signing. At the point of sale, they discovered that the digital signature pad did not correctly capture the signature, or they felt their signature was illegible and therefore pointless to use. Recent trends in payment technology are moving toward contactless payment. However, this new payment system still requires a signature for authorization when the total price exceeds a certain limit. There is a product opportunity to redefine the authorization process and leverage other physicalities for authorization.
- Improve the current signature interface or replace it altogether.
- Change the authorization process by reshaping the way payments are modeled.
- Provide multiple authorization options and include authorization methods that do not require a signature.
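The ideas above suggest a selection step before authorization even begins. As a minimal sketch, the snippet below models offering multiple authorization methods ordered by a user's stated preference, so a signature is never the only option. All names, thresholds, and the method list are illustrative assumptions, not part of any real payment system.

```python
# Hypothetical sketch: choosing among multiple authorization methods
# instead of defaulting to the signature pad. The $25 floor mirrors how
# small contactless payments often skip authorization; the exact limit
# and method names here are made up for illustration.

SIGNATURE_FLOOR = 25.00  # purchases below this skip authorization entirely

def authorization_options(amount, user_prefs):
    """Return the authorization methods to offer, in the user's order.

    amount     -- transaction total in dollars
    user_prefs -- ordered list of methods the user can perform,
                  e.g. ["pin", "voice", "signature"]
    """
    if amount < SIGNATURE_FLOOR:
        return ["none"]  # small payments need no authorization step
    # Offer every supported method the user prefers, in their order, so a
    # visually impaired user is never forced onto the signature pad.
    supported = {"pin", "voice", "fingerprint", "signature"}
    return [m for m in user_prefs if m in supported]
```

For example, a user who prefers a PIN but can fall back to a signature would call `authorization_options(40.0, ["pin", "signature"])` and be offered the PIN first.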
Restructuring the Shopping Process
A key component of a visually impaired person's shopping experience was the assistance they received. From navigating through the store to finding the point-of-sale device, some level of assistance was required. Providing tools that empower our users and give them more control in the shopping process will help them make more decisions of their own and increase their independence.
- Develop pervasive search functionality so users need not look around the store for the best item.
- Empower users with tools that allow them to make a selection between different options on the shelf.
- Simplify the shopping trip throughout the store by utilizing the shopping list in-context.
- Bring together the search, reserve, and pay process into a more seamless experience.
- Leverage the wisdom of the crowd to make the shopping process more efficient by referencing ratings on stores and employees.
Assisting the Mobility of the Visually Impaired
For the visually impaired, moving throughout the store and locating objects requires assistance. As mentioned in our findings, independence improves our users' confidence and comfort with their surroundings. There is a product opportunity to give the visually impaired more confidence by developing technologies that focus on movement and articulation in physical space. We can improve kinetic responses and interactions with devices through the use of appropriate feedback.
- Reduce the need to move around the store or search for the signature pad.
- Leverage advanced sensory technologies to enhance awareness of direction and distance.
- Improve movement awareness and way-finding.
- Apply directional auditory feedback and computer vision technology to improve the way we navigate through the store and concentrate only on the things we care about.
Many banks are tapping into the mobile space as a way to incorporate NFC (Near Field Communication) payment technology and make banking and payments more versatile. There is an opportunity to consider key accessibility features and implement options that assist visually impaired users in these new technologies. However, these new solutions leave much room for improvement, as the technology is still nascent for transactions in the US. Considering accessibility from the ground up is important for creating solutions for the visually impaired. Direct feedback, status updates, and privacy features could be implemented cheaply by embedding payment information in objects and using advanced feedback on mobile devices.
- Leverage NFC technology for mobile banking and payments.
- Develop flexibility in the delivery method of payment.
- Enable real-time balance updates to support management.
- Empower individuals to have full control over their bank accounts, saving trips to the bank.
- Provide instantaneous feedback about banking and transaction activities.
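The "real-time balance updates" and "instantaneous feedback" directions above can be sketched as an account model that confirms every transaction with a short, speakable status message, the kind of string a phone's text-to-speech engine could read aloud. The class and message wording below are our own illustrative assumptions, not an actual banking API.

```python
# Hypothetical sketch: after each payment, return a spoken-style
# confirmation including the new balance, so a visually impaired user
# gets instantaneous feedback without reading a screen.

class Account:
    def __init__(self, balance):
        self.balance = balance

    def pay(self, merchant, amount):
        """Apply a payment and return a speakable confirmation string."""
        if amount > self.balance:
            # Feedback on failure matters as much as on success.
            return "Payment declined: insufficient funds."
        self.balance -= amount
        return (f"Paid ${amount:.2f} to {merchant}. "
                f"New balance: ${self.balance:.2f}.")
```

A design like this keeps the feedback channel private when routed through headphones, which also supports the privacy insight from our findings.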
The Current Shopping Process
A large part of researching and learning about life with visual impairment from our users concerned the challenges that took place not only when paying, but throughout the entire shopping process. In the spring, we observed firsthand our users' difficulties in navigating through the store and browsing for items to buy. As we began to understand our large problem space from our research findings, we abstracted an understanding of the shopping process to define opportunities for improving the experience for both the visually impaired and sighted communities.
Based on our research findings and our own experiences shopping, we divided the shopping process into five parts that describe how a typical customer would proceed to shop and purchase their items. The following details each step of the shopping process:
1. Determining where to go to retrieve specific items and the means of transportation to get to and from the store.
2. Once the customer arrives at the store, they revisit their goals for visiting the store, ask for assistance, and/or grab a cart to begin shopping.
3. After all of the items are gathered, the customer proceeds to check out their items and pay for them.
4. To pay, the customer enters their PIN or signs their name to validate their identity.
5. The customer checks that their receipt accurately reflects their purchase and proceeds to exit the store.
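Because the framework is strictly ordered, it can be sketched as a simple sequence of states; the one-word step names below are our own paraphrase of the descriptions above, not labels from the original framework.

```python
# A minimal sketch of the five-part shopping framework as an ordered
# sequence of states. Step names are our own shorthand: plan the trip,
# shop for items, check out, pay/authorize, then verify the receipt.

STEPS = ["plan", "shop", "checkout", "pay", "verify"]

def next_step(current):
    """Return the step that follows `current`, or None after the last."""
    i = STEPS.index(current)  # raises ValueError for an unknown step
    return STEPS[i + 1] if i + 1 < len(STEPS) else None
```

Framing the process this way helped us ask, for each transition, where a visually impaired shopper needs assistance and where a tool could remove that need.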
During our spring research phase, we learned as much as possible about our problem space and our users. During our design phase, we synthesized our research findings to explore different design opportunities and develop concepts that could be validated with our users. Eventually, our final design was chosen based on its potential impact. In later iterations of our final design, we evaluated our prototypes closely with our users to refine our interface. Each prototype was more focused than the last, ending with a comprehensive and usable product.
The beginning of our design process was characterized by an emphasis on creativity and idea generation. As represented in our design funnel, we began with very broad ideation that was eventually narrowed by our focus. As with most design projects, beginning with a very broad scope encouraged many ideas not subject to immediate judgement. It was important for our team to have free-flowing thoughts and an opportunity to think innovatively without boundaries so that our product ideas and concepts could be novel and revolutionary. We used the following brainstorming techniques to generate ideas while considering the needs of our users.
We began our design process with a brainstorm that explored over 100 ideas. The ideas generated during our brainstorm came from our research findings and the product opportunities that were derived from them. Brainstorming was a successful way to begin our design process because it resulted in a handful of great ideas we could pursue.
We brainstormed not only by generating new ideas in our minds, but also by crafting physical objects from an assortment of cheap materials (foam, pipe cleaners, paper, and plastic trinkets). Through crafting, we quickly created objects influenced both by the materials themselves and by the ideas fresh in our minds from our research findings. Many of our ideas focused on novel applications of tactile, haptic, and audio feedback that set the groundwork for a novel and revolutionary product.
As our design scope focused more clearly on payment solutions, our team participated in a bodystorm that explored various ways to physically interact with a payment terminal. Bodystorming is like an improvisational skit, involving spontaneous movements and the embodiment of gestures with various objects. The ideas generated are valuable not only because they are what first comes to mind, but also because they consider the interaction and experience of the entire body. We used bodystorming to explore novel interactions at the payment terminal, resulting in ideas that ranged from paying with a phone to paying with a karate-chop gesture!
Our personas guided our design process, ensuring that the needs of our users were identified and addressed in our designs. When generating design concepts and ideas as a team, we referenced our personas to make sure our ideas fit their goals.
- Shopping Process
Throughout our ideation process, we referenced the shopping framework to understand our solutions in context. This helped us determine the extent to which our solution would have the intended impact.
In order to better understand our problem space, we had to acknowledge legal, organizational, and cultural constraints. These constraints were important to identify because they shaped the way our final product would be designed.
In order to evaluate our design ideas with our users, we created prototypes of various fidelities. Prototypes were used in both our concept validation and final design evaluations because they are cheap, simple, and easy to make. Because we wanted to evaluate many of our ideas quickly, there was an emphasis on rapid prototyping. If ideas were successful and worth committing to, we created higher-fidelity prototypes to communicate a greater level of detail.
To test our initial concepts, we relied heavily on creating props to convey our design concepts to our users. We used paper and low-fidelity materials to communicate the look-and-feel and overall functionality of our concepts. Because most of our initial users were blind, we focused on creating prototypes with tactile features distinguishable by touch. To add a level of depth to our low-fidelity prototypes, we used our own voices to simulate the audio feedback that would be given by the system. This not only made our concept more realistic but also gave us the flexibility to try out various types of audio and speech feedback to see what would be most effective in our final design.
- Gesture Prototyping
We validated a gesture-based interface that involved prototyping the gesture language, rather than prototyping the physical object and visual interface. We focused our prototyping efforts on the different gestures that could be used to access features and information on Android's Nexus S phone. Again, because most of our users were visually impaired, visual design was not a high priority for our team to prototype. As we prototyped our gesture language, we focused on different types of gestures that could be used to access information from the phone by exploring user-generated gestures and the gestures we designed from our own ideas. Defining a set of gestures with our users helped us understand what our users were comfortable performing. While evaluating our gestures based on user feedback, we also considered the phone's ability to recognize these gestures in a mobile phone context. Through user testing, we refined the gesture language to avoid accidental activation of these gestures due to false positives and negatives.
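One concrete way to guard against the false positives mentioned above is to require a touch trace to travel a minimum distance and to be clearly dominated by one axis before it counts as a swipe. The sketch below illustrates that kind of threshold check; the specific thresholds and function name are our own assumptions for demonstration, not the actual recognizer we shipped.

```python
# Illustrative sketch of threshold-based swipe classification to reduce
# accidental gesture activation. A trace only registers as a swipe when
# it travels far enough AND its main axis dominates the other; short or
# diagonal traces are rejected rather than guessed at.

MIN_DISTANCE = 80      # pixels a swipe must travel to register (assumed)
DOMINANCE_RATIO = 2.0  # main axis must exceed the other by this factor

def classify_swipe(x0, y0, x1, y1):
    """Classify a touch trace as 'left', 'right', 'up', 'down', or None."""
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) >= MIN_DISTANCE and abs(dx) >= DOMINANCE_RATIO * abs(dy):
        return "right" if dx > 0 else "left"
    if abs(dy) >= MIN_DISTANCE and abs(dy) >= DOMINANCE_RATIO * abs(dx):
        return "down" if dy > 0 else "up"
    return None  # too short or too diagonal: ignore to avoid false positives
```

Tuning the two thresholds trades false positives against false negatives, which is exactly the balance our user testing helped us find.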
- Visual Prototyping
Alongside our gesture language, we implemented a visual interface that would allow sighted users to use our prototype. This visual interface began as a set of low-fidelity wireframes created in Adobe Illustrator. Eventually, the wireframes were improved using Balsamiq Mockups, a prototyping tool. As we made improvements to our visual interface, we wanted to evaluate it coupled with our gesture interface. To do this, we created a foam-core model of a mobile phone that made simulating screen changes easier. The foam-core model was also a good way to test our gesture interface because it was something users could physically hold in their hands. While the wireframes were used for user testing, we gradually implemented the visual interface in code on Android's Nexus S phone. We took an agile development approach when designing our visual interface because it supported quick iterations based on user feedback. After our visual interface was improved in a second iteration, we began developing a high-fidelity visual interface for our final prototype, Sensei. To evaluate our visual interface improvements, we created a click-through prototype of our phone screens. Testing our interface through a functional on-screen prototype allowed us to get feedback regarding the functionality and design affordances of on-screen features without having to implement the visual interface in code.
- Unified Interface Prototyping
After several rounds of testing the visual and gestural interface as separate entities, we created our final prototype, Sensei, which combines both the visual and gestural interface. It was important for our final prototype to exemplify our goal of having a single experience for both visually impaired and the sighted users because we needed to evaluate the user experience and accessibility of the combined interface. We used this opportunity to test how the audio, speech, and visual interface complemented each other.
During both our concept validation and final design phases, we used different methods to test our designs and evaluate our concepts.
- Wizard of Oz Testing
For our low-fidelity prototypes, we did not create a robust functional prototype. In the early development of a new concept, it is difficult to invent new technologies before understanding broad, high-level concerns, so we used wizard-of-oz testing to test an idea close to our actual final prototype by simulating its functionality. A tester behind the curtain operated the prototype's functionality to simulate a real response from our prototype. By testing with this method, our users experienced a close model of our real product without any code implementation.
We spoke with our users before and after each test to validate and confirm our observations. Our goal during interviews was to better understand the user's state of mind, comfort with the product, and willingness to learn about our prototype. We tried to understand their current experiences with technology and how their expectations may have influenced the test results. From this, we gained a great deal of knowledge and perspective from our users.
- Participatory Design
We wanted to better understand how users perceived the use of a mobile phone, so we worked with users by asking them to perform abstract actions with the phone. This allowed us to understand their perceptions of the phone and what types of gestures were natural for them to articulate.
- Think-Aloud Testing
After gaining enough focus on our concept, we were able to design more details within the system and evaluate more minute interaction elements, specifically application flow, button location, gesture choice, and sound and speech feedback. We asked users to walk through a number of tasks while speaking their thought process aloud. Our intent in testing our concept was not to validate that it was the right design; rather, the purpose of testing was to understand where and why it failed, so we could gain significant knowledge of the system. Combining empirical data with our qualitative observations provided a direction for our subsequent iterations.
We took an iterative design approach through multiple stages of our designs. We evaluated our ideas based around the needs we identified in the spring. Through evaluation, we wanted to understand how our users reacted and used our designs.
- Iteration 1
This phase included broad concept validation for ideas that concentrated on earlier parts of the shopping framework. These ideas centered around meeting the needs observed around the shopping process.
- Iteration 2
After evaluating our ideas from Iteration 1, our conclusions influenced us to develop new ideas. We focused these new ideas around the purchase and payment area of the shopping process so that users could effectively move through the transaction space.
- Iteration 3
After deciding on the concept of our final product, we perfected the detailed elements of the interface. Our interface included a number of technologies, including physical interaction, which needed careful consideration. At the same time, we worked through four iterations of the visual interface through prototyping, eventually working toward a click-through prototype to be tested with users.
- Iteration 4
We combined the visual and physical interfaces for this final iteration and worked with users to determine any issues from being combined.
- User Recruitment
During our second round of testing, we faced a great challenge in recruiting new users. In the end, we relied on many of our users from the research phase of the project. We acquired new blind and low-vision users through Bank of America's affinity group in Charlotte, NC. We were able to approach members of our lab for low-level iteration, but relied heavily on our clients for new user connections and also approached sighted users off the street. We needed users who had a sense of financial responsibility and regularly managed their own finances, and therefore required our users to be at least 21 years of age. We did our best to recruit younger users, who were more likely to be familiar with new technology.
- User Demographic
We tested with 30 participants in total: 10 completely blind, 4 low-vision, and 16 fully sighted users. Among those 30 participants, 18 were female and 12 were male. In order to compare feedback from first- and second-time users, we tested our prototypes with several consistent users throughout the process. We had 5 consistent blind users (2 male and 3 female) and 2 consistent low-vision users (1 male and 1 female). The following table details our participant distribution.
This site has been designed specifically to work well with screen-reader technology, with an easily searchable single-page layout, high-contrast colors, and minimal graphical elements to create a well-crafted experience for both sighted and visually impaired users.