scenarios, instead of systematically going through the user interface screen by screen.[5] Aspects of heuristic evaluation we'll examine here include:

• Potential heuristics
• Use of scenarios
• Location of the evaluation (laboratory versus field)
• Capturing and documenting findings

Heuristics

The heuristics you choose can be based on established lists, such as Jakob Nielsen's heuristics or Bruce Tognazzini's "First Principles of Interaction Design."[6] However, in most cases you'll want to adapt and expand these more generic classics to include specific areas of interest for your app. For example, location-based apps may have their own heuristics (e.g., "Users can always locate their current position"), and content creation apps may have heuristics for topics such as autosaving content. At a minimum you'll want to adapt the classic heuristics for the iPhone, removing web and desktop references. Nielsen's severity ranking scale and iPhone-adapted heuristics are included in TABLE 5.3 and FIGURE 5.2.

One of the major weaknesses of heuristics is the emphasis on usability problems. Understanding what your competitors are doing wrong is valuable, but there are also best practices that you'll want to emulate. Moreover, you will undoubtedly be inspired by many of the innovative design solutions in the iPhone space. As a result, you should stray from the heuristics from time to time. They're an excellent starting point, but they can be limiting if you're trying to match every observation to a specific heuristic. If something is interesting to you—good or bad—and there isn't a label, create your own!

TABLE 5.3 Nielsen Severity Ranking Scale[7]

Rating  Description
0       I don't agree that this is a usability problem at all.
1       Cosmetic problem only. Need not be fixed unless extra time is available.
2       Minor usability problem. Fixing this should be given low priority.
3       Major usability problem. Important to fix, so should be given high priority.
4       Usability catastrophe. Imperative to fix this before product can be released.

[5] Shirlina Po et al., "Heuristic Evaluation and Mobile Usability: Bridging the Realism Gap," Mobile Human-Computer Interaction (ACM, 2004), www.springerlink.com/content/yven5m13vf437y1u/
[6] Bruce Tognazzini, "First Principles of Interaction Design," www.asktog.com/basics/firstPrinciples.html
[7] Jakob Nielsen, "Severity Ratings for Usability Problems," www.useit.com/papers/heuristic/severityrating.html

FIGURE 5.2 Nielsen's ten usability heuristics adapted for the iPhone

1. Visibility of app status: The app should always keep users informed about what is going on, through appropriate feedback. (Shazam provides feedback as it analyzes audio.)
2. Match between app and the real world: The app should sense the user's environment and adapt the information display accordingly. (The compass changes the map orientation as needed.)
3. User control and freedom: Users often choose app functions by mistake and need a clearly marked "emergency exit." ("Cancel" and "x" buttons are common iPhone controls.)
4. Consistency and standards: Users should not have to wonder whether different words, situations, or actions mean the same thing. (Kindle uses standard controls for bookmarking and showing progress.)
5. Error prevention: Eliminate error-prone conditions or check for them and present users with a recovery option. (Spell check has an option to reject its recommendation.)
6. Recognition rather than recall: Minimize the user's memory load by making objects, actions, and options visible. (Yelp's Recents tab stores businesses recently viewed.)
7. Flexibility and efficiency of use: Accelerators can help expedite tasks and reduce typing. (Urbanspoon provides suggestions as a user enters a query.)
8. Aesthetic and minimalist design: Screens should not contain information that is irrelevant or rarely needed. (Photo controls are hidden when not in use.)
9. Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language that precisely indicates the problem and the solution. (Epicurious displays a message when users are offline.)
10. Help and documentation: Help should be contextual, concise, and specific. (Ocarina provides contextual help upon start-up; users can also access tutorials while using the app.)
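Table 5.3 and Figure 5.2 together suggest a convenient shape for recorded findings: each observation gets a heuristic label (Nielsen's or one you create) and a 0-4 severity rating. As a minimal illustration, here is one way such a record could be encoded in Swift; all type and property names are invented for this sketch and are not part of any established tool.

    import Foundation

    /// Nielsen's 0-4 severity scale from Table 5.3.
    enum Severity: Int, Comparable {
        case notAProblem = 0   // not a usability problem at all
        case cosmetic = 1      // fix only if extra time is available
        case minor = 2         // low priority
        case major = 3         // high priority
        case catastrophe = 4   // must fix before release

        static func < (lhs: Severity, rhs: Severity) -> Bool {
            lhs.rawValue < rhs.rawValue
        }
    }

    /// One observation from a heuristic evaluation. The heuristic label is
    /// free-form, so you can use Nielsen's labels or your own.
    struct Finding {
        let app: String        // e.g., "Urbanspoon"
        let heuristic: String  // e.g., "Aesthetic and minimalist design"
        let severity: Severity
        let note: String
    }

    // Sort captured findings so the most serious issues surface first.
    let findings = [
        Finding(app: "Urbanspoon",
                heuristic: "Aesthetic and minimalist design",
                severity: .minor,
                note: "Near Me view shows too many categories")
    ]
    let worstFirst = findings.sorted { $0.severity > $1.severity }

Keeping severity machine-readable makes it easy to sort and filter observations when you assemble your findings later in the process.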
Scenarios

Depending on the evaluation goals, you can walk through the entire app or focus on a small set of scenarios. When embarking on a new app design, I recommend evaluating as much of the competitors' designs as possible. If time is limited, focus on the scenarios that enable users to achieve their primary goals and postpone scenarios associated with secondary goals. Here are potential differences between these goals, using Urbanspoon as an example.

Primary goals:
• Find a restaurant
• Get directions to the restaurant

Secondary goals:
• Review a restaurant
• Add/edit restaurant information

Apps that are connected to web sites or desktop applications should not be considered in isolation, especially when some level of syncing is involved. For example, Yelp enables users to draft reviews on their iPhone and then complete the reviews on the Yelp site. Of course, it's not necessary to review the entire Yelp web site, but it would be important to provide a scenario that includes completing the draft started on the iPhone app.

Laboratory versus Field

Heuristic evaluations can take place in the lab or in the field; the decision should be based on user behavior, the type of app, and the dependency on context to generate the insights needed.[8] For example, if you're evaluating an app that is largely driven by location-based data, such as Yelp or Foursquare, you should be out in the field to fully evaluate the user experience. However, if the app has less demanding contextual dependencies, you may be able to simulate the context in the lab.

[8] Christian Monrad Nielsen et al., "It's Worth the Hassle! The Added Value of Evaluating the Usability of Mobile Systems in the Field," Proceedings of the 4th Nordic Conference on Human-Computer Interaction: Changing Roles (ACM, 2006), www.usabilityprofessionals.org/upa_publications/jus/2005_november/mobile.html; Anne Kaikkonen et al., "Usability Testing of Mobile Applications: A Comparison between Laboratory and Field Testing," Journal of Usability Studies (November 2005).

If you take a simulation approach, some questions to consider include the following:

• Environment. What is the environment like where the app is used—is it dark, noisy, crowded? Think of creative ways to simulate this environment in the lab.
• Concurrent activities. What other activities occur concurrently? If the user watches TV, sit back and relax. But if the user is typically running or walking, you may need to get moving!
• Entities. What other people, devices, or objects are involved when the app is in use? Apps used in pairs (e.g., games or messaging) will require help from your colleagues.
• Time. Is the app used during a certain time of day or season? If your app is influenced by these factors, schedule your heuristic evaluation accordingly.
• Network. Does the app require an Internet connection? Consider trying the app in the company elevator or basement, or switch to Airplane Mode and see how the app behaves. Also, you may want to test the app without WiFi (with only a cell connection), or with WiFi on but a slow connection.
• User data. Does one need existing data to fully evaluate the app? You may need to create a user account and pre-populate it with content before getting started.

Capturing Findings

If the heuristic evaluations are conducted out in the field, take shorthand notes and expand upon them later. Regardless of whether you're in the lab or field, be sure to take copious screenshots along the way. When you return to your office, consider posting the screenshots on foam boards, then adding sticky notes with your observations, as shown in FIGURE 5.3. After going through each competitor, start looking for themes across competitors. Placing similar screens side by side can help illuminate differences. For example, FIGURE 5.4 includes heuristics as well as interesting terminology and navigation differences; Yelp uses the term Nearby whereas Urbanspoon uses Near Me.

FIGURE 5.3 The Yelp iPhone app search form and results with relevant heuristics: "Flexibility & efficiency of use" and "Recognition rather than recall"

FIGURE 5.4 Side-by-side comparison of the Yelp and Urbanspoon Nearby/Near Me views. In this particular case, both designs have some weaknesses—too many categories and too cluttered—thus the insight might be "Prioritize the display of business information" (as opposed to categories and UI debris).

Documenting Your Findings

Depending on your company's needs, foam boards covered with screenshots and observations may be sufficient documentation for your team. In the past, my teammates and I have kept these types of boards in our "war room" and referred to them over the course of a particular project. Alternatively, you can create slides with findings and recommendations; a few examples can be found on Ginsburg Design's Slideshare page.[9] And if you really need a formal document, you can use the same outline included in the slides, with a more fully developed narrative. Your outline might contain the following information:

• Executive summary
• Methodology
• Apps included
• Scenarios included
• Findings with annotated screens
• Best practices

[9] Ginsburg Design on Slideshare, www.slideshare.net/ginsburgdesign/

COMPETITIVE USABILITY BENCHMARKING

Although heuristic evaluations will provide rich insights, your company may want to supplement these qualitative findings with quantitative data. One widely used method is competitive usability benchmarking, which involves quantitatively measuring how well users perform certain tasks with your competitors' apps. Metrics typically logged include:

• Number of errors
• Task completion time
• Whether the task was successfully completed

Benchmarking studies focus more on the metrics and less on the understanding of why there are problems; this type of controlled focus is needed in order to generate comparable results.
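To keep these numbers comparable across participants and apps, it helps to log every task attempt in one consistent shape. The following Swift sketch is illustrative only (the names are invented, not drawn from any logging product); it records the three metrics listed above and derives a completion rate from a set of attempts.

    import Foundation

    /// One participant's attempt at one benchmark scenario.
    struct TaskAttempt {
        let participant: String
        let app: String            // e.g., "Yelp" or "Urbanspoon"
        let scenario: String       // e.g., "Create a new account"
        let errorCount: Int        // number of errors observed
        let duration: TimeInterval // task completion time, in seconds
        let completed: Bool        // whether the task was successfully completed
    }

    /// Percentage of attempts at a scenario, in a given app, that succeeded.
    func completionRate(_ attempts: [TaskAttempt], app: String, scenario: String) -> Double {
        let relevant = attempts.filter { $0.app == app && $0.scenario == scenario }
        guard !relevant.isEmpty else { return 0 }
        let successes = relevant.filter { $0.completed }.count
        return 100 * Double(successes) / Double(relevant.count)
    }

Records like these can be tabulated into per-competitor completion rates, as in Table 5.4 later in this section.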
Benefits

Benchmark findings will help you assess where your competitors are failing in the user experience and may suggest ways to differentiate your app. For example, imagine that you're creating an app for finding local restaurants and you've decided to conduct a benchmarking study that would include Yelp and Urbanspoon. If you discovered that only a small percentage of users are able to successfully create a new account on the iPhone, this may be an area where you could differentiate your user experience. Additionally, you could use this metric to establish goals for your app. If the success rate for your competitors' sign-up was 50 percent, perhaps you would establish a goal of 60 or 70 percent for your app.

Protocol

In traditional usability studies, the facilitator asks participants to think aloud as they proceed through a series of tasks. This enables the facilitator to understand the participants' cognitive processes and whether the interface supports their way of thinking. In the case of benchmarking studies, facilitators tend to stay in the background, interrupting only to start and stop tasks. One of the main reasons is that benchmarking is focused on measurement, and the "thinking aloud" protocol prolongs task completion times, making it difficult to compare results across participants. Keep in mind that the benchmarking numbers won't necessarily tell you the why behind completion rates. Therefore, you may want to follow the benchmarking session with an interview to probe into specific interactions with the app.

NOTE: The "thinking aloud" protocol and other usability-testing details will be discussed in Chapter 8, "Usability-Testing App Concepts."

Capturing Data

How you capture benchmarking data depends on your needs. Simply put, the more data you need, the more sophisticated the setup.[10] Here are three possible configurations:

• Low-tech. With this option, the facilitator manually notes errors, completion times, and whether the scenarios were successfully completed. This can be accomplished in the lab with relative ease but can be more challenging in the field.
• Logging software. Another option is to use logging software combined with observation.[11] This reduces the burden on the facilitator and generates more accurate start and end times.
• Audio and video. Finally, you can capture audio and video of the sessions. Video setups can range from one camera on the phone to multiple cameras that capture the participant's face and environment. More complex setups provide facilitators with a view into the participant's screen.

[10] Antti Oulasvirta and Tuomo Nyyssönen, "Flexible Hardware Configurations for Studying Mobile Usability," Journal of Usability Studies (February 2009), www.usabilityprofessionals.org/upa_publications/jus/2009february/oulasvirta1.html
[11] Jurgen Kawalek, Annegret Stark, and Marcel Riebeck, "A New Approach to Analyse Human-Mobile Computer Interaction," Journal of Usability Studies (February 2008), www.usabilityprofessionals.org/upa_publications/jus/2008february/kawalek.html

Other Setup Tips

Additional tips on the number of participants, apps, and scenarios are discussed in this section.

• Number of participants. The number of participants is highly correlated to the degree of confidence you can have in your results. Statistical sampling best practices tell us that 30 observations is a good rule of thumb for obtaining precise results, but you may need to adapt this number depending on your study goals. (A rough way to quantify this precision is sketched after this list.) You can recruit participants using the profile identified in your up-front user research, as discussed in Chapter 3, "Introduction to User Research."
• Number of apps. Given the amount of time required for capture and analysis, you may want to focus on two apps. But if you have the resources, you can certainly benchmark additional apps.
• Number of scenarios. The benchmark study should cover the scenarios that enable users to achieve their primary goals. However, if these take more than one hour, you'll want to prioritize accordingly.
• Laboratory versus field. As with the heuristic evaluations and other user research, the context should be determined by the type of app. Location-based apps should be tested in the field, but testing of apps with less stringent contextual dependencies can take place in the lab.[12]

[12] Kaikkonen et al., "Usability Testing of Mobile Applications."
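To get a feel for how sample size drives that confidence, you can put a confidence interval around an observed completion rate. The sketch below uses the textbook normal-approximation (Wald) interval for a proportion; this is a simplification, and for small samples or rates near 0 or 100 percent a more robust interval (such as Wilson's) would be preferable.

    import Foundation

    /// Approximate 95% (Wald) confidence interval for a completion rate,
    /// given that `successes` of `n` participants completed the task.
    func waldInterval95(successes: Int, n: Int) -> (lower: Double, upper: Double) {
        let p = Double(successes) / Double(n)
        let margin = 1.96 * sqrt(p * (1 - p) / Double(n))
        return (max(0, p - margin), min(1, p + margin))
    }

    // With 15 of 30 participants succeeding, the observed rate is 50%, but the
    // interval spans roughly 32% to 68%. A competitor measured at 55% with the
    // same sample size may not be meaningfully different.
    print(waldInterval95(successes: 15, n: 30)) // ≈ (0.321, 0.679)

The same arithmetic is useful when setting goals: a target such as the 60 or 70 percent mentioned earlier is most meaningful when the gap exceeds the measurement noise.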
Analyzing and Presenting Data

Your data analysis approach will be influenced by the benchmark goals and data-capturing technique. With low-tech data capture—manually logged data—you could create simple tables (TABLES 5.4–5.5). If you use logging software, most products provide built-in analysis and charting tools. Additionally, if the sessions were recorded, you may want to analyze the video. However, keep in mind that video analysis is labor-intensive. If you have solid notes, video can be used for reference or to supplement your report.

TABLE 5.4 Comparing Scenario Completion Rates Alongside Your Company's Goals

            Competitor A   Competitor B   Our Goal
Scenario 1  50%            55%            60%
Scenario 2  40%            60%            65%
Scenario 3  45%            55%            60%

TABLE 5.5 Comparing Number of Errors Alongside Your Company's Goals

            Competitor A   Competitor B   Our Goal
Scenario 1
Scenario 2
Scenario 3
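If attempts were logged in a consistent structure such as the TaskAttempt sketch shown earlier, producing a table like Table 5.4 is a small aggregation step. The snippet below is a self-contained illustration with invented outcome data; only the table shape mirrors Table 5.4.

    import Foundation

    // Completed-or-not outcomes per scenario, per app. The values are invented
    // purely to demonstrate the aggregation.
    let results: [String: [String: [Bool]]] = [
        "Competitor A": ["Scenario 1": [true, false, true, false],
                         "Scenario 2": [false, true, false, false]],
        "Competitor B": ["Scenario 1": [true, true, false, true],
                         "Scenario 2": [true, true, false, false]]
    ]

    // Print one completion-rate row per scenario, one column per app.
    let apps = results.keys.sorted()
    let scenarios = Set(results.values.flatMap { $0.keys }).sorted()
    for scenario in scenarios {
        let cells = apps.map { app -> String in
            let outcomes = results[app]?[scenario] ?? []
            guard !outcomes.isEmpty else { return "-" }
            let rate = 100 * Double(outcomes.filter { $0 }.count) / Double(outcomes.count)
            return String(format: "%.0f%%", rate)
        }
        print(scenario, cells.joined(separator: "  "))
    }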
Choosing a Method

As you consider which method to use, it's important to evaluate your competitive analysis goals. Are you interested in formulating best practices? Do you want an overview of how your competitors are meeting users' needs? Are you seeking inspiration? In many cases combining methods is the most effective strategy. Strengths and weaknesses of alternative methods are summarized in TABLE 5.6.

TABLE 5.6 Summary of Competitive Analysis Methods

• Needs alignment charts. Strengths: good for assessing where competitors are meeting user needs. Weaknesses: no best practices or inspiration.
• Two-by-two diagrams. Strengths: good way to illustrate how the app fits into the overall competitive landscape. Weaknesses: no best practices or inspiration; attributes highly subjective.
• Heuristic evaluations. Strengths: fast and inexpensive; good for determining best practices and finding inspiration. Weaknesses: dependent on reviewer's expertise and the heuristics used.
• Competitive benchmarking. Strengths: good for gathering quantitative data. Weaknesses: time-consuming and expensive; no understanding of the why behind behaviors unless a follow-up interview is included.

Impact on the Product Definition Statement

In the previous chapter we discussed how up-front user research can help refine your Product Definition Statement—the declaration of your application's main purpose and its intended audience. An in-depth analysis of your competitors may also impact your app purpose and audience. To illustrate, let's revisit the Product Definition Statement from Chapter 4, "Analyzing User Research":

An app to help urban art enthusiasts find, share, and review art events

As a result of your competitive analysis, perhaps you'll discover that there are ten other apps that claim to provide the same exact service. How will your app be different? Imagine that your competitive research revealed that only one app is focused on outdoor art—graffiti, sculpture, murals. If you choose to focus on this type of art, your revised statement might look like this one:

An app to help urban art enthusiasts find, share, and review outdoor art

Additionally, let's say that you conducted a heuristic evaluation of potential competitors and identified common pain points in these apps. You may decide that overcoming key pain points (such as those related to primary goals) could be an effective way to distinguish your app. For example, users may be more likely to choose your app if it were easier to geo-tag outdoor art; thus you may further refine your Product Definition Statement:

An easy way for urban art enthusiasts to geo-tag, review, and share outdoor art

Summary

This chapter discussed how competitive UX analyses can provide a holistic view of the competitive landscape, which you can then reference throughout the app design process. In particular, these analyses can help you formulate best practices, identify opportunities, and provide inspiration. We introduced a variety of competitive analysis methods—needs alignment charts, two-by-two diagrams, heuristic evaluations, and competitive benchmarking—which can be combined and adapted to meet your app needs. Finally, we explained how your findings can help shape your Product Definition Statement, the declaration of your application's main purpose and its intended audience.

Competitive analyses are beneficial for both new and existing apps. As you embark on competitive analysis for your own app, remember the following:

• Cast a wide net when selecting competitors to evaluate (e.g., other platforms and related domains).
• Develop your own heuristics to cover specific areas of interest for your app.
• Post your findings in your company hallway or team war room so everyone can easily refer to them during the design process.