The Modern Digital Tragedy / Part 3 of 6

Part 3: Outsourcing the User Interface

In an effort to improve adoption and usability of digital products, development organizations engage customers or users for guidance. They rely on this feedback to plan and change their software, apps, or sites. Though the instinct for advice is sound and input may help, this approach often backfires.

Seeking External Feedback

Let’s imagine Sisyphus, rolling that boulder up the hill for, what, the zillionth time? He thinks, “I must be doing something wrong. I can never get this dang rock to stay at the top.” Being a worldly, clever man, he decides to seek help. He looks for opinions and advice from people who use or purchase boulders – engineers, landscapers, or perhaps stonemasons. Surely, they can give him some insight about this boulder and this hill.

Development-centered organizations gravitate toward this approach. Customers buy and use the products they make, so they must have opinions about them. Those opinions could be the catalyst to making better, more user-friendly digital products. They hope independent, objective input will help them identify and fix highly specific problems with their apps, sites, or software.

Some tactics include:

1. Establishing Customer Advisory Boards (CABs)

Organizations sometimes gather a group of current (and sometimes prospective) customers together to offer periodic feedback to development teams. Often, a CAB includes participants with a strong (even symbiotic) connection to the development organization. They offer high-level strategic advice and may review products in progress.

WHY THEY FAIL
CABs are inherently flawed.
Customers have vested interests in themselves, not your product. Everyone on a CAB has an agenda, particularly the dominant, important clients. If they can, they will steer a product wholly toward their preferences. What begins with innocuous prodding or desire for specific features devolves into direct visual prescriptions for the interface. This feedback may be useful, but it’s more likely driving digital products toward a laundry list of features that serve few. First-rate interfaces cannot be created this way.

CABs rarely include actual users.
CABs are often populated by account representatives and project managers who specialize in squeaky-wheel diplomacy. Even the most well-meaning clients are focused on their own unique problems. Helping create the best possible interface for the most users is hardly their primary motivation.

Managers may be keen to produce better digital products, but they are likely even more desperate to keep a key client happy.

Account or project leads typically lack visual design skills, or for that matter, software design skills or experience. When amateurs drive functionality and interaction, feature glut and interface confusion rule the day.

CABs cede control to clients.
Some development teams are forced into this arrangement by management that believes customer direction is the wellspring of product success. Perhaps this is carefully considered opinion. Perhaps it reflects a management fad or has been gleaned from the latest software development book. The answer could be simpler. Managers may be keen to produce better digital products, but they are likely even more desperate to keep a key client happy.

It doesn’t matter how well a customer is served (or placated). They will not like the ineffective, confusing digital product they have helped create. After all the work devoted to giving them what they want, they may still leave for a competitor.

2. Organizing Focus Groups

Focus groups are collections of people (often representative users) who gather together to offer direct feedback about software products. These moderated sessions are usually held prior to development (to obtain directional ideas) or after development (to obtain immediate feedback). Interface designs may be shown on a screen or printed out. Focus groups typically involve group discussion. These sessions are sometimes referred to as “user tests,” even though they don’t involve any interface testing. Feedback gained from these sessions is quite different from that discovered during formal user tests.

WHY THEY FAIL
Focus groups rarely work.
Focus groups are inherently problematic. Some people share opinions more forcefully than others. Groups can be dominated by single, powerful personalities, causing a “group-think” effect. Focus groups work well for reactions to advertising campaigns or films in development, but are unreliable for software usability. People are notorious for their inability to accurately predict their behavior, particularly with interactive products. When focus group participants comment on a completed product, they do so apart from actually using it. Their opinions are speculative at best. Changing an interface based on such feedback could actually harm usability.

3. Querying the Source

When confronted with user discontent, developers attack the problem logically. They ask users point-blank what they want. These are not theoretical interviews. Specific end-users are directly contacted (informally or formally) and asked to provide detailed feedback about what an application should do and what changes should be made to the interface. Prescriptions are plugged into the project plan as the schedule allows. The development team believes they are now user-centered.

WHY IT FAILS
Users can be inadvertently misleading.
Development-driven interactive products are often difficult to use. Users recognize glaring flaws and suggest changes. Fixes are made. Everyone wins, right?

Talking to users is not wrong. When done properly, it’s absolutely right. Unfortunately, it is rarely done properly. No user input should be accepted at face value. Opinions and preferences should always be treated with skepticism. Not all user comments should be afforded the same emphasis or assigned the same importance. Regrettably, direct end-user suggestions are often implemented without deep investigation, provided the development team agrees with the feedback.

While people are generally clear on what they want to accomplish and can identify what annoys them, they almost always lack the perspective and skill to prescribe solutions that truly fix their problems. They communicate as best they can, but don’t know when their suggestions make an interface worse rather than better.

User to-do lists are not a solution. Intended to make a product more usable, they can deepen the original problem.

4. Imitating Competitors

When pressured to perform in a competitive market, the most talented teams can be tempted to solve problems precisely as their competitors do. This is particularly true for smaller firms competing with larger, more established organizations. Lacking confidence and courage, they perceive imitation as a shortcut to success.

WHY IT FAILS
Imitation attacks the wrong problem.
Direct reproduction of another product may help a team leap forward, but that leap will have solved another organization’s challenges (and for different users). Most small online stores seek to imitate firms like Amazon, an enormous multinational company dealing with entirely different economies of scale, markets, and strategies. Their problems and context couldn’t be more different.

Further, the grass is not necessarily greener on the other side of the browser. The organization being copied may itself feel rudderless, lacking strong direction. It may have made a mistake. It may already be copying someone else.

Looking outside is not enough.

Copying others represents the least viable option for development teams. Yet, ironically, it is the path most often trodden.

When internal teams lack interface know-how, they are more apt to rely on external input or sources. Seeking inspiration from customers, focus groups, end users, or competitors makes sense. External perspectives can spark product improvements. Organizations realize they often operate in an internal echo chamber. What better way to counteract this than seeking connections beyond corporate walls?

This input instinct is admirable, but can be counterproductive. When we solicit opinion from outside groups, we assume they’ll show us a direct path to more user-friendly interfaces. Unfortunately, input is more likely to be unintentionally or intentionally biased (customer advisory boards), misleading (focus groups), or myopic (end-users). We cannot take this feedback at face value, though we often do.

We can’t pat ourselves on the back because we’ve pursued external direction for our digital products. We haven’t done anything if we are unsure of the usefulness or effectiveness of the feedback we get, and our products are in trouble if we can’t interpret feedback into practical interface improvements.

What do we do then?

Well, development teams may think, if we cannot properly learn from customers or users how to fix our interfaces, perhaps we can simply fix the users themselves.

Continue to Part 4: Teaching Correct Behavior ›

The Modern Digital Tragedy

Part 1: The Eternal Quest for Exceptional Digital Products
Part 2: Relying on Process Change
Part 3: Outsourcing the User Interface
Part 4: Teaching Correct Behavior
Part 5: Upgrading Visuals in a Vacuum
Part 6: Why Development-Centered UI Approaches Fail


About truematter

Frustrating screen experiences are everywhere. You deal with them, we deal with them, our older relatives deal with them, and they make us all want to take a hammer to whatever device we’re using.

Truematter exists to make all of our lives easier any time we have to deal with a website, app, or piece of software. Our team is always thinking about how to improve user experience to help create digital products that are usable, useful, and loved. You can read more of our thoughts at blog.truematter.com.

Credits

Author: Dean Schuster
Editor: Bailey Lewis
Illustrator: Daniel Machado
Whitepaper Designer: Rannah Derrick