How to write a Research framework

Chapter 10: Research

So far, we've focused on concepts and components. Now we're going to shift gears and explore the process and methods for creating information architectures. If it were just a matter of whipping up a few standard blueprints, our jobs would be easy. But as we've explained, information architecture doesn't happen in a vacuum. The design of complex web sites requires an interdisciplinary team that involves graphic designers, software developers, content managers, usability engineers, and other experts. Effective collaboration requires agreement on a structured development process. Even for smaller projects, when teams are tiny and individuals fill multiple roles, tackling the right challenges at the right time is critical to success.

The following chapters provide an overview of the process and the challenges you'll encounter along the way. Our focus on the early stages of research, strategy, and design, rather than the later stages of implementation and administration, belies our consulting background. While the vast majority of our experiences have involved strategy and design for fast-paced information architecture projects, we are true believers in the importance of nailing the details in implementation and building sustainable information architecture programs. The dedicated in-house staff who protect and perfect information architectures over the long haul are the unsung heroes of the field.

Process Overview

In the early days of web design, many companies employed a one-step process called "Code HTML." Everyone wanted to jump right in and build the site. People had no patience for research or strategy. We remember one eager client asking us in the middle of a planning session, "So when are we going to start the real work?" Fortunately, after several years of painful lessons, there's a growing realization that designing web sites is hard work and requires a phased approach. Figure 10-1 illustrates the process of information architecture.

Figure 10-1. The process of information architecture development: research, strategy, design, implementation, and administration, moving from project to program

The research phase begins with a review of existing background materials and meetings with the strategy team, aimed at gaining a high-level understanding of the goals and business context, the existing information architecture, the content, and the intended audiences. It then quickly moves into a series of studies, employing a variety of methods to explore the information ecology.

This research provides a contextual understanding that forms the foundation for development of an information architecture strategy. From a top-down perspective, this strategy defines the highest two or three levels of the site's organization and navigation structures. From a bottom-up perspective, it suggests candidate document types and a rough metadata schema. This strategy provides a high-level framework for the information architecture, establishing a direction and scope that will guide the project through implementation.

Design is where you shape a high-level strategy into an information architecture, creating detailed blueprints, wireframes, and metadata schema that will be used by graphic designers, programmers, content authors, and the production team. This phase is typically where information architects do the most work, yet quantity cannot drive out quality. Poor design execution can ruin the best strategy. For an information architect, the meat is in the middle and the devil is in the details.
Implementation is where your designs are put to the test as the site is built, tested, and launched. For the information architect, this phase involves organizing and tagging documents, testing and troubleshooting, and developing documentation and training programs to ensure that the information architecture can be maintained effectively over time.

And last but not least comes administration, the continuous evaluation and improvement of the site's information architecture. Administration includes the daily tasks of tagging new documents and weeding out old ones. It also requires monitoring site usage and user feedback, identifying opportunities to improve the site through major or minor redesigns. Effective administration can make a good site great.

Admittedly, this is a simplified view of the process. Clear lines rarely exist between phases, and few projects begin with a clean slate. Budgets, schedules, and politics will inevitably force you off the path and into the woods. We don't aim to provide a paint-by-numbers design guide. The real world is far too messy. Instead, we present a framework and some tools and methods that may be useful when applied selectively within your environment.

Before we begin, we'll offer a word of encouragement. Much of this work looks tedious and boring when taken out of context. Not all of us can get jazzed up about poring over search logs and analyzing content. But when you do this work in the real world, it can be surprisingly engaging. And when that magic light bulb turns on, revealing a pattern that suggests a solution, you'll be glad you took the time to do it right.

A Research Framework

Good research means asking the right questions. And choosing the right questions requires a conceptual framework of the broader environment. We have found our faithful three-circle diagram shown in Figure 10-2 to be invaluable in shaping a balanced approach to research. It helps us to decide where to shine the flashlight, and to understand what we see. Consequently, we have used this model to organize our exploration of the research process.

Figure 10-2. A balanced approach to research: context (business goals, funding, politics, culture, technology, human resources); content (document/data types, content objects, metadata, volume, existing structure); users (audiences, tasks, needs, information-seeking behavior, experience, vocabularies)

We begin with an overview of tools and methods for research (see Figure 10-3). Obviously, it won't make sense or be possible to use every tool on every project. And, of course, you should absolutely seek out and try methods we haven't covered. Our goal is to provide you with a map and a compass. The journey is left to you.

Figure 10-3. Tools and methods for research: context (background research, presentations and meetings, stakeholder interviews, technology assessment); content (heuristic evaluation, metadata and content analysis, content mapping, benchmarking); users (search log and clickstream analysis, use cases and personas, contextual inquiry, user interviews and user testing)

Context

For practical purposes, an investigation of the business context can be a good place to start. It's critical to begin projects with a clear understanding of the goals and an appreciation of the political environment. Ignoring business realities is just as dangerous as ignoring users. A perfectly usable site that fails to support business goals won't last long. The term "user-centered design" is valuable insofar as it moves the pendulum away from executive-centered design, but don't let that pendulum swing too far.
Of course, context isn't just about politics. We also need to understand goals, budgets, schedules, technology infrastructure, human resources, and corporate culture. Legal issues can also be important, particularly in heavily regulated industries. All of these factors can and should influence the shape of the information architecture strategy.

Getting Buy-In

Research is not a one-way street. While conducting your investigation, it's important to recognize the value of building awareness and support for your project. After all, you're not a scientist studying rats. Your human subjects will have their own set of questions and concerns. For example:

• Who are you and why are you asking me these questions?
• What's information architecture and why should I care?
• What's your methodology and how does it relate to my work?

The way you answer these questions will influence the level of support you receive throughout the project. Since most large sites today depend upon interdepartmental collaboration and decentralized content ownership, it's impossible to succeed without broad buy-in. For this reason, you'll want to weave elements of presentation and persuasion throughout the research process.

Background Research

When a project begins, an information architect's head is filled with all sorts of good questions.

• What are the short- and long-term goals?
• What's the business plan? What are the politics?
• What's the schedule and budget?
• Who are the intended audiences?
• Why will people come to the site? Why will they come back?
• What types of tasks should users be able to perform?
• How will content be created and managed? And by whom?
• What's the technical infrastructure?
• What worked in the past? What didn't?

But just asking the right questions is not enough. You need to ask them of the right people in the right way at the right time. You must be very focused in how you use people's time and realistic about who can answer which questions.

Consequently, it's good to begin with a review of background materials. Sometimes the best way to learn about the future is to dig into the past. Get your hands on any documents that relate to the site's mission, vision, goals, intended audiences, and content. Also, try to find documents that provide a broader picture of the management structure and culture. Organization charts are really valuable if you're an outside consultant, particularly when working on intranets. They capture an important component of the users' mental model of the organization and will help you determine potential stakeholders and user groups for interviews and testing.

A revealing exercise is to compare the vision that preceded the current web site with the actual site itself. In some cases, we've seen elaborate PowerPoint presentations, hundreds of pages long, that paint a tremendously ambitious picture of what the web site should be. And then we've looked to the Web and found a small, poorly designed site with limited functionality. This gap between vision and reality is a red flag, suggesting misunderstanding between the managers who produce the slides and the team who must build the site. Great visions are useless without the time, money, and expertise to implement them. In these cases, you'll need to rein in expectations quickly.
Introductory Presentations

When you're kicking off an information architecture project, it's worth taking time for an introductory presentation. It's good to get authors, software developers, graphic designers, marketing folks, and managers all on the same page in understanding the following issues.

• What is information architecture and why is it important?
• How will the information architecture relate to the other components of the site and to the organization itself?
• What are the major milestones and deliverables?

These presentations and the discussions they provoke can identify potential landmines and foster productive relationships between teams. They are especially useful in building a common vocabulary that helps people communicate with one another more successfully.

Research Meetings

In the early 1990s, we held full-day marathon meetings with our clients' web teams to learn as much as possible about mission, vision, audience, content, and infrastructure, and to begin fleshing out a framework for the information architecture. In those days of small, centralized web design teams, one mammoth research meeting would often suffice. Today, the design and production of web sites is often more complicated, involving several teams drawn from different departments. This distributed reality may call for a series of targeted research meetings. Consider the following three meetings and their agendas.

Strategy team meeting

In many organizations today, there's a centralized strategy team or working group that's been tasked with management of the web or intranet effort. It's this strategy team that sets the high-level goals, defining the mission, vision, intended audience, content, and functionality. This is the group that deals with the big balancing act between centralization and autonomy.

Because of the need to establish trust and respect, face-to-face meetings with this team are essential. Only by having these meetings will you learn about the real goals of the project and the hidden landmines in your path. And only during face-to-face conversations will you reach a comfort level that allows both you and your colleagues to ask the difficult but necessary questions.

It's important to keep these meetings small and informal. Five to seven people is ideal. If the group gets too large, political correctness takes over and people won't talk. As far as the agenda goes, you'll want to hit on some of the following questions:

• What are the goals for this site?
• Who are the intended audiences?
• What is the planned content and functionality?
• Who will be involved in this effort?
• When do you need to show results?
• What obstacles do you anticipate?

However, the key in these meetings is to follow your nose. Be ready to dig deeper into the most interesting and important topics that come up. The worst thing you can do is rigidly stick to a formal agenda. Think of yourself as the facilitator, not the dictator. And don't be afraid to let the discussion wander a bit. You'll learn more, and everyone will have a more enjoyable meeting.

Content management meeting

The content owners and managers are the people you'll want to engage in detailed discussions about the nature of the content and the content management process. These people typically have lots of hands-on experience and a perspective more informed by bottom-up realities. If you can establish a rapport, you might also learn a lot about the culture and politics of the organization as well.
Questions for these folks include:

• What are the formal and informal policies regarding content inclusion?
• Is there a content management system that handles authoring and publishing?
• Do those systems use controlled vocabularies and attributes to manage content?
• How is content entered into the system?
• What technology is being used?
• What content does each owner handle?
• What is the purpose of the content? What are the goals and vision behind this content area?
• Who is the audience?
• What is the format of the content? Is it dynamic or static?
• Who maintains the content?
• What future content or services are planned?
• Where does content originate? How is it weeded?
• What legal issues impact the content management process?

Information technology meeting

You should meet with the system administrators and software developers early on to learn about the existing and planned technical infrastructure that will support the web site or intranet. This provides a good opportunity to discuss the relationships between information architecture and technical infrastructure, as well as to build trust and respect. Remember, you depend on these folks to forge the connection between ideas and implementation. Questions include:

• Will we be able to leverage content management software (CMS)?
• How can we create a metadata registry to support distributed tagging?
• Does the CMS handle automated categorization of documents?
• What about automated browsable index generation?
• What about personalization?
• How flexible is the search engine?
• Will the search engine support integration of a thesaurus?
• Can we get regular access to search logs and usage statistics?

Unfortunately, the IT groups in many organizations are swamped with work and don't have the time to support information architecture and usability efforts. It's important to identify this problem early and develop a practical, realistic solution. Otherwise, your whole effort can stall when implementation time arrives.

Stakeholder Interviews

Interviews with opinion leaders or stakeholders are often one of the most valuable components of the business context research. These interviews with senior executives and managers from a variety of departments and business units allow for broader participation in the process and bring new perspectives, ideas, and resources to the table.

During these interviews, the information architect asks the opinion leaders open-ended questions about their assessment of the current information environment and their vision for the organization and its web site. It's worth taking the time to explain your project to these folks—their political support may be more important in the long haul than the answers they give during the interview.

Sample questions for an intranet project include:

• What is your role in the organization? What does your team do?
• In an optimal world, how would your company use the intranet to build competitive advantage?
• In your opinion, what are the key challenges your company intranet faces?
• What enterprise-wide initiatives are occurring that the intranet strategy team should know about?
• Do you use the existing intranet? If not, why not? If so, what parts of the intranet do you use? How often?
• What incentives exist for departments and employees to share knowledge?
• What are the critical success factors for the intranet?
• How will these factors be measured? What's the ROI?
• What are the top three priorities for the intranet redesign?
• If you could tell the intranet strategy team one thing, what would it be?
• What question should we have asked that we didn't?
As with the strategy team meeting, these interviews should be informal discussions. Let the stakeholders tell you what's on their minds.

Technology Assessment

In our dream world, we would design our information architectures independent of technology, and then a team of system administrators and software developers would build the infrastructure and tools to support our vision. In the real world, this doesn't happen very often. Usually, we must work with the tools and infrastructure already in place. This means that we need to assess the IT environment at the very beginning of a project so that our strategies and designs are grounded in reality.

This is why it's critical to talk with IT folks up front. You'll want to understand what's in place, what's in process, and who's available to help. Then you can perform a gap analysis, identifying the disconnects between business goals, user needs, and the practical limitations of the existing technology infrastructure. You can then see if there are any commercially available tools that might help to close these gaps, and you can initiate a process to determine whether it's practical to integrate them within the context of the current project. (We'll discuss tools for information architects in more detail in Chapter 16.) Either way, it's much better to come to terms with these IT issues early on.

Content

We define content broadly as "the stuff in your web site." This may include documents, data, applications, e-services, images, audio and video files, personal web pages, archived email messages, and more. And we include future stuff as well as present stuff.

Users need to be able to find content before they can use it—findability precedes usability. And if you want to create findable objects, you must spend some time studying those objects. You'll need to identify what distinguishes one object from another, and how document structure and metadata influence findability. You'll want to balance this bottom-up research with a top-down look at the site's existing information architecture.

Heuristic Evaluation

Many projects involve redesigning existing web sites rather than creating new ones. In such cases, you're granted the opportunity to stand on the shoulders of those who came before you. Unfortunately, this opportunity is often missed because of people's propensity to focus on faults and their desire to start with a clean slate. We regularly hear our clients trashing their own web sites, explaining that the current site is a disaster and we shouldn't waste our time looking at it. This is a classic case of throwing out the baby with the bathwater. Whenever possible, try to learn from the existing site and identify what's worth keeping. One way to jump-start this process is to conduct a heuristic evaluation.

A heuristic evaluation is an expert critique that tests a web site against a formal or informal set of design guidelines. It's usually best to have someone outside the organization perform this critique, so this person is able to look with fresh eyes and be largely unburdened with political considerations. Ideally, the heuristic evaluation should occur before a review of background materials to avoid bias.
At its simplest, a heuristic evaluation involves one expert reviewing a web site and identifying major problems and opportunities for improvement. This expert brings to the table an unwritten set of assumptions about what does and doesn't work, drawing upon experiences with many projects in many organizations.

This practice is similar to the physician's model of diagnosis and prescription. If your child has a sore throat, the doctor will rarely consult a reference book or perform extensive medical tests. Based on the patient's complaints, the visible symptoms, and the doctor's knowledge of common ailments, the doctor will make an educated guess as to the problem and its solution. These guesses are not always right, but this single-expert model of heuristic evaluation often provides a good balance between cost and quality.

At the more rigorous and expensive end of the spectrum, a heuristic evaluation can be a multi-expert review that tests a web site against a written list of principles and guidelines. This list may include such common-sense guidelines as:

• The site should provide multiple ways to access the same information.
• Indexes and sitemaps should be employed to supplement the taxonomy.
• The navigation system should provide users with a sense of context.
• The site should consistently use language appropriate for the audience.
• Searching and browsing should be integrated and reinforce each other.

Each expert reviews the site independently and makes notes on how it fares with respect to each of these criteria. The experts then compare notes, discuss differences, and work toward a consensus. This reduces the likelihood that personal opinion will play too strong a role, and creates the opportunity to draw experts from different disciplines. For example, you might include an information architect, a usability engineer, and an interaction designer. Each will see very different problems and opportunities. This approach obviously costs more, so depending on the scope of your project, you'll need to strike a balance in terms of number of experts and formality of the evaluation.

For a good example of such a list, see Jakob Nielsen's Ten Usability Heuristics (http://www.useit.com/papers/heuristic/heuristic_list.html).
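If you do take the multi-expert route, it helps to tally the reviewers' ratings before the consensus meeting so the discussion can focus on the guidelines where the experts disagree most. Below is a minimal sketch in Python; the guideline wording, the 1-to-5 rating scale, and the reviewer labels are illustrative assumptions on our part, not part of any formal heuristic evaluation method.

```python
from statistics import mean, pstdev

# Hypothetical scores: each expert rates the site from 1 (poor) to 5 (good)
# against each written guideline. The guidelines echo the examples above.
ratings = {
    "Multiple ways to access the same information": {"IA": 2, "Usability": 3, "IxD": 2},
    "Indexes and sitemaps supplement the taxonomy": {"IA": 1, "Usability": 2, "IxD": 4},
    "Navigation provides a sense of context":       {"IA": 4, "Usability": 4, "IxD": 5},
    "Language is appropriate for the audience":     {"IA": 3, "Usability": 2, "IxD": 3},
    "Searching and browsing reinforce each other":  {"IA": 2, "Usability": 1, "IxD": 2},
}

# List the weakest guidelines first, and flag large spreads so the
# consensus discussion knows where the experts most need to compare notes.
for guideline, scores in sorted(ratings.items(), key=lambda kv: mean(kv[1].values())):
    avg = mean(scores.values())
    spread = pstdev(scores.values())
    note = "  <-- discuss" if spread >= 1.0 else ""
    print(f"{avg:.1f} (spread {spread:.1f})  {guideline}{note}")
```

A spreadsheet does the same job; the point is simply to walk into the consensus meeting already knowing where the disagreements are.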
Content Analysis

Content analysis is a defining component of the bottom-up approach to information architecture, involving careful review of the documents and objects that actually exist. What's in the site may not match the visions articulated by the strategy team and the opinion leaders. You'll need to identify and address these gaps between top-down vision and bottom-up reality.

Content analysis can take the shape of an informal survey or a detailed audit. Early in the research phase, a high-level content survey is a useful tool for learning about the scope and nature of content. Later in the process, a page-by-page content audit or inventory can produce a roadmap for migration to a content management system (CMS), or at least facilitate an organized approach to page-level authoring and design.

Gathering content

To begin, you'll need to find, print, and analyze a representative sample of the site's content. We suggest avoiding an overly scientific approach to sample definition. There's no formula or software package that will guarantee success. Instead, you need to use some intuition and judgment, balancing the size of your sample against the time constraints of the project.

We recommend the Noah's Ark approach. Try to capture a couple of each type of animal. Our animals are things like white papers, annual reports, and online reimbursement forms, but the difficult part is determining what constitutes a unique species. The following dimensions should help distinguish one beast from another and build toward a diverse and useful content sample:

Format
Aim for a broad mix of formats, such as textual documents, software applications, video and audio files, and archived email messages. Try to include offline resources such as books, people, facilities, and organizations that are represented by surrogate records within the site.

Document type
Capturing a diverse set of document types should be a top priority. Examples include product catalog records, marketing brochures, press releases, news articles, annual reports, technical reports, white papers, forms, online calculators, presentations, spreadsheets, and the list goes on.

Source
Your sample should reflect the diverse sources of content. In a corporate web site or intranet, this will mirror the organization chart. You'll want to make sure you've got samples from engineering, marketing, customer support, finance, human resources, sales, research, etc. This is not just useful—it's also politically astute. If your site includes third-party content such as electronic journals or ASP services, grab those, too.

Subject
This is a tricky one, since you may not have a topical taxonomy for your site. You might look for a publicly available classification scheme or thesaurus for your industry. It's a good exercise to represent a broad range of subjects or topics in your content sample, but don't force it.

Existing architecture
Used together with these other dimensions, the existing structure of the site can be a great guide to diverse content types. Simply by following each of the major category links on the main page or in the global navigation bar, you can often reach a wide sample of content. However, keep in mind that you don't want your analysis to be overly influenced by the old architecture.

Consider what other dimensions might be useful for building a representative content sample for your particular site. Possibilities include intended audience, document length, dynamism, language, and so on.

As you're balancing sample size against time and budget, consider the relative number of members of each species. For example, if the site contains hundreds of technical reports, you certainly want a couple of examples. But if you find a single white paper, it's probably not worth including in your sample. On the other hand, you do need to factor in the importance of certain content types. There may not be many annual reports on your web site, but they can be content-rich and very important to investors. As always, your judgment is required.

A final factor to consider is the law of diminishing returns. While you're conducting content analysis, you'll often reach a point where you feel you're just not learning anything new. This may be a good signal to go with the sample you've got, or at least take a break. Content analysis is only useful insofar as it teaches you about the stuff in the site and provides insights about how to get users to that stuff. Don't just go through the motions. It's unproductive and incredibly boring.
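If you already have even a rough content inventory (a spreadsheet with one row per page or document), a small script can do the Noah's Ark sampling for you. The sketch below is one way to do it under assumptions of ours: an inventory.csv file with format, document_type, and source columns. Your inventory will almost certainly use different column names and dimensions.

```python
import csv
from collections import defaultdict

PER_SPECIES = 2  # "a couple of each type of animal"

# Assumed inventory layout: one row per content object, with columns such as
# url, title, format, document_type, and source (the owning department).
samples = defaultdict(list)
with open("inventory.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Here a "species" is the combination of format, document type, and source.
        # Add or drop dimensions (subject, audience, language) to suit your site.
        species = (row["format"], row["document_type"], row["source"])
        if len(samples[species]) < PER_SPECIES:
            samples[species].append(row)

for species, rows in sorted(samples.items()):
    print(" / ".join(species))
    for row in rows:
        print(f"    {row['title']}  ({row['url']})")
```

The output is only a starting list to print and read; the judgment about which species matter, and which lone specimens are important enough to keep anyway, is still yours.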
Analyzing content

What are you looking for during content analysis? What can you hope to learn? One of the side benefits of content analysis is familiarity with the subject matter. This is particularly important for consultants who need to quickly become fluent in the language of their client. But the central purpose of content analysis is to provide data that's critical to the development of a solid information architecture. It helps you reveal patterns and relationships within content and metadata that can be used to better structure, organize, and provide access to that content.

That said, content analysis is quite unscientific. Our approach is to start with a short list of things to look for, and then allow the content to shape the process as you move forward.

For example, for each content object, you might begin by noting the following:

Structural metadata
Describe the information hierarchy of this object. Is there a title? Are there discrete sections or chunks of content? Might users want to independently access these chunks?

Descriptive metadata
Think of all the different ways you might describe this object. How about topic, audience, and format? There should be at least a dozen different ways to describe many of the objects you study. Now's the time to get them all on the table.

Administrative metadata
Describe how this object relates to business context. Who created it? Who owns it? When was it created? When should it be removed?

This short list will get you started. In some cases, the object will already have metadata. Grab that, too. However, it's important not to lock into a predefined set of metadata fields. You want to allow the content to speak to you, suggesting new fields you might not have considered. You'll find it helpful to keep asking yourself these questions:

• What is this object?
• How can I describe this object?
• What distinguishes this object from others?
• How can I make this object findable?
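When you're noting these three kinds of metadata for dozens of objects, it pays to record every observation in the same shape so that patterns are easier to spot later. A plain spreadsheet works perfectly well; the sketch below simply shows the same idea programmatically. The field names and sample values are our own illustrations, not a standard schema, and you should expect the field list to grow as the content speaks to you.

```python
import csv

# One record per sampled content object, grouped into the three buckets above:
# structural (how the object is built), descriptive (how you would find it),
# and administrative (who owns it and how long it should live).
FIELDS = [
    "url", "title",                              # identification
    "chunks", "has_title", "has_toc",            # structural metadata
    "topic", "audience", "format", "keywords",   # descriptive metadata
    "author", "owner", "created", "review_by",   # administrative metadata
]

records = [
    {
        "url": "/hr/forms/travel-reimbursement",
        "title": "Travel reimbursement form",
        "chunks": "instructions; form; contacts", "has_title": "yes", "has_toc": "no",
        "topic": "travel; expenses", "audience": "all employees",
        "format": "online form", "keywords": "reimbursement, expense report",
        "author": "J. Smith", "owner": "Human Resources",
        "created": "2001-03-12", "review_by": "annual",
    },
    # ...one dictionary per object in your sample...
]

with open("content-audit.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(records)
```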
Moving beyond individual items, also look for patterns and relationships that emerge as you study many content objects. Are certain groupings of content becoming apparent? Are you seeing clear hierarchical relationships? Are you recognizing the potential for associative relationships, perhaps finding disparate items that are linked by a common business process?

Because of the need to recognize patterns within the context of the full sample, content analysis is by necessity an iterative process. It may be on the second or third pass over a particular document that the light bulb blinks on and you discover a truly innovative and useful solution.

With the exception of true bottom-up geeks (and we use the term respectfully), most of us don't find content analysis especially thrilling or addictive. However, experience has proven that this careful, painstaking work can suggest new insights and produce winning information architecture strategies. In particular, content analysis will help you in the design phase, when you begin fleshing out document types and metadata schema. But it also provides valuable input into the broader design of organization, labeling, navigation, and searching systems.

Content Mapping

Heuristic evaluation provides a top-down understanding of a site's organization and navigation structures, while content analysis provides a bottom-up understanding of its content objects. Now it's time to bridge these two perspectives by developing one or more content maps. A content map is a visual representation of the existing information environment (see Figure 10-4). Content maps are typically high-level and conceptual in nature. They are a tool for understanding, rather than a concrete design deliverable.

Figure 10-4. A small slice of a content map, tracing content sources (marketing, legal, customer support) through a content model (product, process, reference, contact, system) to content types (handling, how to) and content templates (steps, table)

Content maps vary widely. Some focus on content ownership and the publishing process. Some are used to visualize relationships between content categories. And others explore navigation pathways within content areas. The goal of creating a content map is to help you and your colleagues wrap your minds around the structure, organization, and location of existing content, and ultimately to spark ideas about how to provide improved access.

Benchmarking

We use the term benchmark informally to indicate a point of reference from which to make comparative measurements or judgments. In this context, benchmarking involves the systematic identification, evaluation, and comparison of information architecture features of web sites and intranets.

These comparisons can be quantitative or qualitative. We might evaluate the number of seconds it takes a user to perform a task using competing web sites, or take notes about the most interesting features of each site. Comparisons can be made between different web sites (competitive benchmarking) or between different versions of the same web site (before-and-after benchmarking). In both cases, we've found benchmarking to be a flexible and valuable tool.

Competitive benchmarking

Borrowing good ideas, whether they come from competitors, friends, enemies, or strangers, comes naturally to all of us. It's part of our competitive advantage as human beings. If we were all left to our own devices to invent the wheel, most of us would still be walking to work.

However, when we take these copycat shortcuts, we run the risk of borrowing bad ideas as well as good ones. This happens all the time in the web environment. Since the pioneering days of web site design, people have repeatedly mistaken large financial outlays and strong marketing campaigns as signs of good information architecture. Careful benchmarking can catch this misdirected copycatting before it gets out of control.

For example, when we worked with a major financial services firm, we ran up against the notion that Fidelity Investments' long-standing position as a leader within the industry automatically conferred the gold standard upon its web site. In several cases, we proposed significant improvements to our client's site but were blocked by the argument, "That's not how Fidelity does it." To be sure, Fidelity is a major force in the financial services industry, with a broad array of services and world-class marketing. However, in 1998, the information architecture of its web site was a mess. This was not a model worth following.

To our client's credit, they commissioned a formal benchmarking study, during which we evaluated and compared the features of several competing sites. During this study, Fidelity's failings became obvious, and we were able to move forward without that particular set of false assumptions. The point here is that borrowing information architecture features from competitors is valuable, but it must be done carefully.
Before-and-after benchmarking

Benchmarking can also be applied to a single site over time to measure improvements. We can use it to answer such return-on-investment (ROI) questions as:

• How much did the intranet redesign reduce our employees' average time finding core documents?
• Has the web site redesign improved our customers' ability to find the products they need?
• Which aspects of our redesign have had a negative impact on user efficiency or effectiveness?

Before-and-after benchmarking forces you to take the high-level goals expressed in your statement of mission and vision, and tie them to specific, measurable criteria. This forced clarification and detail-orientation will drive you toward a better information architecture design on the present project, in addition to providing a point of reference for evaluating success.
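For the first question above, "measurable" simply means timing the same find-it tasks with comparable users before and after the redesign; the arithmetic is trivial, and the discipline lies in defining the tasks. A minimal sketch, with task names and timings invented purely for illustration:

```python
from statistics import mean

# Seconds to complete the same findability tasks, measured with comparable
# groups of users before and after the redesign (all values are made up).
before = {
    "find the expense policy": [95, 140, 120, 180],
    "locate the org chart":    [60, 75, 90, 55],
}
after = {
    "find the expense policy": [40, 65, 50, 70],
    "locate the org chart":    [45, 50, 80, 40],
}

for task in sorted(before):
    b, a = mean(before[task]), mean(after[task])
    reduction = (b - a) / b * 100
    print(f"{task}: {b:.0f}s before, {a:.0f}s after ({reduction:+.0f}% reduction in time to find)")
```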
Following are the advantages of before-and-after benchmarking, as well as those of competitive benchmarking:

Benefits of before-and-after benchmarking

• Identifies and prioritizes information architecture features in the existing site
• Encourages transition from broad generalizations (e.g., "Our site's navigation stinks") to specific, actionable definitions
• Creates a point of reference against which you can measure improvements

Benefits of competitive benchmarking

• Generates a laundry list of information architecture features, bringing lots of new ideas to the table
• Encourages transition from broad generalizations (e.g., "Amazon is a good model") to specific, actionable definitions ("Amazon's personalization feature works well for frequent visitors")
• Challenges embedded assumptions (e.g., "We should be like Fidelity") and avoids copying the wrong features for the wrong reasons
• Establishes current position with respect to competitors and creates a point of reference against which to measure speed of improvement

Users

They're called users, respondents, visitors, actors, employees, customers, and more. They're counted as clicks, impressions, advertising revenues, and sales. Whatever you call them and however you count them, they are the ultimate designers of the Web. Build a web site that confuses customers, and they'll go elsewhere. Build an intranet that frustrates employees, and they won't use it.

This is the Internet's fast-forward brand of evolution. Remember the original Pathfinder web site from Time Warner? They spent millions of dollars on a flashy, graphical extravaganza. Users hated it. A complete redesign followed months after the original launch. This was an expensive and embarrassingly public lesson in the importance of user-sensitive design.

So, we've established that users are powerful. They're also complex and unpredictable. You can't blindly apply lessons learned by Amazon to the information architecture design of Pfizer.com. You've got to consider the unique nature of the site and of the user population.

There are many ways to study user populations. Market-research firms run focus groups to study branding preferences. Political pollsters use telephone surveys to gauge the public's feelings about candidates and issues. Usability firms conduct interviews to determine which icons and color schemes are most effective. Anthropologists observe people acting and interacting within their native environments to learn about their culture, behavior, and beliefs. No single approach can stand alone as the one right way to learn about users and their needs, priorities, mental models, and information-seeking behavior. This is a multidimensional puzzle—you've got to look at it from many different perspectives to get a good sense of the whole. It's much better to conduct five interviews and five usability tests than to run one test ten times. Each approach is subject to the law of diminishing returns.

As you consider integrating these user research methods into your design process, keep a couple of things in mind. First, observe the golden rule of discount usability engineering: any testing is better than no testing. Don't let budgets or schedules become an excuse. Second, remember that users can be your most powerful allies. It's easy for your colleagues and your boss to argue with you, but it's difficult for them to argue with their customers and with real user behavior. User research is an extremely effective political tool.

If you'd like to dig deeper, we recommend reading User and Task Analysis for Interface Design by Joann Hackos and Janice Redish (Wiley). And then, of course, there are all sorts of wonderful articles and books by usability guru Jakob Nielsen (http://useit.com).

Usage Statistics

Most projects today involve redesigning an existing site. In these cases, it makes sense to begin by looking at data that shows how people have been using the site and where they've been running into problems.

Your site's usage statistics are a reasonable place to start. Most statistics software packages, such as Google Analytics shown in Figure 10-5, provide the following reports:

Page information
The number of hits per day for each page in the site. This data will show which pages are most popular. By tracking page hits over time, you can observe trends and tie page popularity to events such as advertising campaigns or the redesign of site navigation.

Visitor information
Statistics products claim they can tell you who is using your site and where the users are coming from. In reality, they'll tell you only the domains (e.g., aol.com, mitre.org) of those users' Internet service providers, which is often of limited value.

Your stats software may provide additional views into the usage data, indicating the times and dates when people are visiting, the referring sites your users are coming from, and the types of browsers being used, as shown in Figure 10-5.

Figure 10-5. Usage data presented by Google Analytics

The path that users trace as they move through a web site is known as the clickstream. If you want a higher level of sophistication in your usage statistics, you can buy software that handles clickstream analysis. You can trace where a user comes from (originating site), the path he takes through your site, and where he goes next (destination site). Along the way, you can learn how long he spends on each page of your site. This creates a tremendously rich data stream that can be fascinating to review, but difficult to act upon. What you really need to make clickstream data valuable is feedback from the user explaining why he came to the site, what he found, and why he left. Some companies use pop-up surveys to capture this information as users are leaving the web site.

Search-Log Analysis

A simpler and extremely valuable approach involves the tracking and analysis of queries entered into the search engine. By studying these queries, you can identify what users are looking for, and the words and phrases they are using. This is fantastic data when you're developing controlled vocabularies. It's also useful when prioritizing terms for a "Best Bets" strategy. (You'll learn more about Best Bets in the MSWeb case study in Chapter 20.)
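Even before you have a dedicated query-analysis tool, a short script over the raw search log will produce the kind of monthly term counts, and the list of popular zero-result queries, discussed below. The log format assumed here (one tab-separated line per search, holding the query string and the number of results returned) is our own invention; the log your search engine writes will look different.

```python
from collections import Counter

queries = Counter()
zero_results = Counter()

# Assumed log format: one line per search, "query<TAB>result_count".
with open("search.log", encoding="utf-8") as f:
    for line in f:
        parts = line.rstrip("\n").split("\t")
        if len(parts) != 2:
            continue  # skip malformed lines
        query, hits = parts[0].strip().lower(), parts[1].strip()
        queries[query] += 1
        if hits == "0":
            zero_results[query] += 1

print("Most frequent queries:")
for query, count in queries.most_common(20):
    print(f"{count:5d}  {query}")

print("\nPopular queries returning zero results:")
for query, count in zero_results.most_common(10):
    print(f"{count:5d}  {query}")
```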
At a basic level, search-log analysis will sensitize you to the way your users really search. Users generally enter one or two keywords, and you're lucky if they spell them right. Looking at search logs provides a valuable education for information architects who are fresh out of school and all steamed up about the power of Boolean operators and parenthetical nesting. You can achieve the same effect using a live search display such as Metacrawler's metaspy, which shows the terms that real people are using to search right now (see Figure 10-6). (See http://searchenginewatch.com/facts/searches.html for more on live search displays.)

Figure 10-6. A public search voyeur service

But with your own site's search logs, you can learn much more. At a bare minimum, you should be able to get a monthly report that shows how many times users searched on particular terms during that month, as shown here:

54 e-victor
53 keywords:"e-victor"
41 travel
41 keywords:"travel"
37 keywords:"jupiter"
37 jupiter
31 esp
30 keywords:"esp"
28 keywords:"evictor"
28 evictor
28 keywords:"people finder"
28 people finder
27 fleet
27 keywords:"fleet"
27 payroll
26 eer
26 keywords:"eer"
26 keywords:"payroll"
26 digital badge
25 keywords:"digital badge"

But hopefully, you can work with your IT group to buy or build a more sophisticated query-analysis tool that allows you to filter by date, time, and IP address. Figure 10-7 shows a good example of such a tool. This tool can help you answer the following questions:

• Which popular queries are retrieving zero results?
• Are these zero-hit users entering the wrong keywords, or are they looking for stuff that doesn't exist on your site?
• Which popular queries are retrieving hundreds of results?
• What are these hundred-hit users actually looking for?
• Which queries are becoming more popular? Less?

Based on the answers, you can take immediate and concrete steps to fix problems and improve information retrieval. You might add preferred and variant terms to your controlled vocabulary, change navigation labels on major pages throughout the site, improve search tips, or edit content on the site. Note that smart marketing groups are also getting interested in search logs as a valuable source of information about customer needs.

Customer-Support Data

In addition to reviewing web site statistics, it's worth looking to the customer- or technical-support departments to see if they've been capturing and analyzing the problems, questions, and feedback from the customers of your web site or intranet. Help-desk operators, call-center representatives, librarians, and administrative assistants can also be rich sources of information; in many large corporations, these are the people to whom customers or employees turn for answers. That means they are the people who know the questions.
