Introduction to Market Research Process
Marketing research is the systematic process of gathering the facts and figures needed to understand market and customer needs. This tutorial explains the complete market research process with concrete examples.
A carefully crafted coherent plan enables the market researcher to plan from the beginning to the end and ensures time will be used more efficiently.
On the importance of doing your own market research …
When an entrepreneur presents a project supported by his own preliminary market research, it is a strong indication that he is proactively working to understand how his product fits into the market.
Market Research Process
Before describing the market research process, I want to stress that there is no one unique market research process. I use a simplified process that consists of four steps: preparing the market research plan, collecting the data, analyzing the data, and presenting the results.
Prepare Your Market Research Plan
Preparing your market research plan breaks down into two distinct steps:
a. Identify and formulate the problem:
The first step of the market research process is to identify and formulate the problem (or the opportunity). By formally defining the problem, the market researcher will focus his research effectively, ensuring that all participants share the same vision and objectives for the project.
As such, the problem identification step will usually involve discussions with decision-makers, a review of secondary data, and conversations with key opinion leaders.
The topic of research is usually defined in a few words. For example, it could be identifying emerging market opportunities for a new technology, sizing and segmenting the current market, or developing a customer profile.
b. Determine the research design:
The next step is to determine the research design. It is the approach that you will use to collect your data and will guide you in choosing the specific methods you will use to collect the information you need. Some key questions you will answer at this step are:
Which method(s) will I use to collect data?
How will I connect with my data sample? Who will I need to connect with? How can I connect with them? Will I need to incentivize them? How?
Which data collection tools will I use (telephone, in-person, Internet)?
What is my total budget (both monetary and timewise)?
When determining your research approach, there are three types of research design you can choose from. The three classifications are exploratory research, descriptive research, and causal research.
Exploratory market research is akin to basic research in science. It is done to better understand a phenomenon, or when the existing knowledge that the organization possesses is too vague.
It helps the organization gain some broad insight and to learn more, enabling it to gain familiarity with a topic and to conduct more precise research. Usually, exploratory research is qualitative in nature and uses techniques such as secondary research, focus groups, and interviews.
To illustrate this, let me share an example of exploratory research I did a few years ago. A client was interested in expanding into a new geographic market but did not know which specific technologies were used in the targeted territory.
To find out, I researched the topic online (secondary research) and contacted a few key opinion leaders that have experience in the region (interviews).
After this project, the client did not have a formal market appraisal, but he did have an appreciation of which technologies competed on the market, which companies were present, and some key trends that would help him decide if he wanted to explore this market further.
Descriptive research is much more detailed and generates more granular data for the organization. This can only be done if the market researcher already has a good appreciation of the market, and can properly define his research needs.
Quantitative data applies best for descriptive research and is mostly collected through surveys, although secondary research into specialized databases is also possible.
Continuing with our earlier example, we had identified that three technologies were present in the target market. At this point, the client was interested in knowing the market share of each product. We conducted a survey, sending it to relevant users. Once we collected sufficient data, we had an accurate quantitative appreciation of the market.
Causal research is the most specific type of research and is usually done if sufficient data has already been collected. It is done to find specific explanations for specific issues. Most of the research will be qualitative, and the most popular methods of collecting data are observation, experimentation, and in-depth interviews.
Concluding our example, I had determined that a specific medical device had over 70% of the market share in the territory my client wanted to expand into. His new research problem was finding out why this product had such a dominant position.
To find out, he had different options. He could observe end users using the device, and compare that to end users using other devices.
He could purchase each of the three devices on the market, use them, or have them used by an independent third party. Or he could do in-depth interviews of end users, focusing exclusively on how and why they use it to gain insight.
Data Collection Step
The data collection phase splits into distinct steps. First, there is the design step where you will design your sampling plan and your tools. This is followed by the collection of data.
Design Sampling Plan
The sampling plan is the detailed framework of who will be contacted and what the expected sample size is. The sample size is crucial for the validity of the information you collect. That’s why calculating it beforehand is essential. There are two types of samples that a researcher can use: probabilistic and non-probabilistic.
Non-probabilistic samples are those where the participants are not selected at random. If you are consulting a defined group of experts, there is no need to build a survey group at random. Some non-probabilistic samples include:
Convenience sample: Participants are selected because they are available and willing to participate. For example, if a researcher’s company has a kiosk at a trade show, collecting data from participants at the trade show would be a convenience sample.
Quota sample: If a researcher has decided beforehand that there are some minimum population requirements needed in his sample, he is building a quota sample. For example, he could decide that at least 50% of respondents should be women, or that 40% of respondents should be within a certain geographical area.
Snowball sample: This is used in situations where participants are very specialized and hard to reach. Hence, key opinion leaders (KOLs) are contacted and asked for contacts to populate the survey.
As participants take part in the survey, they invite others and so on. It can be necessary to start the recruitment process again once the sample is no longer supplying new participant leads.
Voluntary sample: When contacting a closed population, the sample is the participants who elect to participate. For example, members of an association who decide to participate in an association’s annual survey serve as the basis of a voluntary sample.
As for probability sampling, it is a form of sampling that uses some form of random selection. If you are using simple random sampling, you select a number of participants out of a total population of users who each have an equal chance of being selected.
Stratified sampling implies dividing your population into homogenous subgroups, then taking a simple random sample from each, while systematic random sampling implies selecting individuals of a population according to a random starting point and a fixed interval (e.g., surveying every three houses on a street).
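The three probability sampling schemes described above can be sketched in a few lines of Python. This is a minimal illustration: the population of customer IDs, the sample sizes, and the two-region split into strata are all hypothetical.

```python
import random

population = list(range(1000))  # hypothetical list of 1,000 customer IDs

# Simple random sampling: every member has an equal chance of selection.
simple = random.sample(population, 50)

# Systematic random sampling: a random starting point, then a fixed interval.
interval = len(population) // 50          # every 20th customer
start = random.randrange(interval)
systematic = population[start::interval]

# Stratified sampling: divide the population into homogeneous subgroups,
# then take a simple random sample from each stratum.
strata = {"north": population[:400], "south": population[400:]}
stratified = [cid for group in strata.values()
              for cid in random.sample(group, 25)]
```

Each approach yields a sample of 50 respondents, but the stratified version guarantees both regions are represented, which simple random sampling does not.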
A question that’s often asked when doing a project is “How many people should we contact?” There is no standard sample size and no universal rule of thumb. That’s why, before doing descriptive research, it is necessary to do some exploratory research.
Of course, the available budget will have an impact on the sample size. It might be nice to interview 500 people individually to get a scientifically valid sample, but both time-wise and monetarily, such an endeavor would be cost-prohibitive. As such, sometimes compromises must be made.
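While there is no universal rule, a commonly used statistical formula for estimating the sample size needed to measure a proportion illustrates the trade-off between precision and cost. The sketch below assumes a 95% confidence level (z = 1.96) and the most conservative expected proportion of 50%.

```python
import math

def sample_size(z=1.96, p=0.5, margin=0.05):
    """Estimate the sample size needed to measure a proportion.

    z: z-score of the confidence level (1.96 for 95% confidence)
    p: expected proportion (0.5 is the most conservative assumption)
    margin: acceptable margin of error (0.05 = plus or minus 5%)
    """
    return math.ceil((z ** 2) * p * (1 - p) / margin ** 2)

print(sample_size())             # 385 respondents for ±5% at 95% confidence
print(sample_size(margin=0.03))  # 1068: a tighter margin needs far more people
```

Tightening the margin of error from 5% to 3% nearly triples the required sample, which is exactly where budget compromises come into play.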
Design Your Tool
Once the research question has been designed, and the methodology decided on, it is time to design the research tool. For example, if you have decided to do interviews, you will have to design an interview guide.
Building an interview guide ensures consistency between each interview and between each interviewer, as well as being a useful tool for remembering topics during the interviews.
Ideally, you should test your research instrument before using it at large. Test your interview guide with a few potential participants: you might find that some questions are redundant, that some questions are missing, and that some questions are misunderstood by your target audience.
It is much more cost-effective to find this out at this stage rather than at the data analysis stage.
The two main types of tools a researcher can prepare when doing market research are questionnaires and a research guide (which includes variants such as discussion and observation guides).
A questionnaire is a list of questions that a researcher prepares to collect data from a statistically significant number of subjects. It is mostly used in surveys (online or in person), telephone interviews, and mail interviews.
Most of the time, the market researcher will not be administering the survey, as it will be done by a third party (such as an online web tool or a paid resource).
As such, the researcher has to make sure that the questions are sufficiently clear, that misinterpretations are unlikely, and that the collection is standardized across multiple interviewers.
As for the discussion guide, it is mostly used in focus groups and in-depth interviews and includes mainly open-ended questions that guide the conversation.
Collect Your Data
This is the moment when you go down into the trenches and put your plan into action. You start recruiting and doing interviews, you send out your survey, or you start searching the Internet for the information you need.
This could be done by you, or you might hire a firm to do some (or all) of the data collection, but the important thing to remember is that this step is often the most time-consuming step of the market research process.
Analyze Your Data
Once you have reached the end point of your data collection, it is time to start analyzing data.
If you have quantitative data, you might use spreadsheet software (such as Excel) or a statistical software package (such as SPSS or JMP). You will build tables and graphs correlated with the demographic variables (age, geography, etc.). You start looking for trends to find the story that you will be sharing with stakeholders.
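As a minimal illustration of this step, a cross-tabulation of answers against a demographic variable can be built even without a statistical package. The regions and brand preferences below are hypothetical.

```python
from collections import Counter

# Hypothetical survey data: (region, preferred brand) per respondent.
responses = [
    ("North", "A"), ("North", "B"),
    ("South", "A"), ("South", "A"), ("South", "B"),
]

# Cross-tabulate brand preference by region to look for trends.
crosstab = Counter(responses)
for region in ("North", "South"):
    row = {brand: crosstab[(region, brand)] for brand in ("A", "B")}
    print(region, row)
```

In practice, spreadsheet pivot tables or statistical software do the same thing at scale, but the underlying operation is exactly this counting by segment.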
If your data is qualitative, the first step is the transcription and codification of data. Codification enables the market researcher to identify patterns in responses.
For example, you might start to formulate broad categories around responses, creating buckets that make sense of the data. For some projects, it might be useful for two individuals to independently go over the data and then compare results to reduce bias.
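A toy sketch of codification follows. The answers and the keyword-to-code mapping are invented, and real codification is done by a human reader rather than a keyword match, but it shows how open-ended responses get bucketed into countable codes.

```python
from collections import Counter

# Hypothetical transcribed open-ended answers.
answers = [
    "Too expensive for a small clinic",
    "Pricing is unclear",
    "Great accuracy, and results are fast",
    "Fast turnaround saved us time",
]

# A minimal codebook mapping keywords to analysis codes.
codebook = {"expensive": "PRICE", "pricing": "PRICE",
            "fast": "SPEED", "accuracy": "QUALITY"}

codes = Counter()
for answer in answers:
    text = answer.lower()
    for keyword, code in codebook.items():
        if keyword in text:
            codes[code] += 1

print(codes)  # PRICE and SPEED emerge as the dominant themes
```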
Prepare Your Data for Presentation
Once you have interpreted your data and transformed the information from raw data into a coherent story, it is time to share it with stakeholders, investors, and potential partners.
You might want to spend some time presenting the methodology used to find the data to showcase and reinforce its validity, but the bulk of the presentation will be on results, as well as the related recommendations and insights that were generated following your research.
This section is so important that two whole blogs will be dedicated to it. One blog will go over different ways to present data, and how to classify it in a coherent way to generate additional insights.
Case Study: Market Research Process in Action
To illustrate the process, here is a mini case study based on a project I did a few years ago. At the time, a client had developed an innovative insulation solution that could be used in multiple markets but was not familiar with the shipping and transportation of medical products.
He wanted to assess the feasibility of targeting the medical shipping market as a short-term market for his product, and he wanted to understand the main trends and decision factors when selecting a shipping solution.
Primary data is information that is generated directly by the market researcher to answer his research question. For example, when he is doing interviews, online surveys, or making observations, he is gathering primary data.
Generally, primary research is costlier to generate (both in terms of time and resources), but it is customized for the researcher’s needs. If he has correctly designed his tools, he should be able to solve his research problems. Also, the data he collects is proprietary, so it belongs to the organization exclusively, becoming a competitive advantage.
This blog is divided into two distinct sections. The first section deals specifically with the data collection framework (i.e., the questionnaire or discussion guide). The second section deals with the information collection activities themselves, such as in-depth interviews, focus groups, surveys, observation, mystery shops, and Delphi groups, from an implementation perspective, and provides examples and information on how these activities are used by life science organizations.
Importance of Preparing a Market Research Tool
The preparation of your market research tools is an important part of the data collection process as it ensures the quality of the data collected by making sure that
a. The data is collected in a consistent manner: To be able to compile data from different investigators, or from different time periods, it has to be collected in a consistent manner.
The questions have to remain the same from one sample to the next. It is problematic to compile data if some respondents answered questions that were phrased differently.
For example, a few years back, I was brought into a project to analyze some primary data a client had collected, as he was unable to build consistent models. A web survey had been posted on two different websites that belonged to two different subsidiaries.
Each subsidiary had designed a different autonomous web survey and had collected the data independently. After a careful review, I noticed some subtle yet significant differences between the questions in the two surveys.
In an effort to “improve” demographic data, questions and answers were changed in one of the two surveys. The changes were significant enough that the online survey was not collecting the data the same way in the two locations.
As such, it was impossible to merge the two data sources, and the data could not be tabulated correctly until some of it was recoded in one of the surveys.
b. The researcher does not forget any questions and covers every topic: This is more important for voice interviews than for written data collection activities. During an interview, it is easy to get wrapped up in the conversation and forget to ask some questions on a specific topic.
While it is occasionally possible to go back and ask the person interviewed, the dynamic created in the initial interview is lost.
Designing a Data Collection Tool: Step by Step
Building your data collection tool is an important step. If possible, multiple individuals should be involved at various stages to ensure that all the information needed is collected. We propose here four simple steps to develop your data collection tool.
Step One: Define the Context
As discussed earlier, the first step in doing market research is to define the information required, the target respondents and data collection tool you will use.
Defining the information required helps the researcher focus his questions. While he might have defined his needs from a broad perspective, it might be useful to do a little secondary research first to gain some additional insight and help clarify what information really needs to be created versus what already exists.
Next, identify the target respondents. Defining them clearly helps conceptualize the knowledge level of the respondent and their technical expertise, which influences what data collection tool you will use and the way the questions are going to be prepared.
Finally, the choice of the data collection tool will dictate how the questions will be asked and worded. For example, a questionnaire might be self-administered (the participant is alone when answering the questions) or assisted (with an interviewer asking the questions, clarifying topics if needed).
If it is self-administered, you should take into account that the user will have to interpret the questions and answers by himself, and more detailed questions might be necessary.
Step Two: Build Your Question Bank
A brainstorming session with other interested parties is a great way to start building your question bank. During this creativity phase, the objective is to identify all the topics you want to be covered by your market research activity.
Generate as many questions as you think you will need to ask and try to cover as much ground as possible. You can weed out duplicates and unnecessary questions when you build your data collection tool.
Step Three: Build Your Data Collection Tool
Once you believe you have generated enough questions to cover all the ground you need covered, it is time to place your questions in a meaningful order.
End of survey
Remember that as the participant nears the end of the data collection tool, he might become increasingly indifferent, giving careless or half-thought-out answers (as he’s just trying to reach the end of the survey to get his incentive).
Questions that are very sensitive in nature can be included at the end of the survey to avoid losing participants before other important information is collected.
For example, asking a question on a sensitive topic (e.g., assisted suicide, abortion) might put participants off. Having the question at the end lets you use the other data they provided before dropping out.
At the end of the survey, remember to include questions related to demographics: This will ensure that you can compare the answers between segments, as well as correlate answers and identify trends.
Some researchers prefer putting demographic questions at the beginning of the survey instead, using them as warm-up questions. That is acceptable, and especially recommended if you have a few sensitive questions at the end of your questionnaire or feel that participant attrition will be high.
Step Four: Validation
Before deploying your data collection tool at large, test it with a sample of your target population. If your objective is to interview end users, try your questionnaire beforehand on a few end users to identify any gaps or misunderstandings your participants may have.
This helps ensure that the questions are clearly asked and that the data you collect is what you need as well as being easy to compile and interpret.
It is also useful to make sure that the questions are answered consistently across different individuals. Finally, it is useful to validate your survey so that you can remove questions that are not really needed, or that are redundant.
Pre-testing your data collection tool also lets you validate its length. If you see participant fatigue during the testing phase, consider removing some questions, or adjusting them from open- to closed-ended.
As a rule of thumb, an online survey should take about 20 minutes to fill out (unless you are offering an incentive or you have an especially captive audience), while an in-depth interview could reasonably last 30–45 minutes and a focus group could last between 60 and 90 minutes.
Of course, if you are targeting participants that are very busy, a shorter tool is needed: a project targeting chief executive officers (CEOs) should include only four to five key questions to ensure a maximum participation rate.
After testing, you should be ready to start data collection.
In this section, we will go over some of the finer points relating to formulating questions. Take note that the golden rule of building questions is to keep your questions simple.
The more complex the question is, the more likely the person answering it will either skip it, not answer truthfully, or just answer incorrectly because they aren’t able to understand what you are trying to ask.
This section will deal with closed-ended questions and open-ended questions. As we discussed earlier, your market research can have an exploratory objective, a descriptive objective, and a causal objective.
Some research models generally lend themselves to open-ended questions (exploratory and causal research mostly), while some others lend themselves to closed-ended questions (mostly descriptive research).
After some tips on writing these types of questions, we will go over projective and choice modeling questions, discuss the use of question banks and mindful surveys in helping you prepare your questions, and close with a few things to watch for.
There are three types of closed-ended questions: nominal, ordinal, and interval questions.
Nominal questions are those where the choices are presented to participants across categories but in no particular order. Asking users which color they prefer, or which brand of painkiller they use is a nominal question.
An important side note here is that the order that options are presented to a respondent can influence their answer choices.
This is called the Primacy Effect: in paper and internet surveys, respondents tend to pick the first choice of a list, rather than reading through every available option. If you are using an online survey tool, use a random sorting option to attenuate the Primacy Effect.
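If your survey platform does not offer a built-in random sorting option, the same idea is a one-line shuffle per respondent. A minimal sketch (the brand names are hypothetical):

```python
import random

def randomized_options(options, seed=None):
    """Return a shuffled copy of the answer choices for one respondent."""
    rng = random.Random(seed)   # optional seed for reproducible tests
    shuffled = list(options)    # copy, so the master list stays untouched
    rng.shuffle(shuffled)
    return shuffled

options = ["Brand A", "Brand B", "Brand C", "Brand D"]
# Each respondent sees the choices in a different order, so no single
# option benefits systematically from the Primacy Effect.
print(randomized_options(options))
```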
Ordinal questions are used when there are different answer categories with an importance ranking across possible answers. For example, a market researcher may ask participants to rate a product on a scale of slow, medium, and fast.
It is important to make sure that there is no overlap or confusing terms across categories. For example, a scale of “small, average, large, big” poses several issues: What is average? What is the difference between large and big? A much better scale would simply be “small, medium, and large.”
An interval question is one where the spacing across categories is even. For example, asking users how much they like something on a scale of one to seven, where 1 is “not at all” and 7 is “the most,” is an interval question.
A good interval scale will usually have five to seven points and will include a middle category to produce better data, as this lets people who evaluate the item being measured as average answer truthfully. This is called a Likert item.
This scale can also be paired with a visual representation. For example, some survey platforms will allow you to ask a question and include a visual representation element as a grading scale.
Remember that points on the scale should be labeled with clear, unambiguous words, and write questions so that both positive and negative items are scored “ high” and “ low” on a scale.
Also, it’s important to be consistent in building scales throughout the questionnaire: don’t go from low to high in one question and then switch from high to low in the next one.
Finally, building items that are symmetric (meaning they have the same number of positive and negative items) ensures a neutral data collection tool.
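Mixing positively and negatively worded items then requires reverse-scoring the negative ones at analysis time, so that a high score always means the same thing. A minimal sketch on a 7-point scale (the items and scores below are invented):

```python
SCALE_MAX = 7  # 7-point Likert scale

def normalized_score(score, is_reversed, scale_max=SCALE_MAX):
    """Reverse-score negatively worded items so 'high' is always favorable."""
    return (scale_max + 1 - score) if is_reversed else score

# Hypothetical responses; the second item is negatively worded.
responses = [
    {"item": "The device is easy to use", "score": 6, "reversed": False},
    {"item": "The device is unreliable",  "score": 2, "reversed": True},
]

scores = [normalized_score(r["score"], r["reversed"]) for r in responses]
print(scores)  # [6, 6]: both answers express the same favorable view
```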
Open-ended questions are questions that require elaboration from the participant. They require him to engage and justify his position, while closed-ended questions are questions where the participant only has a limited number of possible answers. Consider the following two examples relating to headache medication:
Question #1: Enumerate all the different brands of headache medication you know. (open-ended)
Question #2: Do you know the headache medication [XYZ brand]? (closed-ended)
In the first question, the researcher can generate a lot of information, such as:
The consumer’s knowledge of different brands
Which brand(s) come to mind
Which one came to mind first
In which order they came to mind
Which one(s) the consumer does not have any awareness about
In the second question, the researcher is learning exclusively about the consumer’s knowledge of the [XYZ] brand.
If there are a large number of people to question, the data from open-ended questions will need considerably more resources to be analyzed and codified, while data from closed-ended questions can be analyzed quite rapidly. Hence, budgetary concerns definitely weigh in when choosing between open- and closed-ended questions.
The other issue with open-ended questions is that they require more effort from the participant, while also giving him more “room to act” and answer falsely.
Some participants take the opportunity given in an open-ended question to share displeasure with the product, the company, or even the survey, even if it has no relevance to the question. You can easily find that 5% of open-ended responses are incoherent and have to be rejected.
Projective question techniques are used when the market researcher wants to get more insight into a participant’s perception of a brand or product. Usually, participants are asked to project their thoughts onto other things and are then asked to explain their answers.
Photo sort questions:
This technique involves giving participants a series of photos, then asking them to select which ones they associate with a product or brand. Afterward, the participants are asked to explain why they chose these pictures to gain additional insight. It can be very useful when identifying a product’s perceived positioning, such as that of a medical device or laboratory product.
Third-person questions:
This technique involves asking participants what they believe other people are thinking, feeling, and saying.
This is useful when tackling subjects that are sensitive in nature, where you expect people might hide their true feelings on a topic. As such, topics in healthcare that are very sensitive (assisted suicide, abortion) are often best explored using the third-person technique.
There are other techniques that are used in market research but are less likely to be used in life science market research, such as:
Cartoon drawing (presenting the participants with a cartoon and asking for an interpretation)
Personification (turning a brand into a person, and describing said person)
Stereotyping (presenting a description of people and asking questions about the situation of interest)
Choice Modeling Questions
If you are trying to ascertain an individual’s decision process, it can be a good idea to include choice modeling questions.
Choice modeling consists of asking a series of discrete questions where the participant makes relative choices (e.g., A vs. B, B vs. C, and A vs. C) in order for the researcher to infer the prioritization between A, B, and C.
To build a choice model, you have to properly identify the service or product to evaluate, and then select the attributes you are testing. For example, if you are creating a portable diagnostic device, you might be interested in learning whether speed, accuracy, price, robustness, or portability is the most important attribute.
By asking a series of questions opposing one attribute to another, you can identify a pattern. It is possible to build more complex models, but for market research purposes, a simple model can often supply a wealth of information.
The advantage of choice modeling is that it forces the participants to make choices: asking if speed, accuracy, price, robustness, and portability are important will most likely leave you with inadequate data (they are all important after all), but having a series of questions that force the user into making a decision can more likely bring the consumer to reflect, and can generate a more definitive model.
It can also be used to estimate the impact of pricing on attributes and reduces the participants’ ability to bias the research results.
The main disadvantage is that the data generated is ordinal, so it provides less information than ratio or interval data. You might learn which attribute is the most important, but you won’t necessarily know how much more important it is.
Overall, the main use of choice modeling is for predicting preferences and refining new product development, estimating the willingness of customers to pay for goods and services, estimating the impact of product characteristics on consumer potential purchases, and evaluating the importance of product attributes.
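The inference step of a simple choice model can be sketched by counting pairwise "wins" for each attribute. The attributes and the single respondent's answers below are hypothetical:

```python
from collections import Counter
from itertools import combinations

attributes = ["speed", "accuracy", "price"]

# Hypothetical answers from one respondent: for each pair of attributes,
# which one did they choose as more important?
choices = {
    ("speed", "accuracy"): "accuracy",
    ("speed", "price"): "speed",
    ("accuracy", "price"): "accuracy",
}

wins = Counter({attr: 0 for attr in attributes})
for pair in combinations(attributes, 2):
    wins[choices[pair]] += 1

# The resulting ranking is ordinal: it gives an order of preference,
# not how much more important one attribute is than another.
ranking = [attr for attr, _ in wins.most_common()]
print(ranking)  # ['accuracy', 'speed', 'price']
```

Aggregating these win counts across many respondents is what lets the researcher infer an overall prioritization of attributes.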
Question banks can be a great source of inspiration when building a questionnaire. One source of pre-existing questions is the General Social Survey (www.gss.norc.org), which contains a core of demographic, behavioral, and attitudinal questions, plus topics of special interest.
An innovative approach to designing questions for surveys is the Mindful Surveys Technique. It integrates qualitative and quantitative questions into a two-step process to build an iterative questionnaire. This is especially useful for companies with a limited budget, or for researchers with limited knowledge of the topic that will be investigated.
As mentioned earlier, analyzing a large qualitative data set can be long and costly (especially if you have to do a double-blind evaluation by two independent researchers).
A way to mitigate this is to use closed-ended answers, but you might not feel confident enough to enumerate all possible choices. In this situation, you might prefer to collect more information before developing a closed-ended question.
Mindful surveying is a two-step process to resolve this issue. The first step is to open the survey to participants, using open-ended questions, and collecting about 100 responses.
Once enough completed surveys are received, the researcher temporarily closes the survey and analyzes the data. Using those responses, the researcher can identify patterns and trends to craft a closed-ended question.
He can then modify the survey with his new question, open up the survey again, and collect the data in a more digestible format. If he could not identify enough trends, he could open up the survey again, collect another hundred entries, and analyze and try to identify trends again.
The researcher should remember when using this approach that it is important to set guidelines on the “stop-go sample” so that some groups are not underrepresented.
For example, collecting 100 responses could be an end point, but if the researcher is surveying different age groups, it could be more relevant to stop the survey once he gets 50 answers for each age group.
Double-barreled questions:
A double-barreled question is a question that is attempting to measure multiple items simultaneously. This makes it difficult for participants to answer and leads to incorrect or unusable data. For example:
What is the quickest and most precise test to verify XYZ?
Are you interested in the quickest or the most precise test?
What if one test is the quickest and another one more precise … which one should the participant answer?
Respondents who think one test is quickest and another more precise would be unable to answer, and if they do answer, the answer they would supply would be a compromise. Instead, the correct solution would be to split the question into two distinct questions, each having its own objective.
Choosing the right words:
One of the biggest issues is the choice of words. Incorrect words can lead to bias, rendering data useless. Consider the following question:
Do you like it when your medication relieves pain quickly?
Very few people do not like quick pain relief, but the question as it is worded leads the person to answer in a very specific way. A better question would be
What are the characteristics you look for in pain medication?
Overlapping interval answer sets: Overlapping interval answer sets occur when categories are incorrectly designed, leading one or more answers to overlap. Consider the following question:
How many times do you see a doctor a year?
i. 0 times a year
ii. 1–3 times a year
iii. 3–10 times a year
iv. 10 times a year or more
See the issue? The categories overlap, so it would be impossible for someone who sees a doctor three times a year to answer this question. While seemingly innocuous, a researcher would be unable to use this data set.
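A simple automated check can catch this mistake before the survey is fielded. The sketch below is illustrative: it treats each answer option as a numeric interval (the bounds shown mirror the example question) and flags any pair that overlaps.

```python
# Illustrative check: represent each answer category as an inclusive
# (low, high) interval and verify that no two categories overlap.
def intervals_overlap(intervals):
    """intervals: list of (low, high) tuples, inclusive on both ends."""
    ordered = sorted(intervals)
    return any(prev_high >= low
               for (_, prev_high), (low, _) in zip(ordered, ordered[1:]))

bad = [(1, 3), (3, 10), (10, 99)]        # "1-3", "3-10", "10 or more"
good = [(0, 0), (1, 3), (4, 10), (11, 99)]
print(intervals_overlap(bad), intervals_overlap(good))  # → True False
```

Running a check like this on every interval-style question takes seconds and prevents an entire data set from becoming unusable.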
Avoid technical terms, acronyms, and jargon:
Words should be easily understood by anyone taking the survey. If you use a technical term or an acronym, a participant might be unable to answer your questions. Do not assume that participants have your level of technical knowledge; use clear terminology instead. As such, rather than saying
Is arthralgia frequent at your clinic?
say
Is joint pain a frequent condition at your clinic?
Another alternative would be to take the time and explain the acronym or complex term in your question, rather than assuming the reader already knows what you mean, but beware that many participants will just skip over the explanation and try to answer the question as best they can.
Avoid negative questions:
A negative question is one that is worded in such a way that it requires a “no” to respond affirmatively, and a “yes” to respond negatively. Questions phrased negatively rather than positively are more difficult to answer and are more likely to be answered incorrectly. As such, rather than saying
Do you believe the government does not do enough to promote healthcare?
say
Do you believe the government does enough to promote healthcare?
Thoughts on “prefer not to answer”: Some researchers prefer to compel participants to answer every question and do not include any “prefer not to answer” options in their questionnaires. Others believe that including a “prefer not to answer” option increases response rates.
Use your judgment: if the question is very sensitive, include a “prefer not to answer” option, which will help ensure that participants answer truthfully (rather than answering randomly to get to the next question, or dropping out of the survey). If they prefer not to answer, at least the rest of the data you collect will be useful.
Integrate some consistency checks into your questions:
Sometimes, users will answer your questions half-heartedly, more preoccupied with the incentive than the questions they are answering.
Others might under-evaluate the commitment needed to answer all the questions and start botching answers: this is especially true if you are running a long web survey.
To avoid collecting inconsistent data, integrate some consistency checks into your questions to ensure your participants are answering seriously, and the data you have collected is useful.
This could include asking the same question twice in the survey, and then comparing the responses between both questions to ensure that the participants answered the same way twice. Alternatively, you could incorporate a simple “checking” question such as “If you are reading this question, select ‘Maybe’ as the response.”
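Both checks described above can be applied automatically when cleaning the data. The sketch below is hypothetical: the field names (`q5`, `q17_repeat_of_q5`, `q12_attention_check`) are invented for illustration, standing in for a repeated question and a "select 'Maybe'" instruction.

```python
# Illustrative data-cleaning step: drop respondents who fail either
# consistency check (the repeated question or the attention check).
# All field names below are hypothetical.
def is_consistent(respondent):
    repeated_ok = respondent["q5"] == respondent["q17_repeat_of_q5"]
    attention_ok = respondent["q12_attention_check"] == "Maybe"
    return repeated_ok and attention_ok

respondents = [
    {"q5": "Weekly", "q17_repeat_of_q5": "Weekly", "q12_attention_check": "Maybe"},
    {"q5": "Weekly", "q17_repeat_of_q5": "Never",  "q12_attention_check": "Maybe"},
    {"q5": "Daily",  "q17_repeat_of_q5": "Daily",  "q12_attention_check": "Yes"},
]
usable = [r for r in respondents if is_consistent(r)]
print(len(usable))  # → 1 (only the first respondent passes both checks)
```

Filtering this way before analysis means half-hearted responses never contaminate the data set.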
Incentives are an important element to account for when budgeting for a market research activity. A properly structured incentive can increase participants’ response rates by 5% to 20%. Incentives can take the form of money, information, services, and more.
In an article published in Health Service Research, the authors found that the response rate of doctors to a survey varied significantly depending on the form of remuneration offered.
In the study, doctors were invited to participate in a survey. Randomized groups were offered different incentives, all of which were worth U.S. $25. The incentives were immediate cash remuneration, immediate check remuneration, a promised check not requiring a social security number (SSN), and a promised check requiring an SSN.
The immediate cash group had the highest response rate (34%), followed by the immediate check group (20%), the promised check without SSN group (10%), and the promised check with SSN group (8%). Globally, the study found that direct incentives generated more immediate responses than promised incentives.
The amount of the incentive also has a significant impact on the response rate. A recent study (2018) found that offering a modest compensation had little to no effect when surveying doctors.
In that study, the response rate of those who were offered a modest incentive for participating in the survey versus those who were not showed little variance (11.6% vs. 10.8%).
Another study found that offering a more significant gift (a $50 redeemable gift card) was an effective way to motivate clinicians to participate: of the 117 clinicians who participated in the survey, 63.5% redeemed the gift card they received. The authors believed that this reinforced the need for adequate compensation.
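The response-rate figures above translate directly into recruitment planning. The sketch below is a hedged back-of-the-envelope model, not from the original text: it assumes the incentive is paid only on completion, and uses the 34% (immediate cash) and 8% (promised check with SSN) rates as inputs.

```python
# Illustrative recruitment arithmetic: given a target number of completed
# surveys, a response rate, and a per-completion incentive, estimate how
# many people must be invited and what the incentive budget will be.
# Assumes incentives are paid only to completers (a modeling simplification).
import math

def recruitment_plan(target_completes, response_rate, incentive_per_complete):
    invites = math.ceil(target_completes / response_rate)
    budget = target_completes * incentive_per_complete
    return invites, budget

print(recruitment_plan(100, 0.34, 25))  # immediate-cash scenario
print(recruitment_plan(100, 0.08, 25))  # promised-check-with-SSN scenario
```

Even at the same $25 incentive, the low-response scenario requires inviting more than four times as many people to reach the same number of completed surveys.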
There are a number of possible incentives you can offer participants such as:
Direct remuneration: A cash incentive can be offered to the participants. The Market Research Society advises that any monetary incentive should be “reasonable and proportionate,” which means that incentives should be considered project by project, based on the demographics of the expected respondents, how specialized the subject matter is, and how much the respondent will be inconvenienced by participating.
The minimum you will pay for an in-depth interview will vary with the profession of the participant as well as the length of the interview: it could range anywhere from $50 for a non-specialized participant to $350 for a specialist doctor.
In the case of a focus group, the cost per participant could range from $100 for a regular participant up to $300 for a specialized participant (e.g., a patient with a rare disease). Professionals could cost almost double in some situations, making this quite a costly endeavor.
Vouchers:
A variation on direct remuneration, vouchers that are redeemable for specific goods are also well appreciated, as they are perceived as having monetary value (a $50 voucher buys $50 of “real stuff”).
The advantage for the researcher is that online vouchers are quicker and easier to purchase and distribute (this can all be done online), easier to track and manage (vs. checks or cash), and (sometimes) if bought in bulk, discounts are available.
Non-monetary gifts:
Giving participants a non-monetary gift is especially advantageous in recruitment when there is a convergence of interests between the topic, the gift, and the profile of the participants.
For example, discussing traveling trends with frequent travelers, and then giving them travel gear as a thank-you for participating. The gift has to be carefully chosen to appeal to the participants: make sure it isn’t something they probably already have, and make sure that it actually incentivizes them.
Also, there could be some logistical concerns around managing the distribution of products (especially if you have to mail them), but there is the advantage of being able to purchase products in bulk.
Free food:
This works best for people participating in a focus group in a work capacity (i.e., employees) or in a community/patient group. You can buy a nicer meal than they would usually get for themselves, and it’s convivial to “chat over lunch.” In these cases, it is a nice touch to add a small participation gift (such as a small voucher) as a takeaway.
Charitable donations:
Some individuals with high incomes are insensitive to monetary incentives. As such, offering to donate the budget that you planned for incentives to a charitable organization instead might incentivize them to participate.
For example, a per diem of $100 might not be enough to encourage doctors to participate in a focus group, but donating the $2,000 recruitment budget to a worthy cause, and sharing this information during recruitment, can sway candidates and increase participation. It appeals to a participant’s generous and philanthropic side.
It is also possible to offer participants a choice of which charity the funds will be donated to. Once the donation is made, it is important to send participants a confirmation from the charity (or a contact at the charity itself) so they know it has been done.
Sharing the research results:
Some participants might be interested in the results of your findings, as the topic is directly related to their work. As such, it is possible to recruit participants by offering to share the high-level results of the research. In this case, you will have to prepare an executive summary of your findings specifically for those who request it.
Also called the Tepoztlan interview strategy, this approach consists of helping a group or a participant with a specific task in exchange for their participation in the data collection effort. This could consist of assisting a non-profit trade association with a specific task in exchange for assistance in recruiting participants.
This technique is especially useful when working in a closed environment, such as a hospital, an association, or a community. The advantage is that while the researcher is rendering service, his presence within the environment is much more understandable and enables him to be a participant observer.
Finally, some of the best practices related to incentives include
1. Speedy delivery: Incentives should be delivered within 1 to 2 days after the data has been collected. Participants shouldn’t have to wait weeks for their incentive to arrive, so don’t wait until the end of the project to send incentives: it will be a lot simpler to manage if they are distributed as each interview is completed.
2. Branded delivery tool: The e-mail or physical mail that delivers the incentive should have the organizing company’s information clearly identified, so that participants can quickly identify the origin and don’t discount the e-mail as spam when they don’t recognize the sender.
3. Properly crafted message: The message that accompanies the incentive should include a reminder of why the participant is receiving an incentive, and a thank-you note to make them feel appreciated.
The benefits of respecting these best practices include minimizing time spent doing customer service with participants looking for their incentives, less time spent resending lost (or deleted) incentives, and increased chances of the participants taking part in future projects.
If you are spending too much time managing your incentive programs, look into some of these tips or into automating some of the incentive rewards you use.
Data Collection Methods
In-Depth Interviews
The person interviewed could be a potential client, a past client, a key opinion leader, or anybody else with a relevant relationship to the research topic. Current employees are another good source of information, as they are in constant contact with clients and partners.
As such, they can provide insight into current customer profiles, goods, and services that are popular, satisfaction with current price levels, as well as experiences with competitors.
While in-depth interviews are more costly (in terms of time and money), they present a number of advantages over questionnaires and online surveys. For starters, interviews give more opportunities for the researcher to motivate the respondent to participate in a truthful manner, and to not abandon the interview halfway through.
Also, interviews allow more flexibility in exploring topics as opportunities arise during the interview. The more exploratory the topic, the more useful the in-depth interview is, as it allows the researcher to change the order of questions, to prioritize some topics if time is short, or to go into more depth if the participant turns out to be someone with a wealth of information on a specific topic.
But the interviewer should be wary of using this prerogative too often, since deviating from the script too many times can lead to a lack of standardization in data collection and to issues in data compilation.
Also, interviews allow for more control over the answer process. For example, it might be crucial for a participant to answer a series of questions before another, and an in-depth interview makes it easier for the interviewer to control the flow of responses.
Finally, interviews allow the researcher an extra opportunity to evaluate the validity of the answers being given, by evaluating non-verbal cues, or by asking questions again later to validate consistency. This is especially useful when the topic is sensitive and the researcher expects participants to exaggerate or understate some facts.
Interviews allow researchers to gain additional insights on services or products being developed. Some topics that can be explored include
1. Purchasing process: How does the interviewee currently purchase the product? What are the criteria that he looks at the most? Which are the elements that are the most important in the purchasing process? Which things are not important? How important is the price in the selection process?
2. Current product: What does the participant currently use? Why has he chosen to use this product instead of another? Has he tried other products? Why doesn’t he use these products instead?
3. Ear to the ground: What has the participant heard about competitors? Has there been any movement or changes lately?
4. Innovation: What is lacking in current products? Is there something the product used to have and doesn’t have anymore that irritates the participant? What innovations are coming in existing competing products? Anything that excites him?
There are a number of actions a researcher can take to enhance the interview process. Here are a few useful suggestions.
Prepare your interview: Conduct a quick due diligence check on the interview target prior to your interview, to identify potential specialization and fields of specific interest.
Record the interview: When you make arrangements to record the interview, you will be able to focus on the answers the participants are providing, as well as asking questions and exploring topics with your interviewee, rather than spending time taking notes.
If possible, use a portable recorder or, if the interview is done by phone, a service such as Save Your Call (www.saveyourcall.com).
These services charge a minimal amount to record interviews, and you can listen to the interviews again later to transcribe, codify, and analyze them as needed.
Remember to ask for consent: Recording an interview without advising the participant is unethical. Fortunately, most people are generally fine with being recorded, and appreciate being asked beforehand.
Be clear on who and what the study is for, and what the objectives of the interview are: It is important to share information on what the interview is for, why it is being done, and its purpose at the start of the interview. While the knowledge of who is sponsoring the study might polarize some participants, it is essential that the participant is advised up front.
Also, it is good practice to remind participants of the objective of the interview. Both of these kinds of information can diminish participant suspicions about how the information will be used.
Concentrate on the essentials:
Avoid long interviews. A good interview lasts as long as it takes, but when preparing your interview guide, plan accordingly. A good rule of thumb is that an interview with an unpaid participant should last 15–30 minutes at most, while a paid participant can expect to spend as much as an hour being interviewed.
Listen to your interviewee:
You are trying to collect data, so that means letting the participant share his information. Be careful not to spoon-feed the answers that you are looking for. This leads to bad data collection, and ultimately, does not reflect real market conditions.
Use silence to get more information:
Sometimes, getting the interviewee to share means not speaking for a few seconds: research in social interactions has shown that silence is as meaningful in verbal communications as rests are in a musical score.
Inserting silences in an interview has a number of benefits:
it slows the pace, it is conducive to a more thoughtful mood, it allows the respondent to control the direction of the next steps of the interview, it allows the researcher to note useful associations, and it reinforces the notion of interest on the part of the researcher as he is not perceived as rushing through the interview.
Keep neutral:
This is especially hard when the subject of the interview is being criticized or challenged (and you have a vested interest in the topic), but a good interview needs to be conducted in a neutral fashion. As such, the participant should not be able to guess whether you agree or disagree with his answers.
Be encouraging:
This might seem to be in contrast with the advice to keep neutral, but the two are not mutually exclusive. Being encouraging means supporting the speaker as he shares information, with phrases such as “that’s interesting” or “can you share more about that?”
Hence, the goal is to keep the interviewee engaged without influencing him by taking a position, or by sharing that position.
Different Types of Individual Interviews
There are different types of individual interviews, each with their own strengths and weaknesses. We will be touching briefly on face-to-face, phone, and online interviews.
b. Telephone:
Doing interviews by phone is both faster and more cost-effective than face-to-face interviews. This method is used in situations where multiple people will need to be contacted rapidly, or when you have a limited budget while still wanting the flexibility of an open-ended questionnaire.
The main issue is attention: the participant can often multitask during your interview. Also, phone interviews are often missed by participants: as a rule of thumb, one or two out of every five interviews will most likely need to be rescheduled due to a missing participant (especially if you are not offering an incentive).
Topics handled by phone questionnaires have to be a little less complex, as you are unlikely to have the participant’s complete attention. Finally, it is difficult to pick up on non-verbal cues when doing a telephone interview, so you have to be particularly attentive to verbal cues.
c. Online:
To mitigate the weaknesses of the first two approaches, it is possible to do interviews online using software such as Skype, Google Hangouts, or Viber. When doing interviews online with a camera, the researcher can make sure that he has the complete attention of the participant while also being cost-effective.
Also, some technologies allow the researcher to record the interview for future playback and analysis: for example, some third-party providers have created applications that can be used to record Skype calls.
The downside of doing interviews online is mostly technology limitations: audio and video quality can vary (especially in countries where the telecommunication infrastructure is relatively weak), so you can get intermittent errors that limit the information you get from visual cues, or miss some comments due to audio problems.
Motivating Interview Respondents
Some interviews will be more difficult than others. Some individuals will have changed their minds about participating; some might do it for the incentive, while others might be compelled (by a superior or colleague) into participating.
Whatever the reason for the resistance, it could make the interview difficult and quite uncomfortable. Here are some tips that might help motivate the respondent and ensure that you get the information you are looking for.
Increase the participant’s interest:
A participant might have low interest since he fails to see the relevance of the answers he is sharing. Supplying context on why the interview is occurring, and mentioning the connection between the individual and the project can motivate the respondent.
For example, reminding the participant that the objective of the project is to develop a technology that will simplify the participant’s daily work could help motivate him.
Recognizing the participant’s expertise:
Reminding the participant that his expertise was the reason he was selected for the interview can enhance his need for recognition. This can be done not only at the start of the interview, but throughout the interview as well.
Reducing threats to the ego:
If you feel the participant’s ego was bruised by a previous question, it is a good technique to include a face-saving preface to attenuate the impact.
For example, you could start a new series of questions with a statement like: “As you know, topic ABC is not a problem many researchers are familiar with. But I believe you have some insight on it, so I’m looking forward to hearing your thoughts and experiences on it.”
Reducing the issues related to perceived authority:
There may be situations where the participant identifies the interviewer with a role or position that influences the interview process.
For example, the interviewer could be a key opinion leader, or a former teacher collecting data from past students. Clarifying the interviewer’s role throughout the interview can improve the interview flow.
In-depth interviews in life sciences are a popular way to get the information you need, especially when dealing with topics that are sensitive in nature. It is a great way to speak to people confidentially and get their views on healthcare topics, such as their personal health and that of loved ones, their use of pharmaceuticals, and so on.
Interviews are also useful to discuss topics for which an “ inch-deep, mile-wide” approach is needed. Sometimes, a market researcher will have little information on the topic he is researching, and asking a few general questions will enable him to quickly identify opportunities.
Interviewing doctors and medical personnel can be especially challenging as these individuals are constantly solicited for their time. As such, you might have to set aside an important per diem to interview them, ranging anywhere from $150 to $300 per interview (and even more for key opinion leaders). Here are some things to keep in mind:
Identifying doctors to interview can be challenging. One way to save time is to purchase a list from a list provider. You can customize your purchase to your needs (by specialty or geographic region).
Be sure to double-check how often the data is updated as well as what contact information they will be supplying (e-mail, physical address, phone, fax number, etc.). Also, verify if the doctors have opted in to be contacted by third parties, or if the information was gathered from the web at large (i.e., scraping).
If you want to build your own list, professional associations, licensing boards, or lists of attendees at a conference are good starting points.
Reviewing scientific publications through databases such as Google Scholar, PubMed, and Europe PMC can also be useful for identifying key opinion leaders and topic experts, as well as obtaining their contact information.
The best way to reach doctors is by e-mail. Phone calls often end up in voicemail or are intercepted by gatekeepers (such as administrative assistants).
A typical e-mail might look like the following:
I am conducting a study on behalf of Bio Biotech on the technologies currently used to diagnose XYZ. The results of the study will be used to help our client develop a new diagnostic device to better meet your and your patients’ needs.
We value your input and would like to arrange a 20-minute telephone call to discuss the topic with you. Please note that all results will be anonymized, and only consolidated results will be shared with the client.
Another way to recruit specialized participants is to use one of the many online platforms that exist where experts are available for consults and single interviews.
The platforms, such as Zintro (www.zintro.com), enable you to easily post projects and contact subject-matter experts directly, acting as an escrow service. You can find scientists, doctors, and more on these platforms, and experts can either respond to your inquiry or refer it to other experts in their network.
Finally, don’t get discouraged if you get low traction for your solicitation: remember that doctors are a heavily solicited population, and it will take a large recruitment effort to obtain your target sample.
Guidelines for Preparing a Focus Group
A focus group discussion centers on a predetermined set of questions (anywhere from 8 to 10 is a good number). It is important not to overload the discussion guide with too many questions, as the moderator might not have time to tackle every topic, or might feel rushed to touch on every topic and miss some key insights.
When writing questions, keep them open-ended: participants should not be able to answer with a simple “yes” or “no.” Also, participants usually do not have a visual copy of the question they are discussing, so try to keep questions simply phrased.
If necessary, ask follow-on questions to tackle complex topics. A good methodology to use when building your discussion guide is to have three groups of questions: opening questions to engage participants, key questions that address the core research topics, and closing questions to wrap up. For example:
Opening questions:
1. What is your favorite brand of headache medication?
2. Which store do you usually buy headache medication from?
Key questions:
1. How do you choose which medication you will buy at the store?
2. Has anybody here ever tried store-brand headache medication? What was your experience with it?
3. What are your thoughts on store-brand headache medication?
4. How safe are store-brand medications compared with brand medications?
5. What are the pros and cons of store-brand medication?
Closing questions:
1. Is there anything else you would like to say about headache medication?
The participants’ profiles, as well as inclusion/exclusion criteria, should be decided before the recruitment process begins. This information will be useful to screen and select participants.
Examples of criteria include the sex of the participant, age, user versus non-user, or any other focus you have defined earlier in your market research objectives. There are many ways to recruit participants for a focus group: some examples are included here for convenience.
a. Nomination: Participating key opinion leaders can nominate other applicants they believe would make good contributors. Nominated individuals are interesting potential panel members, as they are expected to already have a good knowledge of the topic at hand, and are more likely to want to participate since they have been referred by a noted expert.
b. Random selection: If working with an especially large pool of willing participants, the researcher can select participants at random from a pre-existing pool (e.g., employees in a company) until the right number of participants is attained.
c. Members of a group: In some cases, an existing group of participants can be a great recruitment pool from which to invite participants. For example, a non-profit association or a chamber of commerce can both provide interesting pools of participants.
d. Volunteers: When doing a broad research effort, participants can be recruited at large through traditional means (flyers, newspaper ads) as well as electronic means, such as Craigslist, online job boards, and specialized discussion groups.
When forming your groups, it is good practice to over-allocate and include one or two more participants than needed: it is not uncommon for 20% or more of a group not to show up (even if they are incentivized), so over-recruiting slightly mitigates this.
Also, do not forget the multiple costs associated with running a focus group, such as recruitment costs (anywhere between $250 and $750 per panel if you are dealing with a specialized third party), recruitment fees and incentive fees, facility costs (room, food, recording), direct moderator costs (which can range from $750 to $1500 per session for skilled moderators), and indirect costs (transcription, analysis, and final report).
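These cost items, together with the ~20% no-show figure, lend themselves to a quick budget estimate. The sketch below is illustrative only: the dollar figures are drawn from the ranges above, and it assumes recruitment fees and incentives are paid for everyone recruited, including no-shows.

```python
# Illustrative focus group budgeting: over-recruit to cover no-shows,
# then total per-session costs across all sessions.
# Assumes recruitment fees and incentives are paid per recruit.
import math

def focus_group_budget(sessions, seats_per_session, no_show_rate,
                       cost_per_recruit, incentive, moderator_fee, facility_fee):
    # Invite enough people that, after no-shows, the seats are still filled.
    recruits = math.ceil(seats_per_session / (1 - no_show_rate))
    per_session = (recruits * (cost_per_recruit + incentive)
                   + moderator_fee + facility_fee)
    return recruits, sessions * per_session

recruits, total = focus_group_budget(
    sessions=2, seats_per_session=8, no_show_rate=0.20,
    cost_per_recruit=250, incentive=100, moderator_fee=1000, facility_fee=500)
print(recruits, total)  # → 10 10000
```

Even this rough model makes the point of the paragraph above: two modest sessions can run to five figures once recruitment, incentives, moderation, and facilities are all counted.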
Complete focus groups can be a very costly endeavor, especially in a start-up situation. The two variants discussed later (online focus groups and triads) are interesting alternatives if you are unable to shoulder the full costs associated with a traditional focus group.
Running the Focus Group
Ideally, a focus group is moderated by two persons: the head moderator and an assistant moderator:
The head moderator’s responsibility is to facilitate the discussion and to directly intervene with the group. He listens and ensures that all participants participate.
He should have solid knowledge of the topic to be able to ask follow-on questions. Skilled moderators will be able to paraphrase long and complex comments, and build on these to move the conversation forward.
Finally, as the moderator is in a position of authority (he is moderating and handling traffic in the conversation), he must stay neutral at all times and refrain from agreeing or disagreeing with the comments as they are shared.
The assistant moderator takes notes and handles the recording (if the focus group is being recorded). He listens, notes, and observes any subtle events that might occur. It is preferable that he lets the moderator talk and handle the conversation, but must be ready to add his input if needed.
For analysis purposes, it is useful to collect the participants’ demographic information. A short questionnaire that requires a few minutes to complete is all that is needed.
The assistant moderator has to make sure that participants have answered the questionnaire before coming into the focus group session, or have them do it on-site before the focus group begins.
It is also necessary to have the participants fill out a consent form. The consent form informs participants of their responsibility to keep all discussions private, including an agreement not to discuss any information shared during the focus group outside the focus group setting.
It can also include an authorization request to quote participants (anonymously or directly).
Once the consent forms and demographic sheets are collected and reviewed, the focus group starts. The moderator welcomes participants, reminds the participants of the focus group objectives, and sets some ground rules. These ground rules could include:
I would like everyone to participate: Please do not interrupt another participant when they are sharing their opinion.
There is no right or wrong answer: Everybody is sharing valuable insights and information, so no judging someone's opinion. But you are free to disagree! If you agree or disagree, share it with us, we want to know! Tell us why, in a non-judgmental way.
This meeting is confidential: What you share with us is confidential, and when we share it with others, it will be done in an anonymous manner.
We will be taping the conversation: We want to be sure we don’t miss anything. But don’t worry, there are no names in the final report, and everything will remain anonymous.
Immediately after all participants leave, the head moderator and assistant moderator should take a moment to debrief while the recorder is still running, while their reaction is still fresh. This debriefing will be an invaluable part of the data collected during the focus group.
Variation #1: Online Focus Groups
Online focus groups are a variation of traditional in-person focus groups. As their name implies, participants take part online using a webcam and microphone.
The advantages are numerous. First, there are lower costs associated with online focus groups: some estimates are that online focus groups cost about half that of a traditional focus group.
Also, online focus group recruitment is faster, and the time commitment is less intense. This is especially useful with professionals (such as doctors) who have limited time to participate in focus groups and are difficult to reach.
Similarly, individuals with specific characteristics, such as rare diseases, may not be in close geographic proximity to each other. Online focus groups make it easier to bring these individuals together and make it easier to reach a representative sample size.
Moreover, people who are shyer are more willing to talk in an online setting as they feel less directly judged by others (not being in the same room and not being in direct contact).
Finally, observers, clients, and the assistant moderator can watch on a separate feed and give comments to the moderator discreetly. Note that if the client is observing on a direct feed, notice should be given to the participants.
Nonetheless, there are a number of issues with online focus groups. For starters, as participants are not in proximity to one another, there is a limited opportunity for direct interaction.
As such, a lot of the direct interaction (which is sometimes the richness of a focus group) is lost in an online setting. Some who are more critical of online focus groups will go as far as claiming that the interaction in an online focus group is insignificant. Also, populations who are technologically challenged may be unable to participate effectively in online environments.
So, if you are working on getting feedback from an older population, online focus groups might not be an effective solution.
Furthermore, technology limitations (poor webcam or microphone quality) can affect the quality of the focus group session, and it is much more difficult to ascertain a person’s identity online than in person. Finally, it is quite difficult to act on non-verbal cues in an online setting.
To ensure the best results when doing an online focus group, a number of guidelines need to be respected.
1. Groups must typically contain between six and eight individuals, but the researcher might need to over-recruit one or two participants to account for no-shows (as these are more common than in traditional focus groups).
Also, during the recruitment process, the researcher should ensure that the participants have the minimum technology requirements and knowledge to be able to participate.
2. During scheduling, the researcher should account for time zone differences between participants when choosing the time slot.
3. It is important to test the equipment and software before the actual online session. For testing purposes, the researcher can recruit a few of the scheduled participants to participate in a testing session to ensure that the moderator, clients, and other stakeholders have an adequate handle on the technology.
4. Finally, the researcher should make sure that he sufficiently communicates with participants. This includes reminder e-mails and detailed explanations on how to log in to the system.
During the presentation, the moderator can use PowerPoint slides to share content, and he has to be sure to keep an eye on the participants in attendance.
It is his responsibility (or the assistant moderator’s, if there is one) to make sure that participants are attentive and not multitasking (checking e-mail, playing games). Some software will let the moderator know whether a participant’s window is active, making monitoring easier.
Variation #2: Triads
Triad focus groups consist of three participants and are used in very specific situations. As there are fewer participants, these sessions can be shorter (getting everything you need done in under an hour), allowing for more groups to be seen in the same time period.
Also, the presence of fewer participants allows the researcher to do more in-depth research and probing while limiting groupthink (participants retain more of their individual position, and outliers are less influenced by the general agreement of a large group).
Finally, triad focus groups allow more dedicated product testing and observational usage, as each participant can have more time individually testing the product, rather than having a few participants test the product and the other participants observing it.
The main disadvantages are that since the groups are smaller, more of them need to be conducted to obtain enough information. Triads should not be used as a cost-saving measure, but rather as a different methodology to obtain the same information.
As a rule of thumb, you should hold enough triad focus groups to equal one regular focus group (three triads for each regular focus group). Also, triad focus groups can be more “awkward” to manage: since there are fewer individuals, more pressure is put on the moderator’s skills to keep the discussion going forward.
Traditional Surveys
Traditional surveys are a cost-efficient tool used to collect a large amount of quantitative data from a target audience. Using a questionnaire, the researcher develops questions that tackle the topic as needed: most questions will be closed-ended to allow the participant to complete the survey rapidly (and to allow for quick data compilation).
But there is the possibility of having some open-ended questions as well. The two main advantages of traditional surveys are that they are economical and that they can be done in a manner to protect the identity of respondents.
Questionnaires can be prepared and distributed much more economically than one-to-one interviews. For example, if a researcher is administering a survey by mail, or soliciting a large group of individuals at a conference, the costs are spread over a larger sample of participants.
Secondly, questionnaires can be sent to potential participants with a pre-addressed envelope, which participants can send back to the market researcher without any identifying features.
There are several different techniques that can be used to distribute and collect information with questionnaires. We will touch briefly on those used less often in life science, and we will discuss in depth the most useful one, online surveys, in the next section.
Direct Mail Surveys
Direct mail surveys are handed out physically and directly to participants (either by handing them out in person or by sending through the mail). They have long been the technique of choice since they are less expensive, have a wide geographic reach, and there is no risk of interviewer bias.
A 2016 survey by Accelerant Research found that 44% of respondents considered direct mail an acceptable way of being contacted for a survey.
Nonetheless, their current effectiveness is questionable (direct mail surveys usually have a 2–4% response rate), there is a lengthy response cycle (between when the survey is first sent out and when responses get back to the market researcher), and you need to send constant reminders (by phone or e-mail) to increase participation.
They have been largely replaced by online surveys but could be useful if addressing an older population that is less technically savvy. To increase participation, a Media Logic study found that envelopes which grabbed attention, seemed friendlier, and stood out more were more likely to be answered by seniors in a healthcare setting.
Letters that looked easier to read, were quicker to understand, and had a better layout were also more appreciated. Other ways to increase participation are to include a pre-paid envelope and to personalize the letter that accompanies the questionnaire.
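To make the response-rate math concrete, here is a quick back-of-the-envelope sketch in Python (the function name and figures are illustrative, using the 2–4% response rates typical of direct mail):

```python
import math

def mailings_needed(target_responses: int, response_rate: float) -> int:
    """How many surveys to mail out to expect `target_responses` back."""
    return math.ceil(target_responses / response_rate)

# At direct mail's typical 2-4% response rates, even a modest
# sample already requires thousands of envelopes:
print(mailings_needed(100, 0.02))  # 5000 envelopes at a 2% response rate
print(mailings_needed(100, 0.04))  # 2500 envelopes at a 4% response rate
```

This is why reminders, pre-paid envelopes, and personalization matter: each point of response rate directly cuts the mailing volume, and cost, required.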
Telephone Surveys
Telephone surveys are a cost-effective way of collecting data, especially if you are trying to quickly reach a population at large. They usually exhibit higher response rates than online surveys, enable fast collection of data, can be used to tackle complex topics, enable specific targeting of participants, and usually elicit more positive responses. Telephone surveys’ average response rates can vary from around 8% to 12%.
Also, most phone interviews are recorded, which makes it easier to validate interview quality and to transcribe and analyze data. Nonetheless, there are fewer and fewer landlines, making populations harder to reach.
Furthermore, participants are more likely to drop out halfway through or give fake responses (especially if the phone survey was longer than initially announced).
Finally, participants are less likely to answer if they do not have a relationship with the caller (i.e., they do not recognize the caller ID), and timing is everything when doing a phone survey: calling participants at an inappropriate time will most likely result in a refusal to participate.
Companies and organizations that have prescreened lists (members, clients, and opted-in individuals) are more likely to have successful telephone survey campaigns than those that make cold calls.
Another way to increase participation rates is to prepare lists of questions that are simple to answer and to keep the survey short. Remember that if you are talking to people representing organizations, make a note of which individual you interviewed.
In-Person Surveys
In-person surveys bear a resemblance to in-depth interviews but focus on quantitative data. They consist of approaching people in the street or in a natural setting and asking for a moment of their time to answer a few questions. This is the main strength of in-person surveys: they can be conducted anywhere.
For example, if you want to open a healthcare facility, or sell a product locally, it could be a good way to validate market demand by visiting the chosen location and surveying people around the location to get a sense of the “walk-in” traffic potential (number of potential clients who will enter a facility as they pass by it).
This method is also useful to quickly collect data on a generic topic, and allows the use of follow-up questions, adjusting the questions in a fluid manner. It also permits the use of simulated materials and mock-ups as the participants are facing the interviewer.
However, it can be difficult and frustrating to recruit participants. Some things you can do to attenuate this include carrying an official ID, surveying the site in advance, and keeping the interview under 10 minutes.
A small (visible) token of appreciation can also enhance participation. Another issue is that in-person interviews can take a lot longer to complete than a web survey.
It is also more costly, especially if you include data compilation and the interviews you will have to reject due to invalidity concerns. Nonetheless, using an in-person survey allows some innovative practices (starting the survey in person and asking participants to complete it online at a later time or the use of mock-ups) and can be useful in very specific situations.
Online Surveys
The rapid pace of technological development has created new opportunities for collecting data. As such, the use of online surveys has grown immensely in popularity.
They are cost-effective, simple to use, and if done properly, can reach a wide range of populations and allow the participants to complete the survey on their own time with little effort.
Also, they allow for the use of different media (such as sounds and videos), making the experience richer for the participant, and enabling more complex data gathering for the researcher.
Most online surveys today are done using a web-based survey tool, although the popularity of surveys on mobile platforms is rising. These tools have the advantage of allowing for participation tracking (enabling you to directly target non-respondents and send them reminders) and direct data entry, making them much more efficient.
There are some limits to using an online survey for data collection. For starters, they are much more effective when used with a closed population (employees, members, clients) as the response rate will be higher.
Also, online surveys are focused on populations that have access to web-based technologies: hence, Internet surveys will usually skew toward populations that are younger than 65 years old, college educated, and have higher than average household revenues. If you are not targeting one of these groups, another data collection tool might be more appropriate.
Finally, with the rise in popularity of online surveys, there is a noticeable trend toward over-solicitation of users, leading to user fatigue: proper incentives are becoming a key factor in increasing user participation.
There are considerable challenges in recruiting participants for a life science survey, particularly clinicians.
A study done in 2015 found that an online web survey targeting clinicians got a 35% participation rate, with wide variance across specialties: from 46.6% (neurology/neurosurgery), 42.9% (internal medicine), 29.6% (general surgery), and 29.2% (pediatrics) to 27.1% (psychiatry). Lack of time and survey burden were the most common reasons for not participating.*
Another study found that general practitioner survey rates could be increased with incentives (larger and upfront, if possible), peers pre-contacting targets by phone, personalized packages, and sending the survey on a Friday.
Online Survey Tools
There are many web survey tools available online that you can use to build, distribute, and collect data. It would be impossible to list them all here, so we are sharing a few of the most popular ones.
a. Survey Monkey (surveymonkey.com):
Survey Monkey is probably the best known of all online survey tools. It is easy to use, with both free and paid options, and with its ready-to-use tools, a survey can be quickly implemented and made available online.
Also, Survey Monkey offers services to recruit participants for your survey directly, matching the demographics and characteristics you hand-picked for your participants. The downside is that some options cannot be found intuitively, so a user not familiar with Survey Monkey will have to poke around until they find the needed options.
b. Google Surveys (www.google.com/surveys):
Google Surveys is an increasingly popular tool to build online surveys. There are two ways to create a form: the first is to create a new form from Google Drive, while the second is to create it from Google Sheets, which links the spreadsheet to the form and loads all the data into the sheet for later analysis. Simple and free, Google Surveys also has the advantage of having the data pre-formatted for the analysis phase.
c. Zoho Survey (https://www.zoho.com/survey/):
Zoho Survey is a very simple-to-use online survey tool, with a quick setup and simple deployment. It is useful for doing quick surveys without a large budget, but it lacks some of the more sophisticated features of the other survey tools.
d. SurveyGizmo (www.surveygizmo.com):
SurveyGizmo is a more advanced and powerful survey tool. The free version gives access to the most basic options, and the paid version gives access to more powerful tools, such as styling options, a question library, and e-mail campaigns. Of course, the more complex the tool, the steeper the learning curve.
e. Checkbox Survey (www.checkbox.com):
Checkbox survey provides advanced tools and customization. As such, it lets the researcher customize almost everything around his survey while keeping costs reasonable.
As with other complex tools, the researcher will have to invest more time in learning it before it becomes functional, compared with the simpler tools out there.
f. Pollfish (www.pollfish.com):
Pollfish is a simple-to-use web survey tool that is well suited to beginner researchers. Once the researcher has posted his survey, someone from Pollfish reviews it, offering helpful advice on optimizing questions and programming survey logic.
The survey platform also recruits participants for your survey, enabling you to have your data quite rapidly.
Mobile Online Surveys
Mobile online surveys use mobile technologies (mobile phones, tablets, and PDAs) to collect data. The growth in popularity of these devices has created new opportunities to reach participants and collect data rapidly: Pew Global reports that an overwhelming majority of the population in almost every nation they surveyed owns some form of mobile device, even if it is not considered “a smartphone.”
Mobile online surveys present multiple advantages. They can be used to reach populations that are increasingly hard to reach with traditional market research methods (i.e., younger audiences). Also, they usually have higher response rates as mobile surveys are convenient to answer.
They also have lower overall costs as it takes less effort to reach the minimum sample size, and mobile surveys can use some of the device’s features (e.g., the global positioning system, the camera, or the microphone).
Finally, as participants often have their mobile device on them, they respond quickly, leading to immediate feedback and even enabling event-based surveys.
Some of the downsides of mobile marketing research are reduced response rates depending on the group targeted (some demographic groups are underrepresented on mobile devices), the questionnaire has to be shorter and simpler (making it difficult to collect in-depth data), and some practitioners have found that higher incentives are necessary to gain traction.
Also, some people find receiving online survey invitations particularly intrusive (akin to getting a phone call during supper). The overuse of mobile online surveys could lead to members or customers opting out of getting contacted in future communications.
There are a variety of ways to collect participants for the survey, ranging from push methods (where the survey is sent out to participants independently of location and actions) to pull methods (the survey is sent to participants in a specific location, or those purchasing a specific product).
Once your survey is built, the next step is to get participants to answer your survey. If you are surveying existing clients or interested parties, you can invite the participants directly to participate in your survey. If you need to survey people at large, there are a number of solutions that are available to you.
The first solution is to purchase a list or database of prequalified e-mails. A number of associations and private organizations sell lists of prequalified participants and databases that can be used for survey purposes. The price of these lists can vary a lot, from a couple hundred dollars to thousands of dollars.
You have to remember that purchasing a list does not automatically mean that you have the authorization to contact the participants, so it is important to verify whether the database consists of participants that have opted in to receive communications from third parties.
Furthermore, if you are sending out your survey to a population at large without any consent of solicitation, you may run afoul of anti-spam regulations. Make sure you comply with existing regulations by adhering to local requirements (e.g., including a correct header and adding an unsubscribe link).
If you are contacting individuals with whom you have an existing relationship (customers, members, employees), they are usually excluded from such regulations. Double-check your national guidelines to be on the safe side.
The other alternative is to use a third party to recruit participants. For example, Survey Monkey offers this service, and each participant will cost you $1–$7, depending on the criteria of the participants you need.
Another alternative is Pollfish, which uses proprietary and third-party mobile phone applications to recruit and validate participants.
They currently have a database of 370 million users throughout the world. In both cases, expect anywhere between 24 hours and 2 weeks to reach your target sample size.
As you only pay per completed survey, not per respondent (e.g., you don’t pay for incomplete surveys or for respondents who did not pass the screening qualification), these recruitment services are quite affordable for start-ups trying to get a better perspective on their market.
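As a rough sketch of how pay-per-complete pricing scales, assuming the $1–$7 per completed response range mentioned above (the function is a hypothetical illustration, not any vendor’s pricing API):

```python
def recruitment_cost(completed_responses: int, cost_per_complete: float) -> float:
    """Estimated panel cost when you pay only for completed surveys
    (incompletes and screened-out respondents cost nothing)."""
    return completed_responses * cost_per_complete

# A 150-person sample at the low end of the range:
print(recruitment_cost(150, 2.00))  # 300.0 dollars
```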
As an example, in a recent project, a client needed to better understand the purchase process of skincare products for patients undergoing radiotherapy. The secondary information available on the topic was quite limited, so we ran an online web survey using a third-party recruitment service.
The simple web survey was online in 24 hours, and within 1 day, we had a sample of 150 people sharing insights on their purchasing process, favored brands, and pricing sensibility for less than $300.
This type of personalized market research would have been impossible just a few years ago and speaks volumes about how emerging technologies are shaping the way we do market research.
The Delphi Method
The Delphi method is an interesting methodology that is used to get information from experts and key opinion leaders on future trends. It is suitable when the researcher is faced with limited secondary information or a very complex topic.
The methodology is simple. A group of experts is individually asked a series of open-ended questions on a specific issue (usually by e-mail, although this was traditionally done by mail).
The moderator then compiles the answers he has obtained, identifying the majority position of the group and listing the arguments for and against it. He then shares the majority position with participants, along with a new round of questions, which he sends out for additional feedback.
As such, the participants are indirectly confronted with the opinions of other experts in an anonymous setting. This process is usually repeated until a consensus forms around the topic being discussed, but some researchers suggest that a maximum of four iterations should be done before concluding a Delphi group.
Also, divergent opinions should be included in the report with their supporting arguments.
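The iterative question–compile–feed-back cycle described above can be sketched as a simple loop (a hypothetical outline only: `ask_experts`, `compile_positions`, and `consensus_reached` stand in for the moderator’s manual work):

```python
MAX_ROUNDS = 4  # some researchers suggest at most four iterations

def run_delphi(experts, initial_questions, ask_experts,
               compile_positions, consensus_reached):
    """Skeleton of a Delphi study: question, compile, feed back, repeat."""
    questions = initial_questions
    report = []
    for round_number in range(1, MAX_ROUNDS + 1):
        # Each expert answers individually and anonymously.
        answers = ask_experts(experts, questions)
        # The moderator identifies the majority position, keeps the
        # dissenting arguments, and drafts the next round of questions.
        majority, dissent, questions = compile_positions(answers)
        report.append((round_number, majority, dissent))
        if consensus_reached(answers):
            break
    return report  # divergent opinions stay in the report
```

The loop makes the stopping conditions explicit: either consensus is reached, or the moderator concludes after the maximum number of rounds, carrying the dissenting arguments into the final report.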
Hence, a Delphi group involves the systematic questioning of renowned experts to explore a breakthrough topic. This methodology is quite useful for forecasting trends, and for getting experts who are difficult to reach individually (as responses are completed at the convenience of the participant) while conferring anonymity on the participants.
Typically, participants are not revealed to each other, even at the end of the project. The systematization of data collection, combined with the anonymity given to participants, minimizes the halo effect that some key opinion leaders can have on other participants, and enables participants to re-adjust their opinions more freely throughout the process. It is also relatively inexpensive.
However, the Delphi method results are tied to the competency of the moderator managing the group. An inexperienced moderator might lead the group to a false consensus, or lose participants as the group progresses.
Also, there is a tendency to eliminate more radical options and focus on more mainstream compromises. Finally, the method is more time-consuming and lengthy than a traditional focus group.
In this method, the bulk of the work is done by the moderator. He is responsible for generating the initial questionnaire, following up with participants, compiling the information as it is generated, and facilitating the panel.
He is also responsible for coaching participants and training them if they are not familiar with this methodology. His skills in written communication will have a direct impact on the success of the panel.
The researcher should be careful when recruiting participants and he should make sure they have been informed that the process is time-consuming. He should account for participant attrition, and plan accordingly by initially over-recruiting.
While some researchers advocate for the recruitment of up to 30 participants initially, too many participants lead to complexity issues in data compilation and coordination: up to 15 participants is a good starting point, which takes into account participant attrition.
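A quick way to size initial recruitment against expected attrition (the function name and attrition figure are illustrative; real attrition rates vary widely by panel):

```python
import math

def initial_recruits(target_completers: int, attrition_per_round: float,
                     rounds: int) -> int:
    """Participants to recruit so that, after `rounds` rounds of
    attrition, roughly `target_completers` are expected to remain."""
    retention = (1 - attrition_per_round) ** rounds
    return math.ceil(target_completers / retention)

# To finish a three-round Delphi with ~10 experts,
# assuming 12% attrition per round:
print(initial_recruits(10, 0.12, 3))  # 15
```

At these assumed rates, the estimate lands on the 15-participant starting point suggested above.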
One thing to note is that the Delphi method is recommended for use in a healthcare setting as a reliable means of determining a consensus for a defined clinical problem but might have limited practicality in market research situations looking to generate some market-oriented data (such as market size and growth).
Observation
Collecting data through observation enables the researcher to monitor and see how a product is used or a situation is handled in the real world. While the researcher is watching how individuals react in everyday occurrences, he gains valuable insight.
One of the main strengths of observation is that the data being collected does not depend on the impressions or knowledge of the individuals being observed, but rather on the researcher’s interpretation of the actions they are performing.
Participant observation allows interaction, while recorded observation is easier to analyze. For example, a market researcher could work as a volunteer in a hospital to better understand the patient’s waiting experience, chatting with patients who have been waiting for a while.
In this case, the observer has to be careful not to “ contaminate” the observation experience and influence responses, and has to be mindful of potential bias: it is quite easy to become attached to individuals, thus biasing the evaluation process.
Finally, the presence of the participant–observer can change normal patterns or functions, and the researcher should account for the impact that his presence will have on normal processes.
Coming back to the earlier example on advanced insulation, my client was developing a very robust insulation container, and wanted to see if it could be used by fishermen. To understand how fishermen use existing insulation containers in their everyday setting, I went to a dock and observed fishermen coming back from fishing trips.
The objective was to record and observe how the fishermen handled their insulation containers after a day at sea.
The observations were surprising: in some situations, the fishermen would simply throw their containers overboard when arriving at the docks from the sea. I also noticed that quite a number of containers were pierced by the forklift carriers.
As such, while competitors claimed in their literature that their products had a useful life of 10 years, our observation at the dock revealed that everyday usage was brutal and incompatible with the product my client was developing.
One key difference between my client’s product and traditional insulation is that a normal insulated container can be pierced by a forklift, “patched” with insulation foam and some duct tape, and then sent back to the docks.
It would still have an estimated 75%–80% efficiency rating, whereas my client’s technology required 100% product integrity. The industry’s definition of functional included heavily damaged containers, a fact that observation brought to light.
There are some conditions for using the observation method. When choosing this methodology, the researcher should make sure that the information is observable or inferable: observing a customer’s internal purchasing decision process wouldn’t be possible, but observing how they use a product is.
Also, the phenomena to observe must be frequent, repetitive, or predictable. If not, it becomes too costly to observe. Finally, the observation must be relatively short in duration to be cost-effective.
As mentioned earlier, observation has a number of strengths over other tools. First, it allows direct access to the person being observed without the use of intermediaries.
As the researcher is observing, there are fewer opportunities for the subject to respond dishonestly. Finally, it allows the researcher to observe the subject in his normal environment, which allows him to notice elements that the subject himself is unaware of.
The disadvantages are also pretty steep. First, as the observer is collecting the information, he is inserting himself as a part of the data collection equation. His point of view, past experiences, and even mental state (fatigue, emotions) can all taint the information he is collecting.
Use in Life Sciences
Observation can be used in a variety of situations. For example, it can be used in environments with patients, observing how medical devices are being used in the healthcare environment, or by a patient in a home setting. It can also be useful to learn how patients are interacting with doctors in certain situations.
One of the emerging and interesting variations in observation is the use of wearable technologies as a means to observe and understand true patient behavior. Wearable technologies are devices that are worn by individuals and that monitor the person’s movement, heart rate, sleeping pattern, and more.
There are many different devices: some are worn on the wrist, some as vests, some on footwear, and more. Contract research organizations (CROs) are already using them as a way to monitor participants in research studies, and the U.S. Food and Drug Administration has issued guidance presenting regulatory views on the use of mHealth technologies in clinical trials.
The increasing prevalence and familiarity of these devices among consumers, as well as the increase in accuracy, has attracted interest from many of the big players in this space.
There is also a relevant trend in the use of the participant’s own mobile device in clinical trials, called Bring Your Own Device (BYOD) studies. As of 2016, Kara Dennis, managing director of Mobile Health (mHealth) at Medidata, stated that the concept of BYOD had reached “an inflection point,” noting that one-third of the company’s mobile health trials include a BYOD or hybrid BYOD component.
In these trials, subjects can use their own smartphone or tablet to complete study-related tasks or if they do not have a qualifying device, they can be provided with one.*
Benefits of BYOD include lower training costs and lower setup costs (users are already familiar with their devices), but there is concern over Big Brother anxiety (as some users find continuous monitoring unsettling).
Besides, the Scanadu story from 2016 (where users who purchased a device for a clinical study were not told it would be shut down after a predetermined date to comply with FDA regulations) might stop the whole BYOD space in its tracks, especially in situations where consumers have a vested interest in the device.
Wearable technologies automate data collection as the researcher does not need to question participants. He can simply monitor their behavior through the device.
They also produce much more precise data as participants do not have to estimate answers and there are no issues relating to recalling answers; the researcher has access to the exact data precisely calculated by the device, which he can obtain through an Internet connection.
As such, this method mitigates one of the key disadvantages of observation, which is the time necessary to observe participants.
Apple’s Health app, built on its HealthKit framework, is an interesting tool dedicated to wearables; it consolidates health data from the iPhone, Apple Watch, and third-party apps, turning the phone into a patient-monitoring tool.
It can be programmed to monitor a number of metrics, such as heartbeat and movement, and can send automated messages once a predetermined condition is reached.
The increasing use of consumers’ wearable fitness trackers can also be a boon for companies wishing to understand consumer patterns. And there’ s no telling what the future holds in this space.
Imagine using Google Glass (a head-mounted display shaped like glasses that can record activity) to monitor a person’s shopping experience, and later analyzing shopping patterns.
Or, feasibly, a company could use 3D virtual tools to enhance focus group participants’ experiences by simulating environments. The emergence of, and rapid changes in, technology will shape how market research observation occurs and how data can be collected.
Mystery Shopping
Mystery shopping is a market research activity in which trained individuals are engaged to evaluate the quality of service or compliance with regulations. To do so, these individuals are sent to experience the organization from the outside, as everyday customers.
During the exercise, the locations that are the subject of the mystery shopping activity are not aware that they are being evaluated. This allows the head organization to better appreciate how it is performing, and to adjust processes and internal procedures accordingly.
The mystery shoppers act as much as possible like everyday consumers, asking questions, purchasing products, or following certain scenarios, and then reporting their experiences back to the market researcher coordinating the effort. Mystery shoppers can investigate a number of items such as
The customer experience (number of employees, ease of service, the attitude of employees)
The facility (cleanliness, ease of navigation)
The purchasing experience (time spent to purchase a product, ease of transaction)
The post-purchasing experience (ease of returning the product or using the return policy)
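To make reports comparable across locations and shoppers, the four evaluation areas above are often captured on a simple scoring grid. The sketch below is hypothetical; the field names and 1–5 scale are illustrative, not a standard instrument.

```python
# Hypothetical structure for a mystery-shop report, mirroring the four
# evaluation areas above. All names and the 1-5 scale are illustrative.

from dataclasses import dataclass, asdict

@dataclass
class MysteryShopReport:
    customer_experience: int  # 1-5: staffing levels, ease of service, attitude
    facility: int             # 1-5: cleanliness, ease of navigation
    purchasing: int           # 1-5: time to purchase, ease of transaction
    post_purchasing: int      # 1-5: ease of returns / return policy

    def average_score(self) -> float:
        """Overall score: the mean of the four area scores."""
        scores = asdict(self).values()
        return sum(scores) / len(scores)

report = MysteryShopReport(customer_experience=4, facility=5,
                           purchasing=3, post_purchasing=4)
overall = report.average_score()
```

Structuring the report this way lets the coordinating researcher aggregate scores across many visits and spot the weakest area of the service chain at a glance.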
While it is possible to do mystery shopping in-house, it is more efficient to hire third-party companies or freelancers. They are more likely to reflect the point of view of an unattached observer, are unlikely to color their observations with insider knowledge, and are less likely to be recognized by the staff being evaluated.
Finally, it is possible to use mystery shopping to evaluate competitors, but a certain number of ethical guidelines should be respected. First, the mystery shopper should not record competitors’ employees with a recording device as these employees have not given their consent.
Also, the length of time spent should be the equivalent of a normal market transaction (so as to not waste competitors’ resources unethically). Finally, the evaluation scenario should not require a follow-up call from the company being observed, unless this is a normal part of this transaction.
Globally, if the mystery shopper is asking questions that a shopper usually asks (e.g., price, quality, or availability of goods) and if no confidential business information is sought or revealed, then the common sense rule of “no harm, no foul” applies.*
That said, even if these guidelines are respected, some experts believe that mystery shopping competitors places you in a gray area of market research, exposing you to a risk of liability.
Use in Life Sciences
Mystery shopping is quite useful for mature organizations that are already generating sales (or offering a service), and need to gain a better understanding of their customer experiences. It is also quite popular in healthcare facilities, as well as in new healthcare mobile application technologies.
In mature organizations (generating sales), mystery shopping is useful to evaluate the customer process and identify pain points across the inquiry, purchasing, and retention phases. If you have trouble generating repeat purchases, consider a mystery shopping initiative to evaluate your own internal processes.
Healthcare facilities use mystery shopping to evaluate the level of service they are offering. It provides insight into the patient’s experience and identifies gaps in the service chain. As such, a typical healthcare mystery shop could include
1. Scheduling an appointment with a healthcare provider
2. Visiting the healthcare center
3. Completing a medical consultation
4. Reporting back on the patient experience
Also, mystery shops are often used by technology companies developing digital applications. For example, some firms will engage mystery shoppers to download, install, and register their app and then report on the ease of use, issues during installation, and overall feedback.
Finally, some telemedicine apps use mystery shoppers to evaluate the application’s ease of use and robustness. They will have mystery shoppers engage with doctors, inquire about medical conditions, and even purchase medications (OTC and prescription) online through the app.
Typically, various elements are being verified, from the customer experience to the quality of service all the way to adherence to regulation (is the customer able to purchase restricted products, or is regulatory compliance enforced?).
The Market Research Checklist
Sometimes, you want your research to go beyond the standard question of “What’s my market?” To help during the brainstorming and planning phase, here is a short list of questions that can be useful to orient your research efforts.
Questions break down into three distinct categories: your corporation (the market, the product), your consumers, and your competitors.
Who is the purchaser of the product/service? Who is the decision-maker?
What is their purchasing process? How can you reach them?
Who uses the product/service?
Is the purchaser different from the end user? How do these two stakeholders interact?
What is the product/service replacing?
What is the user presently using to fill their current need? What do potential customers/end users think of the product/service?
What is the profile of your customers (location, age, gender, income level, etc.)?
What are their needs? What need does the product specifically target?
What are the customer service and retention strategies that are needed?