Qualitative Research Practice: Implications for
the Design and Implementation of a Research
Impact Assessment Exercise in Australia
Australian Government—Department of Education
Prof. Lisa M. Given
School of Information Studies
Research Institute for Professional Practice, Learning and Education
Charles Sturt University
Dr. Denise Winkler
Ms. Rebekah Willson
School of Information Studies
Charles Sturt University
On 19 June 2013, the former Department of Industry, Innovation, Climate Change, Science,
Research and Tertiary Education (DIICCSRTE) and the Australian Research Council co-released
Assessing the wider benefits arising from university-based research: Discussion paper (the
‘discussion paper’). The discussion paper sought “the views of interested parties regarding a
future assessment of the benefits arising from university-based research. The proposed
assessment will include a strong industry focus and will be designed to complement the
assessment of academic impact being undertaken through ERA, the Excellence in Research for
Australia.”
Also in 2013, the Commonwealth engaged content experts to investigate specific aspects of a
research impact assessment methodology. Prof. Lisa M. Given was contracted to provide
evidence and analysis on the possible contribution that qualitative inquiry can bring to the
design, development and implementation of a research impact assessment exercise in Australia.
The primary goal of the project was to explore the implications of qualitative research practice
for the use of research impact case studies in such an exercise.
The project included three phases:
1) An environmental scan of the literature on qualitative inquiry and research impact;
2) A public workshop with key stakeholders (e.g., researchers, university administrators,
research office staff, communications staff, and others who may be involved in a
research impact assessment process) from institutions across Australia; and,
3) Qualitative interviews with key stakeholders involved in university-based research.
The environmental scan of relevant literature provided data for the design of an all-day
workshop held in Sydney, Australia, on 8 August 2013; twenty participants were presented with
information and activities to engage with the qualitative design practices outlined in this report
and to reflect on their application to a proposed research impact assessment process. University-
level workshop participants came from various institutions (e.g., University of Sydney; RMIT;
La Trobe; Central Queensland University) and disciplines (e.g., engineering; chemistry;
sociology; public health). Participants included a mix of early career, mid-career and senior
researchers, Associate Deans Research, communications staff, and Research Office Directors.
Representatives from Universities Australia, the Australian Technology Network and other
industry stakeholders also participated in this event.
The commentary provided at the workshop informed the design of ten in-depth individual
interviews, where additional information was gathered on the relevant themes. Interview
participants were also recruited from across Australia and represented a mix of disciplines and
career stages. Although some workshop participants and interviewees had very close working
knowledge of impact assessment schemes worldwide, others had little to no prior understanding
of this type of formal evaluation.
Department of Industry, Innovation, Climate Change, Science, Research and Tertiary Education (DIICCSRTE)
(2013), p. 4.
This report provides an overview of the key findings arising from all three phases of data
collection, with suggestions developed in the context of qualitative inquiry as it could be applied
to research impact case studies. Emergent findings related to research engagement are also
discussed. The discussion paper was a key reference point for the participants in the project,
given the timing of its release. This report presents analysis of the information gathered to inform
the design and development of a research impact assessment exercise, with a particular focus on
the use of research impact case studies, to benefit from available expertise in qualitative
inquiry.
Research Impact Assessment – General Context
Research impact is gaining international attention in the university sector, with increasing calls
for evidence of the economic, social and environmental benefits of publicly-funded research. In
the United Kingdom (UK), for example, this has resulted in an Impact Exercise as part of the
Research Excellence Framework (REF); the first round of that exercise is currently underway. In
Australia, the Go8-ATN Excellence in Innovation for Australia trial was conducted in 2012, to
explore the viability of a similar assessment exercise in this country. The findings of that study
point to a number of challenges with the development of case studies, which may well be
addressed by the use of an appropriately-designed qualitative case study approach. For example,
the report notes that “While some cases were very well written and explained, a number were
poorly written and lacked defined verifiable sources to back up claims.” Further, many cases
were based on prospective, rather than demonstrated, impact, and the time and resources needed
to gather data linking research to impact were described as significant.
Recent reports examining the value of research impact assessment also mention the potential
value of research impact assessment exercises for providing material that can be used, in other
ways, to highlight an institution’s research reputation and the value of research. They note that
Australian “universities have not been pro-active in articulating and communicating the impact
of this research in a manner that is readily understood by the broader community.”
What is Research Impact?
The discussion paper defines research impact as the “positive economic, social and
environmental changes that can be attributed to university research.” The aims of a research
impact assessment process, as outlined in the discussion paper, may be to:
1. Demonstrate the public benefits attributable to university-based research;
2. Identify the successful pathways to benefit;
3. Support the development of a culture and practices within universities that encourage and
value collaboration and engagement; and,
4. Further develop the evidence base upon which to facilitate future engagement between
the research sector and research users, as well as future policy and strategy.
See Appendix 1 for details on the data analysis process used to create this report.
See for example Grant et al. (2009); Geuna & Martin (2003); Morgan Jones et al. (2013); and, Group of Eight (2011).
Department of Industry, Innovation, Science and Research (2011).
Group of Eight and Australian Technology Network of Universities (2012), p. 6.
Ibid., pp. 6-7.
Group of Eight (2011); Group of Eight and Australian Technology Network of Universities (2012); and, Morgan
Jones et al. (2013).
Group of Eight and Australian Technology Network of Universities (2012), p. 5.
DIICCSRTE (2013), p. 5.
What are the Expected Outcomes of a Research Impact Assessment Exercise?
Further, the paper notes that the outcomes of such an exercise include:
1. Providing an evidence base for decision making by universities, government and industry;
2. Promoting engagement both between university researchers and potential research users
and in the sector;
3. Promoting research outcomes and engagement strategies of Australia’s publicly funded
universities;
4. Providing an evidence base for benchmarking standards within the university sector; and,
5. Linking outcomes to funding allocations.
Overall, the discussion paper notes that the principles for the design and implementation of a
research impact assessment are to provide useful information to universities (with information
collected and assessed at the institution level, with some disciplinary granularity), while
minimising the administrative burden of such an exercise. The aim of the exercise is to
encourage research engagement and collaboration outside of academe, to encourage research that
benefits the nation and to involve research users in the assessment process.
What Measures are Proposed to Assess Research Impact?
The discussion paper proposes that the research impact exercise consist of two distinct
methodologies – i.e., the collection and assessment of: 1) research engagement metrics; and, 2)
research benefit case studies. Research engagement metrics are proposed as indicators of
pathways to research benefits and should meet the following criteria:
• Be quantitative, research relevant, verifiable and comparable;
• Be repeatable and time-bound;
• Be sensitive to disciplinary differences; and,
• Quantify relevant pathways to research benefits.
Research benefit case studies “are a narrative method whereby an institution is able to describe
research benefits.” A case study-based assessment should be designed to:
• Include key information to enable effective and verifiable comparison;
• Have evidence supporting the claim(s) made; and,
• Capture and encourage cross-sectoral engagement.
Ibid., p. 6.
Ibid., p. 6.
DIICCSRTE (2013), pp. 6-8.
Ibid., p. 9.
Ibid., p. 14.
Ibid., p. 14.
The discussion paper notes that each institution might only submit a “limited sample” of case
studies; separate research areas and/or separate institutions could submit joint case studies.
Although there are no prescribed metrics or data to include, institutions would provide “any
relevant and verifiable data” in the case study, so that claims made are verifiable. The case
studies would be assessed “primarily by research end-users” on panels created for that purpose.
The criteria for assessment might include:
• Reach (i.e., the spread or breadth of the reported benefit);
• Significance (i.e., the intensity of the reported benefit);
• Contribution (of the research to the reported benefit); and,
• Validation (i.e., key impact claims are able to be corroborated).
1. Qualitative Research Practice – An Overview
A key goal of this report is to provide details on the current thinking within qualitative inquiry on
research assessment, including measures of research impact. As far back as 1994, “the Australian
Vice-Chancellors’ Committee proposed that qualitative aspects should be incorporated” into
processes of research evaluation. Recently, public responses to the discussion paper have
highlighted the need for qualitative approaches to impact assessment, in particular. This first
section of the report is designed to inform a non-expert audience about relevant components of
the broader world of qualitative assessment (e.g., the nature of rigour in qualitative research; use
of specific methodologies and methods; etc.).
1.1 Qualitative Research Paradigm
Qualitative research is grounded in an epistemological commitment to a human-centred approach
to research, highlighting the importance of understanding how people think about the world and
how they act and behave in it. Key principles of qualitative inquiry include gathering naturally
occurring data, exploring meanings (rather than behaviours alone), and crafting studies that are
inductive and hypothesis-generating, rather than ones that involve hypothesis testing.
Qualitative studies typically describe phenomena about which little is known; they capture
meaning (such as individuals’ thoughts, feelings, behaviour, etc.) instead of numbers, and
describe processes rather than outcomes. To understand how individuals make sense of their
worlds, researchers ask people, directly, what they believe to be important about the topic or
issue under study. Qualitative projects are typically designed to:
• Acknowledge that knowledge is socially constructed and inextricably linked to people’s
backgrounds, histories, cultural place, etc.;
• Present an inductive understanding of participants’ experiences;
• Reflect a dynamic, reflective and continuous process;
• Embrace context, bias and subjectivity;
Ibid., p. 14.
Ibid., p. 17.
Geuna & Martin (2003), p. 294.
Australian Open Access Support Group (2013), para. 16; Knowledge Commercialisation Australia (2013), p. 5.
Social Sciences and Humanities Research Ethics Special Working Committee (SSHRESWC) (2008), p. 2.
Silverman (2000), p. 8.
Creswell (2007), pp. 37-39; Palys & Atchison (2008), p. 9.
Palys (2008), p. 9.
• Focus on research partnerships/engagement; and,
• Foster the emergence of concepts, theories and strategies, in the course of the research.
1.2 Qualitative Methodologies – Relevant Approaches for Documenting Impact
To achieve these goals, qualitative researchers use a range of methodologies (e.g., grounded
theory; phenomenology; case study; narrative inquiry) and methods (e.g., interviews;
observation; focus groups; diaries) in their projects. The case study methodology has a long-
standing history in qualitative research practice. A case study design “aims to understand social
phenomena within a single or small number of naturally occurring settings. The purpose may be
to provide description through a detailed example or to generate or test particular theories.”
Various stakeholders may be involved in the design and implementation of the project, as
participants and/or as co-researchers in the investigation. Although the design and
implementation of specific methodologies will differ, many qualitative projects start by listening
to individuals engaged in the situation.
Triangulation of methods is often used to draw on the strengths of specific methods to best
explore a phenomenon from multiple points of view. For example, a case study of a classroom
environment might include:
• Interviews with various stakeholders (students, teachers, parents, etc.);
• Observations of classroom teaching, playgrounds, and the teachers’ lounge;
• Textual analysis of instructional materials and students’ work; and,
• Other methods, to capture evidence from various data sources.
Data are gathered using many different data collection tools (e.g., fieldnotes; audiorecordings;
videorecordings; photographs; etc.) and may extend over many hours, weeks, or years. Multiple
sources of evidence are tracked, with materials organised to maintain an appropriate chain of
evidence. Participants are included in the study using various sampling approaches (e.g.,
maximum variation; purposive; snowball) to “cover the spectrum of positions and perspectives in
relation to the phenomenon one is studying.” Given the depth of data collection and analysis,
sample sizes are typically small; theoretical sampling to saturation of analytic themes with a
particular set of participants is common.
Many qualitative research projects are designed, very purposefully, as ‘community-based’
studies. Community-based participatory research, for example, is “an orientation to research that
focuses on relationships between academic and community partners, with principles of
colearning, mutual benefit, and long-term commitment and incorporates community theories,
participation, and practices into the research efforts.” In documenting research impact, for
example, the main community members would include researchers and end users of the research
SSHRESWC (2008), p. 3.
Bloor & Wood (2006), p. 28.
See for example Jordan (2008), p. 602 and Finley (2008), p. 98.
Clandinin & Caine (2008).
Yin (2014), pp. 123-126.
Palys (2008), p. 699.
Wallerstein & Duran (2006), p. 312.
outcomes. A community-based approach has collective action to produce change as one of its
primary goals; it typically involves:
• “values, strategies, and actions that support authentic partnerships, including mutual
respect and active, inclusive participation;
• power sharing and equity;
• mutual benefit or finding the “win-win” possibility; and
• flexibility in pursuing goals, methods, and time frames to fit the priorities, needs, and
capacities of communities.”
Qualitative content analysis is another strategy for gathering and analysing data, which is used
for the exploration of textual data sources (e.g., photographs; websites; policy documents, etc.).
Qualitative content analysis “focuses on interpreting and describing, meaningfully, the topics and
themes that are evident in the contents of communications when framed against the research
objectives of the study.” Increasingly, this type of analysis is also being applied to various
social media, including content appearing on Facebook postings, in Twitter feeds, and on blogs.
In documenting research impact, researchers could use qualitative content analysis to examine
research users’ discussions of impact or to explore policy documents developed based on the
research.
Qualitative researchers are able to extend their analyses beyond the data gathered from human
participants, directly, to a range of other data sources available on a specific topic. Texts are
often used as part of the process of triangulation of data sources, as well, to complement
evidence gathered during interviews, focus groups, observational research, and with other
methods of data collection.
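Qualitative content analysis is an interpretive, human-led process, but the mechanics of
organising coded excerpts can be supported with simple software. As a purely illustrative
sketch (the codebook, indicator phrases and excerpts below are all hypothetical, and the
analytic work of defining and applying codes remains with the researcher), a small script can
group textual excerpts under analyst-defined themes:

```python
# Illustrative sketch only: the codebook themes, cue phrases and excerpts
# are invented for demonstration; real coding is done by the researcher.
from collections import defaultdict

# A hypothetical codebook mapping analyst-defined themes to indicator phrases.
codebook = {
    "policy change": ["policy", "guideline", "regulation"],
    "practice change": ["practice", "procedure", "workflow"],
    "public engagement": ["community", "media", "twitter"],
}

def code_excerpt(excerpt, codebook):
    """Return the themes whose indicator phrases appear in an excerpt."""
    text = excerpt.lower()
    return [theme for theme, cues in codebook.items()
            if any(cue in text for cue in cues)]

excerpts = [
    "The hospital revised its triage policy after reading the study.",
    "Nurses changed their admission practice based on the findings.",
    "The results were widely discussed by the community on Twitter.",
]

# Group excerpts by theme, preserving the quotes as supporting evidence.
themes = defaultdict(list)
for e in excerpts:
    for theme in code_excerpt(e, codebook):
        themes[theme].append(e)

for theme, quotes in themes.items():
    print(theme, "->", len(quotes), "excerpt(s)")
```

In practice such keyword matching would only be a first pass; qualitative analysts refine codes
iteratively against the full context of each excerpt.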
1.3 Qualitative Communication Strategies – Telling the Story of Impact
The results of qualitative research are presented in narrative form and designed to “give voice” to
the participants and texts involved in the project, including multiple and/or conflicting
interpretive positions. The end result is a continuous, narrative account of people’s experiences
of a phenomenon, whether a complete life history or a discrete, singular event. In documenting
research impact, for example, the narrative would include research users’ own accounts of the
impact of the research outcomes, alongside the research team’s analysis of impact. The goal of
the narrative account is one of “shaping and organising experiences into a meaningful whole.”
Qualitative writing conventions often include the following:
• Explicit statements on the researcher’s part in the narrative (i.e., embracing subjectivity);
• Participants’ voices included as part of the narrative;
• A range of views on the topic (including anomalies or ‘negative findings’); and,
Jones & Wells (2007), p. 408.
Williamson et al. (2013), p. 427.
Fabian (2008), p. 944.
Bloor & Wood (2006), p. 120.
Chase (2011), p. 421.
• Evidence drawn from multiple data sources.
There are many texts that explore the nature of qualitative writing in depth. Generally, in
qualitative writing, researchers determine the point of view and the voice to present, with some
features (e.g., first vs. third person) dependent on disciplinary and/or theoretical practices
relevant to the investigation. Concrete examples, quotes and other forms of evidence are
provided to support the claims being made, with all details formed into a coherent story that will
resonate with the specific reader being addressed in a particular piece of writing (e.g., academic;
community member; student).
In this way, the narrative will change depending on the audience being addressed; the narrative is
tailored to suit the audience, content, message and dissemination medium. Some quantitative
data may be presented, as well; however, qualitative findings are typically presented in narrative
form where the results and discussion are woven together, in a coherent whole. Dissemination
may be through traditional, text-based media, such as journal articles, community newsletters,
etc., or through non-traditional (often multimedia) means, including plays, videos, poems, etc.
1.4 Qualitative Rigour – Evaluating the Story of Impact
Qualitative research data are gathered systematically with strategies designed to ensure rigour
during data collection, analysis and writing the results of the project. Qualitative researchers
across disciplines have developed practices, over several decades, to ensure the rigour of their
work. Triangulation of data sources, the use of multiple research methods, involvement of
multiple data coders, active engagement of participants with opposing views, member checking,
and extended observation are just a few of the strategies employed to enhance rigour.
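Where multiple data coders independently code the same material, agreement between coders is
sometimes quantified as one check on consistency. As a hedged illustration only (this is not a
prescribed part of any assessment exercise, and the labels below are invented), Cohen's kappa
can be computed for two coders' labels over the same set of excerpts:

```python
# Hypothetical sketch: Cohen's kappa as one intercoder-agreement measure.
# The two coders' label sequences below are invented for illustration.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' labels over the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of items on which the coders agree.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement by chance, from each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum(freq_a[l] * freq_b[l] for l in labels) / (n * n)
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)

a = ["impact", "impact", "engagement", "impact", "engagement"]
b = ["impact", "engagement", "engagement", "impact", "engagement"]
print(round(cohens_kappa(a, b), 2))  # prints 0.62
```

A low kappa would prompt the coders to discuss divergent codes and refine the codebook, rather
than simply averaging their judgements away.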
For example, in a study of young children’s experience in a hospital emergency ward, a
qualitative researcher will explore the topic from multiple perspectives using triangulation of
sources and methods. Interviews may be conducted with the children and their family members.
Focus groups may be held with physicians, nurses, and other healthcare practitioners.
Observational data may be gathered in hospital waiting rooms, to see what children, their
families and healthcare providers do while children are waiting to be seen by a physician. A
content analysis of published documents (e.g., hospital policies; brochures provided to parents)
may be completed, to see how children’s needs and experiences are addressed. These various
data are analysed for patterns and themes, providing evidence of children’s experiences in the
emergency ward, and pointing to areas for potential change in practice to suit children’s needs.
Over time, and with appropriate resources, research data could then be gathered using similar
methods to demonstrate the impact of the research. Interviews and focus groups could be
conducted with healthcare practitioners to explore changes to practice arising from the research
results; additional observational and interview data could provide evidence of how those changed
healthcare practices are affecting children and their families. In designing a case study about the
See for example Yin (2011); and, Ely (2007).
See for example Sandelowski (1998); Wolcott (2009); Pratt (2009); Boylorn (2008); Keen and Todres (2007);
and, Saldana (2011).
Saumure & Given (2008).
impact of this research, triangulation of methods, sites, sources and participants would be very
similar. However, the focus of the investigation will change, to gather data about the impact of
the original study results. New research questions need to be investigated, with new data
gathered to demonstrate evidence of impact. Research questions might include the following, to
document changes that have occurred since the original research was conducted:
• How have healthcare practitioners’ practices changed?
• How have children’s experiences of care in the hospital emergency room changed?
• How have information sources provided to children and their families changed?
• How have hospital policies changed?
Triangulation is a technique commonly used in qualitative projects. Table 1 provides an
overview of how triangulation might look in a study of children’s experiences in the hospital
emergency room. With each method, site, data source and participant group, the size and scale of
the study – and of the evidence gathered for analysis – grow tremendously. For this reason,
qualitative projects can extend over several weeks, months or years. Given the emergent nature
of qualitative analysis, choices about what sources or participants to add to the study may be
made in the field, once initial data are gathered.
Table 1: Research Problem: How do young children experience care in the hospital emergency room?

Triangulation of Methods | Triangulation of Sites | Triangulation of Sources | Triangulation of Participants
Interviews | Hospital | Transcripts | Children, families, healthcare practitioners
Interviews | Child’s home | Transcripts | Children, families
Observation | Hospital | Video-recordings & research field notes | Children, families, healthcare practitioners
Observation | Child’s home | Video-recordings & research field notes | Children, families
Content analysis | Hospital | Policy documents, patient brochures, etc. | Healthcare practitioners, hospital library
Content analysis | Child’s home | Parenting guides, library resources, websites, etc. | Children, families, school & public libraries
These strategies have been refined over several decades, across disciplines, and using various
qualitative methodologies. In community-based participatory research, for example, assessment
of quality and rigour are embedded in the design and implementation of projects that involve
users, directly, in the research. Similarly, in case study approaches, triangulation, prolonged
engagement, and other measures of rigour are central to the research practices employed to
investigate a specific situation, location or phenomenon.
Yvonna Lincoln and Egon Guba (1985) outlined the concept of ‘trustworthiness’ and the various
criteria that mark rigorous qualitative research, in a landmark work that remains highly
See for example Waterman et al. (2001); Lee et al. (2008); and, Quigley et al. (2000).
influential among qualitative researchers. In place of quantitative terms used to denote rigour
(e.g., validity, generalisability and reliability), which are not appropriate for use in qualitative
studies, new terms were introduced:
• Credibility;
• Transferability;
• Dependability; and,
• Confirmability.
Each of these criteria is marked by several techniques that may be used to ensure rigour and
quality in research design and implementation, such as:
• Triangulation. The use of multiple methods, research sites, data sources and participants
to investigate a research problem from various perspectives;
• Peer debriefing. During data collection and analysis, a researcher will consult with peers
(at times, sharing excerpts of datasets) to seek advice on development of research themes.
Peers may be co-investigators in the project or independent scholars, with expertise in the
methods being employed;
• Audit trails. Researchers keep field notes during data collection and analysis, which
track decisions made about evidence gathered throughout the project;
• Member checking. Researchers may consult with participants or other group members to
see if the analysis resonates with these individuals; and,
• Prolonged engagement. Researchers may spend weeks, months or years working in
particular research sites, to gain as much knowledge as possible from the perspective of
the research participants.
Researchers need to decide which techniques are appropriate for a given study, depending on the
methodology and overall design of the project, as well as access to particular sources of data.
2. Qualitative Impact Workshop & Interviews – Findings
This section of the report presents key findings arising from the public workshops and individual
interviews. These findings are addressed in the context of qualitative inquiry, generally, with
references to relevant literature provided; where appropriate, the discussion paper is referenced,
given its influence on the discussions. The section begins with an exploration of research
engagement, since qualitative practices can be applied to an assessment of pathways to
engagement, as well as to case studies. Next, an exploration of qualitative inquiry in research
impact case study design, development and implementation is provided. Quotes from interview
participants are provided at the start of and within each sub-section to illustrate key points of
evidence. This approach is in keeping with qualitative writing practices, where participants’
voices are central to the interpretation of the data presented.
Lincoln and Guba (1985), pp. 301-327.
2.1 Extending Beyond Metrics to Document Research Engagement Using Qualitative Practice
“I’ve had major pick up of research by companies and also by media because it’s gone
out by Twitter... And then I actually have people from corporations emailing me directly
and saying, ‘Can I get a copy of the article?’ and we start conversations.”
Overwhelmingly, participants expressed a preference for a multifaceted, “holistic style of
assessment,” which would not reduce the evaluation to an exercise solely reliant on numbers and
metrics. They agreed that metrics could be useful, where available, but needed to be
complemented by qualitative approaches to data gathering. This applied not only to the case
studies, but also to the strategy proposed in the discussion paper for documenting research
engagement, as noted in the appendix of the discussion paper (e.g., consultancies; patents;
licenses; etc.). Participants raised a number of concerns, many of which have been highlighted
previously in other publications:
• Research engagement metrics are not appropriate across all disciplines;
• Although some data may be easily tracked (e.g., patents), these do not necessarily lead to
demonstrable research benefits;
• Institutions and researchers do not have ready access to most metrics, especially over the
longer term, given the time lags that exist between research outcomes and research ‘use;’
• As researchers change institutions, tracking data on research engagement and/or impact is
difficult; and,
• When research users are unknown to the researcher and/or to the institution, tracking
engagement/impact data is almost impossible.
The plan to use Socio-Economic Objective (SEO) classification as a unit of evaluation for impact
assessment was also discussed by participants in this study. The discussion paper notes that
“SEO classification allows R&D (research and development) activity to be categorised according
to the intended purpose or outcome of the research, rather than the processes or techniques used
in order to achieve this objective. The purpose categories include processes, products, health,
education and other social and environmental aspects that R&D activity aims to improve.”
However, there are a number of limitations that may be associated with SEO codes:
• The actual, demonstrated impact of the research (especially years later) may be quite
different from what the researcher intended or believed would occur when SEO codes
were designed at the project’s outset;
• Relying on the SEO alone may leave a number of research users unidentified in the
process of gauging research impact;
• Limiting potential users to those included in the SEO code will affect the credibility of
the assessment, as involving all types of research users is needed for appropriate
validation; and,
• Some disciplines do not have clear links to R&D, as defined by the SEO classification
scheme, which would constrain the potential identification of research impact.
DIICCSRTE (2013), p. 22.
See for example Group of Eight (2011) and Grant et al. (2009).
DIICCSRTE (2013), p. 22.
The participants agreed that using qualitative research approaches to document engagement – in
addition to impact – would enrich the metrics-based approach by extending the body of evidence
available to assess pathways to impact. However, even where participants mentioned ongoing
and successful engagement with research users, they noted that they did not track details of that
engagement in systematic ways that would meet the standard for providing verifiable evidence.
As tracking and analysing such data falls outside of researchers’ everyday practices, they noted
the time, funding and technical supports needed to gather these data – particularly
longitudinally. Suggestions arising from these discussions include:
• Include qualitative measures of research engagement, alongside metrics, to capture a
more complete range of potential pathways, across disciplines;
• Identify other engagement metrics (e.g., altmetrics), including those that can provide
qualitative data for further analysis (e.g., Twitter feeds of users discussing research);
• Include strategies for identifying research users during and/or after the completion of the
project, not only at the project proposal stage (i.e., when SEO codes are applied);
• Encourage the use of qualitative content analysis to assess textual data provided by
research users (e.g., forum postings about a revised policy document); and,
• Encourage the use of qualitative methods (e.g., interviews; journals) to track research
users’ experiences, directly, from the time that projects are developed.
2.2 How Useful and Appropriate are Qualitative Case Studies for Documenting Impact?
“Case study methodology…has the potential to be very robust, to be trustworthy,
and…when done properly, people who perhaps are more comfortable with positivist or
quantitative research, can feel more comfortable”
Workshop attendees and interview participants also discussed the usefulness and viability of
reporting case studies of research benefit. Participants supported the idea that metrics, alone,
should not form the basis of a research impact assessment exercise; they also supported the
development of case studies or other approaches to illustrate the benefits of university research.
However, they were unclear as to who best would create these documents and/or the types of
data that should inform their development.
Overall, most participants were unfamiliar with case study methodology as it is enacted in
qualitative research. Concerns were raised, initially, about the objectivity and rigour of such
cases (i.e., when relying on a ‘lay’ understanding of case study, as it is often termed in practical
settings), highlighting the lack of knowledge of appropriate research practices for case study
development. Those familiar with case study methodology were adamant that this approach
would be beneficial to a research impact assessment exercise and – if enacted properly – would
be just as rigorous and trustworthy as other (i.e., metrics-based) approaches.
For details see Priem et al. (2011).
Blatter (2008).
Overall, a number of questions were raised about the use of case studies for impact assessment:
• How can case studies be compared across institutions and disciplines? What is the goal of
this type of assessment?
• How much time and money will be needed (for researchers and others, such as research
office staff) to gather data and develop case studies?
• Who will write the case study narratives? Researchers have some knowledge, but lack the
communications expertise needed to write for the proposed audience for the cases (e.g.,
panels comprised primarily of research users).
The burden on researchers and institutions in preparing case studies was the overriding concern
raised in the workshop and the interviews. However, participants also worried that case studies
might be eliminated from an assessment exercise (which would then rely solely on metrics) due
to the perception that cases are “too cumbersome” to prepare. Participants expressed a preference
for both qualitative and quantitative measures of research impact, with the appropriate resources
provided to gather and report the necessary data. This point was also raised in the Excellence in
Innovation for Australia trial, as universities reported challenges with the time and resources
needed to trace information from research to impact.
• Ensure that the design and assessment of case studies conforms to the research principles
and practices of qualitative case study methodology; and,
• Involve qualitative research experts in the design of data collection, analysis and writing
practices for case study development.
2.3 How do we Define ‘Research Impact’ for use in Qualitative Case Study Designs?
“Impact is something that is judged not by the person who generates the new
knowledge…but by the recipients of new knowledge of whether they find that actually it
makes a difference to the economic generation of a nation or the life of an individual.”
Overall, there was also a lack of intuitive understanding of the definition of ‘research
impact’ being used to shape an assessment exercise. This has implications for the
qualitative design of case studies, as researchers must ensure that data gathered are appropriate
and verifiable. The discussion paper defines research benefits as “positive economic, social and
environmental changes that can be attributed to university research,” and notes that such benefits
“do not include changes to the body of academic knowledge but may include improvements
within universities, including on teaching or students, where these extend significantly beyond
the university.” Only a few participants understood that the definition of impact referred to
a demonstrated change outside of academe, such as changing a child’s experience in a classroom
or developing an innovation that is used by industry.
Overwhelmingly, participants discussed the concept of ‘impact’ in academic terms, referring to
the impact factor of journals, citation rates, and other traditional measures of academic impact.
Where research users were mentioned, participants used such terms as “community engagement”
Group of Eight and Australian Technology Network of Universities (2012), pp. 6-7.
Ibid., p. 5.
or “outreach” and focused, primarily, on dissemination of research results (e.g., media coverage
about research), rather than tracking demonstrable change. The focus on academic impact is
quite common in research publications and in many of the support materials available to help
researchers measure ‘research impact,’ so this finding is not surprising.
However, this has significant implications for the types of data to which researchers (and
institutions) can point in tracking evidence of impact as defined in the discussion paper.
Documenting and tracking evidence of research impact outside of academe typically falls outside
the boundaries of a researcher’s daily practice, making ready access to verifiable data a
challenging proposition. The types of data required to demonstrate evidence of research impact
outside of academe are not typically tracked, or even available, to either researchers or their
institutions.
• Change the term ‘Research Impact’ to ‘Impact outside Academe’ to make the intention of
this measurement exercise clear and to distinguish it from academic measures of research
impact; and,
• Provide examples of the types of evidence that researchers could provide to document
demonstrated change in the community (e.g., interview data documenting tool use).
2.4 What Impact Data can be Collected, Stored and used in Qualitative Case Studies?
“Where are the resources going to be to not only compile these research impact cases,
but to try to collect the data to support them?”
The lack of available data noted in discussions of research engagement also extends to the lack
of evidence on hand to document research impact in qualitative case studies. Participants noted
that the “uptake” of research lies in the hands of other people, beyond the research team (such as
industry partners or individual citizens). Involving these research users in qualitative data
gathering exercises about research impact is vital to building the evidence base of impact data. It
may be possible, for example, to interview people who have benefited from the research, to track
trade publications mentioning the application of research in practice, or to maintain ongoing
discussions with policy-makers and others who have applied research findings to their activities.
However, participants took issue with a number of presumptions in the discussion paper that
would affect the data collection process, noting:
1. That research users may not be aware of, nor able to document, the link between
research projects and their current policies/practices; and,
2. That researchers may not be aware of, or in contact with, the various research users who
may apply the results of their research.
In addition to time lags and other challenges noted previously in documenting evidence of
impact, workshop participants and interviewees discussed the challenges for researchers and
institutions in gathering and reporting data that sit outside the boundaries of current research
activity. Overall, they questioned whether the “evidence base” for documenting research
See for example Griffith University Library (2013).
Group of Eight (2011); Grant et al. (2009).
DIICCSRTE (2013), p. 6.
impact was already being captured and documented by researchers and/or institutions, or
whether the type of evidence required could be gathered, at all. Participants noted that research
users may be influenced by many different sources of knowledge and, therefore, not able to draw
clear links to what specific element of a project had an impact on their work. Similarly, although
researchers share the results of their work publicly, they may not be able to track the influence of
that research, let alone gather and report verifiable evidence of its impact. Even where research
processes are very direct (e.g., funded by an industry partner), the evolution of ideas over many
years makes tracking and reporting impact data very difficult.
Where evidence can be gathered, data collection should ideally occur while the research is
ongoing and then continue after the project has ended. In case study designs, “qualitative
researchers collect data themselves through examining documents, observing behavior, and
interviewing participants. They may use a protocol – an instrument for collecting data – but the
researchers are the ones who actually gather the information. They do not tend to use or rely on
questionnaires or instruments developed by other researchers.” Purely retrospective data
gathering exercises, particularly when assessing impact many years beyond the end of a research
project, cannot provide a complete picture (or verifiable evidence) of the impact of the research.
Ideally, evidence needs to be gathered both during and following project implementation, so that
impact can be tracked ‘as it happens’ and then followed into the future, long after the
project ends. This is important, as people’s memories will fade, and as key data required to
verify the origin of an impact may be lost, over time.
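The ‘during and after’ data-gathering principle described above could be supported by something as simple as an event log kept alongside a project. The structure below is a hypothetical sketch, not a prescribed system; all names and dates are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class EngagementEvent:
    """One verifiable trace of engagement or impact, logged as it happens."""
    when: date
    user: str       # who engaged with the research (e.g., an industry partner)
    evidence: str   # pointer to verifiable material (interview, document, media item)
    note: str = ""

@dataclass
class ImpactLog:
    project: str
    events: list = field(default_factory=list)

    def record(self, event: EngagementEvent):
        self.events.append(event)

    def events_after(self, cutoff: date):
        """Impact traces recorded after the project formally ended."""
        return [e for e in self.events if e.when > cutoff]

# Hypothetical usage: log engagement during the project, then follow-up afterwards.
log = ImpactLog("Classroom reading intervention")
log.record(EngagementEvent(date(2013, 5, 1), "State education dept",
                           "meeting minutes", "briefed on interim findings"))
log.record(EngagementEvent(date(2015, 3, 10), "State education dept",
                           "revised curriculum document", "findings cited in policy"))

project_end = date(2013, 12, 31)
print(len(log.events_after(project_end)))  # follow-up evidence gathered post-project
```

Even a lightweight record of this kind preserves the dates and evidence pointers that, as participants noted, are otherwise lost as memories fade and staff move on.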
Participants noted many issues affecting availability and use of impact data:
• The take up of research is in the hands of research users (including those with no direct
connection to the research) and may occur without the researchers’ knowledge and/or
involvement. As a result, unless research users document or advertise their take up of
research, impact is very difficult – if not impossible – to track;
• Funding, staff time, and other resources necessary for capturing evidence of research
impact outside of academe are not available within existing research project budgets
and/or institutional operating budgets;
• Gathering data related to research impact constitutes its own, separate process, and not
one that is currently captured by existing data sources;
• Relocation and attrition of research staff affects the availability of information and
funding needed to gather evidence linking research projects with impact, particularly over
long periods of time;
• Research users (e.g., industry partners; individual participants) may not be reachable in
the future and/or may no longer have information available to provide evidence of
impact; and,
• Researchers and institutions do not have access to the infrastructure needed for ongoing
collection, storage and analysis of impact data.
Many of these problems have also been noted in initial feedback on the UK’s new REF process.
As one researcher noted, “The major difficulty in writing impact case studies…was acquiring the
necessary evidence of research impact between 1 January 2008 and 31 July 2013 because much
Creswell (2007), p. 38.
of it was information that institutions did not own or record for other purposes (such as the effect
of research on public policy).”
Although these issues are important to address in the design of research impact case studies, the
merits of this approach for documenting and sharing the impact of research outcomes on research
end users cannot be overstated. These issues can be remedied in a well-designed and
well-implemented approach to case study, as used by qualitative researchers. For example, if
institutions were required to provide a sample of research impact case studies, they could choose
to focus on those projects where verifiable data were readily at hand. Over time, and with
sufficient resources, ongoing collection of data could also be integrated into researchers’ routine
practices.
• Create national funding schemes to support data gathering about the impact of research
(e.g., longitudinal data tracking; qualitative research with community stakeholders);
• Encourage grant recipients, where possible, to plan ways to document evidence of
research impact throughout all phases of a research project; and,
• Create communications channels for research users to share how research has changed
their practices (whether personal or at an organisational level).
2.5 Who Should Produce Qualitative Case Studies? How Should they be Designed, especially
for General/Mixed Audiences?
“The researcher would be involved, but there would certainly need to be specific writers
who have training in writing these kinds of things.”
The difficulty of crafting compelling narratives of research impact has been highlighted in the
literature and by the participants in this project. In addition to the general qualitative writing
practices outlined previously in this section (e.g., including participants’ voices in the narrative;
providing evidence of triangulation), specific strategies for designing case study narratives are
outlined in the research literature. Attention should be paid, for example, to the flow of the
document, as well as to the content; description and analysis must be provided, along with
evidence to support claims. In writing a case study, a typical design includes:
1) opening with vignettes to draw the reader into the case;
2) identifying the issue, purpose, and method of the case to give the reader background;
3) providing extensive description of the case and context;
4) presenting the main issues to let the reader understand the complexity of the case;
5) discussing the issues in a deeper way with some evidence provided;
6) making assertions and summarising what the author understands about the case and
conclusions arrived at; and,
7) closing with a vignette to remind the reader of the experience with the case.
Jump (2014), para. 10.
Group of Eight and Australian Technology Network of Universities (2012), p. 7 and Higher Education Funding
Council for England, Scottish Funding Council, Higher Education Funding Council for Wales, and Department for
Employment and Learning, Northern Ireland (2010), pp. 16-17.
Creswell (2007); Stake (1995).
However, many researchers are not familiar with the process of case study narrative
development, nor will they have the expertise needed to write these documents for general and/or
mixed audiences (i.e., to communicate with a diverse impact assessment panel). Identifying who
should be involved in the writing of these narratives is a key component of the process. Although
the participants in this project agreed that researchers must be involved, directly, in case
study development, they also noted that communications staff, research administrators and
others need to be part of the case study development team. The preference for a team-based approach,
including the following types of members, was noted:
• The researcher(s) to be profiled in the case study narrative, as the person with direct
knowledge of the research itself and the evidence of impact;
• Professional writers (e.g., university communications staff), who understand the research
context and have the skills needed to translate knowledge for a general or mixed
audience;
• Qualitative researchers to guide the development of the case study narrative, including
strategies for writing for diverse audiences; and,
• Research users representing various audiences for the case study (e.g., industry
representatives; citizens) to provide feedback on the content and design.
Once identified, the case study development team must ensure that case study narratives are written in ways
that speak to a broad cross-section of potential audiences. As assessment panels may involve a
mix of research users, researchers, clinicians, or other stakeholders, the challenge is to present
the case study in ways that will be understood by all groups. As a recent article on the UK’s 2014
REF exercise notes, this is not easy to achieve; one researcher stated that the “requirement to
satisfy three audiences at once as case studies will be examined by a range of assessors” is a
significant challenge.
Universities could develop expert teams to work with the researchers to be profiled in the impact
case studies; in this way, the researcher’s knowledge of the research itself can inform the design,
while qualitative researchers, research users and communications staff can provide guidance on
the development of the narrative for a general audience. However, participants stressed that such
teams did not currently exist within their institutions. Although some current staff may have the
requisite skills (and could be seconded to these teams), new staff may need to be hired to develop
these cases. Similarly, a diverse group of research users would need to be recruited to review
various types of research projects, across disciplines.
• Encourage universities to use a team-based approach to case study development,
including qualitative experts, professional writers, researchers, and research users, for
effective case study development; and,
• Create a case study development ‘best practices toolkit’ to guide the development of
compelling narratives, drawn from the qualitative research and communications
literatures.
Jump (2014), para. 11.
2.6 Who is a ‘Research User’?
“There’s a lag effect…so how many years later that your original article…how that’s
picked up and how governments or agencies in community and service make use of a
concept or an idea. I think that’s going to be even more longer term and you’re not sure
where that’s going to happen.”
Participants in the workshop and interviews talked, at length, about the nature of the research
‘end-user,’ given the importance of the user’s role in both capturing data on research impact and
in serving as members of potential panels to assess the impact of a university’s research. Just as
qualitative research is designed to ‘give voice’ to participants engaged in the inquiry, research
impact case studies must involve the users of research in their design. The challenge, however, is
identifying the various potential research users who may benefit from research, a point that has
been raised in other documents, as well.
Although projects designed with an industry or community partner, for example, may initially
have a very clear and direct research user in mind, the project may also have an impact on other
groups and individuals, in future, both locally and internationally. Ensuring that case study
development teams and research assessment panels have representation from a range of potential
end-users is paramount. Some reports note that engaging research users in the impact assessment
process involves time and resources for those individuals, as well as for the universities and
agencies involved.
• Encourage the involvement of a broad mix of research users in providing feedback during
all aspects of case study development and assessment;
• Ensure case study assessment panels include a wide range of research users, across
various impact contexts and user types; and,
• Encourage appropriate design of case study narratives to allow the message to be
communicated to and assessed by a cross-section of audiences.
2.7 How Should Qualitative Case Studies be Assessed?
“There’s going to be so much variation in what institutions submit, I don’t know whether
you’ll be able to compare them. You won’t be comparing apples with apples.”
Participants also questioned the process of assessing case study narratives and whether these
were to be used as comparators across institutions or assessed solely on their own merits.
Although comparative case study approaches exist, these studies are designed to be comparative
(at a content level) from the outset; in effect, “the main feature is that the same case (or its
interpretation) is repeated two or more times, in an explicitly comparative mode.” For this
reason, although case studies provide a robust approach to documenting evidence of research
impact, attempting to compare across cases raises the same problems highlighted in the
proverbial ‘apples and oranges’ comparison. Comparing the impact of a medical intervention to
Grant et al. (2009).
Morgan Jones et al. (2013), p. 20.
Yin (2014), p. 188.
the impact of an educational intervention raises issues, as the sites, participants, and data sources
on which the cases are developed are very different. Similarly, comparing two medical
interventions may be problematic, as the sites, participants and data sources will vary. Individual
case studies are designed to be assessed only on their own merits, based on the processes of data
collection and analysis, as well as the narrative product that is created. Comparing research
processes and products across cases is best done with research designs using the same
methodologies and methods. For example, projects designed using grounded theory
methodologies involving triangulation of qualitative methods (such as interviews and document
analysis) may be comparable, even across different disciplines, settings and populations.
While the process of gathering evidence to support case studies may be evaluated using the
trustworthiness criteria outlined previously (i.e., credibility, transferability, dependability,
confirmability and reflexivity), qualitative researchers also assess the product (i.e., the case study
document) based on the quality of the narrative. Together, these elements provide a strategy to
evaluate the quality of the case study narrative, particularly when assessing cases discussing very
different topics and providing different types of evidence of research impact. Lincoln and Guba
(2002) provide four classes of criteria with which to evaluate case study reports:
• Resonance – “criteria that assess the degree of fit, overlap, or reinforcement between the
case study report as written and the basic belief system undergirding that alternative
paradigm which the inquirer has chosen to follow;”
• Rhetoric – criteria “relevant to assessing the form, structure, and potential characteristics
of the case study,” including unity, organisation, simplicity/clarity, and craftsmanship;
• Empowerment – criteria “assessing the ability of the case study to evoke and facilitate
action on the part of the readers,” including fairness, educativeness, and actionability; and,
• Applicability – criteria that “assess the extent to which the case study facilitates the
drawing of inferences by the reader that may have applicability in his or her own
context.”
At a practical level, then, case studies need to be assessed on the quality of their data collection
and analysis processes. The case documentation would need to address issues of triangulation,
peer debriefing, member checking, or other techniques designed to enhance rigour of data
collection (as discussed in section 1.4 of this report). Similarly, the case study product would
need to be assessed for the merits of the communication strategy employed, with particular
attention paid to qualitative writing conventions (as discussed in section 1.3).
• Apply case study methodology assessment criteria at both the process (evidence
gathering) and product (narrative) stages of development; and,
See for example Creswell (2007); Lincoln and Guba (2002); and, Stake (1995).
Lincoln and Guba (2002), p. 4.
Ibid., p. 5.
Ibid., p. 8.
Ibid., p. 9.
• Create a case study assessment ‘best practices toolkit’ to guide the design and evaluation
of the process (evidence gathering) and product (narrative), drawn from the qualitative
research literature.
2.8 What Other Strategies could be used in place of a Formal Exercise?
“Perhaps social network analysis to look at the strong networks of the university that
have been created through research activities to determine how well connected the
university is in its community, how influential the university is in particular institutions.”
Workshop participants and interviewees also discussed potential alternatives to a formal
assessment exercise on research impact. Individuals identified a number of ideas that could be
implemented nationally in Australia (whether at institutional, funding agency and/or government
levels):
• Reward systems (e.g., academic prizes) to recognise research that has had an impact in
the community;
• Public awareness campaigns (e.g., dedicated websites) to share the stories of research
impact, more broadly; and,
• Outreach programs (e.g., social media strategies) to engage research users during projects
and after projects are completed.
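The social network analysis suggested in the quotation opening this section could, for example, measure how well connected a university’s research groups are to external organisations. The sketch below is a purely illustrative toy: the collaboration network and all organisation names are invented, and simple partner counts stand in for the richer centrality measures used in formal social network analysis.

```python
# Hypothetical collaboration network: each research group lists the
# external organisations it has engaged with. All names are invented.
collaborations = {
    "Engineering": {"MiningCo", "State Transport Dept"},
    "Education": {"State Education Dept", "Teachers Assoc"},
    "Public Health": {"State Health Dept", "MiningCo", "Local Clinics"},
}

def degree_centrality(network):
    """Number of distinct external partners per group - a simple
    connectedness measure drawn from social network analysis."""
    return {group: len(partners) for group, partners in network.items()}

def most_engaged_partner(network):
    """Which external organisation is connected to the most groups?"""
    counts = {}
    for partners in network.values():
        for p in partners:
            counts[p] = counts.get(p, 0) + 1
    return max(counts, key=counts.get)

print(degree_centrality(collaborations))
print(most_engaged_partner(collaborations))  # "MiningCo" links two groups
```

Measures of this kind would indicate how embedded a university is in its community, although, as with the other approaches discussed in this report, they describe engagement pathways rather than demonstrated impact.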
These ideas mirror many of the new and existing strategies in place in other countries, as well as
at local and state levels within Australia. Such strategies are intended to foster collaboration
between researchers and their communities, as well as to share the impact of university-based
research with the public. Existing initiatives may serve as models for a nationwide approach to
documenting and celebrating research impact across Australia, particularly as alternatives to a
formal evaluative process. The following are examples of various types of research impact
activities, across sectors, in Australia and elsewhere:
Granting Agency Initiatives
Grains Research and Development Corporation (GRDC) (Australia)
The GRDC publishes a bi-monthly newspaper, hosts video/TV series, and produces radio programs
designed to share stories of impact arising from funded research projects. Their flagship
publication Ground Cover, for example, “provides technical information for grain growers,
including updates on research, trials, new varieties, farmer activities and case studies.”
Celebrating Impact (United Kingdom)
The UK’s Economic and Social Research Council presents an annual prize to researchers whose
funded projects have had an impact on society. The website presents information on each year’s
award winners, including videos about the projects and their outcomes. The agency also provides
Australian Government, Grains Research and Development Corporation (2014), para. 1.