Research Proposal and Null Hypothesis with Example (Tutorial 2019)


Writing a Research Proposal Tutorial 2019

If one of the requirements for your class is to write a research proposal, then you have come to the right place. This blog will lead you through the process of writing a research proposal, with examples.

 

You will also learn the importance of framing a question in a clear, logical manner so that it is easier to answer and to turn into a null hypothesis. Writing a proposal is not an easy task for anyone, and it may be especially difficult if you have not written one before or if you have not done much writing.

 

This tutorial explains how to write a research proposal and null hypothesis for your proposed research or thesis, with examples, and describes the format of a research proposal in 2019.

 

If you actually follow through and complete the proposed research, you will be making a significant contribution to your field. With these words of encouragement, the following are the major steps to follow in the writing of a proposal, beginning with what a proposal looks like.

 

Thousands of resources about writing, funding, and seeing research proposals through to the research stage are available in your library and online. Spend some time browsing and reading these before you begin serious writing; it will be well worth your while.

 

The Format of a Research Proposal


Knowing how to organize and present a proposal is an important part of the research craft. The very act of putting thoughts down on paper will help you clarify your research interests and ensure that you are saying what you mean. Remember the fellow on the television commercial who said, “Pay me now or pay me later”?

 

The more work and thought you put into your proposal, the easier it will be to complete the research later. In fact, many supervising faculty members suggest that a proposal’s first two or three chapters can serve, essentially unchanged, as the opening chapters of the finished thesis or dissertation, putting you way ahead of the game.

 

The following is a basic outline of what should be contained in a research proposal and a few comments on each of these sections. Keep in mind that proposals can be organized differently and, whatever you do, be sure that your professor approves of your outline before you start writing.

 

I. Introduction

A. Problem statement

B. The rationale for the research

1. Statement of the research objectives

C. Hypothesis

D. Definitions of terms

E. Summary, including a restatement of the problem

II. Review of the relevant literature (the more complete it is, the better)

A. Importance of the question being asked

B. Current status of the topic

C. The relationship between the literature and the problem statement

D. Summary, including a restatement of the relationships between the important variables under consideration and how these relationships are important to the hypothesis proposed in the introduction

 

III. Method

IV. Implications and limitations

V. Appendices

If you have looked at someone else’s thesis or dissertation, you might notice that this outline is organized around the same general sequence of chapter titles—introduction, review of the literature, methodology, results, and discussion.

 

Because this is only a proposal, the last two sections cannot present the analysis of the real data or discuss the findings. Instead, the proposal simply talks about the implications and limitations of the study, and the last part (V) contains all the important appendices.

 

The first three sections of the finished proposal form a guideline about what the proposal should contain: introduction, review of the literature, and method. The rest of the material (implications and such) should be included at your own discretion and based on the wishes of your adviser or professor. Keep in mind that completing the first three sections is a lot of work.

 

However, you will have to gather that information anyway, and doing it before you collect your data will give you more confidence in conducting your research as well as a very good start and a terrific road map as to where you are going with your research.

 

Elements of the Proposal


Proposal requirements vary according to the role of the proposal and by the institution. But generally you will be required to include some combination of the following:

 

Title – Go for clear, concise and unambiguous. Your title should indicate the specific content and context of the problem you wish to explore in as succinct a way as possible.

 

Summary/abstract – Proposals often require a project summary, usually with a very tight word count. The trick here is to briefly state the what, why and how of your project in a way that sells it in just a few sentences – and trust me, this can take quite a few drafts to get right.

 

Aims/objectives


Most proposals have one overarching aim that captures what you hope to achieve through your project. A set of objectives, which are more specific goals, supports that aim.

 

Aims and objectives are often articulated in bullet points and are generally ‘to’ statements: for example, to develop …; to identify …; to explore …; to measure …; to explain …; to describe …; to compare …; to determine ….

 

In management literature you are likely to come across ‘SMART’ objectives – SMART being an acronym for specific, measurable, achievable, relevant/results-focused/realistic and time-bound.

 

The goal is to keep objectives from being airy-fairy or waffly; clearly articulating what you want to achieve aids your ability to work towards your goal.

 

Research question/hypothesis


A well-articulated research question (or hypothesis) should define your investigation, set boundaries, provide direction and act as a frame of reference for assessing your work.

 

Any committee reviewing your proposal will turn to your question in order to get an overall sense of your project. Take time to make sure your question/hypothesis is as well defined and as clearly articulated as possible.

 

Introduction/background/rationale 

The main job of this section is to introduce your topic and convince readers that the problem you want to address is significant and worth exploring and even funding.

 

It should give some context to the problem and lead your readers to the conclusion that, yes, research in this area is absolutely essential if we really want to work towards situation improvement or problem resolution.

 

Literature review


A formal ‘literature review’ is a specific piece of argumentative writing that engages with relevant scientific and academic research in order to create a space for your project.

 

The role of the literature review is to inform readers of developments in the field while establishing your own credibility as a ‘player’ capable of adding to this body of knowledge. This is a tough piece of writing with a very tight word count, so be prepared to run through a few drafts.

 

Theoretical perspectives

This section asks you to situate your study in a conceptual or theoretical framework. The idea here is to articulate the theoretical perspective(s) that underpin and inform your ideas, and, in particular, to discuss how ‘theory’ relates to and/or directs your study.

 

Methods 

Some form of ‘methods’ will be required in all proposals. The goal here is to articulate your plan with enough clarity and detail to convince readers that your approach is practical and will lead to credible answers to the questions posed. Under the heading of methods you would generally articulate:

 

  • the approach/methodology – for example, if you are doing ethnography, action research or maybe a randomized controlled trial;
  • how you will find respondents – this includes articulation of population and sample/sampling procedures;
  • data collection method(s) – for example, surveying, interviewing and document analysis;
  • methods of analysis – whether you will be doing statistical or thematic analysis and perhaps variants thereof.

 

Limitations/delimitations


Limitations refer to conditions or design characteristics that may impact the generalizability and utility of findings, such as small sample size or restricted access to records. Keep in mind that most projects are limited by constraints such as time, resources, access or organizational issues.

 

So it is much better to be open about ‘flaws’ than leave it to assessors who might be much more critical. Delimitations refer to a study’s boundaries or how your study was deliberately narrowed by conscious exclusions and inclusions, e.g. limiting your study to children of a certain age only, or schools from one particular region.

 

Now, remember that your overarching goal here is to convince readers that your findings will be credible in spite of any limitations or delimitations. So the trick is to be open about your study’s parameters without sounding defensive or apologetic.

 

It is also worth articulating any strategies you will be using to ensure credibility despite limitations.

 

Ethical considerations


Whenever you are working with human participants, there will be ethical issues you need to consider.

 

Now if this were an application for an ethics committee you would need to focus much of your proposal on ethical issues.

 

But even if this were a proposal for admission, your readers would still need to be convinced that you have considered issues related to integrity in the production of knowledge and responsibility for the emotional, physical and intellectual well-being of your study participants.

 

Timeline 


This is simply superimposing a timeline on your methods and is often done in a tabular or chart form. The committee reading your proposal will be looking to see that your plan is realistic and can conform to any overarching timeframes or deadlines.

 

Budget/funding


This is a full account of costs and who will bear them. While not always a required section for ethics proposals or proposals for academic student research, it will certainly be a requirement for a funding body. Now it is definitely worth being realistic – it’s easy to underestimate costs.

 

Wages, software, hardware, equipment, travel, transcription, administrative support, and so on can add up quite quickly, and running short of money mid-project is not a good option.
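For illustration only, here is a minimal Python sketch of the budgeting arithmetic; every category and figure below is hypothetical, and the 10% contingency buffer is an assumption rather than a rule:

# Hypothetical budget sketch: the categories and amounts are invented
# for illustration; replace them with your project's real costs.
costs = {
    "wages": 12000,
    "software": 800,
    "equipment": 1500,
    "travel": 2000,
    "transcription": 1200,
    "administrative support": 900,
}

subtotal = sum(costs.values())
contingency = 0.10 * subtotal  # assumed 10% buffer against underestimation
total = subtotal + contingency

print(f"Subtotal: {subtotal:,.2f}")
print(f"Contingency (10%): {contingency:,.2f}")
print(f"Total requested: {total:,.2f}")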

 

But also keep in mind that if you are tendering for a commissioned project, it’s a good idea to get a ballpark figure of the funding body’s budget. This will put you in a position to design your methods accordingly and hopefully make you competitive.

 

References – This can refer to two things. The first is citing references in the same way as you would in any other type of academic/professional writing. Believe it or not, it’s often missed.

 

The second is that some committees want a list of, say, 10 or 15 primary references that will inform your work. This information can help a committee assess your knowledge and give its members a clearer indication of the direction your study may take.

 

Writing Purposively


It is important to recognize that a proposal should never be sloppy, rushed or thrown together at the last minute. It needs to be a highly polished and well-constructed piece of writing.

 

Remember: the clarity of your thoughts, the veracity of your arguments and the quality of your writing will be used to judge your potential as a researcher.

 

The following tips should help you craft a winning proposal:

See if you can get access to a few successful proposals – If possible, seek out proposals that have gone through the committee you are applying to, or to as similar a committee as possible. The institution assessing your application may have proposals online. If they don’t, then I would google ‘research proposal example’.

 

You can combine that with the level of study (Ph.D., undergraduate) and/or your broad area of study (business, sociology, policy). But keep in mind that not all proposals up on the Internet are good ones! You can also refer to the examples in the books cited at the end of the chapter.

 

Find a voice – The convention here is the third person; however, using ‘I’ to state what you will do is now more commonly accepted. Also, remember to write in the future tense. A proposal is about what you will do, not what you are doing now, or have done in the past.

 

‘Write tight’ – Your writing needs to be concise and succinct, direct and straightforward. Avoid rambling and/or trying to show off by using unnecessary jargon.

 

Write enough – Somewhat paradoxically, given the above, you also need to make sure you write a sufficient amount for assessors to make judgments.

Write for the ‘non-expert’ – Your proposal needs to be ‘stand-alone’ and comprehensible to someone potentially outside your field.


Do your homework – The last thing you want in a short formal proposal is ‘mistakes’. Get your facts right, make sure you don’t have gaping holes in your literature, and make sure any references to theory and/or methods are accurate.

 

Don’t over-quote – Generally, the writing expected is so tight that you probably won’t have enough room for too many direct quotes. Keep the words and ideas yours, supported by the literature.

 

Don’t let the deadline sneak up on you – Plan on finishing early so that you have time to review and redraft. Remember: deadlines are often inflexible, and this is a case where you do not want to have to rush and let quality suffer.

As discussed below, be prepared to draft and redraft.

 

Drafting and Redrafting


The best advice here is to leave yourself enough time to get feedback and redraft, if possible more than once. Remember: even if your reader does not understand the details, the overarching arguments should make sense to the non-expert, so don’t hesitate to ask a peer, parent, or friend whether they can follow the proposal and whether it makes sense. If you have access, I certainly recommend seeking the advice of someone who has experience in research and research proposals.

 

Obstacles and Challenges


So if you do all of the above, surely you are bound to impress? It should all be smooth sailing, shouldn’t it? Well, hopefully, that will be the case. But there are a couple of sticky situations you may need to negotiate.

 

When Your Design Does Not Fit Proposal Requirements

If you have read this far, you know how important I think it is to give a committee what it asks for. But what if your research design simply does not fit in with the committee’s requirements?

 

Now, this is likely to be the case in ‘qualitative’ research where terms like hypothesis, variables, validity, and reliability may not be appropriate to your study, but may nonetheless be required ‘sections’ in your proposal.


 

Unfortunately, there can still be a bias towards the quantitative paradigm, the legacy of which can be reflected in proposal proformas and even committee expectations.

 

If this is the case, I would suggest seeking the advice of someone who has worked with the committee to see how it tends to handle such dilemmas – each committee will have a different approach.

 

If, however, you cannot get this insider information, or are told ‘Just do the best you can’, I would suggest remembering the bigger agenda of the proposal: that is, to demonstrate the merits of the research question, the merits of the proposed methods and the merits of the researcher.

 

So, regardless of paradigm, you will need to show you are confident with the theoretical, conceptual and methodological landscape you are proposing to enter.

 

To that end, write confidently, not aggressively or apologetically. If the committee wants a hypothesis, yet it is not appropriate, you have the option of writing ‘N/A’ and justifying why it does not apply. If the committee wants you to list variables but your study is more exploratory, say so.

 

If validity, reliability, or generalizability is inappropriate, confidently talk about credibility indicators that are more appropriate. Any committee worth its salt will be able to spot a researcher who knows what he or she is talking about, even when it doesn’t fit with the committee’s expectations or jargon.

 

When Your Design Is Emergent


Another major dilemma is when you are proposing a study that will have evolving methods that cannot be fully articulated at the time proposal applications are required.

 

This is particularly problematic for ethics proposals, which are used to protect the dignity and welfare of the ‘researched’ as well as protect the researcher and home institution from legal liability.

 

These proposals often demand a full account of methods, which often includes appending things like surveys and interview schedules.

 

Once again ‘qualitative’ researchers who wish to use conversational/unstructured data-gathering techniques that are not fully predetermined will face a dilemma.

 

Those undertaking action research can also struggle, as their methodological protocols are based on stakeholder collaboration across multiple cycles, with the methods for each phase determined by what has happened in the phase before.

 

For example, key informant interviews may be used to inform survey design, or survey results may determine the questions used in in-depth interviewing.

 

The best strategy here is to be open and knowledgeable about your approach. Show that your design is not haphazard or ill-considered. Show that even if you cannot articulate all the specifics, your required flexibility is planned and you have a defined framework. Show the committee forethought. Offer, if possible, indicative questions.

 

And finally, show that you can link your approach back to accepted methodological literature. If you can manage to make such arguments your chances of success will be greatly enhanced.

 

Of course, even if you are able to make such arguments there is the possibility that the committee will require further information. If this is the case, you can attempt to add more definition to your methodological plan.

 

But if your overarching design makes this impossible and your committee is immovable, you will need (1) to see if it is possible to put in a supplementary application as your methods evolve; or (2) to talk to your supervisor about required methodological modifications.

 

When You Want to or Need to Change Direction/Method

Suppose you are all set to interview 15 CEOs, but, try as you might, you just can’t get more than three to participate. Or suppose you plan on surveying 1,000 homeless people, but after much effort, you only have 36 surveys returned.

 

Or imagine that you have undertaken a much more comprehensive literature review than included in your proposal and you realize that the survey questions you originally proposed are way off target.

 

What do you do? Well, from a methodological standpoint, you improvise. You think about your question, talk to your supervisor and determine the most ‘doable’ way to get some credible data. But disappointingly, most students in this situation simply charge ahead and change their study protocols without further committee consultation.

 

And while this may be the path of least resistance, it is not recommended. If your application represents a ‘contract’ to do a job for a funding body, for example, you need to inform that body of shifts in your approach.

 

Updating ethics applications is equally important. Not only do you want an outside committee to see that you will not threaten the dignity and well-being of the researched, but you also want to ensure that you have protected yourself and your institution from potential lawsuits.

 

What do I do if my proposal is knocked back?

That is a very difficult situation, and one that often leads to an emotional response: anger, disappointment, feeling disheartened, and so on. A knockback is never easy, but it is particularly difficult when it forces you to ‘reassess’ where you are going. I think the best advice here is to take a deep breath and regroup.

 

Once you have worked through the emotional side, it will be time to get information: to figure out where you went wrong and what you need to do now. Read the feedback carefully, seek clarification, talk to others. The more you know, the better position you will be in to avoid pitfalls in the future.

 

Appearance

Although the words in your proposal are important, the appearance of your proposal is also important. What you say is more important than how you say it, but there is a good deal of truth to Marshall McLuhan’s statement that the medium is the message.

 

Here are some simple, straightforward tips about proposal preparation. If you have any doubts about presentation (and if you don’t have any other class guidelines), follow the guidelines set forth in the sixth edition of the Publication Manual of the American Psychological Association (APA, 2009).

 

  • All pages should be typed with at least 1-inch margins on top, bottom, left, and right to allow sufficient room for comments.
  • All pages should be double-spaced.

 

All written materials should be proofread. This does not mean just running a spell checker: a spell checker catches only misspellings, not wrong words (to, two, or too?) or grammatical errors. So, proofread your paper twice, once for content and once for spelling and grammar. And it would not be a bad idea to ask a fellow student to read it once as well.

 

  • The final document should be paper clipped or stapled together, with no fancy covers or bindings (too expensive and unnecessary).
  • All pages should have a running head (flush left) and a page number (right justified).
  • For example, APA guidelines do not require the author’s name on each page because the review for journals is blind. Your professor, however, needs your name on each page.

 

Evaluating the Studies You Read

As a beginning researcher, you might not be ready to take on the experts and start evaluating and criticizing the work of well-known researchers, right? Wrong! Even if you are relatively naive and inexperienced about the research process, you can still read and critically evaluate research articles.

 

Even the most sophisticated research should be written in a way that is clear and understandable. Finally, even if you cannot answer all the questions listed below to your satisfaction at this point, they provide a great starting place for learning more. As you gain more experience, the answers will appear.

 

When you begin to go through research articles in preparation for writing a proposal (or just to learn more about the research process), you want to be sure that you can read, understand, and evaluate the content.

 


So what makes good research? A survey of research experts found the following shortcomings (in order) to be the most pressing criticisms. Even though this survey is almost 16 years old, the findings are still relevant to any proposal.

 

  • The data collection procedure was not carefully controlled.
  • There were weaknesses in the design or plan of the research.
  • The limitations of the study were not stated.
  • The research design did not address the question being asked by the researcher(s).
  • The method of selecting participants was not appropriate.

 

  • The results of the study were not clearly presented.
  • The wrong methods were used to analyze the information collected.
  • The article was not clearly written.
  • The assumptions on which the study was based were unclear.
  • The methods used to conduct the study were not clearly described or not described at all.

 

This is quite a series of pitfalls. To help you avoid the worst of them, you might want to ask the following set of questions about any research article.

 


 

Planning the Actual Research


You are well on your way to formulating good, workable hypotheses, and you now know at least how to start reviewing the literature and making sense out of the hundreds of available resources.

 

But what you may not know, especially if you have never participated in any kind of research endeavor, is how much time it will take you to progress from your very first visit to the library to your final examination or submission of the finished research report. That is what you will learn here.

 

Although you still have plenty to learn about the research process, now is a good time to get a feel for the other activities you will have to undertake in order to complete your research project. It is also helpful to get a sense of how much time these activities might take.

 

First, the activities. These are grouped under the general headings previously discussed. Now for computing how much time the process will take.

 

One effective way to do this is to estimate how much time each individual activity (writing the literature review, collecting data, etc.) will require, using some standard measure such as days, and keeping in mind that sometimes things go:

  • Just as planned
  • Not as well as planned
  • Not well at all (which usually is the rule, rather than the exception).

 

Now take the average of these values. To be more precise, let’s break workdays into 4-hour chunks (for morning and evening) and call each chunk one unit of time. There are then 10 units of time in 1 week.

 

If you enter the estimates in a spreadsheet (using a program such as Excel), you can easily sum the columns as you fiddle and tinker with the amount of time each activity needs.

 

For example, let’s look at a search through primary sources (as part of the literature review) and estimate that it will take you

  • 4 days, or 8 time units, if things go great
  • 6 days, or 12 units, if things do not go exactly as planned
  • A whopping 8 days, or 16 units, if things do not go well at all

 

Once you have these estimates, average them for the activity, and you will have a single estimate of how long any one activity should take:

(8 + 12 + 16) / 3 = 12 units

or 6 days, which is about one very full week’s work (if you work on Saturday or Sunday).

 

If you want to be even more precise, weight the estimates. For example, let’s say that you anticipate having trouble finding a sample, and at best you can expect things to go only okay. Writing the descriptive section, though, should be a snap. You should weight the “not as well as planned” estimate two or three times greater than the others.
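If you prefer script to spreadsheet, here is a minimal Python sketch of the same arithmetic; the activities and unit counts are hypothetical examples, and the (1, 2, 1) weighting is just one way to favor the “not as well as planned” estimate:

# Time budgeting in 4-hour "units" (10 units = 1 week), as described above.
# The activities and estimates below are hypothetical examples.
estimates = {
    # activity: (goes great, not as planned, not well at all), in units
    "search primary sources": (8, 12, 16),
    "write literature review": (10, 14, 20),
    "collect data": (12, 18, 30),
}

def plain_average(best, ok, worst):
    return (best + ok + worst) / 3

def weighted_average(best, ok, worst, weights=(1, 2, 1)):
    # Weight the middle estimate more heavily when you expect trouble.
    w_best, w_ok, w_worst = weights
    return (best * w_best + ok * w_ok + worst * w_worst) / sum(weights)

for activity, (best, ok, worst) in estimates.items():
    plain = plain_average(best, ok, worst)
    weighted = weighted_average(best, ok, worst)
    print(f"{activity}: {plain:.1f} units plain, "
          f"{weighted:.1f} units weighted (~{weighted / 10:.1f} weeks)")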

 

Selecting a Test

If you plan to use a published test or assessment tool, keep the following points in mind:

  • You should not make a final decision until you examine the test’s guidelines on the intended testing population, requirements for administration, costs, and so on. You can usually get a sample packet either at no cost or at a minimal cost from the test developer or publisher.
  • Obtain the latest version of the test. Publishers are always changing test materials, whether it is a repackaging of the materials or a change in the actual normative, reliability, or validity data. Just ask the simple question, “Is this the latest version available?”

 

  • The test needs to be appropriate for the age group with which you are working. If a test measures something at age 10, it does not mean it will be equally reliable and valid at age 20, or even that it will measure the same underlying construct or behavior at that age. Look for other forms of the same test or another test that measures the same construct for the intended age group.

 

Finally, look for reviews of the test in various journals and in reference sources from the Buros Institute: Tests in Print lists thousands of tests on just about everything, and the Mental Measurements Yearbook (also published by Buros) reviews them.

 

Both these publications contain extensive information about different types of tests including administrative procedures, costs, critical reviews of the tests by outside experts, and so on. Examine these critical reviews before you decide to adopt an instrument.

 

Selecting a Dependent Variable


You have read at several places in this volume how important it is to select a dependent variable or an outcome measure with a great deal of care. It is the link between all the hard preparation and thinking you have done and the actual behavior you want to measure.

 

Even if you have a terrific idea for a research project and your hypothesis is right on target, a poorly chosen dependent variable will result in disaster.

 

The following items are important to remember when selecting such a variable. Use them as a checklist when you search through previous studies to find what you need.

 

  •  Try to find measures that have been used before. This gives them credibility and allows you to support your choice by citing previous use in other research studies.

 

  • Ensure that the validity of the measure has been established. Simply put, don’t select dependent variables whose validity either has not yet been established or is low.

 

Doing so will raise too many questions about the integrity of your entire study. Remember, you can find out if a test has been shown to be valid through a review of other studies where the test has been used or through an examination of any manuals that accompany the test or assessment tool.

 

  • Ensure that the reliability of the measure has been established. As with validity, reliability is a crucial characteristic of a useful dependent variable.

 

  • If the test requires special training, consider the time and the commitment it will take to learn how to use it. This does not mean simply reading the instructions and practicing the administration of a test.

 

Selecting a Sample


Many researchers feel that there is nothing more important than selecting a sample that accurately reflects the characteristics of the population they are interested in studying.

 

Yet sample selection can sometimes be a risky business, with all kinds of questions needing to be answered before you can make any moves toward the sample selection process. Here is a list of factors to keep in mind:

 

1. Imagine yourself trying to find a suitable pool of candidates from which to select a sample; now imagine 100 other people in your community trying to do the same thing.

 

That is a conservative estimate of how many people in every university community are looking for a sample to include in their studies. Where can you look? Try some of the following:

  • Church and synagogue groups
  • Boy and Girl Scouts
  • Retirement homes and communities
  • Preschools
  • Singles clubs
  • Special interest and hobby groups
  • Fraternal organizations

 


2. Remember, you do not want to select any group that is organized for a particular reason if that reason is even remotely related to what you are studying.

 

For example, you would not select members from the Elks Club for a study of loyalty or friendship or parents who send their kids to private schools for a survey on attitudes toward supporting public education, unless the selection of such samples is an important part of your sampling plan.

 

3. Approach candidates with a crystal clear idea of what you want to do, how you want to do it, and what they will get in return (a free workshop, the results of the study, or anything else you think might be of benefit to them).

4. The sample must come from a population that matches the characteristics of the groups you want to study. It might go without saying (but I’ll say it here anyway): selecting a sample from a poorly identified population is the first major error in sample selection.

 

If you want to study preschoolers, you cannot study first graders just because the two groups are close in age. The preschool and the first-grade experience differ substantially.

 

5. The type of research you do will depend on the type and size of the sample you need. For example, if you are doing case study descriptive research, which involves long, intense interviews and has limited generalizability (generalizability is not one of the purposes of the method), you will need very few participants in your sample. If you are doing a group differences study, you will need at least 30 participants in each group.

 

6. Consider the reliability and validity of your instruments. A highly reliable test will yield more accurate results than a homemade essay exam, and the less reliable and valid your instruments, the larger the sample size required to get an accurate picture of what you want.

 

7. Consider the financial resources at your disposal. The more money (and resources in general) you have, the more participants you can test. Remember, the larger the sample (up to a point) the better, because larger samples more closely approximate the population of which they are a part.

 

8. The number of variables you are studying and the number of groups you are using will affect the sample selection process. If you are simply looking at the difference in verbal skills between males and females, you can get away with 25–30 participants in each group.

 

If you add age (5- and 10-year-olds) and socioeconomic status (high and low), you are up to eight different possible combinations (such as 5-year-old girls of high socioeconomic status) and up to 8 × 30, or 240, participants for an adequate sample size.
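The underlying arithmetic is simple multiplication: the number of cells is the product of each factor’s levels, times the participants needed per cell. A minimal sketch, using the example factors above and the 30-per-group rule of thumb already cited:

import math

# Factors from the example above: sex (2) x age (2) x SES (2) = 8 cells.
factor_levels = {"sex": 2, "age": 2, "socioeconomic status": 2}
per_cell = 30  # rule-of-thumb minimum participants per group

cells = math.prod(factor_levels.values())
total = cells * per_cell
print(f"{cells} cells x {per_cell} per cell = {total} participants")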

 

Selecting an Inferential Statistic


Selecting an inferential test is a task that always requires care. When you are first starting out, the choice can be downright intimidating.

 

You can learn about some of the most common situations, such as testing the difference between the means of two or more groups and looking at the relationships between variables. In both cases, the same principles of testing for the significance of an outcome apply.
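As a rough orientation only (a simplification, not a decision procedure; the pairings below are conventional defaults, and real choices depend on measurement level and assumptions), the mapping from situation to common default test looks something like this:

# Conventional default tests for common situations; an illustration,
# not a substitute for a statistics text or an adviser.
common_tests = {
    "difference between means of two independent groups": "independent-samples t-test",
    "difference between means of three or more groups": "one-way ANOVA",
    "relationship between two continuous variables": "Pearson correlation",
    "association between two categorical variables": "chi-square test of independence",
}

for situation, test in common_tests.items():
    print(f"{situation}: {test}")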

 

Protecting Human Subjects

Most organizations that sponsor research (such as universities) have some kind of committee that regularly reviews research proposals to ensure that humans (and animals) are not in any danger should they participate.

 

Before investigators begin their work, and as part of the proposal process, an informed consent form is completed and attached to the proposal. The committee reviews the information and either approves the project (indicating that human subjects are not in danger) or tries its best to work with the investigator to change the proposed methods so that the project can proceed safely.

 

Summary

When the time comes to write a proposal, here is the quote you want to paste over your desk:

Pay me now or pay me later.

And that is the truth. Successful scientists will tell you that if you start out with a clear, well-thought-out question, the rest of your proposal, as well as the execution of your research, will fall into place. On the other hand, if your initial question is unclear, you will find yourself floundering and unable to focus on the real issue.

 

Work on writing your proposal every day, read it over, let it sit for a while, have a friend or colleague glance at it and offer suggestions, write some more, let it sit some more. Get the message? Practice and work hard, and you will be well rewarded.

 

One day, you may have the opportunity to submit a manuscript by yourself or with a co-author for publication. If you have lived right, the manuscript may be accepted, and won’t you (and your parents and professor) be proud!

 

There are many ways to organize a manuscript, and most journals require that manuscripts be submitted according to specific guidelines. In the social and behavioral sciences, the Publication Manual of the American Psychological Association (6th ed., 2009) is the standard. This chapter is all about preparing a manuscript for submission according to those guidelines.

 

Although there is no substitute for buying this manual (it costs about $29, but your department or adviser probably has one), this chapter provides the basics of how a manuscript should be organized, formatted, and mechanically prepared to meet APA guidelines.

 

To help you out, included is an example of pages from a manuscript prepared in the correct fashion. The manuscript (and the study on which it is based) was completed by one of the author’s students who took a class very much like the one you are taking now. Following some general guidelines about manuscript preparation, you will see the manuscript, annotated with tips and hints. Just follow along.

 


References

The references comprise the sources that were consulted during the course of the research and the writing of the manuscript. References can be anything from a book to a Web site, and all references must be entered in the reference list in a particular format.

 

Summary


That’s it for preparing a manuscript according to APA guidelines, and for Exploring Research as well. Most journals receive hundreds of manuscripts each year, and standardization of some kind helps streamline the review process.

 

Here is a mini-guide to some of the most important format rules to keep in mind:

  • 1. Make sure that the type is readable.
  • 2.  Use 12-point Times New Roman for text and Arial for figure captions.
  • 3. All lines, including the headings, must be double-spaced.
  • 4. Allow 1 inch for a margin on the left, right, top, and bottom of the page.

 

  • 5. Pages are numbered as follows:
  • a. The title page is a separate page, numbered 1.
  • b. The abstract is a separate page, numbered 2.
  • c. The text starts on a separate page, numbered 3.
  • d. The references, appendices, author notes, footnotes, and tables all start on separate pages, and the pages are numbered continuously. However, do not number artwork (figures and such).

 

  • 6. The first line of each paragraph must be indented five to seven spaces or one-half inch.
  • 7. The text should be left aligned, leaving a ragged right margin.

 

  • 8. Headings are to be typed as follows. Here is an example of three different levels of headings, which are sufficient for most papers:
  • a. First-level headings are centered upper and lower case.
  • b. Second-level headings are flush left, upper and lower case.
  • c. Third-level headings are indented, boldface, and lower case.

 

  • 9.  Place one space after all punctuation (periods, commas, semicolons, etc.).
  • 10. Do not indent the abstract.
  • 11. Start the list of references on a new page.

 

Hypothesis


A hypothesis was defined earlier as an educated guess. Although a hypothesis reflects many other things, perhaps its most important role is to reflect the general problem statement or the question that was the motivation for undertaking the research study. That is why taking care and time with that initial question is so important. Such consideration can guide you through the creation of a hypothesis, which in turn helps you to determine the types of techniques you will use to test the hypothesis and answer the original question. The “I wonder …” stage becomes the problem statement stage, which then leads to the study’s hypothesis. Here is an example of the “I wonder” stage:

“I wonder”  It seems to me that several things could be done to help our employees lower their high absentee rate. Talking with some of them tells me that they are concerned about after-school care for their children. I wonder what would happen if a program were started right here in the factory to provide child supervision and activities?

 

A good hypothesis provides a transition from a problem statement or question into a form that is more amenable to testing using the research methods we are discussing.

 

The following sections describe the two types of hypotheses—the null hypothesis and the research hypothesis—and how they are used, as well as what makes a good hypothesis.

 

The Null Hypothesis Tutorial 2019

A null hypothesis is an interesting little creature. If it could talk, it would say something like, “I represent no relationship between the variables that you are studying.” In other words, null hypotheses are statements of equality such as:

  • There will be no difference in the average score of ninth graders and the average score of 12th graders on the ABC memory test.
  • There is no relationship between personality type and job success.
  • There is no difference in voting patterns as a function of political party.
  • The brand of ice cream preferred is independent of the buyer’s age, gender, and income.

 

The null hypothesis is a statement of equality.

 

A null hypothesis, such as the ones described here, would be represented by the following equation:

H₀: μ₉ = μ₁₂

where:

  • H₀ = the symbol for the null hypothesis
  • μ₉ = the symbol (the Greek letter mu) for the theoretical average for the population of ninth graders
  • μ₁₂ = the symbol (the Greek letter mu) for the theoretical average for the population of 12th graders.

 

This lack of a relationship, unless proven otherwise, is a hallmark of the method being discussed, and serving as this starting assumption is the null hypothesis’s first purpose. In other words, until you prove that there is a difference, you have to assume that there is no difference.

 

Furthermore, if there are any differences between these two groups, you have to assume that the differences are due to the most attractive explanation for differences between any groups on any variable: chance!

 

That’s right: given no other information, chance is always the most likely explanation for differences between two groups. And what is chance? It is the random variability introduced as a function of the individuals participating, as well as many unforeseen factors.

 

For example, you could take a group of soccer players and a group of football players and compare their running speeds. But who is to know whether some soccer players practice more, or if some football players are stronger, or if both groups are receiving additional training?

 

Furthermore, perhaps the way their speed is being measured leaves room for chance: a faulty stopwatch or a windy day can contribute to differences unrelated to true running speed.

 

As good researchers, our job is to eliminate chance as a factor and to evaluate other factors that might contribute to group differences, such as those that are identified as independent variables.

 

The second purpose of the null hypothesis is to provide a benchmark against which observed outcomes can be compared to determine whether these differences are caused by chance or by some other factor.

 

The null hypothesis helps to define a range within which any observed differences between groups can be attributed to chance (which is the contention of the null hypothesis) and beyond which they are likely due to something other than chance (perhaps the result of the manipulation of the independent variable).
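To make the benchmark idea concrete, here is a minimal simulation, assuming NumPy and SciPy are available; the memory-test scores are invented, and the test simply asks whether the observed difference between ninth and 12th graders is larger than chance alone would plausibly produce under H₀: μ₉ = μ₁₂:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical ABC memory test scores for ninth and 12th graders.
ninth = rng.normal(loc=75, scale=10, size=40)
twelfth = rng.normal(loc=80, scale=10, size=40)

# Independent-samples t-test of H0: mu_9 = mu_12.
t_stat, p_value = stats.ttest_ind(ninth, twelfth)

alpha = 0.05  # conventional significance level
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0: chance alone is an unlikely explanation of the difference.")
else:
    print("Fail to reject H0: chance remains a plausible explanation.")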

 

Most correlation, quasi-experimental, and experimental studies have an implied null hypothesis; historical and descriptive studies may not.

 

For example, if you are interested in the growth of immunization during the last 70 years (historical) or how people feel about school vouchers (descriptive), then you are probably not concerned with positing a null hypothesis.

 

What Makes a Good Hypothesis?


Hypotheses are educated guesses. Some guesses are better than others right from the start. I cannot stress enough how important it is to ask the question you want to be answered and to keep in mind that any hypothesis you present is a direct extension of the original question you asked. This question will reflect your own personal interests as well as previous research.

 

Good hypotheses are declarative in nature and posit a very clear and unambiguous relationship between variables.

 

With that in mind, here are some criteria you might use to decide whether a hypothesis you read in a research report or the ones you formulate are acceptable.

 

Let’s use an example of a study that examines the effects of after-school child-care programs for employees who work late on the parents’ adjustment to work. The following is a well-written hypothesis:

 

Parents who enroll their children in after-school programs will miss fewer days of work in one year and will have a more positive attitude toward work as measured by the Attitude Toward Work (ATW) Survey than parents who do not enroll their children in such programs.

 

Here are the criteria for evaluating whether a hypothesis is good.


1.  A good hypothesis is stated in declarative form and not as a question. Hypotheses are most effective when they make a clear and forceful statement.

 

2. A good hypothesis posits an expected relationship between variables. The example hypothesis clearly describes the relationship between after-school child care, the parents’ attitude, and the absentee rate.

 

These variables are being tested to determine whether one (enrollment in the after-school program) has an effect upon the others (absentee rate and attitude).

 

Notice the word expected in the second criterion? Defining an expected relationship is intended to prevent the fishing-trip approach (sometimes called the shotgun approach), which may be tempting to take but is not very productive.

 

In the fishing-trip approach, you throw out your line and pull in anything that bites. You collect data on as many things as you can, regardless of your interest or even whether collecting the data is a reasonable part of the investigation. 

 

Or, put another way, you load up the guns and blast away at anything that moves. You are bound to hit something. The problem is that you may not want what you hit and, worse, you may miss what you want to hit—even worse (if possible), you may not know what you hit!

 

Good researchers do not want just anything they can catch or shoot—they want specific results. To get such results, researchers must formulate their opening questions and hypotheses in a manner that is clear, forceful, and easily understood.

 

3. Hypotheses reflect the theory or literature upon which they are based. The accomplishments of scientists can rarely be attributed to only their hard work.

 

Their accomplishments also are due to the work of many other researchers who have come before them and laid a framework for later explorations. A good hypothesis reflects this; it has a substantive link to existing literature and theory.

 

In the previous example, let’s assume that the literature indicates that parents who know their children are being cared for in a structured environment can be more productive at work.

 

Knowledge of this would allow a researcher to hypothesize that an after-school program would provide parents the security they are looking for, which in turn allows them to concentrate on work rather than on awaiting a phone call to find out whether Max or Sophie got home safely.

 

4. A hypothesis should be brief and to the point. Your hypothesis should describe the relationship between variables in a declarative form and be as succinct (to the point) as possible.

 

The more succinct the statement, the easier it will be for others (such as your master’s thesis committee members) to read your research and understand exactly what you are hypothesizing and what the important variables are.

 

In fact, when people read and evaluate research, the first thing many of them do is read the hypotheses so they can get a good idea of the general purpose of the research and how things will be done. A good hypothesis defines both these things.

 

5. Good hypotheses are testable hypotheses. This means that you can actually carry out the intent of the question reflected in the hypothesis. You can see from the sample hypothesis that the important comparison is between parents who have enrolled their child in an after-school program with those who have not.

 

Then, such things as attitude and number of workdays missed will be measured. These are both reasonable objectives. Attitude is measured by the ATW Survey (a fictitious title, but you get the idea), and absenteeism (the number of days away from work) is an easily recorded and unambiguous measure.

 

Think about how much harder things would be if the hypothesis were stated as Parents who enroll their children in after-school care feel better about their jobs. Although you might get the same message, the results might be more difficult to interpret given the ambiguous nature of words such as feel better.

 

In sum, complete and well-written hypotheses should

  • be stated in declarative form,
  • posit a relationship between variables,
  • reflect a theory or a body of literature upon which they are based,
  • be brief and to the point, and
  • be testable.

 

When a hypothesis meets each of these five criteria, then it is good enough to continue with a study that will accurately test the general question from which the hypothesis was derived.

 

Samples and Populations


As a good scientist, you would like to be able to say that if Method A is better than Method B, this is true forever and always and for all people. Indeed, if you do enough research on the relative merits of Methods A and B and test enough people, you may someday be able to say that, but it is unlikely. Too much money and too much time (all those people!) are required to do all that research.

 

Our goal is to select a sample from a population that most closely matches the characteristics of that population.

 

However, given the constraints of limited time and limited research funds which almost all scientists live with, the next best strategy is to take a portion of a larger group of participants and do the research with that smaller group.

 

In this context, the larger group is referred to as a population, and the smaller group selected from a population is referred to as a sample.

 

Samples should be selected from populations in such a way that you maximize the likelihood that the sample represents the population as best as possible. The goal is to have the sample resemble the population as much as possible.

 

The most important implication of ensuring the similarity between the two is that, once the research is finished, the results based on the sample can be generalized to the population. When the sample does represent the population, the results of the study are said to be generalizable or to have generalizability.
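A minimal sketch of the idea, using invented scores and simple random sampling (only one of several selection strategies): draw a sample and check how closely its mean tracks the population mean.

import random
import statistics

random.seed(7)

# A hypothetical population of 10,000 test scores.
population = [random.gauss(100, 15) for _ in range(10_000)]

# A simple random sample of 200 drawn without replacement.
sample = random.sample(population, k=200)

print(f"Population mean: {statistics.mean(population):.1f}")
print(f"Sample mean:     {statistics.mean(sample):.1f}")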

 

Statistical significance is a measure of how much risk we are willing to take when reaching a conclusion about the relationship between variables.

 

Understanding Methodologies: Mixed Approaches


Mixed approaches are certainly becoming ever more common in social science research. Since the 1980s there has been a growing acceptance of research that traverses the traditional divide between quantitative and qualitative. I think there are two very good arguments for this. The first is that mixed approaches can overcome the shortcomings and biases inherent in each individual approach.

 

The thinking here is that if a researcher has a paradigmatic preference, if they ‘see’ themselves as a quantitative or qualitative researcher, then they may be working under a set of assumptions that can narrow their worldview. Accordingly, their ways and means of understanding and exploring the world can be limited.

 

Mixed methodology: incorporating quantitative and qualitative paradigms, approaches, concepts, methods and/or techniques in a single study.

The second reason, certainly linked to the above, is that mixed approaches have the potential to be expansive. This can start with an openness to various ontologies and epistemologies, through to an open-minded approach to the selection of methods of data collection and analysis.

 

This means that mixed methods research can allow for methodological diversity, the complementarity of approaches, and both inductive and deductive reasoning. Researchers can work to creatively develop question-driven approaches no longer limited by paradigm.

 

Practically speaking, this allows mixed methods research to:

  • build a broader view by adding depth and insights to ‘numbers’ through the inclusion of dialogue, narratives, and pictures;
  • add precision to ‘words’ through the inclusion of numbers and statistics (which can make results more generalizable);
  • use various research protocols in phased stages;
  • facilitate the capture of varied perspectives;
  • allow for triangulation.

 

Challenges and Possibilities


All this sounds exceedingly positive. And mixed methodologies certainly make sense; why not take advantage of both traditions and build as rich a picture as possible? This goal is certainly admirable and definitely worth exploring. But as with everything worthwhile, the advantages need to be balanced with an awareness of obstacles and challenges.

 

For example, one thing you need to work through is whether or not you are likely to have issues with quantitative and qualitative paradigmatic assumptions that may be at odds with each other.

 

There are definitely traditionalists out there who argue that the assumptions underlying quantitative and qualitative traditions do not allow for a mixed approach.

 

The paradigms are at cross-purposes and cannot work in concert. Others, however, suggest that the logic that underpins various research paradigms is compatible and that methodological choice should always be based on what is useful in answering your question, regardless of any philosophical or paradigmatic assumptions.

 

Ask yourself where you sit. Can you reconcile the traditions? Can you be open to the possibility of more than one way of seeing the world?

 

If you believe this openness, acceptance, and appreciation around methodology is possible, you still need to consider whether you are willing to, and have the time to, learn about two distinct paradigms and their approaches to exploring the world.

 

This includes the assumptions that underpin both traditions; the techniques they each employ in collecting and analyzing data; and how to work towards appropriate criteria for credibility (i.e. understanding the difference between validity and authenticity, reliability and dependability, generalizability and transferability, etc.) and how these are best ensured.

 

Now, learning ‘about’ something is one thing, but you also need to ask yourself whether you are willing to practice the research skills associated with both paradigms. Can you become adept at collecting and analyzing both quantitative and qualitative data?

 

And this can be a challenge. Open-ended questions, for example, are often asked in ‘quantitative’ surveys, but rarely analyzed to their full potential. Similarly, students who are more familiar with qualitative work, but wish to quantify some of their data, can let fear of statistics limit their analysis.

 

Finally, you will need to consider the practicalities associated with using a mixed approach. You are likely to have limits on what you can achieve, so you will need to be mindful of overambitious design and the possibility that you are trying to do two projects instead of one.

 

So given these considerations, is a mixed methodological approach something you should pursue? Well, the main prerequisites in any research design are:

(1) that your approach answers your question; (2) that it is right for you as a researcher; and (3) that your design is doable. Thus, deciding on the efficacy of any mixed methodological approach comes down to the exploration of these criteria. Ask yourself:

 

1. Do I believe that a mixed approach is the best way to answer my research question? This is, by far, the most crucial question. If a mixed approach does not make sense or add anything to the well-considered research question you wish to answer, then no argument can outweigh this central consideration.

 

If, however, after thoughtful consideration, expert opinion and some good old-fashioned reading, you believe the answer is yes, a mixed method approach is appropriate to the question, then ask:

 

2. Is this the right approach for me? Am I willing to learn about and develop the skills necessary to carry off more than one approach?

 

Belief, dedication, and skills are central to the conduct of credible research, so you need to be honest in your assessment. Talk to your supervisor and other researchers about the challenges of working across two traditions. If you are still up for it, ask:

 

3. Is this doable? Will a mixed approach be practical due to supervisory, time and/or financial constraints? Practicality must always be taken into account. If your supervisor is uncomfortable with your methods and/or does not feel experienced enough to supervise across both traditions, you will need to think about the issue of support.

 

It will also be a real challenge if you run out of time and money. Deadlines always come up more quickly than you realize and I know of very few students with unlimited funds.

 

The idea of the three questions above is to allow you to assess the efficacy of a mixed approach for answering your question, as well as to honestly assess how a mixed approach sits with your perspective on research – while always minding practicalities. After all, what we are after is the most appropriate approach in a real-world situation.

 

Perspectives and Strategies


Okay – you see this mixed methods stuff as having real potential for your research – and you feel ready to jump in and start designing a study that can collect both qualitative and quantitative data.

 

Well, before you get going, it is worth knowing that there are a number of ways to think about and approach a mixed study – at both theoretical and operational levels.

 

Theoretical perspectives

At a theoretical level, there are actually a number of ways to justify the choice of mixed methodology. In fact, if you are going to pursue mixed methods, it is good to consider your beliefs about mixed approaches and your justification for their adoption. One potential breakdown of these perspectives follows:

 

Paradigm perspective 

With this perspective, you allow room for both quantitative and qualitative traditions within a broader worldview. Historical distinctions such as quantitative and qualitative are seen as constructs that can be rewritten – and in fact should be rewritten, as more holistic knowledge of the world evolves.

 

Methodology perspective 

With this perspective, paradigmatic assumptions are seen as real/distinct, but quantitative and qualitative traditions are both seen as valued.

 

Methodologies and methods thus follow appropriate paradigm-based rules and are treated discretely. Both, however, should be incorporated in a single study in order to capitalize on complementary strengths.

 

Research question perspective


With this perspective, methods are determined by the research question. It is logic that determines the appropriateness of a mixed approach, rather than any paradigm. This might also be called a non-paradigmatic stance where issues of design sit below explicit paradigmatic considerations. This is often the case in small-scale/applied research.

 

Methods/data perspective

With this perspective, mixed methods are a means for collecting and analyzing both quantitative and qualitative data (numbers and words/images) and, as such, are not inherently tied to issues of paradigm. Words and numbers can be a part of any and all paradigms.

 

Operational Perspectives

The theoretical perspectives discussed above will help you rationalize your choice of a mixed approach and help you develop arguments around the credibility of your methodological approach.

 

But they only go so far in telling you how to go about your research – how to operationalize your mixed methodology. And there is definitely more than one way to attack a mixed study, with each approach leading you to quite varied research designs and research strategies.

 

Quantitative Approach with Acceptance of Qualitative Data


I was once told (by a statistician) that mixed methodology was all about adding a bit of qualitative flesh to quantitative bones. The underlying premise here is, no surprise, that at the heart of a mixed approach is a belief in the quantitative.

 

Researchers who think this way tend to accept the underlying assumptions of the quantitative tradition but are willing to accept that qualitative data might help ‘flesh out’ their study.

 

And there are certainly benefits in this qualitative color – both for depth of understanding and for the construction of more powerful narratives – but qualitative data is generally seen as the second cousin.

 

The most common example here is designing a closed-question survey that also asks a few open-ended questions to allow for further exploration; for example, you survey your sample using yes/no, agree/disagree, Likert-scale items and ranking scales – but then ask for further explanation that allows respondents to write out their responses.

 

And while asking for more depth makes sense, the challenge for those ensconced in a quantitative paradigm who have decided to collect qualitative data is allowing the qualitative data to do the work that it can/should. In fact, it is not uncommon for this type of qualitative survey data to be (1) quantified or (2) ignored.
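To make the first of these pitfalls concrete, here is a minimal sketch (in Python, with invented responses and an invented keyword scheme) of the kind of simple quantification open-ended survey data often receives – coding each response against a keyword frame and tallying theme frequencies. A real coding frame would be developed inductively from the data itself, and fuller qualitative analysis would go well beyond counts like these.

```python
from collections import Counter

# Hypothetical open-ended responses to a 'please explain' survey item.
responses = [
    "The training was too short and the room was crowded",
    "Great trainer, but the sessions felt rushed",
    "Crowded venue made it hard to hear the trainer",
]

# A hypothetical coding frame: theme -> keywords taken to signal it.
coding_frame = {
    "session_length": ["too short", "rushed"],
    "venue": ["crowded", "room", "venue"],
    "trainer": ["trainer", "presenter"],
}

# Code each response against the frame and tally theme frequencies.
tally = Counter()
for text in responses:
    lowered = text.lower()
    for theme, keywords in coding_frame.items():
        if any(kw in lowered for kw in keywords):
            tally[theme] += 1

print(tally.most_common())
# [('session_length', 2), ('venue', 2), ('trainer', 2)]
```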

 

The warning here is that even if using qualitative data as a supplement within a more quantitative study, there is still a need to engage with qualitative thinking/methods that allow that qualitative data to do its job effectively.

 

Qualitative Approach with Acceptance of Quantitative Data


It probably won’t come as a surprise, but there are actually a few qualitative researchers out there who are a bit wary of numbers. Their predilection for quality over quantity has left them questioning whether working with quantities means a lack of quality.

 

Luckily this position is softening, with more and more qualitative researchers accepting the power of numbers and recognizing that they can be capitalized on, even given the underlying assumptions of the qualitative tradition.

 

So how might this manifest? Well, one example might be an ethnographic study that embeds a small community survey; for example, you are exploring a local church community, trying to get a feel for what it means to be a member of this church.

 

In fact, you’ve joined the church and rented a small nearby apartment for a month. And while you are getting much from the lived experience, you decide to supplement your ethnographic research with a short community survey.

 

Another possibility is that you conduct a series of in-depth interviews and decide to quantitatively code the data for tallying/statistical analysis; for example, you decide to explore the church community through a series of 50 interviews.

 

As you look for themes, you realize that with 50 respondents, it might be of value to produce a few pie charts or bar graphs to visually represent some of your findings.
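As a minimal sketch of that final step (Python with matplotlib; the theme counts here are invented for illustration), tallied interview themes can be turned into a simple bar graph:

```python
import matplotlib.pyplot as plt

# Hypothetical theme counts coded from 50 church community interviews.
themes = {"belonging": 34, "tradition": 21, "service": 18, "doubt": 9}

# A simple bar graph to visually represent the coded themes.
plt.bar(list(themes.keys()), list(themes.values()))
plt.ylabel("Respondents mentioning theme (of 50)")
plt.title("Themes across 50 interviews")
plt.tight_layout()
plt.show()
```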

 

You might also complement a case study with an examination of existing data; for example, you delve into the church community and engage in interviews and observation. But you also look at attendance figures and membership data that the church has kept over the last 10 years.

 

The basic premise here is that in-depth exploration under a qualitative framework will best answer the research question, but that quantification, of at least some of the data, makes sense and can add to the analysis.

 

Whether in the form of an embedded survey, quantifying what is traditionally seen as qualitative data, or exploring existing data, quantitative data can add breadth to a study and may even work towards making it more representative.

 

Even if using quantitative data as a supplement to a more qualitative study, there is still a need to engage with quantitative thinking/methods that allow quantitative data to do its job effectively.

 

Phased Approach


A really interesting practice in mixed methods is using one method to build the efficacy of another. You might, for example, conduct a few key informant interviews at the start of a project in order to facilitate the development of a survey.

 

For example, you interview a foreman and the head occupational health and safety officer of a mining company in order to determine the main issues that affect workplace stress so that you can produce a relevant employee survey.

 

This is sometimes called an instrument design model: the final analysis will be quantitative, but the quantitative tool used to generate these results is reliant on qualitative methods.

 

On the flip side you might conduct key informant interviews after a survey to add depth to survey findings; for example, you conduct your survey on employee stress, analyze results and then have focus groups to discuss key issues arising from the analysis. This is sometimes referred to as an explanatory model.

 

The phase two qualitative data is there to offer a fuller and richer explication of the quantitative findings. Of course, there is also the possibility that new and unexpected findings will be uncovered through this type of process.

 

The challenge is knowing how to integrate findings that do something other than validate and expand upon your survey results – a difficult, but rewarding, situation.

 

Triangulation Approach

The thinking behind the triangulation approach is that confirmation of any particular data source occurs through comparison with, and validation against, a different data source.

 

So rather than being reliant on survey data alone, or solely interview data, or only data from document analysis, studies designed under a triangulation banner, by design, gather various types of data to look for corroboration that improves the overall robustness and credibility of a study.

 

Under the banner of a triangulation approach, gathering quantitative and qualitative data need not happen simultaneously. Data collection is not phased; one process does not rely on another.

 

In fact, data collection processes are designed to be independent of each other. Data is analyzed separately, with integration occurring through a discussion of commonalities and divergences in findings.

 

The challenge occurs when varied data sources point to divergent results, and triangulation does not occur. In this case, researchers need to reflexively consider both methodological design and the robustness of potential findings.

 

Question-Driven Approach


This is a direct operationalization of the theoretical position discussed above. The question-driven approach puts questions before paradigm, and privileges neither the quantitative nor the qualitative tradition.

 

It simply asks what strategies are most likely to get the credible data needed to answer the research question; and sees researchers adopting whatever array of strategies they think can accomplish the task, regardless of paradigm.

 

As required by the research question, this can include any of the quantitative, qualitative or phased approaches discussed above. It might also involve a study that is looking for both in-depth understanding and broader representation.

 

For example, a project on the experience of bullying in a local high school might ask what it feels like (suited to phenomenological approaches) and how common it is (suited to survey research).

 

Another possibility is a study that targets two groups of respondents that require different approaches; for example, a study exploring workplace culture that targets general employees (suited to survey research) and upper-level management (suited to key informant interviews). In my work, I find this a very common driver of mixed methods approaches.

 

I generally work with students doing applied research in organizational settings. Students need to develop a work-based research question that can be answered in a matter of weeks or months.

 

It is recommended that research questions be tight and highly useful to their organization. Because of the work-based nature of their project, a good percentage of students want to understand how individuals within an organization feel about a new challenge, new practice, new policy, impending threat, etc.

 

And for many, that means engaging with a variety of stakeholder groups and needing an array of strategies to accomplish this. Interviews, focus groups and surveys often become part of the same methodological plan – simply because it is deemed to be the best/most efficient way to gather the necessary data.

 

I have to say that I am an advocate of the question-driven perspective. And that is because I value both the quantitative and qualitative traditions and understand the strengths and shortcomings of each.

 

So I am open to anything from classic quantitative and qualitative approaches to quite eclectic multi-method approaches.

 

My criterion for design is simply what will gather the most credible data possible. As well, I believe that even when selecting from the approaches listed above, a key criterion should be whether an approach will best answer your question in a practical way.

 

This makes the question-driven approach compatible with the reflective decision-making you will need to apply to all the other approaches.

 

Credibility in Mixed Methods Research


Now, it is worth pointing out challenges particular to mixed approaches. Basically, the challenges associated with mixed methods center on the need to be very clear on what you are trying to achieve, and thus what credibility indicators are appropriate for your work.

 

The indicators that are most expected and accepted in quantitative work are objectivity, validity, reliability, generalizability, and reproducibility. These are not only the gold standard for quantitative research but are often seen as the gold standard for all research.

 

So what happens when you incorporate qualitative research into a basically quantitative study, and you are no longer working with representative samples or even a single verifiable truth?

 

How do you credibly weave in post-positivist indicators such as neutrality, authenticity, dependability, transferability, and suitability; and is it even appropriate to do so?

 

And what happens if you are arguing a more post-positivist, qualitative framework and you suddenly want to argue the importance of a representative sample? Can you work between the two sets of assumptions that drive these paradigms in a way that allows your research to be seen as credible?

 

Well, I think it can be done and would suggest it be done method by method. The steps you might want to include are:

 

(1) make sure your use of a mixed approach is warranted; (2) ensure that each method you employ will add important/insightful data to your research; (3) make sure you are engaged in best practice for that particular method; (4) show how your best practice approach meets the credibility indicators appropriate to your particular research method.

 

As long as the question that each method is trying to explore is well matched and you can show rigor in your processes, you should be able to make sound arguments for using indicators that span the quantitative/qualitative divide.

 

If you are enticed by a mixed methods approach, you may want to delve into some of the more specialist readings in the ‘mixed’ area.

 

Understanding Methodologies


As far as the pursuit of pure research is concerned, I cannot think of too many ‘ivory tower’ researchers who conduct their research without some practical purpose in mind. In fact, all research proposals demand a rationale that highlights the scientific or social significance of the research questions posed.

 

In this type of research, however, applying findings is not part of the researcher’s agenda. For those involved in applied/evaluative research, change is more closely tied to a project’s objectives. Knowledge production is, in fact, driven by the immediate need for information that can facilitate practical, effective, evidence-based decision-making.

 

Action research takes this a step further – rather than expecting change because of research, it actually demands change through research processes. Finally, emancipatory research attacks change at the most fundamental levels and includes liberation and self-determination in its agenda.

 

Basic research Research that is driven by a desire to expand knowledge rather than a desire for situation improvement.

 

Applied research Research that has an express goal of going beyond knowledge production towards situation improvement.

 

Emancipatory research

Research that exposes underlying ideologies in order to liberate those oppressed by them. All researchers want their research to be useful, at least at some level, in the real world.

 

The question is whether that use involves the production of knowledge that may eventually lead to change, or whether change itself will be a direct product of the research process. Regardless, it is well worth seeing research as a political endeavor.

 

Evaluative Research


If there is one thing we are not short of it is initiatives. In order to improve a situation, we are willing to try new things: new products, new practices, new policies, new legislation, new interventions, new programmes, new strategies, new structures, new routines, new procedures, new curricula, etc., etc. But how successful are our endeavors?

 

Did whatever we try, do whatever it was supposed to do? Have we been able to make some contribution towards positive change? Have we been able to alleviate a problem situation? Answering these types of questions is the goal of evaluative research.

 

Evaluative research Research that attempts to determine the value of some initiative. Evaluative research identifies an initiative’s consequences as well as opportunities for modification and improvement.

 

The need for evaluative studies is ever increasing. A well-conducted evaluation is now a key strategy for supplying decision-makers with the data they need for rational, informed, evidence-based decision-making. In fact, change intervention proposals increasingly require evaluative components so that assessment is embedded into the management of change from conception.

 

Evaluative studies basically attempt to determine whether an initiative should be continued as is, modified, expanded or scrapped, and do this by asking various stakeholder groups two types of questions.

 

The first is related to outcomes; for example, did a particular initiative meet its objectives? The second is related to process; for example, how successful was a particular initiative’s implementation and how might it be improved?

 

Summative/Outcome Evaluation

Summative evaluation, also referred to as outcome evaluation, aims to provide data and information related to the effectiveness of the change strategy in question (that goals, aims, and objectives have been met) and its efficiency (that the effects justify the costs).

 

The idea here is to investigate whether an initiative is responsible for outcomes that would not have occurred if it were not for the initiative, and this should include both intended and unintended effects.

 

Now in the real world, the financial bottom line is almost always a factor, so many outcome evaluations also include data related to cost-effectiveness, often in the form of a cost-benefit analysis.
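At its simplest, the arithmetic behind such a cost-benefit comparison looks something like the following sketch (Python; all figures are invented, and a real analysis would also discount future costs and benefits over time):

```python
# Hypothetical annual figures for an initiative under evaluation.
costs = {"staff": 120_000, "materials": 15_000, "training": 8_000}
benefits = {"reduced_sick_days": 95_000, "productivity_gains": 70_000}

total_cost = sum(costs.values())
total_benefit = sum(benefits.values())

# Benefit-cost ratio: a value above 1 suggests benefits outweigh costs.
bcr = total_benefit / total_cost
print(f"Cost: {total_cost}  Benefit: {total_benefit}  BCR: {bcr:.2f}")  # BCR: 1.15
```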

 

The results of outcome evaluations are expected to inform decision-making related to programme funding, continuation, termination, expansion and reduction. While findings are often case-specific, results can be of interest to any number of stakeholder groups and, depending on the nature of the change intervention, might be of interest to the wider population as well.

 

Methods Appropriate to Summative Evaluation

So what exactly is involved in the conduct of an evaluative study? Well, rather than being defined by any particular methods, an evaluative study is distinguished by its evaluative goals, and it is these goals that determine the appropriate approach. In summative evaluation, the main goal is to find out if an initiative worked.

 

In other words, whether it met its objectives. As an evaluator exploring outcomes, you will need to determine whether success is to be measured from the perspective of the provider, the recipients, the wider community or all of these. You need to determine which outcome objectives are to be explored and whose perspectives you seek. Your methods will then vary accordingly.

 

Provider perspective 

When designing methods, there are two general ways to find out if providers believe an initiative is a success. The first is to ask. Interviews and focus groups allow you to talk to those responsible for design, delivery, and implementation, as well as those with a higher level of organizational responsibility.

 

The second method is to look at the documentary evidence. This is particularly relevant for questions that focus on cost-effectiveness, or anywhere that evidence of success is likely to be in ‘records’.

 

Recipient perspective 

This is where you really get down to brass tacks and see if the initiative’s change-oriented outcome objectives have been met. Many (including myself) would argue that the best way to do this is through experimental or quasi-experimental designs that allow for comparison across groups and time. There are three possibilities here:

 

Case/control design 

To see whether an initiative has made a difference for a target group, you can use a control group to compare those who have undergone an initiative with those who have not.

 

Before/after design 

Sometimes called ‘time series analysis’, this approach allows for comparison of the same group of inpiduals before and after an initiative is implemented.

 

Case/control – before/after design 

This allows for even more definitive results by combining the two methods above.

All three of these approaches require forward planning and, as discussed at the end of this section, this is not always possible. The alternative is to evaluate perceptions of change, rather than change itself, by surveying or interviewing recipients after implementation. The goal here is to see if recipients believe that change has occurred.
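As a minimal sketch of the comparisons that sit behind the first two designs above (Python with scipy; all scores are invented, and a real evaluation would also check test assumptions and report effect sizes):

```python
from scipy import stats

# Case/control design: hypothetical outcome scores for a group that
# underwent the initiative versus a comparable group that did not.
treated = [72, 68, 75, 80, 77, 71]
control = [65, 70, 62, 66, 68, 64]
t_cc, p_cc = stats.ttest_ind(treated, control)

# Before/after design: hypothetical scores for the same individuals
# measured before and after implementation (paired comparison).
before = [60, 64, 58, 66, 62]
after = [68, 70, 61, 74, 69]
t_ba, p_ba = stats.ttest_rel(before, after)

print(f"case/control:  t={t_cc:.2f}, p={p_cc:.3f}")
print(f"before/after:  t={t_ba:.2f}, p={p_ba:.3f}")
```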

 

Wider community perspective 

Initiatives often include objectives related to stakeholder groups that are not direct recipients. For example, a school initiative to curtail bullying may include an objective related to decreasing parent/community anxiety. Or a health care initiative may include an objective related to improving an organization’s reputation in the community.

 

The methods of choice here are surveys and focus groups. And while such approaches generally ask community members to report on their perceptions and recent changes in those perceptions, the collection of similar data prior to the initiative will allow you to engage in direct comparison.

 

Formative/Process Evaluation

Formative evaluation, also referred to as process evaluation, aims to provide data and information that will aid further development of a particular change initiative. Such studies investigate an initiative’s delivery and ask how, and how well, it is being implemented.

 

These studies can assess strengths, weaknesses, opportunities, and threats, and often work to assess the factors acting to facilitate and/or block successful implementation.

 

The results derived from process evaluations are expected to inform decision-making related to programme improvement, modification and management. And while these studies also tend to be case-specific, ‘transferable’ findings will allow other organizations interested in the use of any similar initiatives to apply ‘lessons learned’.

 

Methods Appropriate to Formative Evaluation

The main objective in formative or process evaluation is to assess an initiative’s strengths and weaknesses and ask how the process could be made more efficient and effective.

 

Stakeholder perspectives again play an important role here since providers, recipients and the wider community are likely to have quite varied opinions on what did and did not work so well. Design of methods is, therefore, highly dependent on working out precisely what you want to know and whose perspective you seek:

 

Provider perspective 

The methods you use here will be highly dependent on the complexity and diversity of the groups responsible for provision. For example, at one end of the spectrum, you might be asked to evaluate a classroom initiative driven by a particular teacher. In this case, an in-depth interview would make the most sense.

 

At the other end of the spectrum, you might be evaluating a new government health care policy whose design, development and implementation have involved individuals working at various levels of government and private industry.

 

With this level of complexity, you might need to call on multiple methods – interviews, focus groups, and even surveys – to gather the data you require. There might also be value in direct observation of the process, or in a document review that finds you trawling through and examining records and minutes related to the process being explored.

 

Recipient perspective 

Just because management thinks something went well doesn’t mean recipients will think the same. Good process evaluations will go beyond the provider perspective and seek recipient opinions on strengths, weaknesses and potential modifications.

 

As with providers, target groups also vary in size and complexity, and you might find yourself calling on a variety of methods, including interviews, focus groups and surveys, to gather the data you require.

 

Wider community perspective

The first question you need to ask here is ‘Do you or your “client” want wider community opinion?’ You might not feel that the wider community is a relevant stakeholder, or that broader community groups have the prerequisite knowledge for providing informed opinion.

 

On the other hand, the initiative under review might have far-reaching implications that affect the community or might be related to a problem where the community sees itself as a key stakeholder; for example, an initiative aiming to stop neighborhood graffiti.

 

In this situation, canvassing wider opinion on an initiative’s strengths and weaknesses may be of value. The methods most likely to be called upon here are surveys, focus groups and possibly key informant interviews.

 

The Politics of Evaluative Research


In criticism I will be bold, and as sternly, absolutely just with friend and foe. From this purpose nothing shall turn me.

Edgar Allan Poe

 

It is said that all research is political, but none more so than evaluative research. It would be naive to pretend otherwise. Vested interests are everywhere and the pressure for researchers to find ‘success’ can be high.

 

So how do you begin to negotiate and balance the sometimes competing political and scientific goals of evaluative research? The first step is to understand researcher–researched realities and relationships.

 

For example, those seeking to have initiatives evaluated do not always have the same goals. Yes, some want honest and open feedback, but others might be after validation of what they have done, while others might just be doing what they need to do to meet funding requirements. And of course, some may be after a combination of these.

 

The same is true of researchers, whether insiders or outsiders; not all evaluative researchers operate with the same style, skills or goals. For example, some see themselves as objective researchers whose clear and unwavering objective is credible findings regardless of political context.

 

Others operate at a more political level and are highly in tune with government/organization/political funding realities, and perhaps their own ongoing consultancy opportunities.

 

There are others who tend to be overcritical and need to show their intelligence by picking holes in the work of others. Finally, there are those who see themselves as facilitators or even mentors who are there to help.

 

When I first began doing evaluative research I came across and learned from all of these styles and assumed that my way forward would be as an objective researcher. But I soon realized that the political end of evaluative research cannot be ignored and that the key to real-world evaluation is flexibility.

 

Now my main grounding objective, which is tied to my own professional ethics, is to produce credible and useful findings. But how those findings are presented, what is emphasized and what is sometimes best left unsaid, are undeniably influenced by both politics and context.

 

I have worked with quite a few evaluators and I think the best ones are politically astute but always work under a code of professional ethics and integrity.

 

Some will adapt their style depending on the client and context, while others will stay true to a certain way of working. But almost all good evaluators understand the need to negotiate clear expectations that meet both client and researcher needs and goals with integrity.

 


Negotiating Real-World Challenges of Evaluative Research

Political realities are not the only challenge to the production of credible data in evaluative research. Evaluations tend to be conducted within messy and chaotic real-world settings, and you will need to skilfully negotiate this level of complexity if you want to produce solid, valuable results.

 

Now if it were up to me, all initiatives to be evaluated would be well established with clear and measurable aims and objectives. But rarely is this the case. You often need to find ways to work around circumstances that are less than ideal. Such situations include the following.

 

When the Decision to Evaluate Comes after Initial Implementation

It would be terrific if the need to evaluate was a recognized part of project planning from conception. You would then have all the options. Early planning would allow you to design comparative studies such as randomized controlled trials, quasi-experiments with control groups, or before and after designs.

 

But there are plenty of circumstances where you will need to undertake evaluations where the evaluative planning was but an afterthought – thereby limiting your methodological options.

 

The key here is remembering that evaluative studies, particularly those studies related to outcomes, are all about comparison. And by far the best way to compare is by using at least two data sets.

 

Effective evaluations are based on either before and after data (data collected before the initiative that can be compared with data collected after the initiative), or case/control data (data collected from two groups, one that has undergone the initiative and one that has not).

 

Without the aid of forward planning, you will need to consider if either of these options is available to you. In other words, you will need to determine whether you will be able to collect solid, relevant baseline data, or whether you will be able to find a comparable control group.

 

If you can, rigorous evaluation is not too problematic. But if baseline data or a comparable control group is not available, you are left with the following methodological options:

 

1. Do a ‘post group only’ study in which you ask stakeholders about the effects (on knowledge, attitude and/or practice) of the initiative under review. While generally not as strong as truly comparative methods, this approach can still have value. The key here is clear expectations. Your clients need to be aware of your methodological constraints and how they might affect findings.

 

2. Limit your study to process evaluation that centers on stakeholders’ reflections on an initiative’s design, delivery, and implementation.

 

When Objectives Are Not Clearly Articulated or Are Not Readily Measurable

If you want to know if an initiative has achieved its goals, then you clearly need to know two things: what those goals were/are; and how they might be measured. Now by far, the best objectives are those that are ‘SMART’: specific, measurable, achievable, relevant and time-bound.

 

If your initiative has been developed with such objectives in mind, in terms of methodological design, you are half-way there. By definition, your objectives are measurable – so you just need to go out and measure.

 

But for initiatives where objectives are not clearly articulated or are not measurable, you have to do a bit more work. Since you simply cannot evaluate non-existent or waffly objectives, you will need to:

 

  • Work with stakeholders to clearly draw out and articulate the initiative’s objectives – this may involve working through ‘programme logic’ (see below);
  • Decide which objectives will be prioritized for evaluation;

 

  • Determine and agree on how these objectives can be operationalized (e.g. designing a method that can measure ‘the joy of reading in children’ is much more difficult than designing a method that can measure ‘an increase in the recreational reading of third graders by 50% by the end of the year’).
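Once an objective is operationalized in measurable terms like the reading example above, checking it becomes almost mechanical, as in this minimal sketch (Python; the baseline and end-of-year figures are invented):

```python
# Hypothetical measure: average recreational books read per term.
baseline = 4.0          # measured before the initiative
end_of_year = 6.5       # measured after the initiative
target_increase = 0.50  # the objective: a 50% increase by year's end

observed_increase = (end_of_year - baseline) / baseline  # 0.625, i.e. ~62%
met = observed_increase >= target_increase
print(f"Observed increase: {observed_increase:.0%} -> objective {'met' if met else 'not met'}")
```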

 

When the Initiative Has Not Been Going Long Enough to Expect Results

It is not unusual for the timeframe given for evaluation to be shorter than that needed for an initiative to produce its intended results.

 

For example, in health promotion campaigns, goals are often related to disease alleviation, such as reducing the incidence of lung cancer or type-2 diabetes. But such effects are hard to attribute to a particular campaign, and may not be seen for several years.

 

Programme logic


A planning, communication and evaluation model/tool that articulates the details of an initiative, its objectives and how success will be measured.

 

A common strategy used by evaluators facing this situation is to negotiate short- to medium-term outcomes that can be measured within the timeframe available and correlated with the expected long-term outcomes.

 

For example, success might be measured by increased awareness; for example, increased community knowledge about the dangers of smoking or increased awareness of the impact of carbohydrates on insulin.

 

Success might also be measured by changes in behavior, such as reducing the number of cigarettes smoked or decreasing levels of sugar consumption.

 

When Effects are Difficult to Measure/Difficult to Attribute to the Initiative

Suppose you were asked to evaluate a high school sex education programme that has a clear and central goal of increasing abstinence. To evaluate this programme, not only would you need to collect sensitive data from young people (who may not feel comfortable exposing their beliefs/practices), but you would also need to design a research protocol that could control for any other factors that are known to have an effect on abstinence, such as parents, peers and the media.

 

Remember: you are not just looking for a correlation here. You are actually trying to establish cause and effect, and your methods need to control for any other factors that might be causal to any perceived change or difference.

 

Controlling for extraneous factors is the only way to be able to attribute results to the programme itself.

 

The lesson here is that before taking on an evaluative research study, you need to clearly consider, articulate and negotiate what is, and what is not, possible. In the real world, it can be difficult, if not impossible, to control for all extraneous variables that may affect change.

Remember: It is much better to have your methodology critiqued before you undertake a study, rather than after it has been completed!

 

Action Research

The purpose of man is in action, not thought.

Thomas Carlyle

 

In most research approaches, contributions are limited to the production of knowledge. Even in applied/evaluative research where the goal is to have research knowledge become key in evidence-based decision-making, any actual change comes after the completion of research processes.

 

But what if you want to do more than produce knowledge? What if your goals are to go beyond evidence and recommendations? What if your research goals include doing, shifting, changing or implementing? Enter action research.

 

The term ‘action research’ was coined by Kurt Lewin (1946) and represented quite a departure from ‘objective’ scientific method that viewed implementation as discrete from research processes.

 

Under this traditional framework, responsibility for what happened as a consequence of the production of knowledge was not generally part of a researcher’s agenda.

 

Action research


Researchers, however, began to recognize that: (1) the knowledge produced through research should be used for change; and (2) researching change should lead to knowledge.

 

The appeal of a research strategy that could link these two goals while embedding elements of evaluation and learning was quite high, particularly in the fields of organizational behavior and education, where continuous improvement was, and still is, a primary goal.

 

Action research also offered a departure from the notion of ‘researcher’ as an expert and the ‘researched’ as passive recipients of scientific knowledge.

 

It, therefore, had great appeal among community development workers who saw value in a collaborative research approach that could empower stakeholders to improve their own practice, circumstances, and environments.

 

The Scope of Action Research

Action research, as it developed through the disciplines of organizational behavior, education and community development, has traveled down a number of divergent paths, each with its own priorities and emphases.

 

Common across their experiences, however, is a desire for real and immediate change that involves the engagement and involvement of stakeholders as collaborators or co-researchers; prolonged involvement in learning cycles; the production of rigorous, credible knowledge; and the actioning of tangible change.

 

The nature of the potential change can involve anything from improved practice to shifted programmes, policies and systems as discussed below, through to more radical ‘cultural’ shifts that include empowering the marginalized (discussed under emancipatory research).

 

While the goals of any one action research proposal may sit neatly in any one of these categories, it is not uncommon for action research studies to work simultaneously across a number of goals.

 

Improving Practice

Action research can be an effective way of empowering stakeholders to improve their own professional practice. Rather than mandates that come down from on high, or knowledge that comes from outside experts, action research, which is expressly designed to improve professional practice, recognizes that various stakeholders can contribute to their own learning and development.

 

Action research recognizes the professional nature of stakeholders and their ability to conduct meaningful research. In doing so, it helps break down the divide between stakeholders and the ‘academic elite’, and brings research into day-to-day professional practice.

 

Improving practice through action research is quite common in the educational sector where teachers are encouraged to work in ways that develop their own skills and practice.

 

In recent years, however, there has been an increase in action research studies in health care and nursing, where the desire for professional recognition, autonomy, and respect for learned/local knowledge is high.

 

Shifting Systems

Sometimes the action you want to pursue begins and ends with developing your own professional practice, but at other times you may want to work at the organizational level.

 

Beyond practice, you may be interested in working within an organizational setting to improve procedures or programmes. In fact, in the above example, a higher-level goal was to have the findings from an action research study aimed at developing professional practice contribute to the development of effective policy.

 

I cannot think of any organization that could not be improved in some way or another. Inefficient systems, ineffective management, and outdated policy provide action research opportunities for those working in and with businesses, government, and non-government agencies, community groups, etc.

 

But while action research has been around for the better part of 60 years and can offer much to the management of organizational change, it is not generally a core management strategy. The action research literature certainly addresses organizational change, but the change management literature rarely tackles action research.

 

Nevertheless, terms such as learning, education, facilitation, participation, negotiation, and collaboration that are core in action research are also core in change-management speak.

 

This is particularly so in organizations that have recognized the value of on-the-ground knowledge, as well as the role of engagement and ownership in working towards effective and sustainable change. Action research as a strategy for driving workplace-based change can be highly effective in securing stakeholder support.

 

It can also get a wide range of staff working together towards a common goal; provide a systematic and well-established approach to sustainable change; provide a framework for the conduct of research; and embed the concept of research into management practice. It can also be a step along the way in the development of a learning organization.

 

Key Elements of Action Research

Because action research is quite distinct from traditional research strategies, working through its key elements is well worth the time. Understanding the benefits and challenges of this mode of research is an essential preliminary step in determining the appropriateness of action research for any particular context.

 

Addresses Real-World Problems

Action research is grounded in real problems and real-life situations. It generally begins with the identification of practical problems in a specific real-world context. It then attempts to understand those problems and to seek and implement solutions within that context.

 

Action research is often used in workplaces and rural communities where the ownership of change is a high priority or where the goal is to improve professional practice. It is also considered an effective strategy when there is a strong desire to transform both theory and practice.

 

Pursues Action and Knowledge

Action research rejects the two-stage process of ‘knowledge first, change second’, and suggests that they are highly integrated. Action research practitioners believe that enacting change should not just be seen as the end product of knowledge; rather it should be valued as a source of knowledge itself.

 

And we are not talking here about anecdotal knowledge. The knowledge produced from an action research study needs to be credible and must be collected and analyzed with as much rigor as it would be in any other research strategy.

 

Action is also a clear and immediate goal in every action research project. Whether it be developing skills, changing programmes and policies, or working towards more radical change, action research works towards situation improvement based in practice and avoids the problem of needing to work towards change only after knowledge is produced.

 

Participation

The notion of research as the domain of the expert is rejected, with action research calling for the participation of, and collaboration between, researchers, practitioners, and any other interested stakeholders. It minimizes the distinction between the researcher and the researched and places a high value on local knowledge.

 

The premise is that without key stakeholders as part of the research process, outsiders are limited in their ability to build rich and subtle understandings – or implement sustainable change.

 

Contrary to many research paradigms, action research works with, rather than on or for, the ‘researched’, and is therefore often seen as embodying democratic principles. The key is that those who will be affected by the research and action are not acted upon.

 

The nature and level of participation and collaboration are varied and based on the action research approach adopted; the particular context of the situation being studied; and the goals of the various stakeholders. This might find different stakeholders involved in any or all stages and cycles of the process.

 

As for the individuals driving the research process, in addition to taking on the role of lead researcher, at various points throughout the project they might also have to act as a planner, leader, catalyst, facilitator, teacher, designer, listener, observer, synthesizer and/or reporter.

 

While the cycles themselves can be defined in numerous ways, they generally involve some variation on observation, reflection, planning and action. The exact nature of the steps in each part of the cycle is emergent and developed collaboratively with stakeholders who form the research team.

 

Research for the ‘observation’ part of the cycle is likely to be set within a particular case and is likely to involve a variety of approaches, methodologies, and methods in a bid to gather data and generate knowledge.

 

The ‘reflection’ part of the cycle can be informal and introspective or can be quite formal and share many elements with formative evaluations, as discussed earlier.

 

The steps related to ‘planning’ and ‘action’, however, are likely to go beyond reflection and research and may require practitioners to delve into the literature on strategic planning and change management.

 

Proposal Example

Here are a few sections from a longer funding proposal a colleague and I submitted some time back. My goal was to make sure my proposal met the funding body’s specifications quite directly. Surprisingly, they did not ask for any background literature, so none was provided.

 

Project title

Great Speech: De-mystifying Powerful Presentations in the Public Sector

 

Project overview (150–250 words)

We all know outstanding presentations and inspirational speakers when we hear them. We know because we are moved. We know because we want to tell others about it.

 

We know because we feel inspired. Yet inspiring can be a difficult objective to reach. In spite of the abundance of advice, dry, tedious, uninspired presentations are often the norm – public sector presentations included.

 

Change within the public sector, however, is generally reliant on cycles of advocacy; and such cycles often culminate in presentations. Reform is often reliant on influence, so the need to drive an idea and inspire an audience is undeniable.

 

Knowing the best means for influencing an audience through an effective presentation is often challenging, particularly in an information age, where Google and Wikipedia now hold knowledge once the domain of experts.

 

The goal of this project is to offer recommendations for improved teaching and learning in the space of public sector presentations. Through an analysis of 70 of the best, most inspired presentations of the past decade, with particular reference to the public sector, this project will deconstruct the core elements that underlie truly inspirational presentations.

 

The project will then analyze a cross-section of Trans-Tasman public sector presentations in a bid to identify gaps in best practice and thus training needs.

 

Project objectives (100–200 words): The overarching aim of this research project is to offer clear recommendations for improved teaching and learning in the space of public sector presentations.

 

The objectives of this project are:


  • to identify the core elements that make for highly effective, highly motivational presentations;
  • to identify core elements and contextual issues of particular relevance to the public sector;
  • to create a qualitative matrix for easy identification of core elements;

 

  • to assess the effectiveness of presentations in the Australia/New Zealand public sector and identify gaps in effective Australia/New Zealand public sector presentations, in order to develop and enhance teaching and learning within this space.

 

 

Project benefits (100–200 words): Within the public sector, rarely is there an initiative, project, programme or policy reform that does not need to be championed. Advocacy is essential and presentations that fail to motivate can end the run of potentially good reform.

 

This project, with its goal of improving teaching and learning in the arena of public sector presentation, offers benefits to three stakeholder groups.

 

The Trans-Tasman public sector will benefit via increased ability to influence the policy cycle. Improved presentations can lead to a more engaged debate on key public administration issues, and contribute to continuing reform in the public sector.

 

The funding institution will benefit through the development of resources for future teaching and applied learning/knowledge activities. The aim is to enhance leadership in public sector communication training while supporting the development of best practice in government.

 

Students will benefit from increased skills, confidence and levels of influence.


 

Methodology – What research method(s) will your project use (50–150 words)? The methodology will rely on a two-phase qualitative approach reliant on both online and ‘face-to-face’ data.

 

Phase One

Analysis of 70 highly motivational presentations of the past decade. Population: Online presentations (in English) deemed highly motivational by media/speaking experts.

 

Sampling Strategy: Targeted sampling designed to include a wide range of speaker demographics – with a minimum of 35 public sector presentations. Analysis: Development of a best practice matrix through the use of narrative analysis, content analysis and semiotics.

 

Phase Two

Analysis of 30 public sector presentations in the Trans-Tasman region. Population: Presentations at ANZSOG’s annual conference as well as online presentations. Sampling Strategy: Random, cross-sectional. Analysis: Gap analysis via assessment of presentations against the matrix developed in Phase One.

 

All presentations used in this phase will be de-identified and results aggregated. The aim is to identify common gaps in practice rather than to critique individual presentations.
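Though the proposal itself did not include analysis code, the Phase Two gap analysis might be sketched along these lines (Python; the matrix criteria and scores are invented for illustration):

```python
# Hypothetical best-practice matrix scores (0-5) derived in Phase One,
# and mean scores for the Phase Two public sector sample.
best_practice = {"narrative arc": 4.6, "audience focus": 4.4, "visual support": 4.1}
sector_sample = {"narrative arc": 3.1, "audience focus": 2.8, "visual support": 3.9}

# Gap analysis: where does the sector sample fall furthest behind?
gaps = {k: best_practice[k] - sector_sample[k] for k in best_practice}
for criterion, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{criterion}: gap of {gap:.1f}")
# audience focus: gap of 1.6
# narrative arc: gap of 1.5
# visual support: gap of 0.2
```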

 

What is the rationale for using this method/these methods for this project (100–150 words)? The methodology for this project does not neatly fall within one particular approach, or even one particular paradigm, but rather represents a question-driven approach that utilizes both traditional social science methods as well as project management tools.

 

Specifically, this project relies on: sampling strategies developed within the quantitative paradigm; data analysis methods such as content analysis, narrative analysis, and semiotics drawn from the qualitative school;

 

and a gap analysis more traditionally found in project management. Such mixed methodologies are often advocated for applied research not tied to paradigmatic traditions.

 

The ability to draw from varied schools of thought, as well as the ability to leverage the power of the Internet, strengthens the credibility of the methods and allows for the development of context-driven methods. The particular methods to be employed in this project are those considered most likely to give credible results within the desired timeframe.
