
Pragmatic and dialectic mixed method approaches: An empirical comparison

Dissertation
Author: Anne E. Betzner
Abstract:
Mixed methods are increasingly used in the fields of evaluation, health sciences, and education in order to meet the diverse information needs of funders and stakeholders. However, a consensus has yet to develop on the theoretical underpinnings of the methodology. A side-by-side assessment of two competing theoretical approaches to mixed methods, the dialectic and pragmatic, can assist researchers to optimize their use of mixed methods methodology and contribute to the growth of mixed methods theory. This study empirically compares the dialectic and pragmatic approaches to mixed methods and probes key issues underlying the methodology, including unique yield from mixed method studies, the importance of paradigmatic divergence between methods, and the financial demands of mixed method studies. A secondary analysis of a real-world evaluation, this study explores five research questions regarding the convergence, divergence and uniqueness of single method findings; the extent to which mixed methods produce unique findings over and above single methods presented side-by-side; the extent to which studies meet key criteria for validity; stakeholders' perceptions of the utility and credibility of the studies; and the cost of single methods. The pragmatic mixed method study was developed by integrating a post-positivistic telephone survey with weakly interpretive focus groups at the point of interpretation using pragmatic criteria. The dialectic study mixed the same post-positivistic telephone survey with strongly interpretive phenomenological interviews using a Hegelian-inspired dialectic format. All three single methods were examined by a method expert in the field who affirmed the methodologies used. Findings suggest that both mixed method approaches produced unique conclusions that would not have been available by presenting single methods side-by-side. 
However, the dialectic method produced more complex convergence and more divergence, leading it to be more generative than the pragmatic method. The use of stronger as compared to weaker interpretive methods contributed to the generative quality of the dialectic approach. Overall, the dialectic method appears more suitable than the pragmatic approach for exploring more complex phenomena. However, these conclusions are drawn from one study of one real-world evaluation. Much more scholarship is needed to explore the issues raised here.

TABLE OF CONTENTS

CHAPTER 1 INTRODUCTION .......... 1
    1.1. Statement of the Problem .......... 1
        1.1.1. The Need for Research on Mixed Methods .......... 1
        1.1.2. Two Competing Theories of Mixed Methods .......... 4
        1.1.3. Conclusions .......... 6
    1.2. Purpose of the Study .......... 7
    1.3. Conceptual Framework .......... 8
    1.4. Research Questions .......... 8
    1.5. Research Design .......... 12
    1.6. Overview of the Dissertation .......... 12

CHAPTER 2 LITERATURE REVIEW .......... 14
    2.1. Paradigms .......... 14
        2.1.1. Defining Three Key Paradigms along a Continuum .......... 15
        2.1.2. Defining Terms .......... 18
        2.1.3. The Relationship between Methods and Paradigms .......... 21
    2.2. An Historical Review of Mixed Methods .......... 27
        2.2.1. Mixed Methods from a Post-Positivist Perspective .......... 28
        2.2.2. The Growth of Qualitative Methods and the Emergence of Mixed Methods .......... 29
        2.2.3. Triangulation and Role of Qualitative Methods in Mixed Methods .......... 30
        2.2.4. The Development of the Dialectic Approach to Mixed Methods .......... 33
        2.2.5. An Emerging Pragmatic Approach to Mixed Method Studies .......... 41


        2.2.6. Dialectic and Pragmatic Approaches as Generative Endeavors .......... 42
    2.3. Applying Pragmatism to Mixed Methods .......... 43
    2.4. Applying Dialectics to Mixed Methods .......... 48
    2.5. Validity .......... 56
        2.5.1. Trustworthiness: A Unified Framework .......... 57
        2.5.2. The Standards for Program Evaluation .......... 71

CHAPTER THREE RESEARCH METHODS .......... 74
    3.1. Overview of Methodology .......... 74
    3.2. Timeline for Secondary Data .......... 76
    3.3. Research Approach and Rationale .......... 78
    3.4. Research Questions .......... 79
        3.4.1. Research Question 1: Comparison of Single Method Results to One Another .......... 80
        3.4.2. Research Question 2: Comparison of Mixed Method Results to Each Other and Single Methods .......... 82
        3.4.3. Research Question 3: Examination of Validity of Inferences for Studies .......... 88
        3.4.4. Research Question 4: Stakeholder Views of Credibility and Utility .......... 89
        3.4.5. Research Question 5: Cost of Single Methods .......... 91

CHAPTER FOUR RESULTS .......... 93
    4.1. Research Question 1 .......... 93
        4.1.1. Summary of Single Method Findings .......... 93
        4.1.2. Convergence, Divergence and Unique Findings across Methods .......... 103
    4.2. Research Question 2 .......... 116
        Findings of Pragmatic and Dialectic Mixed Method Studies .......... 117


        Comparison of Mixed Method Process and Findings .......... 140
        What unique information do the mixed method findings produce over and above single methods? .......... 147
    4.3. Research Question 3 .......... 150
        4.3.1. Trustworthiness / Validity .......... 152
        4.3.2. The Joint Committee’s Program Evaluation Standards .......... 171
        4.3.3. Expert Review of Single Methods .......... 179
    4.4. Research Question 4 .......... 189
        4.4.1. Respondent Experience with Single Methods .......... 190
        4.4.2. Perceptions of Credibility of Single Methods .......... 191
        4.4.3. Perceptions of Credibility of Mixed Methods .......... 194
        4.4.4. Reported Utility of Mixed Methods .......... 196
    4.5. Research Question 5 .......... 199
        Cost in Billable Researcher Dollars .......... 199
        Cost in Subject Hours .......... 202

CHAPTER FIVE CONCLUSIONS AND DISCUSSION .......... 204
    5.1. Research Questions Answered .......... 206
        5.1.1. Research Question 1 .......... 206
        5.1.2. Research Question 2 .......... 210
        5.1.3. Research Question 3 .......... 212
        5.1.4. Research Question 4 .......... 216
        5.1.5. Research Question 5 .......... 217
    5.2. Conclusions .......... 217
    5.3. Limitations .......... 229
    5.4. Recommendations .......... 232

REFERENCES .......... 221


APPENDIXES
    Appendix A. Stakeholder Survey .......... 235
    Appendix B. Single method findings by topic area .......... 250
    Appendix C. Convergence, divergence and uniqueness of all single method findings by topic area .......... 263
    Appendix D. Extent to which single methods meet the Joint Committee’s Program Evaluation Standards .......... 276
    Appendix E. Expert critique of single methods .......... 284


LIST OF TABLES

Table 1. Validity and Trustworthiness for Post-positivist and Interpretive Paradigms .......... 66
Table 2. Format to Apply Hegel’s Dialectic Approach to Divergent Mixed Method Findings .......... 85
Table 3. Format to Apply Pragmatic Criteria to Pragmatic Single Method Findings .......... 87
Table 4. Frequency of Findings in Each Mixed Method Relationship Group .......... 114
Table 5. Frequency of Findings in Each Mixed Method Relationship Group by Method .......... 115
Table 6. Pragmatic Mixed Method Decisions .......... 120
Table 7. Dialectic Mixed Method Decisions .......... 131
Table 8. Key Conclusions by Study .......... 148
Table 9. Summary of Lincoln and Guba’s (1985) Unified Theory of Validity / Trustworthiness .......... 153
Table 10. Stakeholders’ Previous Experience with Methods .......... 191
Table 11. “Other” Comments on Experience with Methods by Method .......... 191
Table 12. Rating Points and Labels for Stakeholder Survey .......... 192
Table 13. Stakeholder Ratings of the Validity of Findings by Method and Respondent .......... 192
Table 14. Stakeholder Ratings of Method Mixes .......... 195
Table 15. Comments on the Impact and Potential Impact of the Studies and the Findings that Lead to Them, by Method Mix .......... 197
Table 16. General Comments on Mixed Methods’ Use .......... 198
Table 17. Total Billable Researcher Hours Expended by Method .......... 200


Table 18. Dollars Worked and Percent of Budget for Study Tasks by Method .......... 201
Table 19. Cost in Subject Hours for Recruitment and Completion .......... 203
Table B1. Survey Study Findings by Topic Area .......... 250
Table B2. Focus Group Study Findings by Topic Area .......... 253
Table B3. Interview Study Findings by Topic Area .......... 258
Table C. Convergence, Divergence, and Uniqueness of All Single Method Findings by Topic Area .......... 263
Table D1. Extent to which Single Methods Meet Utility Standards .......... 276
Table D2. Extent to which Single Methods Meet Feasibility Standards .......... 278
Table D3. Extent to which Single Methods Meet Propriety Standards .......... 279
Table D4. Extent to which Single Methods Meet Accuracy Standards .......... 281


LIST OF FIGURES

Figure 1. Paradigm Continuum .......... 16
Figure 2. Study Methods Mapped to Paradigm Continuum .......... 27
Figure 3. A Continuum of Triangulation Design (Jick, 1979) .......... 31
Figure 4. Timeline for Key Intervention Dates, Evaluation Data Collection, and Reporting .......... 77


CHAPTER 1 INTRODUCTION

First, this section describes the problem addressed by this research study and its purpose. Next, the conceptual framework used in the study is described. The research questions and design are then presented. Finally, an overview of the dissertation chapters is provided.

Statement of the Problem

The Need for Research on Mixed Methods

Local, state, and federal governments invest considerable public resources to address the education, health, and welfare of residents of the United States. The government bodies that disburse these resources and the citizens that fund them have a compelling interest in understanding the extent to which funds are used efficiently and effectively. Mixed method research and evaluation is a tool commonly used by researchers and evaluators to investigate program or policy merit and worth (Creswell, Trout, & Barbuto, 2002; Teddlie & Tashakkori, 2003). Mixed method methodology is frequently used to meet the needs of multiple stakeholders, the individuals or groups who comprise the audience for evaluative and research endeavors (Chelimsky, 1997; Smith, 1997). Especially in the field of evaluation and in research on public policy, an investigator seeks to gather information in the service of clients or constituents who wish to make programming or policy
decisions based on the results. Frequently, decisions are made not by one executive but by multiple individuals who represent diverse interests and hold unique perspectives on what kind of information is accurate and credible (Patton, 1997). Like researchers, stakeholders hold beliefs about what kinds of information are most accurate and authentic and will best support decision-making. Either implicitly or explicitly, some stakeholders believe quantitative studies provide the most reliable information, while others view qualitative data as the most authentic representation of reality, and thus the best source of information for decision-making. In many situations, a combination of types of information provides multiple stakeholders with the kind of information in which they have the most confidence for use in decision-making (Patton, 1997; Chelimsky, 1997; Benofske, 1995).

Additionally, researchers turn to mixed method methodology to address the practical challenges and resultant uncertainty of conducting any single method (Datta, 1997; O’Cathain, Murphy, & Nicholl, 2007). Both post-positivist and interpretive methods have serious limitations. For example, Carol Weiss (1995) described the challenges of conducting the gold standard of post-positivistic research, the randomized controlled experiment, on complex community initiatives. First, likely too few communities could be marshaled for randomization into treatment and control groups for interventions administered at the community level. Second, controlling for key factors in community initiatives, such as the dynamic interplay of government, funder, and grassroots support for issues, would be difficult and ethically questionable. Finally, due to external factors, such as economic and political situations, community
initiatives and policies may be enacted, altered, or repealed during the course of an investigation, making positivist approaches less effective. The difficulties of conducting post-positivist research to investigate complex phenomena, such as community initiatives or policies, may suggest that interpretive approaches would be more effective. However, in these settings, interpretive methodologies also face significant challenges. A key strength of interpretive approaches is the ability to understand a phenomenon in depth. However, the impact of large-scale interventions is frequently so wide as to make the sole use of interpretive approaches formidable. Additionally, the means to identify causal mechanisms in interpretive research require further development (Smith, 1994; Johnson & Onwuegbuzie, 2004). By using multiple, diverse methods, researchers may corroborate findings to increase confidence in the inferences drawn from them. This rationale applies equally to smaller or more bounded objects of study whose complexity lies in the content of the phenomenon being studied.

Finally, researchers use mixed methods in order to achieve findings unavailable to single method studies conducted independently. Greene (2007) described how mixed method studies may be generative, as paradox and contradiction are engaged and “fresh insights, new perspectives, and original understandings” emerge (2007, p. 103). Other mixed method authors share this belief in the promise of mixed methods. For example, Abbas Tashakkori and Charles Teddlie (2003) used the term gestalt to indicate how inferences from mixed methods may be greater than their single method components.
Rosaline Barbour (1999) described mixed methods as “a whole greater than the sum of its parts.”

Two Competing Theories of Mixed Methods

The predominant theory of mixed method evaluation is the dialectic approach developed by Jennifer Greene and Valerie Caracelli (1997). The purpose of the dialectic approach is to gain insight by juxtaposing methods conducted using clearly defined and diverse research paradigms (for example, post-positivistic, phenomenological, ethnographic, etc.). Given the importance of paradigms in the dialectic approach, an unstated assumption is that differing paradigms may increase the variance between types of evidence, thus increasing the utility of findings and the validity of inferences drawn from them. Thus, the “distance” between the paradigms of diverse methods may be critical to mixed method studies. However, an enhanced focus on paradigms within the dialectic approach faces three primary challenges.

First, the empirical literature on the use of a dialectic approach to mixed methods is sparse. The bulk of the literature on mixed methods and on the use of paradigms in mixed methods develops typologies of mixed method studies (Creswell, Trout, & Barbuto, 2002; Greene & Caracelli, 1989; Greene & Caracelli, 1997; Rossman & Wilson, 1985). Only one empirical study published in a refereed journal was uncovered. Jennifer Greene and Charles McClintock (1985) asked whether methods that differ paradigmatically might be equal when combined in a triangulated mixed method evaluation: an interesting question, but not one directly related to the rationale and practice of mixed method research and evaluation.


Second, the dialectic approach appears to be either poorly understood or misunderstood. In his 2001 assessment of 22 evaluation models, noted evaluator Daniel Stufflebeam rated a mixed method model as having restricted though beneficial use in program evaluation. However, his understanding of mixed methods did not include the work of the leading theorists, Greene and Caracelli, or their dialectic approach. Stufflebeam cited only lesser-known mixed method theorists in his monograph. In terms of the dialectic stance itself, research reviews suggest the dialectic stance is both misunderstood (Mark, Feller, & Button, 1997) and infrequently used (Riggin, 1997; Creswell, Trout, & Barbuto, 2002; Patton, 1985).

Third, a nascent yet growing body of work is focusing on a pragmatic approach to mixed methods. Lois-ellin Datta (1997) and Spencer Maxcy (2003) articulated a pragmatic stance toward mixing methods that has its roots in the philosophic writings of John Dewey and William James (among others). Datta (1997) outlined the essential criteria for making pragmatic design decisions as (1) practicality, which draws on one’s experience and knowledge of what does and does not work; (2) contextual responsiveness to the demands, opportunities, and constraints of an evaluation situation; and (3) consequentiality, or making decisions based on practical consequences.

Although pragmatic theory is evolving, researchers commonly employ a pragmatic stance in mixed methods. Michael Quinn Patton’s (1985, 2008) utilization-focused evaluation is implicitly pragmatic in that it judges the merit of an evaluation by the extent to which it was useful to clients. John Creswell (2003) reported that
pragmatism appears to be the dominant paradigm employed by mixed method researchers. Leslie Riggin (1997) found a pragmatic stance to be almost exclusively employed when she reviewed all examples of mixed method evaluations presented in a volume of New Directions for Evaluation dedicated to the subject. More recently, R. Burke Johnson and Anthony Onwuegbuzie (2004) suggested that “the time has come” for mixed method research, and that investigators should do whatever is practical. However, dialectic and pragmatic practitioners of mixed methods alike concede that pragmatic theory requires further development (Teddlie & Tashakkori, 2003; Morgan, 2007; Greene, 2007). Nonetheless, the evolving theory of pragmatism challenges the primacy of Greene and Caracelli’s dialectic theory and deserves further examination.

Conclusions

While stakeholder and research considerations suggest a strong need for mixed method research and evaluation, the methodological literature on mixed methods is nascent. The rationale that mixed methods yield insight unavailable from qualitative or quantitative studies conducted independently requires further investigation. Given the still-evolving dialectic and pragmatic approaches to mixed methods, researchers and evaluators would benefit from additional guidance on how to optimize the design, implementation, and interpretation of mixed method studies. Additionally, mixed method research and evaluation frequently requires more resources to implement than single method studies. The additional cost of mixed method research warrants a more explicit assessment of the rationale for mixed methods and of their optimal design and implementation.


Empirical, as opposed to theoretical, investigations of the dialectic and pragmatic approaches to mixed methods are especially needed in order to legitimize and optimize mixed methods. Evidence that the theoretical rationale for varying paradigms in mixed methods is or is not justified would support or challenge the dialectic approach’s status as the predominant theory. An empirical comparison of the dialectic and pragmatic approaches, even in one context, would arm mixed method practitioners with valuable information for practice, advance the field’s understanding of mixed methods, and hopefully lead to higher quality mixed method studies.

Purpose of the Study

The purpose of this study is to assist researchers in optimizing their mixed method research designs by examining two real-world mixed method studies, each representing one of two competing theories of mixed methods methodology. In this quest, this study also probes underlying assumptions and rationales of mixed methods: that mixed method studies yield findings over and above single methods presented side-by-side, that the paradigmatic divergence of methods is a critical factor in mixed method studies, and that mixed method studies can better meet the demands of multiple stakeholders with differing opinions on the usefulness and credibility of qualitative and quantitative research methods. It also examines the increased financial demands of mixed method studies.


Conceptual Framework

The conceptual framework guiding this dissertation is the pair of competing theories of mixed methods. The predominant theory of mixed method evaluation is the dialectic approach developed by Greene and Caracelli (1997), as described above. Likewise, the pragmatic approach to evaluation as articulated by Datta (1997), Maxcy (2003), Teddlie and Tashakkori (2003), and Johnson and Onwuegbuzie (2004) is described above. The two approaches differ primarily in their treatment of paradigms. While the dialectic stance prioritizes consciously choosing and engaging paradigms in the conduct of mixed methods research, pragmatically based mixed methods respond not to philosophical tenets but to the grounded realities of practicality, contextual responsiveness, and consequentiality.

Research Questions

Additional empirical research in five areas would probe the rationale for using paradigms in mixed methods and provide information on the optimization of mixed method approaches. First, an underlying assumption of the dialectic approach to mixed methods is that paradigms matter and that more paradigmatically diverse methods may result in more generative findings. However, no empirical information is available comparing findings from paradigmatically similar and dissimilar methods. While researchers assume a common knowledge of differences in findings based on paradigms, the literature provides few studied comparisons outside of theoretical argument. Additionally, both William Shadish (1993) and Melvin Mark and
Lance Shotland (1987) emphasized the importance of understanding the direction of bias among single methods. Systematically recording the convergence and divergence of findings from the single methods that comprise mixed methods in a specific research context would produce a foundation for further insight into the impact of dialectic versus pragmatic mixed method approaches, and would provide evidence to help researchers optimize their choice of single methods in a mixed method study.

Second, the dialectic and pragmatic approaches represent the two primary approaches to mixed methods. Literature on the implementation in practice of the dialectic approach is scant (Riggin, 1997; Mark & Shotland, 1997), while researchers commonly recognize the need to further develop the pragmatic approach in both theory and practice (Greene, 2007; Tashakkori & Teddlie, 2003). Examining the extent to which dialectic and pragmatic mixed method studies differ substantively in a specific research context would provide researchers with empirical evidence to optimize mixed method practice. A comparison of the approaches would also illuminate the importance of differing paradigms as a part of mixed method design and implementation, and at the point of interpretation and use of findings. Finally, comparing mixed method findings to single method findings would illuminate the extent to which mixed method findings yield unique insights over and above the presentation of single method findings side-by-side.

Third, inferences from mixed method studies can only be as legitimate as the inferences from the single methods upon which they are based (Greene, 2007; Teddlie & Tashakkori, 2003). Therefore, to assess the validity / trustworthiness of the
inferences from the mixed method studies produced, the validity / trustworthiness of the three single methods’ inferences will be examined. The topic of validity within mixed methods is nascent, in terms of both how validity should be conceptualized in mixed methods and the criteria for judging it (Greene, 2007; Dellinger & Leech, 2007; Creswell & Plano Clark, 2007; Teddlie & Tashakkori, 2003). However, examining mixed method findings against key criteria for validity in a specific research context is a first step. Patton (2002) suggested alternative criteria for validity that may be applied to mixed methods; in addition, the Joint Committee’s Program Evaluation Standards (1994) will be examined.

Fourth, a key rationale for mixed methods is that they meet the multiple information needs of diverse stakeholders (Benofske, 1995; Patton, 1997). Examining stakeholders’ perceptions of the credibility and utility of mixed method findings would provide additional empirical evidence to support this claim. Stakeholder perceptions of the credibility and utility of mixed method evaluation findings can also contribute to an understanding of the validity of dialectic versus pragmatic mixed method evaluations. Therefore, examining stakeholders’ views of the credibility and utility of individual methods and of dialectic versus pragmatic mixed method evaluation findings in a specific research context would provide valuable information about the rationale and validity of mixed methods.

A final foundational consideration in mixing methods is quintessentially pragmatic and rooted in the Joint Committee’s Program Evaluation Standards (1994) and the standard of feasibility. Little empirical literature exists on the financial
feasibility of mixed methods. Understanding the cost, in researcher and subject hours and resources expended, of the single methods that comprise mixed methods in a specific research context provides additional valuable information to investigators as they choose between dialectically and pragmatically driven approaches.

Taken together, these five research questions provide background and evidence to probe the dialectic versus pragmatic approaches in real-world mixed methods research and evaluation, and to explore several of the assumptions, rationales, and issues that undergird mixed methods. The research questions include the following:

1. What are the substantive findings of single methods? What findings converge and diverge? What findings are unique?

2. What are the substantive findings of pragmatic versus dialectic mixed method studies? How are the two mixed method study findings similar to and different from one another? What unique information do the mixed method findings produce over and above single methods?

3. To what extent do the inferences drawn from single method findings meet key criteria for validity / trustworthiness? To what extent are inferences drawn from single method and mixed method findings valid / trustworthy according to The Program Evaluation Standards (Joint Committee, 1994)?

4. How do stakeholders view the credibility and utility of single method findings and mixed method findings? What do they see as the advantages and disadvantages of mixing? What are their prior beliefs about the credibility of diverse methods and paradigms?
5. What are the costs of the single methods in terms of researcher and subject hours?

Research Design

This dissertation is a secondary analysis of data from a real-world evaluation of the impact of local smoke-free ordinances on Minnesotans trying to quit smoking and enrolled in a specific stop-smoking program. The evaluation involves three single methods that are combined into two mixed method studies for this dissertation. The three studies include an 18-month follow-up telephone survey with comparison groups, phenomenological interviews, and focus groups. The specific methodology for each method was reviewed by an expert in that methodology to provide evidence of content validity. The first mixed method study represents a dialectic approach to mixed methods and combines the survey and the phenomenological interviews. The second mixed method study represents a pragmatic approach and combines the survey and the focus groups. The substantive findings of the single and mixed method studies are compared via content analysis of evaluation documents. The studies are also examined for the extent to which they meet key criteria for validity. Stakeholders’ views on the utility and credibility of single and mixed methods are examined. Finally, the cost of the single methods is considered.

Overview of the Dissertation

Chapters 2 through 5 comprise the remainder of this dissertation. Chapter 2 reviews the pertinent literature on paradigms and mixed methods, probing the rationale,


theories, and methodological research in the area. Chapter 3 describes the methodology of the dissertation. Chapter 4 presents the results. The conclusions and limitations of the research are discussed in Chapter 5.


CHAPTER 2

LITERATURE REVIEW

This literature review comprises five sections representing concepts critical to the conduct of this dissertation. First, the three key paradigms relevant to mixed methods are defined, and the relationship of the paradigms to the three methods used in this dissertation is specified. Second, an historical review of the development of mixed methods is presented, leading to a description of the two primary approaches promoted within the field of mixed methods today. The literature review describes how further consideration of these two approaches, called the pragmatic and dialectic stances, is needed, despite their ascendancy. To explore how a pragmatic mixed method study would be conducted, the third section presents an in-depth review of pragmatism. Likewise, the fourth section presents an in-depth review of a dialectic approach. Finally, in the fifth section, the concept of validity or trustworthiness is explored within the context of mixed method studies.

Paradigms

The purpose of this section is to define the paradigm continuum and the relationship of specific research methods to paradigms, especially for the three research methods used in this dissertation. A paradigm is a worldview that guides decision-making. Popularized by Thomas Kuhn (1962), paradigms encompass one's views on the nature of reality and of knowledge, its origins and foundations (Greene & Caracelli, 1997). A paradigm is essentially philosophical in nature, and may be specified by its


ontological, epistemological, and axiological tenets. For researchers, one's paradigm informs the research questions one chooses, how one collects information, and how one interprets it.

Defining Three Key Paradigms along a Continuum

This section defines three key paradigms and situates them on a continuum. Logical positivism, post-positivism, and interpretivism are examples of paradigms, and they may be considered along a continuum (see Figure 1, below). Anchoring one end is logical positivism. Introduced by the French philosopher Auguste Comte (Yu, 2006), logical positivism holds that truth is represented by measurable, naturally occurring phenomena. In fact, logical positivism asserts that measurement is proof of existence, so if a phenomenon cannot be measured, then it does not exist (Potts, 1998). Further, logical positivism argues that all naturally occurring phenomena can be broken down into measurable moments, which, when considered together, form the whole of the phenomenon of interest and reproduce "truth". Logical positivist researchers use deductive reasoning to generate theory from which specific hypotheses evolve and are tested. Inferences from experiments are then employed in theory construction and the development of natural laws (Yu, 2006; Benofske, 1995). Contemporary researchers universally agree that logical positivism consists of numerous irreconcilable fallacies and is dead (Reichardt & Rallis, 1994a; Shadish, 1998).


Figure 1. Paradigm Continuum: Positivism, Post-positivism, Interpretivism

Post-positivism is a softening of the logical positivist position that has been evolving since the 1930s (Popper, 1959, as cited in Reichardt & Rallis, 1994). The post-positivist philosophy asserts that truth may be discovered, and is best understood through objectivity, standardization, deductive reasoning, and control within the research process (Yu, 2006). Causality is a central concern of post-positivist research techniques, and is established by research design, statistical hypothesis testing, and rigorous assessment of plausible alternative explanations for findings. The randomized controlled experiment is considered the ideal. The validity of inferences from findings is assessed in terms of internal validity, external validity, reliability, and objectivity. The strengths of post-positivist research are precision, generalizability, reliability, and replicability. Post-positivist research focuses on addressing causality in research questions and is commonly considered to be well suited for confirmatory research (Shadish, Cook & Campbell, 2002).

Mary Lee Smith (1994) carefully considered the shortcomings of both quantitative (post-positivistic) and qualitative (interpretive) research and suggested areas for substantive improvement in each realm. She argued that quantitative research could be enhanced and refined by considering its applicability in highly complex,
