Are these adjustments more or less accurate than the original figures? Such questions speak to Shadish et al.'s (2001) criteria for internal validity. As a second example, models in articles will sometimes have a grab-all variable/construct such as "Environmental Factors." The problem here is similar to the example above. Or take a construct originally labelled "Co-creation": again, the label itself is confusing (albeit typical) in that it likely does not mean that one is co-creating something or not.

In simple terms, in QtPR it is often useful to understand theory as a lawlike statement that attributes causality to sets of variables, although other conceptions of theory do exist and are used in QtPR and other types of research (Gregor, 2006). This requires distinguishing between the logical basics of the theory and its empirical, testable predictions. A theory in this sense describes a closed deterministic system in which all of the independent and dependent variables are known and included in the model. In the post-positivist understanding, however, pure empiricism, that is, deriving knowledge only through observation and measurement, is understood to be too demanding.

Research results are totally in doubt if the instrument does not measure the theoretical constructs at a scientifically acceptable level. And even if complete accuracy were obtained, the measurements would still not reflect the construct theorized if shared meaning is lacking. The same questionnaire could also have been used in an entirely different method, such as a field study of users of some digital platform. Figure 9 shows how to prioritize the assessment of measurement during data analysis. In the early days of computing there was an acronym for this basic idea: GIGO. It stood for garbage in, garbage out. It meant that if the data being used for a computer program were of poor, unacceptable quality, then the output report was just as deficient.

Quantitative research allows you to gain reliable, objective insights from data and to understand trends and patterns clearly. Data can differ across individuals (a between-variation) at the same point in time but also internally across time (a within-variation). Secondary data sources can usually be found quickly and cheaply, and since field data come from the real world, the results can likely be generalized to other similar real-world settings. We are ourselves IS researchers, but this does not mean that the advice is not useful to researchers in other fields.

Another debate in QtPR is about the choice of analysis approaches and toolsets. In canonical correlation analysis, for example, the underlying principle is to develop a linear combination of each set of variables (both independent and dependent) to maximize the correlation between the two sets. It is important to note here that correlation does not imply causation.
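To see why correlation alone cannot establish causation, consider a minimal simulation sketch (Python, assuming only numpy; all numbers invented): two variables that never influence each other still correlate strongly because both depend on a common cause.

```python
# Minimal sketch (hypothetical data, numpy only): two variables x and y that
# share a common cause z. x and y correlate strongly even though neither
# causes the other -- correlation does not imply causation.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000

z = rng.normal(size=n)             # unobserved common cause (confounder)
x = 2.0 * z + rng.normal(size=n)   # x is driven by z, not by y
y = -1.5 * z + rng.normal(size=n)  # y is driven by z, not by x

r = np.corrcoef(x, y)[0, 1]
print(f"Pearson correlation between x and y: {r:.2f}")  # strongly negative
```

An observed correlation of this kind would survive any re-sampling of the data; only a design argument (randomization, temporal precedence, ruling out common antecedents) can license a causal reading.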
As will be explained in Section 3 below, it should be noted that quantitative, positivist research is really just shorthand for quantitative, post-positivist research. Without delving into many details at this point, positivist researchers generally assume that reality is objectively given, that it is independent of the observer (researcher) and their instruments, and that it can be discovered by a researcher and described by measurable properties.

This resource seeks to address the needs of quantitative, positivist researchers in IS research, in particular those just beginning to learn to use these methods. There is a large variety of excellent resources available to learn more about QtPR. Initially, a researcher must decide what the purpose of their specific study is: is it confirmatory or is it exploratory research? In the course of their doctoral journeys and careers, some researchers develop a preference for one particular form of study.

Many of these data collection techniques require a research instrument, such as a questionnaire or an interview script. Laboratory experiments take place in a setting especially created by the researcher for the investigation of the phenomenon. A common problem at this stage is that researchers assume that labelling a construct with a name is equivalent to defining it and specifying its content domains: it is not. Such conceptual labeling is often too broad to easily convey a construct's meaning. And, crucially, inferring temporal precedence, i.e., establishing that the cause came before the effect, in a one-point-in-time survey rests at best on self-reporting by the subject.

Finally, there is a perennial debate in QtPR about null hypothesis significance testing (Branch, 2014; Cohen, 1994; Pernet, 2016; Schwab et al., 2011; Szucs & Ioannidis, 2017; Wasserstein & Lazar, 2016; Wasserstein et al., 2019). The procedural model separates the procedure into four main stages and describes the different tasks to be performed (grey rounded boxes), related inputs and outputs (white rectangles), and the relevant literature or sources of empirical data required to carry out the tasks (dark grey rectangles).

Multivariate analyses, broadly speaking, refer to all statistical methods that simultaneously analyze multiple measurements on each individual or object under investigation (Hair et al., 2010); as such, many multivariate techniques are extensions of univariate and bivariate analysis. Other tests include factor analysis (a latent variable modeling approach) or principal component analysis (a composite-based analysis approach), both of which assess whether items load appropriately on constructs represented through a mathematically latent variable (a higher-order factor). Hotelling's T², to name another example, is a special case of MANOVA used with two groups or levels of a treatment variable (Hair et al., 2010).
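As an illustration of that two-group special case of MANOVA, the following sketch computes Hotelling's T-squared directly on made-up data (Python, numpy and scipy only; group means, sizes, and the two dependent variables are assumptions for demonstration):

```python
# Illustrative sketch of the two-group special case of MANOVA (Hotelling's
# T-squared test), implemented directly in numpy on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
g1 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(40, 2))  # group 1, 2 DVs
g2 = rng.normal(loc=[0.5, 0.3], scale=1.0, size=(35, 2))  # group 2, 2 DVs

n1, n2, p = len(g1), len(g2), g1.shape[1]
diff = g1.mean(axis=0) - g2.mean(axis=0)

# Pooled within-group covariance matrix
s_pooled = ((n1 - 1) * np.cov(g1, rowvar=False) +
            (n2 - 1) * np.cov(g2, rowvar=False)) / (n1 + n2 - 2)

t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(s_pooled, diff)

# Convert T-squared to an F statistic with (p, n1 + n2 - p - 1) df
f_stat = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
p_value = stats.f.sf(f_stat, p, n1 + n2 - p - 1)
print(f"T2 = {t2:.2f}, F = {f_stat:.2f}, p = {p_value:.4f}")
```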
While the positivist epistemology deals only with observed and measured knowledge, the post-positivist epistemology recognizes that such an approach would result in making many important aspects of psychology irrelevant, because feelings and perceptions cannot be readily measured. We note that these are our own, short-handed descriptions of views that have been, and continue to be, debated at length in ongoing philosophy of science discourses. A seminal book on experimental research has been written by William Shadish, Thomas Cook, and Donald Campbell (Shadish et al., 2001). In their book, they explain that deterministic prediction is not feasible and that there is a boundary of critical realism that scientists cannot go beyond.

Quantitative research is structured around the scientific method. The experimental method studies whether there is a cause-and-effect relationship between the research variables; models and prototypes, in turn, are frequently the products of design research. The whole point is justifying what was done, not who did it. Sources of data are of less concern in identifying an approach as being QtPR than the fact that numbers about empirical observations lie at the core of the scientific evidence assembled. A dependent variable, for instance, is a variable whose value is affected by, or responds to, a change in the value of some independent variable(s). If the data or phenomenon concerns changes over time, an analysis technique is required that allows modeling differences in data over time. Often, such tests can be performed through structural equation modelling or moderated mediation models. The decision tree presented in Figure 8 provides a simplified guide for making the right choices.

The content domain of an abstract theoretical construct specifies the nature of that construct and its conceptual theme in unambiguous terms, as clearly and concisely as possible (MacKenzie et al., 2011). The procedural model incorporates techniques to demonstrate and assess the content validity of measures as well as their reliability and validity. Unreliable measurement attenuates the estimated effect size, whereas invalid measurement means you are not measuring what you wanted to measure. Reliable quantitative research requires the knowledge and skills to scrutinize your findings thoroughly. The key point to remember here is that for validation, a new sample of data is required: it should be different from the data used for developing the measurements, and it should be different from the data used to evaluate the hypotheses and theory. Note also that several historically accepted ways to validate measurements (such as approaches based on average variance extracted, composite reliability, or goodness-of-fit indices) have later been criticized and eventually displaced by alternative approaches.

Regarding Type I errors, researchers typically report p-values that are compared against an alpha protection level. Since no change in the status quo is being promoted, scholars are granted a larger latitude to make a mistake in whether this inference can be generalized to the population.
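The mechanics of comparing a p-value against an alpha protection level can be made concrete with a minimal sketch (Python with scipy; the group means, spread, and sample sizes are invented for illustration):

```python
# Minimal sketch of NHST mechanics on simulated data: compare two groups with
# a t-test and reject the null hypothesis only if p falls below alpha = .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05                                     # Type I error protection level

control = rng.normal(loc=50, scale=10, size=60)
treated = rng.normal(loc=55, scale=10, size=60)  # true effect of +5

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("reject H0" if p_value < alpha else "fail to reject H0")
# Note: p is the probability of data at least this extreme given H0,
# not the probability that H0 itself is true.
```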
Quantitative research produces objective data that can be clearly communicated through statistics and numbers. Gaining experience in quantitative research enables professionals to go beyond existing findings and explore their area of interest through their own sampling, analysis, and interpretation of the data. Where quantitative research falls short is in explaining the 'why'. There are also articles on how information systems research builds on these ideas, or not (e.g., Siponen & Klaavuniemi, 2020).

This combination of should-dos, could-dos, and must-not-dos forms a balanced checklist that can help IS researchers throughout all stages of the research cycle to protect themselves against cognitive biases (e.g., by preregistering protocols or hypotheses), improve statistical mastery where possible (e.g., through consulting independent methodological advice), and become modest, humble, contextualized, and transparent (Wasserstein et al., 2019) wherever possible (e.g., by following open science reporting guidelines and cross-checking terminology and argumentation). The table in Figure 10 presents a number of guidelines for IS scholars constructing and reporting QtPR research based on, and extended from, Mertens and Recker (2020).

On the other hand, "Size of Firm" is more easily interpretable, and this construct frequently appears, as noted elsewhere in this treatise. If the measures are not valid and reliable, then we cannot trust that there is scientific value to the work; reviewers should be especially attuned to measurement problems for this reason. Wohlin et al.'s (2000) book Experimentation in Software Engineering: An Introduction, for example, illustrates, exemplifies, and discusses many of the most important threats to validity, such as lack of representativeness of the independent variable, pre-test sensitisation to treatments, fatigue and learning effects, or lack of sensitivity of dependent variables.

A research instrument can be administered as part of several different research approaches, e.g., as part of an experiment, a web survey, or a semi-structured interview. Experiments are specifically intended to examine cause-and-effect relationships; in some designs, however, researchers study groups that are pre-existing rather than created for the study. Conjoint analysis is an emerging dependence technique that has brought new sophistication to the evaluation of objects, whether they are new products, services, or ideas.

NHST originated from a debate that mainly took place in the first half of the 20th century between Fisher (e.g., 1935a, 1935b; 1955) on the one hand, and Neyman and Pearson (e.g., 1928, 1933) on the other hand. When the data do not contradict the hypothesized predictions of the theory, it is temporarily corroborated. Classic statistics involve the mean, median, variance, or standard deviation. In Bayesian analysis, by contrast, background knowledge is expressed as a prior distribution and combined with observational data in the form of a likelihood function to determine the posterior distribution.
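The prior-likelihood-posterior logic of that last sentence can be sketched in a few lines. The conjugate Beta-Binomial model and all numbers below are illustrative assumptions, not a prescription:

```python
# Sketch of the Bayesian logic described above, using a conjugate
# Beta-Binomial model: a Beta prior encodes background knowledge, the
# binomial likelihood encodes the observed data, and the posterior follows
# in closed form. All numbers are made up for illustration.
from scipy import stats

a_prior, b_prior = 2, 2        # prior: adoption rate probably near 50%
successes, trials = 36, 50     # observed data: 36 of 50 users adopted

a_post = a_prior + successes
b_post = b_prior + trials - successes
posterior = stats.beta(a_post, b_post)

print(f"posterior mean: {posterior.mean():.3f}")
lo, hi = posterior.interval(0.95)
print(f"95% credible interval: [{lo:.3f}, {hi:.3f}]")
```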
Another important debate in the QtPR realm is the ongoing discussion on reflective versus formative measurement development. Several viewpoints pertaining to this debate are available (Aguirre-Urreta & Marakas, 2012; Centefelli & Bassellier, 2009; Diamantopoulos, 2001; Diamantopoulos & Siguaw, 2006; Diamantopoulos & Winklhofer, 2001; Kim et al., 2010; Petter et al., 2007). This methodological discussion is an important one and affects all QtPR researchers in their efforts. Establishing reliability and validity of measures and measurement is, in any case, a demanding and resource-intensive task.

Simply put, QtPR focuses on how you can do research with an emphasis on quantitative data collected as scientific evidence. One form of randomization (random assignment) relates to the use of treatments or manipulations (in experiments, most often) and is therefore an aspect of internal validity (Trochim et al., 2016). In 1927, German scientist Werner Heisenberg struck down purely deterministic thinking with his discovery of the uncertainty principle. It is important to note that the procedural model as shown in Figure 3 describes this process as iterative and discrete, which is a simplified and idealized model of the actual process. Comparative research can also include ex post facto study designs where archival data is used.

The laboratory experiment's primary disadvantage is often a lack of ecological validity, because the desire to isolate and control variables typically comes at the expense of realism of the setting. Moreover, experiments without strong theory tend to be ad hoc, possibly illogical, and meaningless, because one essentially finds some mathematical connections between measures without being able to offer a justificatory mechanism for the connection (you cannot tell why you got these results). A related risk arises in modeling: unfortunately, unbeknownst to you, the model you specify may be wrong (in the sense that the model may omit common antecedents to both the independent and the dependent variables, or that it exhibits endogeneity concerns).

R-squared (R2), the coefficient of determination, measures the proportion of the variance of the dependent variable about its mean that is explained by the independent variable(s); the model's F statistic, in turn, can be derived from R-squared. The p-value, meanwhile, does not describe the probability of the null hypothesis p(H0) being true (Schwab et al., 2011). Moreover, correlation analysis assumes a linear relationship.

To analyze data with a time dimension, several analytical tools are available that can be used to model how a current observation can be estimated by previous observations, or to forecast future observations based on that pattern. The previous observation used can be the most immediate one (a lag of order 1), a seasonal effect (such as the value this month last year, a lag of order 12), or any other combination of previous observations.
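A minimal sketch of such a lag-of-order-1 model, fit on a simulated series with plain least squares rather than a dedicated time-series package, might look as follows (the autoregressive coefficient 0.7 is an arbitrary illustration):

```python
# Sketch of a lag-of-order-1 model on a synthetic series: regress each
# observation on its immediate predecessor and use the fit to forecast the
# next value. Real analyses would use a dedicated time-series library.
import numpy as np

rng = np.random.default_rng(7)
n = 200
y = np.empty(n)
y[0] = 0.0
for t in range(1, n):                     # simulate y_t = 0.7 * y_{t-1} + noise
    y[t] = 0.7 * y[t - 1] + rng.normal()

# Least-squares fit of y_t on y_{t-1} (with intercept)
X = np.column_stack([np.ones(n - 1), y[:-1]])
(intercept, phi), *_ = np.linalg.lstsq(X, y[1:], rcond=None)
print(f"estimated lag-1 coefficient: {phi:.2f}")   # close to 0.7

forecast = intercept + phi * y[-1]                 # one-step-ahead forecast
print(f"one-step-ahead forecast: {forecast:.2f}")
```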
We intend to provide basic information about the methods and techniques associated with QtPR and to offer the visitor references to other useful resources and to seminal works. Basically, experience can show theories to be wrong, but can never prove them right.

To understand different types of QtPR methods, it is useful to consider how a researcher designs for variable control and randomization in the study. The choice of data collection technique does not by itself determine this design; it may, however, influence it, because different techniques for data collection or analysis are more or less well suited to allow or examine variable control, and likewise different techniques for data collection are often associated with different sampling approaches (e.g., non-random versus random). Researchers doing exploratory work do not generally begin with a hypothesis; rather, they develop one after collecting the data.

Since laboratory experiments most often give one group a treatment (or manipulation) of some sort and another group no treatment, the effect on the DV has high internal validity. For example, if one had a treatment in the form of three different user-interface designs for an e-commerce website, in a between-subject design three groups of people would each evaluate one of these designs.

Content validity in our understanding refers to the extent to which a researcher's conceptualization of a construct is reflected in her operationalization of it, that is, how well a set of measures match with and capture the relevant content domain of a theoretical construct (Cronbach, 1971). Construct validity, in turn, is an issue of operationalization and measurement between constructs. Reliability does not guarantee validity.

As the transition was made to seeing communication from a social scientific perspective, scholars began studying communication using the methods established from the physical sciences. There is not enough space here to cover the varieties or intricacies of different quantitative data analysis strategies. Correspondence analysis, for instance, is a recently developed interdependence technique that facilitates both dimensional reduction of object ratings (e.g., of products or persons) and the perceptual mapping of those objects relative to a set of attributes. It is out of tradition and reverence to Mr. Pearson, likewise, that his correlation coefficient remains the default measure of association.

The power of a study is a measure of the probability of avoiding a Type II error: the higher the statistical power of a test, the lower the risk of making a Type II error. The standard value for power has historically been set at .80 (Cohen, 1988).
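One way to make power tangible is a Monte Carlo sketch: simulate many studies in which a real effect exists and count how often the test detects it. The effect size, sample sizes, and simulation counts below are arbitrary illustrations, not recommendations:

```python
# Monte Carlo sketch of statistical power: the proportion of simulated
# studies in which a real effect (Cohen's d = 0.5) yields p < .05. Larger
# samples raise power, i.e., lower the risk of a Type II error.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def power(n_per_group, d=0.5, alpha=0.05, sims=2_000):
    hits = 0
    for _ in range(sims):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(d, 1.0, n_per_group)      # true effect present
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / sims

for n in (20, 64, 120):
    print(f"n = {n:3d} per group -> power ~ {power(n):.2f}")
# n = 64 per group lands near the conventional .80 benchmark (Cohen, 1988)
```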
The first stage of the procedural model is construct conceptualization, which is concerned with defining the conceptual content domain of a construct. A later step concerns operationalization: the variables that are chosen as operationalizations must also guarantee that data can be collected from the selected empirical referents accurately (i.e., consistently and precisely). The treatment in an experiment is thus how an independent variable is operationalized. In an experiment, for example, it is critical that a researcher check not only the experimental instrument, but also whether the manipulation or treatment works as intended, whether experimental tasks are properly phrased, and so forth.

The purpose of survey research in exploration is to become more familiar with a phenomenon or topic of interest. Textbooks on survey research that are worth reading include Floyd Fowler's textbook (Fowler, 2001) and Devellis and Thorpe (2021), plus a few others (Babbie, 1990; Czaja & Blair, 1996). Written for communication students, Quantitative Research in Communication provides practical, user-friendly coverage of how to use statistics, how to interpret SPSS printouts, how to write results, and how to assess whether the assumptions of various procedures have been met.

Both positivist and interpretive researchers agree, for example, that theoretical constructs, or important notions such as causality, are social constructions (e.g., responses to a survey instrument). Lauren Slater provides some wonderful examples in her book about experiments in psychology (Slater, 2005). IS research is a field that is primarily concerned with socio-technical systems comprising individuals and collectives that deploy digital information and communication technology for tasks in business, private, or social settings. The simplest distinction between quantitative and qualitative research is that quantitative research focuses on numbers, whereas qualitative research focuses on text, most importantly text that captures records of what people have said, done, believed, or experienced about a particular phenomenon, topic, or event.

Validity describes whether the operationalizations and the collected data share the true meaning of the constructs that the researchers set out to measure. Another problem with Cronbach's alpha is that a higher alpha can most often be obtained simply by adding more construct items, in that alpha is a function of the number of items k. Statistical control variables are added to models to demonstrate that there is little-to-no explained variance associated with the designated statistical controls. Wilks' lambda is one of the four principal statistics for testing the null hypothesis in MANOVA. And if samples are not drawn independently, or are not selected randomly, or are not selected to represent the population precisely, then the conclusions drawn from NHST are thrown into question, because it is impossible to correct for unknown sampling bias.

Multiple regression is the appropriate method of analysis when the research problem involves a single metric dependent variable presumed to be related to one or more metric independent variables.
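A minimal illustration of multiple regression (made-up data, two metric predictors, ordinary least squares via numpy), including the R-squared statistic discussed earlier:

```python
# Minimal sketch of multiple regression on made-up data: one metric
# dependent variable regressed on two metric independent variables, with
# R-squared computed from the fit.
import numpy as np

rng = np.random.default_rng(5)
n = 150
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(scale=0.8, size=n)

X = np.column_stack([np.ones(n), x1, x2])      # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS estimates

y_hat = X @ beta
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot                # share of variance explained

print(f"coefficients: {np.round(beta, 2)}")    # ~ [1.0, 2.0, -0.5]
print(f"R-squared: {r_squared:.3f}")
```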
Note that statistical control variables are an entirely different concept from the term "control" as used in an experiment, where it means that one or more groups have not gotten an experimental treatment; to differentiate them from controls used to discount other explanations of the DV, we can call the latter experimental controls. Internal validity also differs from construct validity, in that it focuses on alternative explanations of the strength of links between constructs, whereas construct validity focuses on the measurement of individual constructs.

Consider the example of weighing a person. Often, we approximate objective data through inter-subjective measures in which a range of individuals (multiple study subjects or multiple researchers, for example) all rate the same observation, and we look to get consistent, consensual results. In theory-evaluating research, QtPR researchers typically use collected data to test the relationships between constructs by estimating model parameters with a view to maintaining good fit of the theory to the collected data. There are three main steps in deduction (Levallet et al., 2016). Inferential analysis refers to the statistical testing of hypotheses about populations based on a sample, typically the suspected cause-and-effect relationships, to ascertain whether the theory receives support from the data within certain degrees of confidence, typically described through significance levels. Linear probability models accommodate all types of independent variables (metric and non-metric) and do not require the assumption of multivariate normality (Hair et al., 2010).

Secondary data also extend the time and space range, for example, the collection of past data or data about foreign countries (Emory, 1980). In any case, the researcher is motivated by the numerical outputs and how to imbue them with meaning. In principal component analysis, the objective is to find a way of condensing the information contained in a number of original variables into a smaller set of principal component variables with a minimum loss of information (Hair et al., 2010).
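To illustrate this condensation idea, the following sketch applies principal component analysis to six invented survey items using numpy's eigendecomposition; the items and their loadings are assumptions for demonstration only:

```python
# Sketch of principal component analysis as data condensation: six made-up
# survey items are reduced to two component scores with minimal loss of
# information, using only numpy's eigendecomposition.
import numpy as np

rng = np.random.default_rng(11)
n = 300
f1, f2 = rng.normal(size=n), rng.normal(size=n)   # two underlying factors
items = np.column_stack([
    f1 + 0.3 * rng.normal(size=n),   # items 1-3 load on factor 1
    f1 + 0.3 * rng.normal(size=n),
    f1 + 0.3 * rng.normal(size=n),
    f2 + 0.3 * rng.normal(size=n),   # items 4-6 load on factor 2
    f2 + 0.3 * rng.normal(size=n),
    f2 + 0.3 * rng.normal(size=n),
])

z = (items - items.mean(axis=0)) / items.std(axis=0)   # standardize items
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(z, rowvar=False))
order = np.argsort(eigvals)[::-1]                      # largest variance first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()
print(f"variance explained by first two components: {explained[:2].sum():.2f}")
scores = z @ eigvecs[:, :2]                            # condensed variables
```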
This difference stresses that empirical data gathering or data exploration is an integral part of QtPR, as is the positivist philosophy that deals with problem-solving and the testing of the theories derived to test these understandings. Many great examples exist as templates that can guide the writing of QtPR papers. In a within-subject design, for example, each participant would first evaluate user-interface-design one, then the second user-interface-design, and then the third.

Data analysis concerns the examination of quantitative data in a number of ways. Avoiding personal pronouns can likewise be a way to emphasize that QtPR scientists were deliberately trying to stand back from the object of the study. A scientific theory, in contrast to psychoanalysis, is one that can be empirically falsified. Above all, measurement provides the fundamental connection between empirical observation and the theoretical and mathematical expression of quantitative relationships.