Loyola University Chicago

Division of Student Development

Interpreting Findings

 

Quantitative vs. Qualitative Research

Assessment work can be conducted using quantitative, qualitative, or mixed methods approaches.

A quantitative methodology gathers data in the form of numbers, statistics, and measurements. Experimental quantitative research compares a baseline control group with an experimental group, examining the differences that emerge when variables are introduced into the environment. Quantitative research is particularly effective for testing previously developed theoretical claims and for comparing percentages and frequency rates. An institution may use quantitative research, for example, to examine the rate at which underrepresented groups persist to graduation within four years.

A qualitative approach gathers data through narrative dialogue and naturalistic observation. Qualitative research assesses a phenomenon in its natural setting, considering the unique interactions among individuals, time, and place. Qualitative research is particularly effective for developing new theory grounded in the experiences of individuals. An institution may use qualitative research, for example, to examine why its rate of persistence to graduation is higher or lower than the regional average.

A mixed methods approach utilizes components of both quantitative and qualitative research; this can occur in the data collection process, the data interpretation process, or both.

 

Interpreting Data

Perhaps the most important, yet too often neglected, aspect of assessment is the interpretation of the raw data. Interpretation requires two mindsets. First, what is our data telling us about our students, our faculty and staff, and our institution? What themes, trends, and surprises can be found within the data that have been amassed? Is the institution able to interpret its data in a scholarly fashion that addresses potential biases? Second, what is our data not telling us about our students, our faculty and staff, and our institution? Just as crucial as the ability to recognize trends in a raw data set is the ability to recognize what our data collection methods have failed to capture. Assessment is a cyclical, perpetual process; with each data collection, an institution must identify ways in which the process can be improved.

 

Correlation vs. Causation

To say that two phenomena are correlated with each other is quite distinct from claiming that one phenomenon causes the other. A major pitfall of data interpretation is attributing causation to a relationship that is, by nature, a correlation. For example, both ice cream sales and reported boating accidents in Chicago increase in the summer months. These two statistics are correlated; their frequencies move in the same direction at the same time. However, there is no causation in this relationship; boating accidents do not cause increased ice cream sales, or vice versa. Both statistics simply increase when the weather is warm. When working with institutional data to assess campus departments, programs, and new initiatives, it is important to keep the distinction between correlation and causation at the forefront of the interpretation process. An institution must accurately understand the relationships inherent in its data if it is to truly serve its students through data-driven positive change.