Employee Surveys, After All, Depend On It
In a list of the toughest academic courses in the world, statistics ranks tenth. For most people, to say that the collection and interpretation of data is complex is an understatement. That’s why survey validity expertise is a fundamental must in matters of employee feedback, assuming, of course, that you want solid, reliable data from which to make decisions. Some liken statistics to epistemology: the study of the nature of knowledge, of how we can learn about the world, and of how certain we can be about that knowledge. Plato and Descartes, among countless other philosophers, pondered ways of knowing and learning about social reality. To a certain extent, survey methodologists, statisticians, social and behavioral scientists, and the like walk in their stead today.
A Validated Survey Measures What It Should
There’s a wealth of information out there about survey theories and methodologies, psychometric principles, and peer-reviewed, vetted scientific techniques. That’s fantastic information to know if you’re hard-wired for that kind of thinking. What really matters when it comes down to employee surveys is that all the hurdles outside the typical HR purview have been dealt with: that the vendors and platforms you align with, and the surveys you use, genuinely measure what’s intended to be measured, so that you CAN be certain about the knowledge discerned.
Question and Survey Design Assessment
The wording, order, and flow of questions, how long a survey takes to complete, and the number of points on a rating scale (Likert’s 5-point scale is typical) are some of the factors that impact survey validity and data reliability. “Our organization welcomes new methods of working and communicating to improve team productivity” is a valid, rigorously tested DE&I question. It has consistently been shown that respondents interpret what’s being asked in the same way, and response results are similar time and again. So if you see changes, you can be confident that the interpretation hasn’t changed; rather, a shift has occurred in employee or manager attitudes, in organizational goals, within a regional or functional team, or in some combination of these and multiple other factors. Looking at the results in correlation with other data can bring more clarity; follow-up pulse surveys, further clarity still. Conversely, “Different ways of working and communication are encouraged to improve team output” may seem swappable. But there’s wiggle room for interpretation, and without rigorous testing to establish and verify a baseline response pattern, its structure and validity can’t be certain.
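To make that concrete, here’s a rough, purely illustrative sketch of the kind of stability check a methodologist might run: simulated 5-point responses to the question above from two survey waves, compared to see whether the response pattern has shifted. The data, sample sizes, and choice of test are assumptions made for illustration, not a prescribed procedure.

```python
# Illustrative only: simulated Likert responses to the same validated question
# across two survey waves, checked for a shift in the response pattern.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical 1-5 responses ("strongly disagree" to "strongly agree")
wave_1 = rng.integers(1, 6, size=400)
wave_2 = rng.integers(1, 6, size=400)

print(f"Wave 1 mean: {wave_1.mean():.2f} | Wave 2 mean: {wave_2.mean():.2f}")

# A rank-based test is one simple way to ask whether responses have shifted.
# For a well-validated question, a meaningful shift is read as a real change
# in attitudes rather than a change in how the wording is understood.
statistic, p_value = stats.mannwhitneyu(wave_1, wave_2)
print(f"Mann-Whitney U: {statistic:.0f}, p = {p_value:.3f}")
```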
How a Survey Is Validated
The design of an employee survey (or any other questionnaire) is verified for its usefulness through the following steps.
1. Establishing face validity by having the survey reviewed by two different parties. Experts, or people who understand the topic under review (DE&I or Employee Engagement or Remote Work scholars, for instance), review the survey to evaluate whether the questions capture the topic successfully. Methodologists check question construction for double-barreled, confusing, and leading questions, among other key considerations, to make sure that the data collected is accurate and tells the right story. In the words of American Association for Public Opinion Research (AAPOR) member and Associate Chair of the AAPOR Education Committee, Kyley McGeeney: “Typically, someone comes to me with a research question that they want to answer using a survey. It’s my job as a methodologist to identify the population of interest and figure out how we will sample them; identify a sample frame with contact info; decide what mode we’re going to use to contact and survey them; decide on other elements of the data collection protocol, such as how many reminders we’ll send or calls we’ll attempt; design the questionnaire in terms of what we’ll ask and how; and, finally, weight and analyze the resulting data.”
2. Running a pilot test on a sample of intended participants to gauge format simplicity, time to completion, and clarity of questions. The more respondents, the better. A pilot test also checks how easily a survey can be scored, registered, coded, and interpreted.
3. Cleaning the collected pilot data by logging values in a format that can be checked to confirm, for instance, that responses to negatively phrased questions are consistent with responses to similar positively phrased questions, and eliminating responses that aren’t. (A minimal sketch of this kind of consistency check follows this list.)
4. Conducting a principal component analysis (PCA). In layman’s terms, this step validates what your survey is actually measuring. It’s a complex process that identifies the factors being measured by your questions, looking for common themes that “load onto the same factors.” To put it another way, when there are many questions in a survey, what each question measures tends to overlap. If the few questions that measure a similar idea are grouped together, the average score from these questions will be a more accurate index for that underlying idea than any individual question. WorkTango uses a collection of multiple statements or questions (forming an index) to ensure accuracy, measure attitudes and behaviors, and gain an understanding of overall employee sentiment. The Engagement Index, for example, measures attitudes and behaviors by focusing on four key components or themes to increase survey validity. (A rough PCA sketch also follows this list.)
5. Completing a factor analysis to check the internal consistency of questions: the correlation between questions that load onto the same factor and whether responses to them are consistent. Cronbach’s Alpha (CA) is a standard measure of that internal consistency, with values that range from 0 to 1; values of 0.6 or higher indicate an acceptable level of reliability. In simpler terms, factor analysis shows which questions are measuring the same underlying idea. It does this by analyzing how responses to different questions co-vary. For instance, if respondents answer question 1 and question 2 in the same way, such that scoring high on question 1 also means scoring high on question 2, the two questions have a high co-variance and are likely measuring the same thing in the minds of respondents. The results help us figure out which questions are actually measuring similar ideas, so we can group the responses in the most meaningful way and eliminate questions that are repetitive or irrelevant, making the survey more concise without sacrificing accuracy. The results also show the number of “factors” (think “big ideas”) that can be reliably measured from the current survey, the questions that jointly measure each idea, and how closely each question measures it: questions with a high loading (say, 0.8) measure the big idea more closely than questions with a low loading (say, 0.4). After a few big ideas have been identified by factor analysis, multiple regression helps determine how much each one contributes, and with what relative weight, to the key index of interest (be it Engagement or DE&I or Remote Work, and so on). (A minimal Cronbach’s Alpha calculation follows this list as well.)
6. Modifying the survey based on PCA and CA outcomes.
7. Repeating the validation process as needed.
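For the statistically curious, here is the minimal consistency check mentioned in step 3, sketched in code. The question pairing, responses, and one-point tolerance are hypothetical; real cleaning rules would be set by the survey methodologists.

```python
# Hypothetical check that a positively phrased question and its negatively
# phrased counterpart are answered consistently on a 5-point scale.
import pandas as pd

pilot = pd.DataFrame({
    "q_positive": [5, 4, 2, 5, 1],   # e.g., "I feel supported by my manager"
    "q_negative": [1, 2, 4, 5, 5],   # e.g., "I rarely feel supported by my manager"
})

# Reverse-score the negatively phrased item so both run in the same direction
pilot["q_negative_reversed"] = 6 - pilot["q_negative"]

# Flag respondents whose two answers disagree by more than a chosen tolerance;
# flagged rows would be reviewed or eliminated during cleaning.
tolerance = 1
pilot["inconsistent"] = (pilot["q_positive"] - pilot["q_negative_reversed"]).abs() > tolerance

print(pilot)
print("Responses flagged for review:", pilot["inconsistent"].sum())
```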
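Step 4 can be sketched in a few lines as well. The data below is simulated so that two groups of questions each reflect one underlying idea; the question count, grouping, and use of scikit-learn are illustrative assumptions, not WorkTango’s actual Engagement Index model.

```python
# Illustrative PCA: six Likert-style questions where q1-q3 are driven by one
# underlying idea and q4-q6 by another. All data is simulated.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 300
idea_a = rng.normal(scale=1.5, size=n)   # latent idea A (e.g., feeling heard)
idea_b = rng.normal(size=n)              # latent idea B (e.g., room to grow)

# Each question is its latent idea plus noise, loosely mimicking responses
responses = np.column_stack([
    idea_a + rng.normal(scale=0.5, size=n),   # q1
    idea_a + rng.normal(scale=0.5, size=n),   # q2
    idea_a + rng.normal(scale=0.5, size=n),   # q3
    idea_b + rng.normal(scale=0.5, size=n),   # q4
    idea_b + rng.normal(scale=0.5, size=n),   # q5
    idea_b + rng.normal(scale=0.5, size=n),   # q6
])

pca = PCA(n_components=2).fit(responses)
print("Variance explained:", pca.explained_variance_ratio_.round(2))
print("Loadings (rows = components, columns = q1..q6):")
print(pca.components_.round(2))

# Questions that load onto the same component can be averaged into an index,
# which is more stable than any single question on its own.
index_a = responses[:, :3].mean(axis=1)
print("Index A, first five respondents:", index_a[:5].round(2))
```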
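And for step 5, a minimal Cronbach’s Alpha calculation on hypothetical responses to three questions assumed to load onto the same factor. The formula is the standard one; the data is made up.

```python
# Cronbach's Alpha for a small, made-up set of responses to three questions
# believed to measure the same underlying factor.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array with rows = respondents, columns = questions."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

responses = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 4],
    [3, 3, 3],
    [1, 2, 2],
])

print(f"Cronbach's Alpha: {cronbach_alpha(responses):.2f}")
```

In this toy example the alpha comes out well above the 0.6 level mentioned above, so the three questions would be treated as a reliable group; a real survey would, of course, run this on a much larger sample.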
Of course, not every survey question is applicable to every workplace. It’s reasonable to expect to be able to eliminate or add questions that support the insights you need at a given moment. Survey validity is preserved when you have access to a library of validated questions and the expertise of survey methodologists to ensure accuracy every step of the way. Although there are BEST practices, remember to focus on the RIGHT practices for your organization. Survey methodologies rooted in third-party academic research, statistical analysis of proprietary data sets, and scientific research techniques ask the critical questions at the core of the lived employee experience across the entire life cycle: from hire and onboarding to the final exit interview and all points in between.