
Along with many others working in higher education today, I am keenly observing the explosion of education technology solutions. While this growth is not primarily due to the current pandemic, the contrast between the surge in ed-tech spending and the pandemic-accelerated budget cutbacks and layoffs across the academy is quite stark, as so eloquently explained by Goldie Blumenstyk at the Chronicle. In 2020 alone, ed-tech startup companies obtained over $2 billion in private and venture capital, a half-billion more than the year prior.
While many ed-tech companies sell products designed for individual consumers (think Coursera to upskill on a topic or explore a curiosity), a number are developing and selling products directly to postsecondary institutions. The largest traditional category is online learning and course-delivery tools, an expanding set of products used by brick-and-mortar institutions as well as by growing online providers (e.g., Western Governors University, Southern New Hampshire University). Meanwhile, according to Steven Southwick, CEO and founder of Pointful Education, the ed-tech product categories expected to grow and gain wider adoption are emerging technologies such as virtual reality, augmented reality, robotics, artificial intelligence, and machine learning.
With all of this technology comes data. And with data come questions of privacy—as well as opportunities for evaluation and evidence-backed growth and improvement. Here are the areas we’re watching at HEI as this sector continues to evolve:
- Privacy: Questions of privacy are important aspects of ed tech that will draw growing attention given the flood of technology into higher ed spaces. The more information we track about students, and the more departments that engage with technology tools, the more institutions will need to invest to mitigate the risk of private student information being accessed inappropriately. It’s easy to envision a cottage industry of consultants arriving soon (if they’re not already in place) to provide strategic planning around this risk, much as they once assessed staff professional development needs.
- Return on Investment: Since launching Higher Ed Insight over 10 years ago, we have worked with and been approached by ed-tech firms eager to obtain third-party evaluation and verification of the impact and value of their products. In some cases, the central question for the evaluation was ROI: what did the institution spend and save as a result of using the product? Some intended outcomes are easier to measure than others. Consider an ed-tech solution that helps students receive their financial aid funds digitally rather than through the mail. We could measure the cost associated with the two approaches (old-school snail mail vs. digital) or examine the time-to-deposit for funds; a back-of-the-envelope sketch of that kind of comparison follows this list.
- Other Outcomes: How do we measure the impact of products designed for admissions, registration, student verification, remediation, advising, and scheduling automation? This task is eminently important given the mass adoption of these products. Some of the ed-tech offerings we’ve come across over the years include tools to optimize financial aid awards to yield a desired student body, tools to identify at-risk students who are making missteps in their academic progress, and, more recently, financial forecasting tools for predicting spending needs. But what other outcomes are important to measure with respect to ed-tech spending by institutions? And what happens if an institution or K-12 school adopts a tech product but never fully implements it or realizes its full potential, and so never sees any true efficiency or impact? Goldie’s piece sheds light on this point with the poignant quote: “It’s too early to determine the impact of this ed-tech investment bonanza. But it’s not too late to pay attention to something perennially missing from these booms: whether the tools are working.”
Her point leaves me pondering, again, a question that has perplexed me over my years working in higher ed: why, given all of the intellectual resources of a college or university, don’t we do a better job of identifying outcomes for students and making decisions based on those outcomes? Why don’t institutions expect tech firms to demonstrate the effectiveness of their products beyond fancy marketing? Institutions seem content to leave uninterrogated the black box of educational experiences and their impacts on students.
Institutions know the demographics of those who graduate and who don’t. In some cases they dig into retention and graduation rates by critical demographic and academic categories. They know who repays their debt and who doesn’t. They may even know who is working and what they earn, and who successfully obtains licensure or continuing education in fields where it’s required. Yet often the people who know this information are not the same people designing and implementing curriculum, developing programs, or advising and serving students. While institutions gather feedback in the form of student surveys, instructional feedback, and required data for accrediting and professional organizations, there is still an enormous gap in the capacity of most institutions to measure and use data about the educational experiences themselves: the strengths and weaknesses of a program or campus experience, the effectiveness of course availability for career pathways, the employability of graduates in desired and related job areas, and satisfaction with the learning experience. And perhaps, toward loftier outcome aspirations, the role of education in a graduate’s civic life, family and community, intellectual growth and curiosity, and life satisfaction.
- Diversity, Equity, and Inclusion: I’ve noticed a big uptick in conversations about diversity, equity, and inclusion with a focus on outcomes across stakeholders in higher education, especially in professional associations such as those in student services (NASPA), excellent organizations focused on quality teaching (ACUE), many of the higher education associations in DC (ACE, AAU, APLU, AASCU, NAICU, and AACC), and among institutional researchers (AIR). Alongside these deepening conversations and the growth in data collection, we are now adding new ed-tech tools to an already fuzzy understanding of inputs and outcomes in higher education. There is not enough learning happening with the data and tools already available, let alone with new ones.
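To make the ROI question above concrete, here is a minimal sketch of the kind of back-of-the-envelope comparison an evaluator might run for the digital-disbursement example. It is written in Python purely for illustration; every figure and name in it is hypothetical and stands in for data an institution and vendor would actually supply.

```python
# Hypothetical ROI sketch for the financial aid disbursement example above.
# None of these figures come from a real institution or vendor; they are
# placeholders for the cost and timing data an evaluation would collect.

disbursements_per_year = 12_000       # assumed number of aid disbursements
cost_per_mailed_check = 4.50          # assumed cost: printing, postage, staff handling
cost_per_digital_transfer = 0.75      # assumed cost: platform transaction fee
product_cost_per_year = 30_000.00     # assumed annual license for the ed-tech product

# Direct savings from switching delivery channels
annual_savings = disbursements_per_year * (cost_per_mailed_check - cost_per_digital_transfer)

# Simple first-year ROI: net savings relative to what the product costs
first_year_roi = (annual_savings - product_cost_per_year) / product_cost_per_year

print(f"Annual savings from going digital: ${annual_savings:,.0f}")
print(f"First-year ROI on the product:     {first_year_roi:.0%}")

# Time-to-deposit could be compared the same way: median days from award
# posting to funds available under each channel, before and after adoption.
```

Even a toy calculation like this forces the questions the rest of this list is about: which costs, time savings, and student outcomes are actually being measured, and whether the product is the thing that moved them.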
Many people working in higher ed care deeply about students and seek to do the best they can in educating and serving them. Institutions also continue to grow their resources and invest in student success staff positions and ed-tech products in the student success arena, but these are often black-box exercises with little analysis or transparency about whether any of these things make a difference and, if they do, why. As this next wave of ed-tech solutions arrives on college and university campuses, institutions need to make a far greater investment in developing meaningful mechanisms and approaches for understanding the impacts of educational experiences and their outcomes for students.