
AI and Assessment in Student Affairs

Natasha A. Jankowski and Gavin W. Henning

While most people in higher education have likely heard of generative AI, it is useful to begin with a brief introduction to what it is and why it is causing such a stir. Generative AI is a type of artificial intelligence that can produce, or generate, text, images, and audio. AI has been around since the 1950s and has long been used in higher education, often in the form of chatbots that tutor students, support retention efforts, answer questions about scheduling and institutional policies, assist facilities management, and power predictive analytics. So why the sudden fuss, and what is the potential for student affairs?


In part, the current intrigue is due to the ease of use of tools such as ChatGPT, their wide accessibility, and user interfaces that make generative AI tools approachable. During the pandemic, AI tools were used for online test proctoring and plagiarism detection, but with wide-scale access to generative AI tools that can write papers, answer exams, produce coherent text, and largely go undetected by plagiarism software, concerns about cheating, the relevance of writing assignments, and academic integrity were quick to suck the air from the room.


For those interested in text-based generative AI tools, a short list is included below:


Generative AI tools are based on large language models, or LLMs, which pull information from a vast collection of sources on which they are “trained.” Those sources form the basis of what the tools produce: algorithms predict which text is statistically most likely to come next after a specific prompt, similar to predictive text on phones. These large language models are just that: large. ChatGPT-4, for example, is estimated to have on the order of 1 trillion parameters, the building blocks of a model that shape its output. When given a prompt or query, such as “list the top 5 ways generative AI is used in higher education student affairs today,” the algorithm extends the prompt based on its training. This means that how a prompt is written shapes the output just as much as the sources the model draws on shape the types of responses one receives. It also means it is hard to discern which tasks generative AI tools are good or bad at, so any output needs to be verified. Bias is built in because LLMs are trained on human language and texts, which are themselves biased. Remember how it took only a day for Tay, Microsoft’s AI chatbot, to spit out racist remarks on Twitter? While there is great potential for generative AI, there are also many challenges to consider for responsible generative AI use.
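The next-word prediction idea can be illustrated with a toy sketch. The following is a deliberately simplified, hypothetical Python example: it counts which word follows which in a tiny made-up corpus, then extends a prompt with the most likely continuation. Real LLMs use neural networks over subword tokens and vastly more training data, but the core mechanic of extending text by predicting what comes next is the same.

```python
from collections import Counter, defaultdict

# Tiny, made-up "training" corpus (illustrative only).
corpus = (
    "assessment supports student learning . "
    "assessment supports student success . "
    "generative ai supports student learning ."
).split()

# Count which word follows which (a bigram model).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def extend(prompt_word, steps=3):
    """Extend a one-word prompt by repeatedly picking the most likely next word."""
    words = [prompt_word]
    for _ in range(steps):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])  # statistically most likely
    return " ".join(words)

print(extend("assessment"))  # "assessment supports student learning"
```

Because "learning" follows "student" more often than "success" in this corpus, the model prefers it; change the training text and the "answer" changes, which is the same reason LLM output depends so heavily on training sources.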


How Can AI Be Used for Student Affairs Assessment?

While there are challenges and concerns, there are also interesting opportunities to support the work of student affairs assessment with generative AI tools. While the buzz in higher education has focused on writing assignments in classes, we offer five ways that generative AI can support student affairs professionals in their work.


For starters, generative AI tools may be used to draft learning outcome statements, whether from program descriptions or from what other units or institutions have been doing. They can be used to write student-employee job descriptions based on learning outcomes, critique outcomes, revise current outcomes, and/or serve as an idea generator for learning outcome areas a unit may want to include. Prompts can request a focus on diversity or social justice, a particular student population, or even institutional and strategic goals to help ensure the generated responses have local applicability within specific places and contexts. Prompts may even focus on revising or developing learning outcomes using a specific learning outcome taxonomy such as Fink's Taxonomy of Significant Learning, LaFever's Medicine Wheel, or the NACE career readiness competencies. If contextual information, such as a specific taxonomy or institutional goals, is not in the training data, a user can simply paste that information into the generative AI tool and ask it to use that information for the task at hand.


Generative AI can also be used to inform assessment design and data collection. Tools can be prompted to draft a four-question follow-up survey about a specific event based on the event description and associated learning outcomes. Tools can generate ideas on how one might go about assessing a specific unit, program, or activity based on the program or unit's strategic plan or goals. Prompts can request a draft of an informed consent document for a focus group, a focus group or interview protocol, or even an outline of a protocol and instructions for a talking circle. Generative AI can also create rubrics for projects or reflection papers.


Generative AI can support data analysis and reporting, particularly for small units or professionals who lack the time or, potentially, the expertise to conduct analysis. For example, generative AI can be prompted to suggest the types of analyses to run on quantitative and/or qualitative data, the tools to use, and the steps needed to run the analysis. Some tools can even perform statistical analysis of de-identified data or thematic analysis of de-identified open-ended responses. Generative AI can also be used to create a report, summarize findings, or even develop images and slides to present findings or results. Those reports can then be evaluated against specific rubrics or criteria to refine and hone the presentation of results.


There are a few caveats worth noting here. First, most tools are not secure, meaning that information entered into them may be accessed by other users. If you enter a data set of student information for a generative AI tool to analyze, there is no assurance of privacy or protection. The key takeaway: do not put any student-level data, identifying information, or sensitive information into generative AI. Second, not all tools can be trusted to run statistical analyses. If you want a tool to run the analysis for you, choose the advanced data analysis option in a paid version of the tools; free, open-access tools cannot be counted on to perform the desired computations correctly. An additional consideration is that generative AI tools may also introduce bias when analyzing qualitative data. Because of these issues, one should carefully review and critique generative AI output.
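As a concrete illustration of the privacy caveat, the sketch below shows one simple way to strip identifying columns from a CSV export before pasting responses into a generative AI tool. The column names are hypothetical, and a real workflow would also need to watch for identifying details inside the open-ended comments themselves.

```python
import csv
import io

# Columns assumed (hypothetically) to identify students; adjust to your data.
IDENTIFYING = {"name", "student_id", "email"}

def deidentify(csv_text: str) -> str:
    """Return a copy of the CSV text with identifying columns removed."""
    reader = csv.DictReader(io.StringIO(csv_text))
    kept = [f for f in reader.fieldnames if f.lower() not in IDENTIFYING]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=kept)
    writer.writeheader()
    for row in reader:
        writer.writerow({f: row[f] for f in kept})
    return out.getvalue()

raw = (
    "name,student_id,satisfaction,comments\n"
    "Jane Doe,12345,4,Loved the retreat\n"
)
cleaned = deidentify(raw)  # keeps only the satisfaction and comments columns
```

Even with a step like this, review the cleaned output by hand before sharing it; de-identification scripts cannot catch a student naming themselves in a free-text answer.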


Tools can also generate ideas for using assessment results to make changes or improvements. Prompts may request the top four actions to address a specific issue, suggestions for how to leverage assessment results for change within a particular institution type and setting, or even strategies for managing politics and organizational structure when securing support for assessment. Tools can generate meeting agendas, emails, and possible responses to address hindrances to the work of assessing student learning.


And lastly, generative AI tools could be used to support other assessment-related processes and practices, including determining reporting processes and forms, identifying data needed for accreditation standards, drafting possible responses to accreditation requests, and identifying strategies for implementing equity-centered assessment. AI tools can take meeting notes, summarize content, and transcribe interactions. Prompts could draft mission statements or even a charge and responsibilities, along with invitation emails, for a proposed student affairs assessment committee.


This is an example of the output from ChatGPT-4 regarding different steps of the assessment process for a hypothetical weekend leadership retreat. 

Considerations and Principles for AI Use

The current conversation on generative AI and higher education has been dominated by faculty concerns about writing assignments. Slower to emerge have been discussions of the opportunities to make the work of professionals easier, as well as of what students should be learning and doing with generative AI. Student affairs assessment professionals are uniquely positioned to help students and their colleagues engage with generative AI in ways that prepare students for future success in a technology-infused world. We offer the following four considerations and principles to help guide AI use in student affairs assessment.


  1. Center equity. Generative AI models include bias; simply being run by a computer does not remove the biases of the human-produced texts on which they are trained. The work of Dr. Joy Buolamwini on the “coded gaze” (including the documentary Coded Bias) outlines the potential for harm and for perpetuating inequities when tools are not used intentionally to address equity. Consider as well the accessibility of free versus paid subscription models for AI tools and how that may or may not hinder equitable access to technology supports.

  2. Check accuracy. Generative AI outputs may contain falsehoods, “hallucinations,” or “botshit”: statistically informed guesses that appear confident or plausible but are, in fact, incorrect. Statistical analyses may be presented that are incorrect as well; for example, ChatGPT-3.5 cannot reliably compute means from data. Just because output was produced by a computer does not mean it is trustworthy or accurate.

  3. Ensure privacy. Check what the tool being used does with any data entered into it. If student work is uploaded, it may be used as training data, or it may not be secure, allowing others to access sensitive information. Institutional review boards may also have policies or guidelines for the use of generative AI in research and assessment.

  4. Act ethically. Avoid claiming AI work as your own without attribution. If generative AI tools are used to produce content, be transparent in describing when and how AI was used. In addition, generative AI tools can produce misleading content (see, for example, how Leon Furze was able to use AI to generate a fabricated podcast) or be used to spread misinformation and disinformation. As educators, we have a shared responsibility to teach students about such potentials and the ethical implications of creating such content.
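The accuracy principle is easy to act on for simple statistics: recompute any AI-reported figure yourself before using it. A minimal check in Python, with illustrative ratings standing in for real survey data, might look like this:

```python
from statistics import mean, stdev

# Illustrative satisfaction ratings (1-5 scale); substitute your actual data.
ratings = [4, 5, 3, 4, 2, 5, 4]

# Recompute locally, then compare against whatever the AI tool reported.
local_mean = round(mean(ratings), 2)   # 3.86
local_sd = round(stdev(ratings), 2)    # sample standard deviation, 1.07
```

If the tool's reported mean or standard deviation differs from the locally computed value, trust the local computation and treat the AI output as a draft to be corrected.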


Resources to Stay Informed

While the SAAL blog and community are a great place to stay informed about student affairs assessment connections with AI, there are a variety of outlets compiling resources on AI engagement and usage more broadly.
