On the use of AI in Academia

It's clear that many academics are wary of using generative AI in the form of chatbots to assist in their academic work, and for good reason.

This is a very new phenomenon and many people are quick to form judgments and opinions.

The fact of the matter is, this technology exists, and we are beyond the point where the question is whether AI should be used. The more substantial and interesting question, in my opinion, is how AI can be used effectively. I also have questions about the nature of academic work in general and why there is pressure to use AI to write abstracts or produce written work.

AI is advertised as an efficiency tool. So it seems the culture of "publish or perish" is fostering an environment where AI is an attractive tool to help academics stay on course for their career progression.

Students feel pressure to submit work on time and graduate quickly in order to build a life for themselves. To some, the degree matters more than the learning.

At the core, people do not feel safe within the current standards of success. Working hard isn't enough.

As such, it makes sense that people imagine others using AI to "cheat" and meet these goals more efficiently, and are quick to condemn the use of AI on moral grounds.

I believe, however, that at the core everyone wants to do good work and to contribute meaningfully. I think it's an unfair and cynical judgment to assume that most people are using AI with entirely nefarious intent.

Like I said earlier, this is a very new technology and we as a society are still understanding the role of AI in the world around us.

I think this moment is an opportunity to think about what it means to work, to produce, and to create in general.

These days while I'm working on my research, I find myself asking these questions: What does it mean for me to write and publish something? Is my goal to educate other people? Is my goal to deposit data into a larger repository of information? Is my goal to achieve tenure?

I think the role AI plays in meeting each of these goals would look a little different. And it's unlikely that I care about only one of these goals at a time.

On the question of how to use AI effectively, I have yet to come across a good set of guidelines that make sense to me.

I use AI to help with the syntax of my data analysis code. I know what plot I want to make or what questions I want to answer, but I can't be bothered to scan for a specific equals sign or bracket.

Screenshot of a conversation with ChatGPT to have it help me turn my handwritten function into code.

In terms of refining my ideas, I use AI to help me identify a logical gap in my reasoning and to consider other potential options. I still have to think and make the scaffolding of a rationale and I do not blindly accept its responses. It is as if I'm talking to a colleague who is much more well-read than I am. I don't expect them to remember every single detail of every paper they read and I often don't agree with some conclusions or inferences they may make. The act of engaging in conversation, however, enriches my own thought processes and pushes me to map out and continue exploring directions.

A screenshot of the graph view of my doctoral research Obsidian Notes vault.

I don't claim to have a solution to this issue, but I hope to encourage more folks to set their judgments aside for a moment and consider that multiple things can be true at once. It is true that bad actors exist and are trying to get away with disingenuous contributions to the academy. It is true that nobody really knows what the best use of AI is. It is also true that AI is a remarkable technology, built over many years from the hard work of curious academics like ourselves. Rather than viewing this new part of society as a detriment and with excessive caution, I implore you to get curious and excited about your role in shaping the academic culture of the future.
