AI and chatbots have made a spectacular entrance into the world of higher education. Colleges like Fairfield University stand on the edge of a new era in education, one in which students might frequently look to artificial minds to aid them in their academic endeavors. 

One of the larger topics of discussion in higher education is how, if at all, professors should address the new technology in the classroom.

Much has been made over the past several months of the implications of ChatGPT, a fairly sophisticated prototype chatbot created by OpenAI. Since the application’s release in November 2022, there has been considerable debate in the academic community about the impact that it will have on coursework.

One might expect that access to artificial intelligence of this caliber would require more than the ability to create a Google account. Yet the sophistication of ChatGPT is still amazing to behold, especially given how small a segment of the population had previously been aware of the modern capabilities of chatbot technology. It is a remarkably adaptable tool whose depth of knowledge can leave one's head spinning.

Even so, the limitations of ChatGPT become apparent after only a few minutes of testing its responses. As the AI will readily tell you, it has only been trained on events that occurred prior to October 2021; as such, it cannot provide meaningful commentary on current events. Furthermore, the AI is incapable of citing sources for any research that is asked of it, a significant hurdle to any unscrupulous students who might be tempted to use the chatbot to complete written assignments.

To gain some more insight into the impact of ChatGPT on Fairfield University’s faculty, I sat down over Zoom with Tommy Xie, PhD, associate professor of English, to discuss the new chatbot. I did so at the recommendation of multiple students of his, all of whom claimed that he had been vocal about ChatGPT and its potential impact on higher education.

Dr. Xie first became aware of ChatGPT amidst the flurry of social media discussions about its launch at the end of last year. “I decided that I had to try it for myself,” he said. “Its potential and sophistication quickly became apparent to me.”

After doing so, he logged into Blackboard to check the requirements of some of his final projects to determine the degree to which ChatGPT could be used to unethically complete the assignments.

“ChatGPT’s inability to cite sources, particularly about recent information, would have made it difficult for a student to use it on my final,” Xie said. “Even so, it has raised my awareness of the need to create assignments in which fundamental course skills cannot be faked.”

In general, many of the concerns voiced by teachers and professors around the country are valid. Unless the guidelines for written assignments are extremely specific about the tools and content that may be utilized, students can theoretically use the chatbot to complete much of their assigned work.

The newfound ease with which students can use ChatGPT to complete assignments is compounded by a similar problem: without training, educators can have trouble spotting AI-generated content.

If students were to utilize chatbots in their assignments, would the average faculty member be able to spot the difference between students’ work and that of the machines? As I quickly learned, the distinction can be difficult for even the most seasoned instructors to notice.

Dr. Xie directed my attention to a New York Times quiz presented to several educators (as well as beloved children’s author Judy Blume) that sought to test their ability to differentiate between AI-generated content and content produced by real students. “The educators didn’t exactly do great,” said Xie.

If ChatGPT could fool these teachers while the AI is still in its relative infancy, who knows how convincing it might become in the near future? While platforms like OpenAI's new text classifier can tell a user whether submitted text was likely generated by an AI, they cannot do so with complete accuracy. As more students learn of ChatGPT's capabilities, this issue is only likely to become more pronounced.

Recently, Dr. Xie was invited to attend a faculty discussion, hosted by Fairfield’s Office of Academic Excellence, to talk about the potential use of AI tools in the classroom. It quickly became clear to him that there was a serious lack of consensus among professors about how applications like ChatGPT might one day be incorporated into teaching.

“In general, STEM professors seemed the most adaptive and open to the possibilities of this technology,” said Xie. Conversely, “much of the opposition to chatbots came from professors in the humanities.”

At the discussion, some professors explained that their wariness stems from a belief that so much as addressing the arrival of such a sophisticated chatbot could tempt students to abuse it for their own academic gain.

For his part, Dr. Xie has expressed a willingness to learn more about the capabilities of tools like ChatGPT, as well as the potential for their integration into his pedagogy. After all, there is a world of difference between secretly using a chatbot to write a paper and, for example, openly using a chatbot to conduct preliminary research.

“The fear, for me,” he said, “is that people are not seeing the distinction between the two assessments. What I want is total transparency from my students [in a class that approves the use of AI]; if you use AI, tell me the degree to which you used it.”

More than anything, Dr. Xie expressed to me that the debate was not limited to that one faculty discussion; there will almost certainly be a continuously evolving dialogue regarding the use of AI at Fairfield going forward.

It is clear that this development isn’t simply going to go away if ignored; the sooner that professors reach a consensus on the use of AI, the sooner that it can be utilized in an advantageous and honest manner.
