Kate Crane's advice to professors worried about new artificial intelligence (AI) tools comes down to two simple words: don't panic.
An educational developer with Dal's Centre for Learning and Teaching (CLT), Crane says, "I would just want to remind professors that they should prepare, but they don't necessarily have to turn their worlds upside down."
Kate Crane, educational developer (Nick Pearce photo).
Sometimes it feels like everything, not just teaching, is being turned upside down by new generative AI tools.
Media are experimenting with articles written and illustrated by AI. Sometimes this goes wrong, as when Microsoft made the news for apparently AI-written travel articles that recommended Ottawa tourists visit a food bank "on an empty stomach." Meanwhile, search engines and blogging tools are incorporating AI assistants and chatbots, and photo-enhancing software is using AI to blur the lines between reality and touch-up more than ever.
And your feeds are likely full of AI images, some of them passing themselves off as real. (No, those babies are not really parachuting, and that isn't Van Gogh sitting on the front steps of his house at Arles.)
But what does it all mean for teaching, learning, and academic integrity? Does widespread adoption of ChatGPT mean the end of the essay as a meaningful evaluation tool? Are Dal's academic integrity officers about to be swamped? And should professors ban AI, incorporate it, or embrace it?
A pedagogical problem and an integrity issue
In an online workshop held earlier this year, Computer Science professor Christian Blouin said AI tools like ChatGPT represent "a pedagogical problem that has a short-term academic integrity issue – and we need to sort ourselves out very quickly." Dr. Blouin is Dal's institutional lead for AI strategy, and he says it doesn't make sense for a university with as many programs and disciplines as Dal to have one blanket policy on acceptable use of AI by students.
Related reading: Dal's AI lead aims to spark conversation and connection on our rapidly evolving information future (Dal News, July 25)
"In computer science, we're thinking about AI-driven tools differently than in engineering, for example," Dr. Blouin said in an interview. Meanwhile, in the arts and social sciences, "We are assessing critical thinking, but the medium through which we do that is writing."
The discussion around AI tools quickly draws us from specifics to big-picture questions: What are universities for? What is the purpose of assignments? What are we assessing and evaluating?
When it comes to essays, for instance, "The point is not so much that you wrote something, but the process of thinking, and the process of articulating what's underneath," Dr. Blouin says. "A tool is not an agent, it's not a person... People come to university so they can become citizens and professionals. And it's really important that we provide them with an education and give them an assessment of their abilities in making decisions, and reasoning through, and thinking ethically."
Jesse Albiston, pictured on the sofa wearing a cap, with colleagues from Bitstrapped and a robot dog.
AI in the workplace
Jesse Albiston (BComm'14) is a founder and partner at Bitstrapped, a Toronto-based consulting firm specializing in machine learning operations and data platforms. In short, they help companies figure out if and how they should be using AI.
The AI revolution has been good for Bitstrapped. Albiston says the company booked more work in the first quarter of this year than in all of the previous year. At the same time, he cautions against jumping on the AI bandwagon just because that's what everyone else is doing. When the firm is approached by clients who want to integrate AI into their workflows, "Half the time – maybe more than half the time – AI is not the right approach," he says.
At the same time, he thinks learning how to use these tools should be an essential part of a university education – at least in some fields – because they are going to be an essential part of the workplace.
"If someone graduates university today, they should be using these tools. You're not going to be replaced by AI. You're going to be replaced by people using these tools," Albiston says. "In my company, I have employees one or two years out of university who are using these tools, and their output is fantastic. They just need a bit of coaching on how it works."
But if they "just need a bit of coaching," is that something a university should be providing? Dr. Blouin is not so sure. He says graduates will definitely encounter AI integrated into tools like office suites. But universities should take a longer-term view, preparing students for careers that will last decades. (How many of us learned high school tech skills we never used again, because technology had moved on?) That means thinking beyond ChatGPT and related large language model (LLM) tools.
Even if professors do want to integrate tools like ChatGPT, Crane says they should proceed with caution. While she believes "experimentation is good," she notes that at Dal, instructors are not allowed to require students to use AI for coursework. Apart from any pedagogical concerns, "There are data privacy concerns," she says. The CLT says on its website that making the use of AI tools mandatory for a class contravenes Nova Scotia privacy law and Dalhousie's Protection of Personal Information Policy.
Process over output
English professor Rohan Maitzen, who teaches both literature and writing, feels "resentment towards the people who are propagating these systems on us without our permission." Teaching and learning writing is more about process than output, she says. And ChatGPT can't help with that. But because it offers the promise of producing passable essays quickly and easily, Dr. Maitzen says she and her colleagues are worried.
"We can't ignore the fact that this is a tool designed to take over the writing process," says Dr. Maitzen.
"Right from the moment you think, 'What am I even going to write about?' that begins your own individual process of figuring something out and putting your mind in contact with it. You can't outsource that work to a machine. It's an act of communication between you and the person you're writing for."
Dr. Maitzen has already received at least one assignment written by ChatGPT. One of the tool's well-known shortcomings is that it makes up information, such as false citations and inaccurate "facts." She assigned a reflection on a short poem and received an essay with one critical problem: "The quotations the paper included were not in the poem. They don't exist at all," Dr. Maitzen says. "So it wasn't a mystery looking at this paper – and looking at the relatively short poem that the paper was supposed to be about – there was just no correlation whatsoever."
Avoiding an arms race
This, of course, brings up the question of cheating and academic integrity. Bob Mann is the manager of discipline appeals for the university secretariat. He said the number of cases referred to academic integrity officers has "gone up dramatically" in the last few years – although that might be because of greater detection. Mann said sometimes students are deliberately cheating, but often they "are just trying to figure things out" and "inadvertently commit offences."
He expects AI tools to make him busier this year. "I call it Napster for homework," he says. But it won't necessitate a change in academic integrity rules. "Writing a paper using AI is not a specific offence we have on the books; a student is required to submit work that is their own. So the rules have not changed."
But determining what constitutes a student's own work has become harder (with exceptions, like the AI-fabricated quotes Dr. Maitzen mentioned earlier). In terms of enforcement, Mann cautions against assuming students are using AI, saying he has seen cases where accusations proved to be unfounded. Students who struggle with English or who don't understand how to cite properly may be particularly vulnerable to these charges.
And while it might seem tempting to deploy ever-more-sophisticated tools to crack down, everyone interviewed for this story counselled against that approach.
On a basic level, if you are "suspicious of generative AI, why are you going to trust another piece of AI software with a decision that can cause harm? It makes no sense from an ethical perspective," Dr. Blouin says.
Dr. Maitzen agrees and says "focusing on this as a discipline-and-punish problem is maybe counterproductive." Plus, she has no interest in an AI arms race with students.
"It isn't just about an enforcement problem. We would like to trust them, and we would like to engage with them in the spirit of trust and authenticity. And so we want them to understand what it is we're really asking them to do, rather than just emphasizing what we're telling them not to do," she said.
Les T. Johnson, who, like Crane, is an educational developer at the CLT, also emphasizes conversation. He says, "I would want to make sure I had an honest conversation with my students about [AI]."
Crane agrees: "Talk to your students about implications of AI, for themselves, their communities, their learning, society."
"Artificial intelligence" is not intelligent
A small number of students have always found ways to cheat or cut corners on assignments. They could copy passages out of books, hire people to write papers for them, buy essays online, and use any number of other tools. What AI has done is make it that much easier for students to use outside help – whether or not sanctioned by their professors.
But not that easy.
Despite the term "artificial intelligence," there is nothing intelligent about ChatGPT and other LLMs. They recognize patterns and can create coherent sentences. That doesn't mean students can always rely on them to write a decent essay. "It isn't actually quite so easy as just logging on, putting in your assignment prompt, and it'll just give you exactly what you need to get a C or above," Crane says. "It takes some competency to get what you want out of it, and that takes time."
Dr. Johnson has noticed professors seem less worried about students turning in AI-generated assignments than they were six months ago. That may be because a lot of students tried them and found them lacking. "I was thinking about ChatGPT in particular. When it first started, it was, 'Oh my gosh, how exciting! I can have this computer write my essay!'" he says. "But of the students who used it, some may have been cited for plagiarism and then some just didn't get good grades. And they're like, well, wait a second – I didn't learn anything, I didn't even do that well, and it wasn't that much easier... I'm just going to do it myself."
Dr. Maitzen doesn't blame students who are anxious about grades. They've grown up in a culture that increasingly tends to view university education as a commodity. "They don't have a sense that it's all right to take a risk, to just give it a try, to just say what they think," Dr. Maitzen says. "They're not sure they have the skills to do that, and they don't have enough metacognition to realize that doing it online is exactly what prevents them from developing those skills – and it becomes a self-fulfilling prophecy."
Related reading: Ask the experts: Where will artificial intelligence go next? (Dal News, June 5)
How broken are our courses?
Dr. Blouin said the Faculty of Computer Science hired a graduate student over the summer "to assess, if we change nothing in our curriculum, how broken our courses are." Essentially, how far could students get using AI for assignments, without actually understanding any of the material or learning anything?
Dr. Christian Blouin (Nick Pearce photo).
In some cases, pretty far. "GPT is a pretty good programmer for simple stuff," Dr. Blouin says. What the faculty found anecdotally was a lot of variation. "There are some third-year assignments that it does very well, and some first-year questions on which it falls apart very quickly." One approach would be to rejig assignments that can be done by ChatGPT, but Dr. Blouin calls that "a dangerous game to play, because as soon as a new version comes out, your entire strategy may fall apart."
Despite the AI revolution, Dr. Blouin says one fundamental thing has not changed: "You are intellectually responsible for the work that you produce." And "universities should not be satisfied with something that looks like work. We are satisfied with and expect actual work from our students," he says. "So I think it's more an issue of personal and professional responsibility than it is of honesty."
Still, Dr. Maitzen is planning on making some changes to her assignments, especially for larger introductory classes. That may mean more contract grading, in-class writing, and multiple-choice tests. (She is less worried about her upper-level Victorian literature courses.)
But she is hoping that honest conversations with students about getting the most out of university will carry the day. Crane agrees with that approach. A good assessment isn't suddenly bad because there is a possibility AI could help with it. She urges faculty to "keep designing good assessments according to evidence-based practice."
This story appeared in the DAL Magazine Fall/Winter 2023 issue.