
Dal's AI lead aims to spark conversation and connection on our rapidly evolving information future

Posted by Ryan McNutt on July 25, 2023 in Computer Science, Faculty, Faculty of Computer Science
Christian Blouin, Dalhousie's strategy lead for AI. (Nick Pearce photos)

It's one of 2023's hottest topics: artificial intelligence, or "AI." Read any news outlet or spend some time on social media and someone, somewhere, is showing off something that ChatGPT "wrote" or that an image generator "drew."

Science fiction becoming science reality? Not quite yet. But it's fair to say that the power of these machine-learning tools, and the speed at which they have advanced into something approaching the fantastical, has created a mix of hype and hysteria that can be hard to parse.

Read also: Ask the experts: Where will artificial intelligence go next? (Dal News, June 5)

Within higher education, initial conversations around tools like ChatGPT have largely focused on academic integrity. Back in April, the university hosted a discussion on the topic, featuring a range of perspectives from across the university. One of the participants in that event was Christian Blouin, professor and associate dean academic in the Faculty of Computer Science, who has been working to support faculty through multiple recent disruptions and was recently appointed as institutional lead (AI strategy) for Dalhousie.

And while Dr. Blouin sees the academic integrity conversation as an important one, he's also keen to broaden the AI conversation at Dalhousie into something much more holistic.

"If we assume the pace of disruption is increasing (even if it stays constant), we don't want to find ourselves in a place where we'll constantly be criminalizing everything new," he says. "Instead of defining ourselves by what's not allowed, we need to be clear on what we're trying to achieve as a university."

Leslie Phillmore, associate vice-president academic, says the pace at which AI is affecting, and will continue to affect, academic work makes this a critical conversation to have now.

"Having Christian help facilitate that conversation at Dalhousie not only will give this important work a focal point but will allow us to better connect with other universities across Canada to share information and strategies," she says.

Developing systems and supports


As for what Dal is trying to achieve with AI, that's a conversation with Dr. Blouin right at its centre. For the next couple of years, a portion of his time will be spent consulting with staff and faculty, answering their questions and helping the university develop policies and guidelines with respect to the use of AI and machine-learning systems in the classroom, in research and in administrative work.

"AI is not really a technology question; it's more a people question," explains Dr. Blouin. "Where is it appropriate or ethical to delegate automation or decision-making to algorithms and software systems, and where is it not? Especially within a university, a place where we disseminate knowledge, it's important that we empower everyone to be part of that conversation."

Dr. Blouin has already hosted meetings and delivered presentations to many Faculties and faculty councils on the subject, with more to come. His initial focus is on putting together a guidance document for fall courses on how these AI tools, such as large language models, should be considered.

"People want to know the boundaries of what they can and can't do, and September is coming soon for faculty who may be looking to adjust their course plans or their syllabus," he says. "The idea is a mix of pedagogical support and guidance-level advice: a working document that gets folks talking about it and feeling like they can start to get engaged in the subject."


Longer term, it's about helping Dal prepare itself for an AI-informed digital future in which the pace of change is accelerating. Dr. Blouin wants to ensure the university isn't caught off-guard by new developments but, instead, has the processes and people in place to carefully consider opportunities and challenges as they emerge. Most importantly, he wants the university to get better at coming together to make nuanced decisions in a multidisciplinary, collaborative manner.

The human element


While the term "AI" is still perhaps best known for its sci-fi context in popular fiction like Terminator or The Matrix, its current application isn't about artificial consciousness akin to actual human thinking. It's about computer processes that consider massive amounts of data, whether words or numbers, to perform certain tasks very quickly.

What makes it seem "intelligent," though, is that the tasks being performed have traditionally been distinctly in the human domain, such as writing complex text in particular styles or creating realistic-looking images. Through AI tools, computer software can now perform these functions, and can do so at a much higher quality level than ever before.

Scary stuff? It can seem that way. "The first time someone uses a tool like ChatGPT it can be pretty overwhelming," says Dr. Blouin, referring to the text-generating software developed by OpenAI that, since its launch just seven months ago, has become the standard-bearer for what modern AI can do. "These systems designed to generate language exercise quite a bit of analytical skill, and that's disturbing, because we thought we [as humans] had a monopoly on that."

But these sorts of big-data systems can also be incredibly helpful. They work so fast, and on such a huge scale, that they can accomplish easily automatable tasks or processes that take up significant amounts of time, particularly ones that don't require or benefit from creativity and analysis. Dr. Blouin cites the example of being asked to summarize a 90-page proposal: a large language model can review it and return bullet points in seconds, versus the hours it would take to read through it and take notes.

"It gives me the ability to scan so much more information more quickly," he explains. "But we should never make critical decisions based on that work alone."

Empowering people


If we're looking to maintain the essential human role in our work with AI, we need to make sure the humans know what to do with it all. And there is a lot to consider here: not just issues of authorship, but bias, privacy, copyright and (given the carbon footprint of the servers that run these tools) environmental implications as well.

"Just asking faculty to figure this out on their own, it's not realistic or fair," says Dr. Blouin. "That's why we want to figure out how we provide guidance to faculty who are designing courses and programs on how to bring this effectively and ethically into their work. And this applies to staff as well. How do we provide the Dal community with hype-free information and guidance on what's appropriate to do, and to help them adapt to a rapidly changing situation and make the best of it?"

In a way, Dr. Blouin sees his appointment as working towards his own redundancy as institutional lead for AI: to help Dalhousie reach a point where Faculties, departments, instructors and students all feel like they can engage with the bigger AI discussion in their own work or study.

"The problems and opportunities that AI represents in various fields and disciplines are unique to those disciplines. A computer scientist wouldn't necessarily understand them. So over the next decade, everyone has to 'own' AI, not just computer scientists. But we can only do this if there's a baseline understanding, and people feel they have the authority to make good, informed decisions."

"If you empower people with knowledge to form their own opinion, and to have the confidence to do so, that's how we navigate the ethical nuance of AI."