AI in Academia: What It Means for You
2023-03-20
Preface: The following is a blog post version of a lightning talk I gave for a general audience. Some aspects are simplified in the interest of time and suitability for the audience.
Artificial What?
When we say artificial intelligence or AI, the first thing that comes to mind for many is HAL from 2001: A Space Odyssey or the Terminator with those glowing red eyes - but the reality is much simpler.
When we say AI, we’re talking about a program or system designed to carry out human-like tasks by taking in annotated examples and improving itself based on previous experience. While many scientific fields have histories stretching back hundreds of years, AI is a little different in that regard: we can roughly date the earliest modern work to around the start of the 1950s. It was during this period that various scientists tried to map out the potential boundaries of AI by proposing problems they thought would be impossible for an AI to accomplish. Problems that were too complicated, too complex, too human for an AI to learn.
One popular example was chess.
The idea was that chess simply had too many pieces in too many combinations; it would be too complicated for an AI to beat a human. And then in 1997 an AI system called Deep Blue defeated the reigning world champion, Garry Kasparov. So, back to the drawing board: what about Go? Go is another board game, played largely in East Asia, with far more pieces and vastly more possible positions than chess. Even strong human players struggle to keep track of everything going on in a typical game. Surely there’s no way an AI could beat a human at that. And then in 2016 a system called AlphaGo defeated Lee Sedol, one of the world’s top players.
So back to the drawing board again: art. If AI is good at learning strict rules like those used in board games, art sits at the complete opposite end of the spectrum. Surely art is too abstract, too uniquely human, for an AI to ever produce its own. And then last year OpenAI introduced DALL·E, an AI that takes a text prompt from the user and generates completely new images. Then, in the summer, GitHub introduced Copilot, which helps programmers write code from a plain description of the problem to be solved. And just last month Google announced Imagen Video, a system for creating entire videos from nothing more than a prompt.
What Now?
All of these milestones start to raise a few interesting points. Not only is each successive breakthrough more impressive than the last; they are also arriving faster and faster.
So the obvious question is: what does this mean for us?
A lot of the examples I’ve given here focus on games and creative pursuits, but academia is starting to be affected too. How do we as students, as academics, as teachers, stay ahead of systems like these? Recently we’ve started to see systems that can read and digest papers faster than we can, analyse results faster than we can, and carry out their own research faster than we can.
And from a teaching perspective, we’re going to see issues too. What good is an art assignment if students can generate a masterpiece on demand? Why set an essay when students can generate the whole thing from a single topic sentence? Why set programming tasks if students can solve them by giving an AI a brief outline of the problem?
The Path Ahead
AI is going to radically affect our lives and our work in the coming years. It is a rapidly evolving field that offers countless amazing new opportunities but also poses many difficult questions. While we’re still a long way from systems as intelligent as HAL or the Terminator, we will have to think about how we respond to systems that are edging ever closer to doing our jobs better than we can. As these systems get better and better, what approach do we take? Do we hand over control and let AIs dictate our work? Or do we dig in and try to outpace and outsmart these machines?
Or is there a middle ground we can strike? One where we pull from the best of both worlds, leveraging the immense processing power of modern AI systems to bolster our own expertise and innate curiosity? Just because these systems can carry out all these different kinds of tasks doesn’t mean AIs should be the only ones doing them. There are opportunities here to go further together than alone. But that’s a conversation for another post.