Professors weigh in on the real life ramifications of artificial intelligence

July 20, 2023


With new forms of artificial intelligence (AI) appearing every week, many believe that the world could be headed toward destruction, as seen in science fiction movies from the past few decades.

Some Quinnipiac professors believe that AI will lead to certain kinds of loss, but not necessarily the loss of society itself. Instead, AI is like every other technological advance in the sense that it will completely change the way the world works.

But, there’s no need to fear a science fiction computer takeover, explained Quinnipiac computer science and software engineering professors.

“AI tools aren’t conscious," said Jonathan Blake, professor of computer science and software engineering. "There is no singularity happening with ChatGPT. Large language models simply generate a statistically likely sequence of words that are filtered through a program to ensure it is proper English.”
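Blake's description, greatly simplified, can be sketched in code. The following is a toy illustration only, not how any real model is implemented: the word probabilities are invented for demonstration. The point is simply that each next word is sampled according to how statistically likely it is given the words so far.

```python
import random

# Invented probabilities for this demonstration; a real large language
# model learns billions of such statistical patterns from training data.
NEXT_WORD_PROBS = {
    ("the",): {"cat": 0.5, "dog": 0.3, "model": 0.2},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("the", "cat", "sat"): {"down": 1.0},
}

def generate(prompt, steps=3, seed=0):
    """Repeatedly sample a statistically likely next word."""
    rng = random.Random(seed)
    words = list(prompt)
    for _ in range(steps):
        probs = NEXT_WORD_PROBS.get(tuple(words))
        if probs is None:  # no data for this context; stop generating
            break
        choices, weights = zip(*probs.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)
```

Nothing in this loop understands the sentence it is producing, which is the distinction Blake draws: statistical likelihood, not consciousness.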

AI is a diverse field, its subfields connected only by the goal of creating computer systems that can do the things humans have always been better at, he said.

Blake's Ph.D. research focused on computational linguistics, contributing to a small part of the work that would become the large language models, or LLMs, like the academically infamous ChatGPT.

With generative AI websites becoming more popular, students can easily ask a question to a computer and get a written-out answer with little to no effort on their end. While doing so violates many codes of ethics, there are benefits to this sense of easy access.

“I think we should take the opportunity to focus on what we really want our students to do when it comes to generative AI," said Blake. "I have taught about general AI in courses previously, and this fall I want to incorporate LLMs into my courses at all levels to explore the power of these tools in creating potential solutions."

With an extensive background in the field, Blake shares firsthand knowledge with his students as someone who helped build the foundation of this kind of AI. He is mostly concerned about the Quinnipiac community misunderstanding what these sites are and what they do.

While the sites themselves aren’t positive or negative, people are free to use them as they choose, he said.

“AI is only as good as the data it is trained on, and if we train these tools on the internet, we know that the result won’t be good,” said Blake. "That goes for many LLMs, including ChatGPT."

Carrie Bulger, a professor of psychology at Quinnipiac, has seen the ways that these AI-based websites provide incorrect information. They function by compiling information from the internet, similar to search engines like Google or Bing, she said. The only difference is their way of presenting that information.

“It might seem like a good idea to use a program like ChatGPT to get a basic idea of what something is about, similar to how some people use Wikipedia," said Bulger. "The problems come from the incorrect information these programs discover and their ability to make up something that ‘sounds right.'"

While some individuals believe in fact-checking information from these programs, many fall into a trap discovered years ago by cognitive psychologists.

“When we read something, it becomes a part of what we know," said Bulger. "It gets linked to other information in our brains, and if what the website is saying seems plausible, the connection can be even stronger."

For nuanced questions, AI may not understand what it is being asked and will provide information it thinks will suit the question. For more complex questions, AI can string together facts in a way that makes them false. There is no sure way to know if what you’re being told is the truth unless you fact-check it on credible sites, she said.

"There have been tests run on LLMs where they were asked to provide sources for their information and some of those provided were false," said Bulger. "Just as it can generate plausible answers, it can generate plausible citations."

But, these types of AI can still provide something valuable, she added. They have become a way to expedite simple but time-consuming work. Scanning lines of code for that one small error causing it to fail, creating an outline for the essay you’re already prepared to write or proofreading a final work for grammar are all tasks that these sites can simplify.

For individuals with ADHD or dyslexia, these platforms and tools can be a lifesaver, added Gina Abbott, professor of psychology.

"There were many concerns about software as simple as spellcheck and Grammarly a few years ago, and some were wary of internet research," said Abbott. "I am concerned that students may rely on this too much, but I also see it as a potential resource and tool. I believe those with learning disabilities should take advantage of technical tools to assist them in staying on track, but papers should still be created by a human being.”

She’s not the only one who feels this way.  

"I think we should be embracing these new tools instead of clinging to old ways," said Blake. "We should be looking for ways to update our ethical expectations while meeting students within the ways that they learn. If that means using AI tools, then we should use them.”

Chetan Jaiswal, a professor in the computer science department, primarily researches AI and cybersecurity. The recent boom in artificial intelligence is nothing new to him; he has taught the subject in his classrooms for years.

“Academically I believe we need to adapt and evolve on how to provide a more effective learning environment for our students because over-reliance on AI systems can lead to a reduced reliance on human skills and critical thinking," said Jaiswal. "It is crucial to strike a balance between leveraging AI as a tool and nurturing human intelligence, creativity and problem-solving abilities."

Jaiswal has been teaching students to strike that balance for years.

“I always say that ‘AI won’t replace you, but a person knowing and using AI might,’” he said. “AI is not always reliable. It lacks contextual understanding, leading to bias and patterns, not accuracy. Students are going to use these tools, so as a professor my role is to help them understand how it works, what it can do and what it can’t do.”

Like Jaiswal, many Quinnipiac professors in the field share a common belief: AI can benefit those who know its limits and understand how to use it.

“Our current students are our future leaders and they must be exposed to the latest technologies and state-of-the-art advancements," said Jaiswal. "AI is a tool that was created by humans and its impact is shaped by how we design, deploy and regulate it."

AI may lead to some jobs becoming obsolete, but it can also create new ones, he added. It may make plagiarism easier, but the fabricated information these tools produce can also make that plagiarism easier to spot. AI, like every other technological advance, has positives that outweigh the negatives.

The world is safe from destruction, he concluded, at least from ChatGPT and other AI sites.
