Andrew Carr, the director of graduate cybersecurity programs and an assistant professor of cybersecurity at Utica University, says there are pros and cons with the rise of artificial intelligence.

Carr says it helps with productivity, but while it takes in good information from the internet, it also takes in bad, unvetted information that isn't true.

“It's essentially just regurgitating things that it finds on the open Internet and sometimes trying to piece together details where there's a missing part. So, we can't really take it at face value, because oftentimes it's incorrect," said Carr.

Some feel AI has a mind of its own. Carr said AI technology is currently not sentient.

“It's just recalling information that it has learned. It's not going out and formulating its own ideas on things. It's a really glorified kind of search engine if you will," Carr said.

Could someone, for example, hack an institution by asking an AI system to do so? Carr says not right now. The system may provide some information, but he says there are guardrails.

“So, the companies that produce these large language models will put guardrails on them to try and stop individuals who try to use it nefariously from doing so. But as we've seen, there are ways around those," said Carr.

Then there are deepfakes, images or videos of people that are digitally altered to spread false information. Fake hostage videos and false videos of world leaders are among the things that could be created, which could pose a national security concern.

Carr pointed to how realistic movies can look, and while he said there are currently ways to spot a deepfake, that might not always be the case.

“Like any technology, there are researchers out there that their sole duty in their minds is to try and circumvent any technology that's out there,” Carr said. “So you'll have security researchers that deliver these proof of concept that bypass all the protections we have, and then bad actors will get a hold of it and they'll use it for nefarious purposes. So it's a neverending battle and always has been.”

He added that corporations and the government need to keep up with AI and make sure safeguards are in place as the technology keeps advancing. He urges people not to give AI confidential information.

Carr warns that even if you try not to share your location information online, artificial intelligence can sometimes learn it anyway: just asking a location-based question, he says, can activate precise location access permitted under a service's terms of use.