BUFFALO, N.Y. -- The New York state attorney general's office is asking the country's largest social media companies to disclose the measures they are taking to protect voters from misinformation this presidential election cycle.

Chris MacKenzie is the senior director of communications for Chamber of Progress, a tech trade group that represents many of the platforms.

"Companies are aware of the threat of misinformation in this election. Misinformation online and content moderation has kind of been front and center for them since 2016," MacKenzie said.

However, in a letter this week, first reported by ABC News and independently obtained by Spectrum News 1, the AG points out that, with the rise of generative AI, "barriers that prevent bad actors from creating deceptive or misleading content have weakened dramatically."

"Determining what's misinformation and what's just politically aligned information, what might be on the far right or far left versus something that might misinform a voter, that's going to be challenging for companies to make that call," MacKenzie said.

The letter asks the companies to address 11 specific issues in writing and to have a follow-up meeting with the AG's office.

The topics include:

  • how they will address the dissemination of deceptive AI-generated materials on their platforms and the use of platform AI tools to create those materials
  • labels to identify both readily apparent AI content and authentic content
  • detection and monitoring systems
  • reporting mechanisms and enforcement policies
  • outreach and education, among other things

"Companies are using AI in their content moderation algorithms to look for content online that is misinformation and could mislead voters," MacKenzie said.

He said there are concerns the companies need to consider in developing their policies, including the fact that not all AI-generated material is necessarily deceptive. As with content moderation policies in previous elections, he said, social media platforms must walk a political tightrope.

"The companies at the end of the day need to be able to make their own content moderation decisions so that they remain within their First Amendment boundaries and they don't let politicians use their regulatory power to persuade them one way or another," MacKenzie said.

The New York attorney general's office says it has a keen interest in the risks and benefits of new generative AI technology, and earlier this month it released a report and hosted a symposium on the subject.