FCC to investigate AI’s impact on robocalls

October 25, 2023

The Federal Communications Commission (FCC) is tasked with protecting consumers from unwanted communication. AI has the potential to both help and hinder its efforts as robocalls continue to be a problem.

In 2022, the FCC received more than 120,000 complaints from people who had received robocalls. The makers of the call protection app YouMail say Americans were targeted by more than 50 billion robocalls that same year.

FCC Chairwoman Jessica Rosenworcel announced on Monday that she is putting together a proposal to investigate how AI will affect illegal and unwanted robocalls and texts.

Robocalls can already be targeted using demographics, but imagine the fine-tuned customization AI can now bring to these calls. Instead of recording a one-size-fits-all message, audio can be created that specifically targets an individual.

If you’ve been on the receiving end of annoying robocalls, the prospect of them being supercharged by AI isn’t great. A further problem is that AI audio generation is getting so good that you might not even realize you’re on a robocall.

It’s not all bad news though. In her statement, Rosenworcel said, “While we are aware of the challenges AI can present, there is also significant potential to use this technology to benefit communications networks and their customers—including in the fight against junk robocalls and robotexts.”

Aims of the proposed inquiry

Rosenworcel’s proposal will be subject to a vote at the FCC’s open meeting in November. If adopted, the inquiry will investigate:

  • How AI technologies fit into the Commission’s statutory responsibilities under the Telephone Consumer Protection Act (TCPA);
  • If and when future AI technologies fall under the TCPA;
  • How AI impacts existing regulatory frameworks and future policy formulation;
  • If the Commission should consider ways to verify the authenticity of legitimately generated AI voice or text content from trusted sources; and
  • What next steps, if any, are necessary to advance this inquiry.

Working out which aspects of AI should fall under the purview of the FCC will be tricky. Recently, New York City Mayor Eric Adams faced criticism over robocalls using deepfakes of his voice to communicate with the city’s residents.

Robocalls for political or social communication aren’t currently illegal. So technically Adams’ calls were on the right side of the law.

Should AI robocalls like that carry a disclaimer informing the recipient that the audio was AI-generated? If communication companies could use AI to monitor calls to flag robocalls would you be OK with that? It would probably mean that your call would be analyzed too.

The FCC statement explained that “AI can also pose new privacy and safety challenges, including by mimicking real human voices. This inquiry aims to understand these benefits and risks, so the Commission can better combat harms, utilize the benefits of AI, and protect consumers.”

For now, the FCC is proposing an inquiry to begin to understand AI’s role in communications so that it can eventually put appropriate rules in place.

It’s a start, but you’ve got to wonder whether slow-moving bureaucracies like the FCC can keep up as AI tech races ahead of policy.


Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.
