The dos and don'ts of campaigning with AI

Happy Thursday! Tech policy world this week is tracking whether children’s online safety and privacy legislation could hitch a ride on a major aviation package, because Congress is deeply weird. Send news tips to: cristiano.lima@washpost.com

Today: Meta employees are calling on their employer to be more transparent about how the tech giant moderates content on the Israel-Gaza conflict. But first:

The dos and don'ts of campaigning with AI

Chatbots that take questions from the public. Robocalls that sound like candidates. Fundraising emails that appear to be human-made.

As the use of artificial intelligence tools soars across the country, political candidates are increasingly deploying the technology to reach voters. 

But amid a regulatory vacuum in Washington, campaigns have largely been left to self-police their use of AI, which many fear could wreak havoc on the 2024 elections.


On Thursday, a prominent political group that invests in technology for Democrats is launching a new initiative to help campaigns navigate the muddy waters of AI in elections, releasing a detailed guide for how they can harness the tools — and what mistakes they should avoid. 

I spoke this week to the leaders of Zinc Labs, a coalition made up of veterans of the Biden and Clinton campaigns as well as other left-leaning political and tech groups, about what they see as the dos and don'ts of using AI for elections. Here are the takeaways: 

Do: ‘Always keep a human in the loop’

While tools like generative AI can help power services used to communicate with voters, they should never fully replace staff or their functions, and campaigns should always make sure that information produced by AI is vetted by humans, the group wrote. 


“This technology should not be deployed unattended, so always keep a human in the loop,” the group wrote. “A person should check and approve every citation, social media post, code snippet, or other output produced with generative AI.”

That also means setting up AI chatbots — whose responses are probably too numerous to individually check — may not be the most productive use of campaign time, it said.

“You're going to get less value out of the promise of generative AI there because that sort of thing can inevitably be gamed or screwed up without those guardrails,” said Ben Resnik, the group’s deputy director of tech strategy. 

Zinc Labs is an offshoot of the Zinc Collective, a coalition of liberal political groups whose largest funder is LinkedIn co-founder and major Democratic donor Allen Blue.

Don’t: Mislead voters about what’s AI-generated

Campaigns should not misrepresent when they are using AI, and they should clearly label it when they do. That means steering clear of using chatbots “to invent an anecdote for a speech” or creating video or audio that “mimics the likeness of an opponent.” 


Campaigns should also not claim “without evidence” that any potentially damaging video shared by an opponent is AI-generated, the group wrote.

“With public trust so low, continuing to erode that public trust is not in your campaign's benefit,” said Matt Hodges, the group’s executive director.

Do: Make a game plan to counter deepfakes

Fake or misleading videos powered by AI, known as deepfakes, are probably now an inevitable part of elections, so candidates and political organizations should plan ahead, according to the group. 

Campaigns should create “a crisis response plan ahead of time” — which includes “clear response plans, roles, and timelines” — so they are not caught flat-footed, it wrote. And campaigns should also be “careful” not to rush to publicly answer any fakes. “A knee-jerk response may just draw more attention.”


Ultimately, the group argued, the best way to counter deepfakes is a “good offense” — regularly communicating in a way that is “authentic” so that those exposed to fake videos are “instinctively skeptical” about the source of the information.

Don’t: Trample on privacy

Campaigns integrating AI into their functions risk creating new privacy or cybersecurity breaches involving voters’ personal information. To that end, the group called for closely vetting third-party vendors and avoiding things like “uploading donors’ personal information to a generative AI tool” or “using a voter’s likeness or story for AI-generated materials without their consent.”

“AI data and usage policy, with clear boundaries around who can share what data to what tools for what purpose, protects your campaign, your staff, and your constituents,” the group wrote.


But campaigns shouldn’t rule out tools simply because data might be used to train AI, it said.

Do: Use AI for more mundane internal tasks

While much of the public debate around AI in elections has focused on the risk posed by fake visuals and videos, Zinc Labs urged campaigns to consider how the tools can be used for more routine internal tasks “to inform campaign strategy and communications in new ways.”

That includes using AI to help craft personalized messages to potential voters and donors, summarizing policy briefs or voter data for easier consumption and using it to generate first drafts of social media posts or other public communications. 

“It’s not replacing entire functions. It is accelerating the most tedious part of important functions,” Resnik said, citing gathering press clips and drafting social media posts as examples.

Our top tabs

Meta employees say company needs more transparency about Israel-Gaza content moderation

A group of Meta employees have publicized an internal letter from December asking the company to be more transparent about how it moderates content related to the Israel-Gaza war, a week after Google fired 50 workers for protesting the search giant’s work with the Israeli government, my colleague Gerrit De Vynck reports.


A Human Rights Watch report from December said Meta, which owns Facebook, WhatsApp and Instagram, has been “silencing voices in support of Palestine and Palestinian human rights.” But internally, employees say they are not allowed to discuss or criticize the company’s content moderation decisions.

“Nobody’s talking about these issues; there is no mention of bias and ethics and discrimination and how to fight them as a company,” a Meta employee said, speaking on the condition of anonymity to prevent backlash from the company. “It’s a war of public perception, and who controls public perception more than Meta?”

A spokesperson for Meta declined to comment. When the Human Rights Watch report was initially published, a company spokesperson told CNN that Meta's policies are “designed to give everyone a voice while at the same time keeping our platforms safe.”

Government scanner

Meet the woman who showed President Biden ChatGPT — and helped set the course for AI (Wired)


National Archives bans employee use of ChatGPT (404 Media)

U.S. moves to bar Huawei, other Chinese telecoms from certifying wireless equipment (Reuters)

Russian state media ramping up English, Spanish presence on TikTok, study finds (By Joseph Menn)

Hill happenings

Republicans release tech executives’ internal communications (The Verge)

Security bill aims to prevent safety breaches of AI models (The Verge)

Inside the industry

Amazon gets more fuel for AI race (Wall Street Journal)

Amazon-backed Anthropic launches iPhone app and business tier to compete with OpenAI's ChatGPT (CNBC)

Competition watch

Microsoft concern over Google’s lead drove OpenAI investment (Bloomberg News)

U.S. lawmaker probes FTC work with Europe to block Amazon iRobot merger (Reuters)

Trending

Apple banned this app for years. It’s now America’s No. 1 iPhone app. (By Shira Ovide)

Daybook

  • The Senate Commerce Committee holds a hearing, “The Future of Broadband Affordability,” Thursday at 10 a.m.

Before you log off

That's all for today — thank you so much for joining us! Make sure to tell others to subscribe to The Technology 202 here. Get in touch with Cristiano (via email or social media) and Will (via email or social media) for tips, feedback or greetings!
