Artificial intelligence is no longer an idea for the future. It is already part of daily life. We see it in search engines, translation tools, chatbots, and software that helps us process data. For those who work in public participation and engagement, AI brings new opportunities and new risks.
At IAP2 Canada, our work is built on trust, respect, equity, and transparency. These values guide every decision about how we involve people in shaping policies, programs, and projects. As AI tools become more common in engagement, we need to use them in ways that strengthen those values.
This article explores how AI intersects with public participation. It looks at opportunities, risks, and practical steps for ethical use.
AI is a broad term. At its core, it describes computer systems designed to perform tasks that usually require human intelligence. These tasks include learning, reasoning, and making predictions.
The type of AI most people know is generative AI. It creates text, images, audio, and video by remixing patterns from the data it was trained on. Large language models, such as the models behind ChatGPT, predict the next word in a sentence based on patterns learned from billions of examples. Other tools process images, recognize speech, or translate between languages.
It is important to remember that AI does not think like a person. It produces outputs based on patterns in its training data. These outputs can be useful, but they can also be inaccurate or biased.
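To make the "predict the next word" idea concrete, here is a toy sketch. It counts which word follows which in a few sample sentences (a bigram model) and predicts the most frequent follower. Real large language models use neural networks trained on billions of examples, not simple word counts, so treat this only as an illustration of pattern-based prediction; the sample sentences are invented for the example.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training data; real models learn from billions of examples.
corpus = ("public participation builds trust . "
          "public participation builds community . "
          "public input shapes policy .").split()

# Count which word follows each word (a bigram model -- a drastically
# simplified stand-in for the pattern learning in large language models).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("public"))         # "participation" (seen twice) beats "input" (seen once)
print(predict_next("participation"))  # always followed by "builds" in this corpus
```

Notice that the model has no idea what "participation" means; it only knows what tends to come after it. That is also why outputs can be fluent yet wrong.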
AI can help practitioners with several parts of the engagement process:
AI-assisted search tools can scan large amounts of information and provide summaries in plain language.
Data mining can reveal trends that help tailor engagement strategies.
Generative AI can help draft discussion guides, surveys, or agendas.
Translation tools make information available in more languages.
Voice-to-text and text-to-speech tools improve accessibility for people with hearing or vision needs.
Generative AI can help draft early versions of press releases or plain language summaries.
AI moderation tools can flag harmful language in online forums.
Chatbots can answer common questions from participants and support dialogue by keeping conversations active.
Real-time translation and transcription can help more people participate in meetings.
Natural language processing tools can analyze thousands of comments, survey responses, or transcripts to identify themes.
Sentiment analysis can reveal how communities feel about proposals.
AI forecasting tools can support planners and decision makers with projections.
These uses can help practitioners save time, reach more people, and make engagement more inclusive.
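As a rough illustration of the theme and sentiment analysis described above, here is a minimal stdlib-only sketch: it surfaces frequent content words as candidate themes and scores sentiment against a tiny hand-made lexicon. The comments, stopword list, and sentiment lexicons are all invented for the example; real NLP tools use far richer models, and their outputs still need human review.

```python
import re
from collections import Counter

# Hypothetical participant comments; a real project would load survey exports.
comments = [
    "The new bike lanes are great, but parking is a real problem.",
    "Parking downtown is terrible and getting worse.",
    "I love the bike lanes and the wider sidewalks.",
    "More transit service would be great for our neighbourhood.",
]

# Small illustrative stopword list; real tools ship much larger ones.
STOPWORDS = {"the", "a", "and", "is", "are", "for", "our", "but", "i",
             "more", "new", "would", "be", "real", "getting"}

# Theme extraction, drastically simplified: count frequent content words.
words = [w for c in comments
         for w in re.findall(r"[a-z]+", c.lower())
         if w not in STOPWORDS]
print("Top themes:", Counter(words).most_common(3))

# Lexicon-based sentiment: a toy stand-in for sentiment-analysis tools.
POSITIVE = {"great", "love", "wider"}
NEGATIVE = {"problem", "terrible", "worse"}

def sentiment(text):
    """Positive score means more positive than negative words, and vice versa."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return len(tokens & POSITIVE) - len(tokens & NEGATIVE)

for c in comments:
    print(sentiment(c), c)
```

Even this toy version shows why human oversight matters: the first comment scores neutral because "great" and "problem" cancel out, hiding a mixed opinion a practitioner would want to read in full.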
The promise of AI comes with serious risks. Practitioners need to understand them before adopting any tool.
AI systems learn from data, and data often reflects social and cultural biases. Many AI systems are trained on data that is WEIRD (Western, Educated, Industrialized, Rich, Democratic). This means they may not work well for communities outside those contexts. If not checked, AI outputs can reinforce stereotypes and exclude marginalized groups (Norori et al., 2021, as cited in Boyco & Robinson, 2024).
Generative AI tools often produce outputs that are wrong but sound convincing. These errors are sometimes called hallucinations. In engagement, this can spread misinformation or reduce confidence in the process.
Participants deserve to know when AI is being used. Lack of clarity can erode trust. If people believe their input is being filtered or analyzed by a “black box,” they may disengage.
Many AI tools require uploading data to commercial systems. This creates risks around data ownership, consent, and protection. Sensitive community information should never be shared with tools that lack clear safeguards.
Participation is built on dialogue, empathy, and relationships. Over-reliance on AI can reduce the human contact that makes engagement meaningful.
Some researchers are experimenting with replacing human participants with “virtual publics” created by large language models. This idea, sometimes called synthetic democracy, risks undermining real voices. Public participation is about people, not simulations (Crockett & Messeri, 2023, as cited in Boyco & Robinson, 2024).
AI also has impacts beyond the engagement process. Training and running large models requires massive amounts of energy, contributing to environmental harm. Behind many tools is low-wage labor in the Global South, where workers label data under poor conditions (Taylor, 2023, as cited in Boyco & Robinson, 2024). Practitioners need to be aware of these hidden costs when making choices about AI.
IAP2’s Core Values and Code of Ethics provide a foundation for thinking about AI. Here are practical steps for ethical use:
Be transparent. Always disclose when and how AI is being used. Give participants a choice to engage through other means.
Protect privacy. Never upload sensitive information into AI systems without safeguards. Check your organization’s policies.
Evaluate bias. Review outputs for fairness. Consider how use of the tool might reinforce inequities or exclude some voices.
Keep people in the loop. Use AI to support analysis, not to make decisions. Human judgment must always guide outcomes.
Build trust. Only use AI if it adds value to the process and strengthens relationships with participants.
Test and verify. Treat AI outputs as drafts. Check their accuracy before sharing them.
Avoid overreliance. Do not let AI replace your own thinking or creativity.
Choose tools carefully. Look for reliable, appropriate, and context-sensitive tools. Be skeptical of hype.
These steps align with international guidelines, but also reflect the unique responsibilities of public participation professionals (Boyco & Robinson, 2024).
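Part of the "protect privacy" step above can be automated. Here is a minimal sketch that masks obvious personal identifiers (email addresses and North American phone numbers) before any text leaves your systems. The patterns are illustrative, not exhaustive; names, addresses, and health details need far more than a regex, and no filter replaces a proper privacy review or your organization's policies.

```python
import re

# Illustrative patterns only; real PII detection needs much broader
# coverage (names, addresses, health details) plus human review.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b")

def redact(text):
    """Mask emails and North American phone numbers before sharing text."""
    text = EMAIL.sub("[email removed]", text)
    text = PHONE.sub("[phone removed]", text)
    return text

comment = "Call me at 604-555-0199 or email jane.doe@example.com about the rezoning."
print(redact(comment))
```

Running a pass like this before uploading comments to any third-party tool is cheap insurance, but it should complement, not replace, clear consent and data-handling safeguards.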
AI will continue to evolve quickly. We will see more tools integrated into platforms we already use, from survey software to digital engagement portals. We will also see more pressure to use AI for efficiency in times of budget cuts.
Practitioners need to resist shortcuts that compromise integrity. Good engagement takes time and human effort. There are no replacements for listening, dialogue, and trust building.
IAP2 Canada encourages practitioners to take an active role in shaping how AI is used. This means asking critical questions, sharing experiences, and learning from each other. It also means advocating for public input in how AI tools themselves are designed and governed (Boyco & Robinson, 2024).
AI offers real opportunities to improve participation through translation, accessibility, analysis, and planning.
Serious risks include bias, inaccuracy, privacy concerns, and the loss of human connection.
Practitioners must use AI carefully, with transparency, accountability, and human oversight.
Decisions should never be automated. Public participation is about people, not machines.
The goal is not to replace engagement with AI, but to use it responsibly to support more inclusive and effective processes (Boyco & Robinson, 2024).
Public participation strengthens democracy. AI will shape its future, but values must come first. Respect, inclusion, transparency, and accountability are the anchors. With these in place, AI can help us listen better, reach further, and involve more people in decisions that affect their lives (Boyco & Robinson, 2024).
Reference
Boyco, M., & Robinson, P. (2024). Artificial Intelligence: Its potential and ethics in the practice of public participation. International Association for Public Participation (IAP2) Canada. https://cdn.wildapricot.com/44479/resources/Documents/RESEARCH/Artificial%20Intelligence%20-%20IAP2%20Canada%20-%20April%202024%20FINAL3.pdf
Patrick McKeown is a digital marketing and technology leader with more than 25 years of experience in the web industry. He has worked with global brands, government organizations, and non-profits, specializing in SEO, digital strategy, and content development. Currently, he leads marketing and IT initiatives for IAP2 Canada, where he focuses on building engagement, expanding digital presence, and shaping the organization's strategic direction. The views and opinions expressed in this article are those of the author and do not necessarily reflect the official position or policies of IAP2 Canada.