Key Points:
- Generative AI can make public health communication more efficient, audience-centered, and accessible.
- The rise of AI raises new questions about ethics and transparency in public health communication.
- Several ongoing projects at RTI are focused on helping clients leverage AI to enhance formative communication research.
Generative artificial intelligence (AI) tools are becoming increasingly valuable, enabling public health communicators to efficiently perform tasks that are otherwise time-consuming, expensive, and error-prone. As AI capabilities grow, public health professionals must explore how these tools can enhance communication strategies while ensuring ethical and responsible use.
At RTI, our experts are actively leveraging AI to improve formative communication research, tailor health messaging, and enhance virtual design. This blog explores ways health communicators can leverage AI, as well as considerations for its use in content development.
AI in Formative Communication Research
One class of tools reshaping public health communication is the large language model (LLM). LLMs, like ChatGPT, are AI systems that can quickly analyze text and generate human-like responses. Users can tailor the instructions given to an LLM, a process known as prompt engineering, to shape the quality and format of its output.
Formative communication research often involves coding and analyzing large volumes of text, a process that is labor-intensive. With this in mind, we evaluated LLMs’ ability to analyze texts in various public health applications including:
- Qualitative coding of open-ended responses to surveys, interviews, focus groups, and public comments.
- Conducting literature meta-analyses to identify trends and insights from large datasets.
- Monitoring public discourse on social media to assess responses to health messaging.
Our findings revealed that AI-assisted analyses can match human coders in accuracy while significantly increasing speed. However, human oversight remains crucial. By refining prompt engineering techniques and incorporating systematic reviews, we mitigate accuracy concerns and maximize the reliability of AI-generated insights.
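One way the match between AI-assisted coding and human coders can be quantified is with a chance-corrected agreement statistic such as Cohen's kappa. The sketch below computes it for two illustrative sets of codes; the labels are invented for the example and the metric choice is an assumption, not a description of RTI's specific evaluation.

```python
# Sketch: chance-corrected agreement between a human coder and an LLM.
from collections import Counter

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    """Cohen's kappa for two coders labeling the same items.

    Assumes imperfect chance agreement (expected < 1); no guard for the
    degenerate case where both coders always assign one label.
    """
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Illustrative codes assigned to four responses
human = ["ACCESS", "ACCESS", "TRUST", "TRUST"]
llm = ["ACCESS", "TRUST", "TRUST", "TRUST"]
kappa = cohens_kappa(human, llm)  # 0.5: moderate agreement beyond chance
```

Tracking a statistic like this over successive prompt refinements gives a concrete target for the systematic review step.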
LLMs allow us to expedite ongoing work to analyze public comments to federal policy changes, extract data and information to support meta-analyses, and monitor social media for responses to public health messaging. Additionally, we have used LLMs to develop internal tools like SmartSearch to answer questions based on document collections and help ensure regulatory compliance.
We have identified several areas for further exploration, applying these LLM-assisted analysis methods to new contexts and document types.
AI-Powered Content Development
Public health messaging must be clear, accessible, and audience-specific. Generative AI streamlines content creation, allowing for efficient adaptation to different audiences and their preferences.
For example, plain language is an integral part of public health communication. Creating clear and accessible content for the general public often takes significant expertise and time. Rewriting content geared toward different audiences requires adapting words, sentence length, and formatting for the intended reader. Additionally, crafting short summaries (like the one at the top of this post) is another way to optimize content for readers.
Our multidisciplinary team of AI and plain language experts assessed generative AI’s ability to tailor content for different audiences by prompting an LLM (ChatGPT) to apply plain language principles to a wide variety of content—including web pages, technical reports, and manuscripts—for three primary audiences: people with low literacy, the public, and health care providers.
After scoring the AI's outputs for reading level, accuracy, and tone, we found considerable potential in generative AI's ability to condense large volumes of content, organize it into more readable sections, and rewrite it in active voice. Across materials, ChatGPT successfully decreased reading level while maintaining the original meaning. However, our findings suggested that plain language edits still need human review for consistency, accuracy, and audience-centered language.
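Reading level can be scored automatically as part of such a review. The post does not name a specific rubric, so as an illustrative assumption the sketch below uses the Flesch-Kincaid grade-level formula with a rough vowel-group heuristic for syllables; production readability tools use dictionary-based syllable counts.

```python
# Sketch: Flesch-Kincaid grade level with a crude syllable heuristic.
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count groups of vowels; not dictionary-accurate."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade from sentence, word, and syllable counts."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

plain = "Wash your hands. Use soap and water."
dense = ("Adherence to recommended hand hygiene protocols substantially "
         "reduces transmission of communicable pathogens.")
```

Comparing a plain-language rewrite against the original with a score like this turns "decreased reading level" into a checkable claim.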
Overall, we have found that AI can help communicators tailor content for specific audiences more efficiently when it is used as a tool rather than a replacement for public health professionals.
Enhancing Visual Design with Generative AI
Public health communicators face a critical need for audience-centered, accessible, and eye-catching design.
Currently available stock imagery has limitations in terms of varying skin tones, ethnic and cultural representation, and body types. Generative AI models like StyleGAN can fill these gaps in representation by creating synthetic faces that are indistinguishable from photographs of real people.
Additionally, generative AI can recreate public health settings that are otherwise difficult to capture. Settings such as homes, schools, and medical facilities can be generated on demand, avoiding the expense and logistical challenges of photographing multiple scenarios and locations. Other potential depictions include time-lapse imagery and environmental impact visualizations that users can quickly adapt for public health campaigns.
Once this imagery is created, however, accessibility remains essential. AI tools can quickly generate alternative text and closed captioning, reducing the time required to deliver high-quality, audience-centered products for all consumers.
RTI is actively exploring ways that generative AI tools (such as Adobe and DALL-E) can be leveraged to facilitate visual storytelling. Our experiments use AI to create step-by-step health care training scenarios, benefiting from these tools' ability to create hyper-realistic imagery and communicate comprehensive health narratives.
Another recent application involves our use of predictive eye-tracking tools. These programs scan text and images to make recommendations for improved readability and layouts. AI enables us to maximize the impact of public health communication products with these additional checks.
Finally, generative AI can be used to enhance persona profiles. Personas are a crucial tool in the content development process that depict target audiences for public health campaigns. By leveraging AI tools that can generate multiple realistic personas and utilize audience research, we can create dynamic digital personas that elevate health communication materials.
AI Considerations
While these tools are changing the landscape of public health communication, there are several considerations when implementing generative AI:
Intellectual Property and Copyright Infringement
Many AI tools are trained on copyrighted materials (such as Stable Diffusion and MidJourney) or do not disclose the sources of their training data (such as DALL-E and Shutterstock). Recent rulings have protected certain tools from copyright disputes, but responsible commercial use of generative AI remains imperative.
That said, some tools comply with copyright standards and are licensed for commercial use, including subscription-based options from Getty Images and Adobe.
Transparency and Trust
Communicators must also consider consumers' reactions to AI-generated content. Further research is needed to determine the public's trust in products that employ these tools. It is also important to consider what transparency means for the use of AI in public health communication and how public health communicators can lead in both innovation and responsible use of AI.
The Future of AI in Public Health Communication
As AI continues to evolve, public health organizations must establish guidelines for ethical AI use, invest in structured evaluation processes, and prioritize human oversight to maximize benefits while minimizing risks. Given the tremendous opportunities AI tools present, it is important to consider the implications of their continued use and exploration:
- Human review of AI: Current generative AI models can produce false information (hallucinations) and may lack consistency in their outputs. It is our duty as communicators to review AI-generated products and verify their accuracy when using these tools.
- Responsible AI use: Organizations must lead in responsible AI use and prioritize ethics while using such groundbreaking tools.
- Structured evaluation processes: We must develop processes to evaluate the performance of LLMs and other AI tools and inform how generative AI is used. Monitoring outputs supports more credible use of tools that can lower costs and increase efficiency for public health communicators.
By leveraging AI’s transformative capabilities, we empower organizations to create more audience-centered and cost-effective health communication, providing tailored solutions that address unique needs and deliver measurable impact.