Artificial intelligence (AI) offers considerable promise for humanity, yet it also poses risks that must be addressed in practical settings. While AI technologies can enhance research, whether by improving project efficiency with large language models or by assisting in scenarios involving sensitive or uncommon data, researchers must commit to ensuring the responsible, secure, and impactful development and use of groundbreaking AI in their work.
Building Frameworks to Support the Use of AI in Research
AI tools are becoming more widespread; with just a few clicks, individuals can access a broad array of AI technologies that complete tasks ordinarily requiring human intelligence. As with any new technology, thoughtful attention must be given to ensuring its safe and effective implementation. Responsible AI practices prioritize these ideals by identifying ways to maintain fairness, sustainability, privacy, and security, among other qualities. Upholding these AI values will benefit users in the long term, and it is the responsibility of legislatures, universities and other organizations, government agencies, and others to proactively develop the necessary governance for AI usage.
With this need in mind, RTI established a comprehensive framework of internal policies and procedures to guide the responsible use of AI (with a particular focus on generative AI) and assembled a cross-disciplinary team to review AI uses and requests within the organization. The approach was informed by internal expertise and by insights from external resources and entities, including the National Institute of Standards and Technology (NIST) Risk Management Framework. The RTI AI framework covers both productivity and project needs and includes special requirements for high-risk applications with potential for significant human impact.
Responsible AI in Practice: Implementing Safe and Ethical AI Frameworks
To understand how to implement responsible AI, it must first be acknowledged that AI is already in widespread use and has caused harm in the past. Despite these past harms, AI regulation in the United States has been slow to develop, and users are protected only by voluntary guidelines and a patchwork of state laws. Organizations must step in to fill the gap: to manage AI risk, ensure equity, and prevent harm. In my experience, there is common interest in these considerations, including among my fellow members of the NIST U.S. AI Safety Institute Consortium (AISIC). The group brings together AI creators and users, academics, government and industry researchers, and civil society organizations to support the development and deployment of trustworthy and safe AI.
To start, data privacy and protection are core components of a responsible AI framework. For example, restrictions can be put in place to prevent client or organizational data from being improperly uploaded to unauthorized cloud or generative AI systems, where the data could be compromised. AI embedded in enterprise software should be reviewed prior to use. In addition, AI use must comply with existing provisions, including federal, state, and international regulations and information security requirements.
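As a concrete illustration, such a restriction can be enforced with a pre-submission guardrail that checks the destination endpoint against an allowlist and scans outbound text for sensitive identifiers before anything reaches a generative AI system. The sketch below is a minimal, hypothetical example: the endpoint URL, the CLIENT-###### identifier format, and the regular-expression patterns are assumptions for illustration, not a description of RTI's actual controls.

```python
import re

# Hypothetical patterns for common sensitive identifiers (illustrative only).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "client_id": re.compile(r"\bCLIENT-\d{6}\b"),  # assumed internal ID format
}

# Placeholder allowlist of approved generative AI endpoints.
APPROVED_ENDPOINTS = {"https://ai.internal.example.org/v1"}


def scan_for_sensitive_data(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]


def submit_prompt(prompt: str, endpoint: str) -> None:
    """Block submission if the endpoint is unapproved or the prompt leaks data."""
    if endpoint not in APPROVED_ENDPOINTS:
        raise PermissionError(f"Endpoint not on the approved list: {endpoint}")
    findings = scan_for_sensitive_data(prompt)
    if findings:
        raise ValueError(f"Prompt blocked; possible sensitive data: {findings}")
    print("Prompt cleared for submission.")  # hand off to the AI client here


if __name__ == "__main__":
    submit_prompt("Summarize our methodology section.",
                  "https://ai.internal.example.org/v1")
```

In practice, an organization would route this check through an approved data-classification service rather than ad hoc patterns, but the principle is the same: the check happens before data leaves the organization's boundary.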
A responsible AI framework must also emphasize notice, transparency, accountability, and human leadership and review. On our team, investigators using generative AI systems are accountable for the systems’ outputs and must work to mitigate potential negative consequences, including bias, ethical shortcomings, and legal infringements. RTI staff are prohibited from using AI systems to create misinformation or “deepfakes.”
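One lightweight way to support that accountability is an audit trail that records who submitted a prompt, which model produced the output, and whether a human has reviewed the result, so every output can be traced back to a responsible investigator. The following sketch is hypothetical; the record fields and the JSON-lines storage are illustrative assumptions, not RTI's actual tooling.

```python
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class GenAIAuditRecord:
    """Minimal record tying a generative AI output to a responsible user."""
    user: str        # investigator accountable for the output
    model: str       # identifier of the model used for the request
    prompt: str      # text submitted to the system
    output: str      # text returned by the system
    reviewed: bool   # whether a human has reviewed the output
    timestamp: float


def log_interaction(record: GenAIAuditRecord,
                    path: str = "genai_audit.jsonl") -> None:
    """Append the record as one JSON line for later review."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


if __name__ == "__main__":
    log_interaction(GenAIAuditRecord(
        user="jdoe",
        model="example-model-v1",
        prompt="Draft a survey invitation.",
        output="(model output here)",
        reviewed=False,  # flips to True after human review
        timestamp=time.time(),
    ))
```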
A variety of individuals and teams with backgrounds in data governance, information technology, and ethics must step into critical roles to ensure the appropriate implementation of AI, including making decisions about whether and how AI solutions are developed and deployed. Individual contributors and project leaders should have access to documentation and educational resources that help them understand and use AI effectively in their work. By taking these intentional steps now, practitioners can use AI responsibly and ethically, avoiding many of the pitfalls these technologies can introduce.
Setting the Standard for Responsible AI in the Research Community
Through membership in associations such as NIST’s AISIC, I have joined other subject matter experts on a shared mission to contribute to the development of proposed national generative AI standards. This is an exciting opportunity for RTI to share expertise from our work applying AI on real-world projects with human impact, and to collaborate with others working toward the safe and trustworthy use of AI. In addition to this external engagement, our team has conducted responsible AI research that was presented at the 2022 IEEE Big Data Responsible AI and Data Ethics (RAIDE) workshop. The presentation emphasized the need to prioritize policy that furthers responsible AI in the United States, from legislative efforts at the national level to education initiatives at the state and organizational levels. Advocacy from all of these stakeholders can set the research community on a path toward safer AI implementation, which will be vital for AI’s continued use on projects.
The involvement of community partners is another important facet of creating responsible AI protocols. Their insight into AI efforts is crucial to supporting and maintaining diversity and equity in AI development. RTI’s Transformative Research Unit for Equity (TRUE) is an important leader in connecting responsible AI efforts at RTI with the larger community. In February 2024, TRUE hosted a discussion titled “Using AI in Service of Equity,” which united community leaders and researchers in North Carolina to discuss AI in health, education, environmental justice and infrastructural equity, and community safety.
As the scientific ecosystem continues to change at a rapid pace, our team is proactively promoting the ethical and responsible use of AI. RTI’s ongoing investment in innovative technologies, and the policies built to support them, will allow our researchers to continue answering the call from clients and partners in all areas of research, including matters of public health, climate, justice, education, and more.
Learn more about RTI’s work in artificial intelligence technologies and solutions.