Technological Oversight

Introduction

Artificial intelligence (AI) technologies are increasingly central to communication strategies that aim to deliver personalized messaging at scale. What once involved basic personalization, such as inserting a recipient’s first name into a message, has evolved into more complex, behavior-driven automation; for example, a follow-up message can now be triggered automatically when a user abandons a form. As communication teams adopt AI tools to enhance relevance and engagement, it is critical to ensure these technologies are implemented ethically and responsibly.

AI’s ability to process and act on growing volumes of constituent data can improve message precision and efficiency. However, it also introduces significant risks, particularly regarding privacy, data misuse, and the potential for unintended bias. In the context of a new Customer Relationship Management (CRM) system, where data is central to strategy, communication professionals must balance innovation with ethical data practices.

Grounded in Jesuit values, this section of the toolkit outlines guiding principles and governance practices to support the responsible use of AI. These values emphasize respect for individual dignity, the responsible stewardship of information, and communication practices that foster trust and accountability.

Purpose-Driven Design

Gonzaga University’s communication professionals should deploy AI tools for specific, clearly defined communication goals. A focused purpose helps prevent mission drift and supports alignment with institutional objectives. Because AI systems rely on vast datasets to function, the risk of privacy violations and bias increases (Mirek-Rogowska et al., 2024). Clearly defining use cases helps limit data exposure and ensures that AI applications are both intentional and ethically grounded.

Human Accountability

Communication professionals must remain involved in reviewing, approving, and, when necessary, revising AI-generated messages. This ensures that automation supports rather than replaces ethical decision-making within the communication process.

Ethical Alignment Reviews

Communication professionals should conduct regular assessments of AI-generated content to verify alignment with the university’s values, such as inclusivity, respect, and integrity. These reviews should evaluate tone, accuracy, and potential impact on different audiences.

Transparency and Explainability

Gonzaga University should prioritize tools that allow staff, regardless of their technical experience, to understand how AI decisions are made. Outputs must be explainable and interpretable to build trust and support accountability.

Feedback and Continuous Improvement

Communication professionals should regularly review AI-generated messages to ensure they reflect Jesuit values and adjust how the AI generates messages when issues arise. The organization should also welcome external feedback and use it to improve fairness, tone, and relevance in future communications.

Conclusion

While AI technologies offer powerful opportunities to enhance communication, their use must be guided by thoughtful oversight. Ethical risks such as bias, misalignment with organizational values, and unintended harm can be reduced through clear purpose-setting, human review, and continuous improvement. Although technological advancement carries a degree of uncertainty (Bareis & Katzenbach, 2022), establishing technological oversight through collaboration with data and IT teams helps keep AI technologies aligned with ethical principles. This collaborative approach ensures that innovation remains grounded in transparency, accountability, and the organization’s mission to communicate with integrity and respect.

References

Bareis, J., & Katzenbach, C. (2022). Talking AI into being: The narratives and imaginaries of national AI strategies and their performative politics. Science, Technology, & Human Values, 47(5), 855–881. https://doi.org/10.1177/01622439211030007

Mirek-Rogowska, A., Kucza, W., & Gajdka, K. (2024). AI in communication: Theoretical perspectives, ethical implications, and emerging competencies. Communication Today, 15(2), 16–29.
