Technology Oversight
Artificial intelligence (AI) is transforming how organizations deliver personalized communication, enabling behavior-based messaging at scale. For example, a follow-up message can be triggered automatically when a user abandons a form. As AI becomes more central to communication strategies, it is essential to use it responsibly. While AI can improve efficiency and relevance, it also raises ethical concerns, especially around privacy and bias. Grounded in Jesuit values, this toolkit promotes responsible AI use through transparency, respect for individuals, and alignment with the Gonzaga University brand.
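The form-abandonment trigger described above can be illustrated with a minimal sketch. All names here (`FormEvent`, `FollowUp`, `handle_event`) are hypothetical, not part of any specific CRM product, and the `approved` flag reflects the toolkit's human-accountability principle: a staff member reviews each AI-generated draft before it is sent.

```python
# Hypothetical sketch of behavior-triggered messaging with a
# human-review step. Names and logic are illustrative only.

from dataclasses import dataclass


@dataclass
class FormEvent:
    user_id: str
    form_id: str
    completed: bool


@dataclass
class FollowUp:
    user_id: str
    form_id: str
    draft: str
    approved: bool = False  # staff must approve before sending


def handle_event(event: FormEvent, review_queue: list) -> None:
    """Queue a follow-up draft only when a form was abandoned."""
    if not event.completed:
        draft = (
            f"Hi! We noticed you didn't finish form {event.form_id}. "
            "Can we help?"
        )
        review_queue.append(FollowUp(event.user_id, event.form_id, draft))


review_queue: list = []
handle_event(FormEvent("u1", "apply-2025", completed=False), review_queue)
handle_event(FormEvent("u2", "apply-2025", completed=True), review_queue)
```

In this sketch only the abandoned form produces a queued draft, and the draft remains unapproved until a communication professional signs off, keeping automation subordinate to human judgment.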
Purpose-Driven Design
Gonzaga University’s communication teams should deploy AI tools for specific, clearly defined communication goals. A focused purpose helps prevent mission drift and supports alignment with institutional objectives. Because AI systems rely on vast datasets to function, the risk of privacy violations and bias increases as use expands. Clearly defining use cases helps limit data exposure and ensures that AI applications are both intentional and ethically grounded.
Human Accountability
Communication professionals must remain involved in reviewing, approving, and, when necessary, revising AI-generated messages. This ensures that automation supports rather than replaces ethical decision-making within the communication process.
Ethical Alignment Reviews
Communication professionals should conduct regular assessments of AI-generated content to verify alignment with the University's values, such as inclusivity, respect, and integrity. These reviews should evaluate tone, accuracy, and potential impact on different audiences.
Transparency and Explainability
Gonzaga University should prioritize tools that allow communication teams to understand how AI decisions are made, so that staff, regardless of their technical experience, can follow the decision-making process. Outputs must be explainable and interpretable to build trust and support accountability.
Feedback and Continuous Improvement
Communication teams should regularly review AI-generated messages to ensure they reflect Jesuit values. When issues arise, they should adjust how the AI creates messages. The University should also welcome feedback from audiences and stakeholders and use it to improve fairness, tone, and relevance in future communication.
While AI technologies offer powerful opportunities to enhance communication, their use must be guided by thoughtful oversight. Ethical risks such as bias, misalignment with organizational values, and unintended harm can be reduced through clear purpose-setting, human review, and continuous improvement. Although there is a level of uncertainty in technological advancement (Bareis & Katzenbach, 2022), by establishing technological oversight through collaboration with data and IT teams, AI technologies can be refined to align with ethical principles. This collaborative approach ensures that innovation remains grounded in transparency, accountability, and the organization’s mission to communicate with integrity and respect.
Preparing for Ethical AI Integration in CRM-Enabled Communication
References
Bareis, J., & Katzenbach, C. (2022). Talking AI into being: The narratives and imaginaries of national AI strategies and their performative politics. Science, Technology, & Human Values, 47(5), 855–881. https://doi.org/10.1177/01622439211030007
Mirek-Rogowska, A., Kucza, W., & Gajdka, K. (2024). AI in communication: Theoretical perspectives, ethical implications, and emerging competencies. Communication Today, 15(2), 16–29.