Adopted March 20, 2025
Introduction
ESA is committed to exploring the power of Artificial Intelligence (AI) to advance the Society's mission while upholding ethical standards and prioritizing responsibility. This AI policy outlines guidelines for the use of AI to perform work and emphasizes the importance of individual responsibility for all AI-generated outputs. This policy uses the term "originator" to refer to the individual(s) who use AI-generated outputs in the Society's work or who configure AI capabilities within our work processes.
Ethical AI Framework
ESA will adhere to a well-defined Ethical Framework for AI development, deployment, and usage. This Ethical Framework includes the following principles:
- Transparency: ESA will be transparent about the use of AI in our work. ESA discloses that AI is assumed to be used as part of many work processes.
- Fairness and Equity: Originators are responsible for reviewing AI-generated outputs for bias, hallucinations, and discrimination, and they will actively monitor and rectify any unintended biases in those outputs.
- Privacy and Data Security: Aligned with the ESA Privacy Policy and working with our partners, ESA will ensure that AI models used by the Society and originators are trained on anonymized and consented data and that access to sensitive information is strictly controlled. This AI policy is an extension of and should be read in conjunction with the ESA Privacy Policy (https://www.entsoc.org/privacy).
- Accountability: ESA will hold individuals accountable for decisions made based on AI-generated outputs and ensure that the responsibility for any consequences remains with the originator of the work. AI cannot be an originator in this sense.
- Incidents: In the event of AI incidents that impact the organization, ESA will conduct a review process to ensure accountability. This includes identifying lesson(s) learned, implementing changes to processes using AI output, updating training, and communicating to maintain trust.
- Deployment Ethics: ESA recognizes the diverse sources of AI tools, whether procured by the individual or the organization, standalone or integrated. The deployment of these AI systems in ESA's workflows must adhere to the ethical intent of this policy.
- Communities of Practice: Communities of practice within ESA (e.g., Branches, Sections, Committees, staff, boards, councils, etc.) may develop additional guidelines for AI usage in their areas of responsibility. To the extent that they do not undermine this policy, those guidelines are to be relied upon and honored. If any such guidelines are found to conflict with this policy, this Society-level policy shall supersede them.
Originator Responsibility
ESA recognizes that while AI can significantly enhance productivity and decision-making, it is ultimately a tool that requires human oversight and responsibility. Therefore, the following guidelines for originator responsibility will be strictly adhered to:
- AI-Assisted Human Decision Making: AI-generated outputs will always require human review and approval before implementation. In cases where the stakes are high or the societal impacts are large, special care should be taken in the human review.
- Training and Awareness: ESA will seek to make available, or otherwise promote, training on AI ethics, biases, and limitations so that both staff and the originator community can make informed and responsible decisions.
- Decisive Authority: Originators retain final authority over AI-generated outputs and will not absolve themselves of accountability by blaming AI algorithms for unintended outcomes. For example, if an output contains AI-generated conclusions, it is up to the originator(s) to verify that those conclusions are true.
- Regular Audits: ESA may conduct audits and evaluations of AI systems to assess their performance, fairness, and overall impact, as well as ensure compliance with our Ethical Framework.
Data Usage and Ownership
ESA will strictly adhere to data usage and ownership guidelines:
- Data Responsibility: Data used for training AI models will be obtained legally and responsibly, with proper consent and protection aligned with the ESA Privacy Policy. Individualized membership data (e.g., identifiable names, email addresses, membership demographics) will not be entered into AI models without those members' consent.
- Open Source and Collaboration: If ESA creates a new AI-powered tool or utility, then, when feasible and within the means of our resources, ESA will seek to contribute the resulting AI models and tools to the open-source community to encourage transparency, collaboration, and accountability.
Collaboration and External Partnerships
ESA recognizes the significance of collaboration with external partners in the AI space. When collaborating with external organizations:
- Shared Values: We will partner with entities that share our commitment to ethical AI usage and responsible decision-making.
- Clear Agreements: All partnerships will have explicit agreements that outline data usage, responsibilities, intended use, and shared ethical principles.
- Approval Authority: Approval for all collaborations will align with accepted decision-making structures, wherein the Governing Board is the decision-maker for strategic partnerships and the Executive Director is the decision-maker for operational implementation of those partnerships.
Continuous Improvement
ESA is committed to continuous improvement in our AI policy and practices:
- Feedback Mechanism: We will establish a feedback mechanism to receive inputs from stakeholders on AI-related practices and concerns.
- Adaptive Governance: The AI policy will be periodically reviewed and updated to align with the latest advancements, regulations, and ethical standards.
Conclusion
By adopting this AI policy, ESA intends to leverage AI's potential while ensuring that ultimate accountability lies with human originators. We commit to conducting our AI-based work ethically, transparently, and responsibly in furtherance of ESA's mission.