Responsible AI Governance Policy
Version 1.0
Syntari AI, Inc.
Effective Date: February 23, 2026
855 Boylston Street, Suite 1000
Boston, MA 02116
Contact:
legal@syntari.ai
ai-governance@syntari.ai
Copyright © 2026 Syntari AI, Inc.
Table of Contents
● 1. Purpose and Scope
● 2. AI Ethics Principles
● 3. AI Risk Classification Framework
● 4. Third-Party AI Provider Governance
● 5. Data Governance for AI
● 6. Model Lifecycle Management
● 7. Bias and Fairness
● 8. Transparency and Explainability
● 9. Human Oversight
● 10. Incident Response for AI
● 11. Compliance and Regulatory Alignment
● 12. Audit and Accountability
● 13. Training and Awareness
● 14. Version History and Review Schedule
Purpose and Scope
This Responsible AI Governance Policy establishes the framework for the development, deployment, and ongoing management of artificial intelligence (AI) features within the Syntari platform. The policy applies to all AI-powered capabilities, services, and features offered to our customers in the insurance, asset management, and financial services industries.
1.1 Governance Objectives
● Ensure ethical and responsible development and deployment of AI systems
● Establish clear accountability mechanisms for AI-powered decisions and outcomes
● Maintain customer trust through transparent and explainable AI practices
● Protect customer privacy and data security in all AI processing
● Identify and mitigate risks associated with AI systems before deployment
● Ensure compliance with applicable regulatory frameworks including the EU AI Act
● Provide mechanisms for human oversight and intervention in high-risk AI decisions
1.2 Scope of Coverage
This policy governs all AI features within the Syntari platform, including but not limited to:
● Features powered by third-party AI providers (Anthropic/Claude, OpenAI/GPT, Google/Gemini)
● Data processing and analysis utilizing AI models
● Automated decision-making systems with potential customer impact
● AI-generated recommendations and content
● Customer-facing and internal AI tools and features
● Custom fine-tuned models or implementations
AI Ethics Principles
Syntari AI is committed to the following ethical principles in the design, development, and deployment of all AI features:
2.1 Fairness
We ensure that AI systems do not discriminate against individuals or groups based on protected characteristics including but not limited to race, gender, age, ethnicity, or socioeconomic status. All AI systems undergo bias detection and mitigation testing before deployment and on an ongoing basis.
2.2 Transparency
We maintain transparency regarding the use of AI in the Syntari platform. Customers are clearly informed when they are interacting with AI-powered features, how their data is processed, and what the limitations of AI outputs are.
2.3 Accountability
We establish clear lines of accountability for AI-powered decisions and outcomes. Responsibility for AI system performance, bias, and incidents is clearly assigned and monitored. All AI systems include audit trails and logging capabilities for accountability purposes.
2.4 Privacy
We protect customer privacy as a fundamental right and implement data minimization practices in all AI processing. Customer data is never used to train, fine-tune, or otherwise improve third-party AI models without explicit consent.
2.5 Safety
We prioritize the safety and security of AI systems to prevent harm to customers and their data. All AI systems are tested for robustness, security vulnerabilities, and potential harmful outputs before deployment.
2.6 Human Oversight
We maintain meaningful human oversight over AI-powered decisions, particularly for high-risk or consequential decisions. Humans retain the ability to understand, override, and intervene in AI system decisions.
AI Risk Classification Framework
Syntari AI implements a risk classification framework aligned with the EU AI Act to categorize AI systems based on their potential impact and risk level. This framework guides governance, oversight, and compliance requirements.
3.1 Risk Tiers
● Minimal Risk: AI systems with minimal potential for harm (e.g., general recommendation systems). Governance requirements: standard monitoring and documentation.
● Limited Risk: AI systems with potential for some customer impact (e.g., decision support systems). Governance requirements: enhanced monitoring, bias testing, user disclosure.
● High Risk: AI systems with significant potential for harm (e.g., automated underwriting, compliance determinations). Governance requirements: rigorous testing, human-in-the-loop review, impact assessment, ongoing monitoring.
● Unacceptable Risk: AI systems that create unacceptable risks to customer safety or rights. Governance requirements: prohibition or significant constraints.
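The tier-to-requirements mapping above can be expressed as a simple lookup. The following sketch is illustrative only; the enum members and requirement strings are assumptions paraphrased from the table, not identifiers defined by this policy:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers aligned with the EU AI Act categories (Section 3.1)."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Governance requirements per tier, paraphrased from the table above.
GOVERNANCE_REQUIREMENTS = {
    RiskTier.MINIMAL: ["standard monitoring", "documentation"],
    RiskTier.LIMITED: ["enhanced monitoring", "bias testing", "user disclosure"],
    RiskTier.HIGH: ["rigorous testing", "human-in-the-loop",
                    "impact assessment", "ongoing monitoring"],
    RiskTier.UNACCEPTABLE: ["prohibition or significant constraints"],
}

def requirements_for(tier: RiskTier) -> list[str]:
    """Return the governance requirements that apply to a given tier."""
    return GOVERNANCE_REQUIREMENTS[tier]
```

A lookup of this shape keeps the governance mapping in one auditable place, so a deployment pipeline could consult it before a model ships.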
3.2 Risk Assessment Process
All new AI features undergo a formal risk assessment before deployment:
● Identify potential harms and impacts on customers
● Evaluate probability and severity of potential harms
● Determine applicable risk tier
● Document risk mitigation strategies
● Obtain appropriate approvals based on risk tier
Third-Party AI Provider Governance
Syntari AI relies on third-party AI providers to power many of its AI features. We implement comprehensive governance and oversight mechanisms for these provider relationships.
4.1 Covered AI Providers
Syntari AI works with the following third-party AI providers:
● Anthropic (Claude models)
● OpenAI (GPT models)
● Google (Gemini models)
4.2 Provider Selection Criteria
New AI providers are evaluated based on the following criteria:
● Security and data protection capabilities
● Commitment to responsible AI practices
● Transparency regarding model training data and capabilities
● Compliance with applicable regulations
● Financial stability and business continuity
● Support for audit and oversight mechanisms
● Alignment with Syntari AI ethics principles
4.3 Provider Evaluation Process
● Initial due diligence assessment (security, compliance, ethics)
● Legal review of provider terms of service and data handling
● Technical evaluation of model capabilities and limitations
● Pilot testing with representative use cases
● Board-level approval for significant new providers
4.4 Risk Assessment Requirements
Syntari AI conducts ongoing risk assessments for each AI provider, covering:
● Model accuracy, bias, and fairness performance
● Data privacy and security practices
● Capability limitations and potential for harmful outputs
● Transparency and explainability features
● Provider financial viability and business continuity
● Regulatory compliance status
4.5 Ongoing Monitoring and Performance Evaluation
Syntari AI maintains continuous monitoring of third-party AI providers:
● Monthly performance metrics review (accuracy, latency, error rates)
● Quarterly security and compliance assessments
● Semi-annual bias and fairness evaluations
● Annual comprehensive provider audits
● Tracking of provider updates, policy changes, and security incidents
● Maintained inventory of all provider relationships and dependencies
Data Governance for AI
Data governance is fundamental to responsible AI practices. Syntari AI implements strict policies to protect customer data and maintain privacy in all AI processing.
5.1 Training Data Restrictions
Syntari AI prohibits the use of customer data for training, fine-tuning, or otherwise improving AI models without explicit customer consent:
● Customer data is never used to train proprietary models
● Customer data is not used to fine-tune third-party models
● Aggregated, anonymized data may be used for quality improvement only with contractual safeguards
● Customers maintain ability to opt out of data usage for improvement purposes
● All training data usage is documented and disclosed to customers
5.2 Prompt Data Handling and Retention
Syntari AI manages data submitted as prompts to AI systems according to the following principles:
● Prompt data is classified as customer data and subject to the same protections
● Prompts are retained only as necessary for service delivery and abuse monitoring
● Prompt data is retained for no more than 30 days for abuse monitoring purposes
● Prompts containing sensitive information (PII, financial data) are encrypted and access-restricted
● Customers may request deletion of prompt data at any time
● Syntari AI works with third-party providers to minimize prompt data retention
5.3 Output Data Ownership and Classification
AI-generated outputs are treated as follows:
● Customers own AI-generated outputs produced using the Syntari platform
● Outputs are classified according to sensitivity and regulatory requirements
● Syntari AI retains only technical metadata necessary for platform operations
● Outputs containing sensitive data are stored with appropriate encryption and access controls
● Customers control distribution and use of AI-generated outputs
5.4 Data Minimization in AI Processing
Syntari AI implements data minimization practices in all AI processing:
● Only data necessary for specific AI use cases is processed
● Sensitive data (PII, financial information) is masked or anonymized where possible
● Data retention periods are minimized to the extent practical
● Regular data audits ensure compliance with minimization principles
Model Lifecycle Management
Syntari AI implements comprehensive management of AI models throughout their lifecycle, from development through retirement.
6.1 Model Deployment Approval Process
All new AI models undergo a formal approval process before deployment:
● Risk assessment and classification (Section 3)
● Technical evaluation of model performance and safety
● Bias testing and fairness evaluation (Section 7)
● Security and privacy review
● Documentation of model capabilities and limitations
● Approval by appropriate governance bodies based on risk tier
● Customer impact assessment and disclosure planning
● Rollback plan development and testing
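The approval checklist above is all-or-nothing: a model ships only when every step is complete. This hypothetical gate sketches that rule; the checklist item names are shorthand labels invented for illustration:

```python
# Pre-deployment gate per Section 6.1: every checklist item must be
# completed before a model may be deployed. Item names are illustrative.
APPROVAL_CHECKLIST = [
    "risk_assessment",
    "technical_evaluation",
    "bias_testing",
    "security_privacy_review",
    "capability_documentation",
    "governance_approval",
    "customer_impact_assessment",
    "rollback_plan",
]

def may_deploy(completed: set[str]) -> bool:
    """True only when all approval steps have been completed."""
    return all(item in completed for item in APPROVAL_CHECKLIST)
```

Encoding the gate as a single predicate makes the deployment decision itself auditable: the set of completed steps can be logged alongside the outcome.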
6.2 Version Control and Change Management
Syntari AI maintains strict version control and change management for all models:
● All model versions are tracked and documented
● Changes to models are tested before deployment
● Model changes are communicated to affected customers
● Change logs document technical changes and performance impacts
● Regular model performance monitoring ensures quality over time
6.3 Model Retirement Procedures
Models are retired according to the following procedures:
● Retirement is planned with advance notice to customers
● Alternative models or capabilities are provided to customers
● Data generated by retired models is retained according to retention schedules
● Documentation of retired models is retained for audit purposes
6.4 Rollback Capabilities
Syntari AI maintains rollback capabilities for all deployed models:
● Previous model versions are retained and maintained
● Rollback procedures are tested regularly
● Rollback can be executed rapidly in response to model failures
Bias and Fairness
Syntari AI is committed to detecting and mitigating bias in all AI systems to ensure fair treatment of all customers.
7.1 Bias Detection and Mitigation Framework
Syntari AI implements a comprehensive framework for bias detection and mitigation:
● Bias testing is conducted on all new models before deployment
● Fairness metrics are defined for each use case
● Testing evaluates performance across protected demographic groups
● Mitigation strategies are developed for identified biases
● Ongoing monitoring detects emerging biases post-deployment
● Regular retraining addresses model drift and emerging biases
7.2 Regular Algorithmic Auditing Schedule
Syntari AI conducts regular algorithmic audits according to the following schedule:
● Monthly: High-risk models
● Quarterly: Limited-risk models
● Semi-annually: Minimal-risk models
● Annually: Comprehensive review of all models
● Ad-hoc: In response to customer complaints or identified issues
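The cadence above translates directly into a due-date calculation per risk tier. This sketch uses approximate day counts for each interval; the tier labels and function name are illustrative assumptions:

```python
from datetime import date, timedelta

# Audit cadence from Section 7.2, as approximate days between audits.
AUDIT_INTERVAL_DAYS = {
    "high": 30,       # monthly
    "limited": 91,    # quarterly
    "minimal": 182,   # semi-annually
}

def next_audit_due(tier: str, last_audit: date) -> date:
    """Compute when the next algorithmic audit is due for a model
    in the given risk tier."""
    return last_audit + timedelta(days=AUDIT_INTERVAL_DAYS[tier])
```

Ad-hoc audits triggered by complaints or incidents would be scheduled outside this calculation, per the last bullet above.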
7.3 Disparate Impact Assessment Procedures
Syntari AI evaluates potential disparate impact of AI systems:
● Analysis of outcome disparities across protected groups
● Evaluation of both direct and indirect discrimination
● Assessment of policy intent versus discriminatory effects
● Documentation of findings and mitigation strategies
● Regular reassessment to ensure sustained compliance
7.4 Remediation Protocols
When bias is detected, Syntari AI implements remediation:
● Immediate investigation and impact assessment
● Development of corrective action plan
● Implementation of technical mitigations (retraining, rebalancing, etc.)
● Customer notification if impact is significant
● Validation of remediation effectiveness
Transparency and Explainability
Syntari AI is committed to transparency regarding the use of AI in its platform and the explainability of AI decisions.
8.1 AI Disclosure Requirements to End Users
Customers are clearly informed regarding AI use in the Syntari platform:
● Clear indication when features are powered by AI
● Explanation of AI capabilities and limitations
● Description of data used by AI systems
● Privacy implications of AI feature usage
● Options to opt out of AI-powered features
● Contact information for AI governance questions
8.2 Decision Explainability Standards
AI systems that make or support consequential decisions provide explanations:
● High-risk systems provide detailed explanations of decision rationale
● Explanations are provided in plain language understandable to customers
● Customers can challenge or appeal AI-made decisions
● Human review and override mechanisms are available
8.3 AI-Generated Content Labeling
Content generated by AI is clearly labeled:
● All AI-generated outputs are marked as AI-generated
● Labeling is persistent even if content is modified
● Source of AI model is disclosed (Anthropic, OpenAI, Google, etc.)
● Disclaimer regarding potential limitations and hallucinations
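A persistent label that travels with the output is one way to satisfy the four requirements above. The schema below is a hypothetical sketch; the policy mandates the disclosures, not these field names:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AIContentLabel:
    """Provenance label attached to AI-generated output (Section 8.3).
    Field names are illustrative, not a normative schema."""
    ai_generated: bool
    provider: str   # e.g. "Anthropic", "OpenAI", "Google"
    model: str
    disclaimer: str = ("AI-generated content may contain errors or "
                       "hallucinations; verify before relying on it.")

def label_output(text: str, provider: str, model: str) -> dict:
    """Wrap output text with a provenance label that persists even
    if the content is later modified."""
    label = AIContentLabel(ai_generated=True, provider=provider, model=model)
    return {"content": text, "label": asdict(label)}
```

Keeping the label in a separate field (rather than inline in the text) lets downstream edits change the content without stripping the provenance record.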
8.4 Model Card and System Documentation Requirements
Syntari AI maintains comprehensive documentation for all AI systems:
● Model cards documenting model architecture, training data, and performance
● System documentation describing capabilities and limitations
● Data documentation describing datasets used
● Evaluation results from testing and auditing
● Known limitations and edge cases
● Version history and change logs
Human Oversight
Syntari AI maintains meaningful human oversight over AI-powered decisions, particularly for high-risk or consequential decisions.
9.1 Human-in-the-Loop Requirements for High-Risk Decisions
High-risk AI systems include meaningful human involvement:
● Human review is required for all high-risk automated decisions
● Humans receive sufficient information to understand AI recommendations
● Humans have training and expertise for effective oversight
● Human review occurs before decisions are implemented
● Performance of humans and AI systems is monitored
9.2 Override and Intervention Mechanisms
Meaningful human control over AI systems is maintained:
● Humans can override AI recommendations without restriction
● Alternative decision-making pathways are available
● Technical systems support rapid human intervention
● No penalties for human overrides of AI recommendations
9.3 Escalation Procedures
Syntari AI maintains clear escalation procedures for AI-related issues:
● Unclear or concerning AI recommendations can be escalated
● Customer complaints about AI systems trigger escalation
● Escalated issues are reviewed by qualified personnel
● Clear timelines and responsibilities for escalation response
9.4 Monitoring Dashboards
Syntari AI maintains dashboards for AI system monitoring:
● Real-time dashboards track AI system performance metrics
● Bias and fairness metrics are continuously monitored
● Error rates and failure modes are tracked
● Customer complaint patterns are identified
Incident Response for AI
Syntari AI implements AI-specific incident response procedures to address failures, bias, and other adverse events.
10.1 AI-Specific Incident Classification
AI incidents are classified by type and severity:
● Hallucination incidents: AI systems generating false or inaccurate information
● Bias incidents: AI systems demonstrating discriminatory behavior
● Security incidents: AI systems being compromised or manipulated
● Performance incidents: AI systems failing to meet performance standards
● Compliance incidents: AI systems violating regulatory requirements
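The five incident categories above can be encoded as an enum with per-type severity floors, so an intake system cannot under-triage a bias or security incident. The severity values here are assumptions for illustration, not thresholds defined by this policy:

```python
from enum import Enum

class AIIncidentType(Enum):
    """Incident categories from Section 10.1."""
    HALLUCINATION = "hallucination"  # false or inaccurate output
    BIAS = "bias"                    # discriminatory behavior
    SECURITY = "security"            # compromise or manipulation
    PERFORMANCE = "performance"      # failure to meet standards
    COMPLIANCE = "compliance"        # regulatory violation

# Illustrative minimum severity per type (1 = low, 3 = high).
# Bias, security, and compliance incidents start at a higher floor.
MIN_SEVERITY = {
    AIIncidentType.HALLUCINATION: 1,
    AIIncidentType.BIAS: 2,
    AIIncidentType.SECURITY: 2,
    AIIncidentType.PERFORMANCE: 1,
    AIIncidentType.COMPLIANCE: 2,
}

def triage(incident_type: AIIncidentType, reported_severity: int) -> int:
    """Clamp a reported severity up to the minimum floor for its type."""
    return max(reported_severity, MIN_SEVERITY[incident_type])
```

The clamp-only-upward design means a reporter can escalate but never silently downgrade an incident class that carries regulatory notification triggers (Section 10.4).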
10.2 Hallucination Response Procedures
When AI systems generate hallucinations or false information:
● Immediate investigation and validation of accuracy
● Communication to affected customers
● Removal or correction of affected AI-generated content
● Analysis of root causes
● Implementation of controls to prevent recurrence
● Documentation for regulatory compliance
10.3 Bias Incident Handling
Bias incidents are handled according to Section 7 remediation protocols, plus:
● Notification to affected customer groups
● Assessment of potential discriminatory harm
● Evaluation of legal/regulatory implications
● Board-level reporting if significant impact
10.4 Regulatory Notification Triggers
Syntari AI notifies relevant regulatory authorities when:
● AI system failures result in customer financial harm
● Discriminatory bias is detected in regulated decisions
● AI systems are compromised or manipulated
● Systemic failures affect service availability
● As otherwise required by applicable regulations
Compliance and Regulatory Alignment
Syntari AI aligns its AI governance practices with applicable regulatory frameworks and industry standards.
11.1 EU AI Act Compliance
Syntari AI implements the requirements of the EU AI Act, whose obligations for high-risk AI systems become applicable on August 2, 2026:
● Risk classification aligned with EU AI Act categories
● High-risk systems subject to stringent requirements (testing, documentation, human oversight)
● Transparency requirements for high-risk and biometric systems
● Compliance procedures for prohibited AI practices
● Documentation and record-keeping for regulatory compliance
● Readiness for competent authority audits and enforcement
11.2 NIST AI Risk Management Framework
Syntari AI implements the NIST AI Risk Management Framework:
● Govern function: Establishing governance structures, accountability, and a culture of AI risk management
● Map function: Identifying AI systems, their contexts, and stakeholders
● Measure function: Measuring AI system performance and risks
● Manage function: Managing identified risks through controls and ongoing monitoring
11.3 ISO/IEC 42001 (AI Management System)
Syntari AI implements an AI management system aligned with ISO/IEC 42001:
● Risk assessment and management processes
● AI governance structures and roles
● Data and model management
● Performance monitoring and evaluation
● Regular auditing and improvement
11.4 Industry-Specific Requirements
Syntari AI complies with industry-specific requirements for its customer base:
● Insurance: Compliance with insurance regulator guidance on AI use
● Asset Management: Compliance with SEC and FINRA guidance on algorithmic trading and recommendations
● Financial Services: Compliance with banking regulators and consumer protection requirements
Audit and Accountability
Syntari AI maintains robust audit and accountability mechanisms for AI governance.
12.1 Annual AI Governance Audit
Syntari AI conducts an annual comprehensive audit of AI governance:
● Review of all deployed AI systems and their governance
● Assessment of compliance with this policy
● Evaluation of ethics principles implementation
● Review of incident response and remediation
● Assessment of control effectiveness
● Identification of improvement opportunities
12.2 Third-Party Assessment Requirements
Syntari AI engages third-party assessors for independent validation:
● Annual third-party audit of AI governance practices
● Evaluation of compliance with regulatory requirements
● Technical assessment of AI systems
● Bias and fairness independent verification
● Security assessment of AI systems
12.3 Board-Level AI Risk Reporting
Syntari AI maintains board-level oversight of AI risks:
● Quarterly board reporting on AI governance
● Reporting of material AI incidents and risks
● Review of compliance status with regulatory requirements
● Strategic discussion of AI governance priorities
12.4 Regulatory Examination Readiness
Syntari AI maintains readiness for regulatory examinations:
● Documentation readily available for regulator review
● Record-keeping systems maintained
● Testing procedures and results retained
● Audit trails and logs maintained for all AI systems
Training and Awareness
Syntari AI ensures that employees understand and comply with responsible AI principles and governance requirements.
13.1 Employee AI Literacy Requirements
All Syntari AI employees receive baseline AI literacy training:
● Understanding of AI capabilities and limitations
● Awareness of responsible AI principles
● Basic understanding of bias, fairness, and ethics
● Privacy and data protection in AI contexts
● Incident reporting and escalation procedures
13.2 Role-Based AI Training Programs
Employees in AI-related roles receive specialized training:
● Product teams: Detailed training on responsible AI requirements for development
● Data teams: Training on data governance, privacy, and ethical data practices
● Support teams: Training on explaining AI features and handling customer concerns
● Leadership: Training on AI governance and strategic oversight
13.3 Responsible AI Certification
Syntari AI maintains certification programs:
● Responsible AI Practitioner certification for technical staff
● Annual recertification requirements
● Certification tracking and compliance monitoring
Version History and Review Schedule
14.1 Policy Versions
● Version 1.0 (Effective February 23, 2026): Initial release
14.2 Review Schedule
This policy is reviewed and updated according to the following schedule:
● Quarterly review: Assessment of emerging risks and regulatory developments
● Annual review: Comprehensive policy review and updates
● Ad-hoc reviews: In response to significant regulatory changes or incidents
14.3 Policy Governance
Policy ownership and governance:
● Owner: Chief AI Officer and Legal Department
● Approval Authority: Board of Directors
● Implementation: All business units with AI systems
● Contact: ai-governance@syntari.ai
