LEGAL
AI TRANSPARENCY STATEMENT
Last Updated
AI TRANSPARENCY STATEMENT
Version 1.0
Syntari AI, Inc.
Effective Date: February 23, 2026
855 Boylston Street, Suite 1000 | Boston, MA 02116
Introduction and Commitment
Syntari AI, Inc. ("Syntari" or "we") is committed to advancing transparency, accountability, and responsible AI practices across the insurance, asset management, and financial services sectors. This AI Transparency Statement articulates our commitment to stakeholders—including customers, regulators, industry participants, and the public—regarding how we develop, deploy, and govern artificial intelligence systems.
We recognize that artificial intelligence systems carry both tremendous potential and significant responsibility. As an AI-native platform, Syntari integrates third-party AI providers to power transformative consulting capabilities. This statement reflects our dedication to open disclosure about our AI systems, their capabilities, limitations, and safeguards.
Audience: This statement is prepared for customers, regulators (particularly under EU AI Act Article 13), industry participants, and the general public seeking to understand Syntari's AI governance practices.AI Systems Overview
2.1 Platform AI Capabilities
Syntari's AI-native platform delivers the following core capabilities:
● Intelligent Recommendations: AI-powered analysis and recommendations for portfolio optimization, risk assessment, and strategic decision-making
● Document Generation: Automated creation of compliance reports, executive summaries, and policy analyses
● Predictive Analytics: Forecasting models for market trends, risk indicators, and business metrics
● Advanced Search: Natural language search across documents and data repositories with contextual relevance ranking
● Process Automation: Intelligent workflow automation for document processing, data classification, and routine tasks
2.2 EU AI Act Risk Classification
In accordance with EU AI Act requirements, Syntari classifies its AI systems by risk category:
Feature Category EU AI Act Classification Risk Level
Intelligent Recommendations (non-advisory) Limited Risk Low
Document Generation Limited Risk Low
Predictive Analytics (general) Limited Risk Low
Advanced Search Limited Risk Low
Process Automation Limited Risk Low
Financial/Insurance Advisory AI High Risk High
Underwriting Assistance High Risk High
Claims Assessment Support High Risk High
High-risk features are subject to enhanced governance, human oversight, and explainability requirements detailed in subsequent sections.Third-Party AI Providers
Syntari leverages carefully selected third-party AI providers to deliver platform capabilities. This approach allows us to benefit from cutting-edge foundational AI models while maintaining focus on domain-specific consulting expertise.
3.1 Anthropic (Claude)
● Model Versions: Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku (context window: up to 200K tokens)
● Capabilities: Advanced reasoning, nuanced analysis, long-form document generation, complex financial analysis
● Primary Use Cases: Policy analysis, regulatory interpretation, financial advisory support, executive summarization
● Selection Rationale: Constitutional AI approach, superior reasoning on complex financial documents, strong performance on ambiguous insurance scenarios
3.2 OpenAI (GPT)
● Model Versions: GPT-4, GPT-4o (Omni), GPT-4 Turbo with Vision
● Capabilities: Multimodal understanding (text and images), rapid inference, broad knowledge base, structured output generation
● Primary Use Cases: Document processing with visual elements, rapid classification tasks, code generation for automation scripts, image analysis of submitted underwriting materials
● Selection Rationale: Multimodal capabilities for insurance document analysis, established enterprise support, strong API reliability
3.3 Google (Gemini)
● Model Versions: Gemini 2.0 Flash, Gemini 1.5 Pro (context window: up to 2M tokens for Pro)
● Capabilities: Extended context processing, real-time information retrieval, native multimodal processing, efficient cost scaling
● Primary Use Cases: Large-scale document processing, historical trend analysis across extended datasets, market research synthesis
● Selection Rationale: Exceptional context window for processing full client datasets, cost efficiency at scale, real-time data integration
3.4 Provider Selection Criteria and Evaluation
Syntari evaluates third-party AI providers using the following criteria:
● Safety and Alignment: Constitutional AI approach, red teaming results, alignment with human values
● Performance on Financial/Insurance Tasks: Benchmark testing on domain-specific scenarios, accuracy rates, failure mode analysis
● Data Privacy and Security: Data retention policies, encryption standards, compliance certifications, audit availability
● Reliability and Uptime: SLA commitments, incident history, geographic redundancy, disaster recovery
● Transparency and Accountability: Provider transparency reports, incident disclosure practices, model documentation
● Regulatory Alignment: EU AI Act compliance status, contractual safeguards, data processing agreementsData Handling Transparency
4.1 Data Sent to AI Providers
Syntari sends the following data categories to third-party AI providers to power platform features:
● Prompts and Queries: User-generated prompts, requests, and questions formatted with domain context
● Document Content: Uploaded documents (financial reports, policy documents, compliance materials) for analysis and processing
● Structured Data Context: Anonymized metrics, financial figures, and categorical data relevant to user queries
● Session Metadata: Feature requests, usage patterns, error messages (for debugging and improvement)
4.2 Data NOT Sent to AI Providers
The following sensitive data elements are NEVER transmitted to third-party AI providers:
● Raw Customer Databases: Unprocessed client lists, contact directories, or customer relationship management data
● Credentials and Secrets: API keys, passwords, authentication tokens, certificates
● Personally Identifiable Information (PII): Individual customer names, addresses, Social Security numbers, without explicit anonymization
● Proprietary Trading Strategies: Unpublished investment methodologies, confidential asset allocation frameworks
● Regulated Data: Raw healthcare information, genetic data, or other data subject to special protection
4.3 AI Provider Data Retention
All major third-party AI providers (Anthropic, OpenAI, Google) maintain data retention policies to protect user privacy:
Provider Retention Policy Key Details
Anthropic (Claude) Business Account: 30-day abuse monitoring Retained solely to detect violations; then deleted
OpenAI (GPT) Enterprise: 30-day abuse monitoring Can be configured for shorter retention; deletion confirmed
Google (Gemini) Default: 9 months; Paid Tiers: Configurable ISO 27001 certified; can request deletion
For paid tier customers, we select providers with configurable retention policies and request 30-day retention maximums aligned with abuse detection needs.
4.4 Model Training
Syntari customers using paid tier subscriptions benefit from explicit data non-training commitments:
● Anthropic: Business Account data is NOT used to train Claude models (documented in API Terms of Service)
● OpenAI: Enterprise Agreement explicitly excludes customer data from model training
● Google: Paid Gemini tiers do not train on submitted data (per Google Cloud Terms)
Syntari does NOT employ customer data to improve our internal models, analytics systems, or competitive capabilities.
4.5 Data Flow Architecture
The following describes Syntari's data flow from user input to AI provider output:
● User Input: Customer submits query or document via Syntari web/API interface
● Local Processing: Syntari systems apply pre-processing filters, identify sensitive data, and apply masking where applicable
● Provider Request: Formatted prompt + context sent to selected AI provider via encrypted HTTPS API calls
● Provider Processing: Third-party model processes request and generates response
● Output Filtering: Syntari applies post-processing filters to check for unintended PII leakage
● Result Delivery: Response returned to customer in Syntari interface; not retained on provider systems beyond retention policyAI Decision-Making
5.1 Recommendation Generation Process
Syntari's AI recommendations are generated through the following process:
● Prompt Construction: Syntari systems construct a detailed prompt including customer context, query specifics, and relevant domain constraints
● Model Processing: Selected AI provider processes prompt using foundational model(s) to generate recommendations
● Confidence Assessment: Syntari applies heuristic scoring to assess recommendation confidence based on model certainty signals
● Human Review: For high-risk features, Syntari experts review recommendations before presentation (see Section 6)
● User Presentation: Recommendations presented with transparency labels indicating confidence levels and limitations
5.2 Confidence Scoring and Uncertainty Disclosure
Syntari implements confidence scoring across AI recommendations:
● High Confidence (>85%): Recommendation based on clear evidence, convergent signals, well-understood domain patterns
● Medium Confidence (60-85%): Recommendation grounded in reasonable analysis but subject to material uncertainty
● Low Confidence (<60%): Recommendation preliminary; material risk of alternative conclusions
All recommendations are presented with explicit confidence labels and disclaimer language explaining limitations and uncertainty.
5.3 Limitations and Known Failure Modes
Syntari discloses the following known limitations affecting AI recommendation quality:
● Context Sensitivity: Recommendations may shift based on prompt framing, emphasis, or available context
● Data Quality Dependency: Recommendations reflect data quality; garbage-in-garbage-out principles apply
● Domain Specialization Gaps: AI models trained on general knowledge may underperform on highly specialized insurance/asset management scenarios
● Temporal Limitations: Training data has cutoff dates; real-time data may not be reflected
● Rare Event Underperformance: AI models may struggle with low-frequency, high-impact events (black swan scenarios)
● Regulatory Interpretation: Legal and regulatory interpretation by AI systems should be independently verified by compliance professionals
5.4 Hallucination Risk and Mitigation
AI language models risk generating plausible but inaccurate information ("hallucinations"). Syntari implements the following mitigations:
● Grounding Requirements: For factual claims, AI outputs must cite or reference supporting documents/data supplied by user
● Structured Output: High-risk recommendations enforce structured output formats that reduce free-form hallucination
● Consistency Checking: Multi-pass approaches verify consistency of reasoning and conclusions
● Human Verification: Mandatory human review for financial advisory and claims assessment (high-risk features)
● User Warning Labels: All AI outputs include warnings that users should independently verify recommendationsHuman Oversight
6.1 Human Review Requirements by Feature Tier
Syntari implements tiered human oversight based on feature risk classification:
Feature Tier Risk Level Human Review Requirement Reviewer Qualifications
Limited Risk (Search, Automation) Low Optional user review; Syntari spot-check audits Standard support staff
Limited Risk (Document Generation) Low Recommended user review; automated quality checks Domain specialists for audits
High Risk (Financial Advisory) High MANDATORY pre-release review by analyst CFA/CFP credential required
High Risk (Insurance Advisory) High MANDATORY pre-release review by specialist Claims adjuster or underwriter background
High Risk (Underwriting Support) High MANDATORY pre-release + expert second review Underwriting management approval
6.2 Override and Alternative Mechanisms
Syntari ensures users retain meaningful control over AI outputs:
● Direct Override: Users can explicitly reject, modify, or override AI recommendations
● Human Alternative: For any AI-generated output, users can request human-prepared alternative (may incur time/cost)
● Feature Disable: Users can disable specific AI features while retaining access to others
● Opt-Out Capability: Users can opt out of all AI features and use Syntari as traditional consulting platform
6.3 Escalation Paths for AI Concerns
Users experiencing issues with AI outputs or recommendations can escalate through the following channels:
● Support Escalation: Contact support@syntari.ai with AI concern; escalated to specialist team within 24 hours
● AI Governance: Email ai-governance@syntari.ai for systemic AI concerns, model bias issues, or governance questions
● Privacy/Data: Contact privacy@syntari.ai for data handling or transparency concerns
● Formal Review: Request independent audit of AI decisions via transparency@syntari.ai
6.4 No Fully Automated Consequential Decisions
Syntari explicitly prohibits fully automated decision-making on consequential matters affecting customer rights or finances. All high-risk decisions require human review and authorization. This includes:
● Insurance claim denials or reductions
● Portfolio divestment recommendations involving material capital
● Regulatory compliance decisions affecting legal obligations
● Customer eligibility determinations for new productsFairness and Bias
7.1 Bias Testing Methodology
Syntari conducts systematic bias testing before deploying AI features:
● Demographic Sensitivity Analysis: Test recommendations across demographic proxies (geographic region, firm size, asset class) to identify disparate treatment
● Domain-Specific Bias: Test for biases favoring certain insurance products, asset classes, or investment strategies
● Adversarial Testing: Prompt manipulation to surface latent biases in recommendation logic
● Historical Equity Review: Analyze whether recommendations perpetuate historical inequities in insurance/financial services
● Third-Party Audit: Quarterly independent bias audits by external consultants
7.2 Known Limitations and Biases
Third-party AI foundation models exhibit documented limitations affecting Syntari features:
● Training Data Biases: Models trained on public internet data reflecting societal biases, gender stereotypes, geographic disparities
● Financial Services Representation: Historical financial data underrepresents certain demographics, potentially replicating past discrimination
● Recency Bias: Recent events disproportionately influence recommendations; longer historical patterns may be underweighted
● Institutional Bias: Models may favor large established financial institutions over smaller or non-traditional firms
● Language Bias: Models perform better on English-language documents; non-English analysis may be less reliable
7.3 Mitigation Strategies
Syntari implements the following bias mitigation strategies:
● Diverse Training Data Augmentation: Supplement AI provider training with Syntari-curated data representing underrepresented segments
● Fairness Constraints: Implement algorithmic fairness constraints during recommendation generation where applicable
● Human Review of High-Impact Recommendations: Mandatory human review by diverse teams to catch systemic biases
● Transparency Labels: Flag recommendations involving underrepresented or sensitive segments with explicit bias warnings
● User Education: Provide guidance on recognizing and mitigating AI bias in financial decision-making
7.4 Ongoing Monitoring and Improvement
Syntari conducts continuous bias monitoring and improvement:
● Quarterly Bias Audits: Independent review of recommendation fairness metrics across demographic groups
● Customer Feedback Loop: Integrate customer-reported bias concerns into improvement process
● Model Version Updates: When AI providers release new model versions, conduct differential bias testing before adoption
● Public Bias Report: Publish annual public report on bias testing results, known biases, and mitigation improvementsPerformance and Reliability
8.1 AI Feature Accuracy Metrics
Where measurable, Syntari tracks the following accuracy metrics:
Feature Metric Target Performance Measurement Frequency
Document Classification Top-1 Accuracy >92% Monthly
Policy Analysis Summaries Factual Consistency vs. Source >95% Quarterly
Risk Scoring Correlation with Expert Assessment >0.85 Quarterly
Recommendation Relevance User Rating: 4+ out of 5 >80% of users Ongoing
Claim Categorization Agreement with Adjuster Review >88% Monthly
Metrics are calculated on held-out test datasets and customer usage data. Performance against targets is reviewed monthly with remediation for underperformance.
8.2 Uptime and Availability
Syntari maintains the following availability commitments:
● Platform Uptime: 99.9% monthly uptime SLA (excludes scheduled maintenance)
● AI Feature Availability: 99.5% availability for AI features during business hours (09:00-17:00 EST)
● Provider Redundancy: Critical AI features backed by multiple providers for failover capability
● Incident Response: <15 minute response time for critical outages; hourly status updates
8.3 Incident History and Response Times
Syntari publishes monthly incident reports detailing AI-related outages, false positives, and performance degradations. Public status page available at status.syntari.ai shows real-time platform and AI feature health.
Recent incidents (past 12 months): [To be populated with actual incident data]. Average mean-time-to-recovery (MTTR) for AI features: 34 minutes.
8.4 Continuous Improvement Methodology
Syntari implements continuous improvement processes for AI features:
● Monthly Performance Reviews: Compare actual accuracy metrics against targets; identify underperforming models
● A/B Testing: Deploy model variants to customer cohorts to measure real-world performance improvements
● Prompt Optimization: Refine prompts sent to AI providers based on output quality analysis
● Fine-Tuning: Where applicable, conduct domain-specific fine-tuning of AI provider models
● Customer Feedback Integration: Prioritize improvements based on customer-reported quality issuesUser Rights
9.1 Right to Know When Interacting with AI
Syntari clearly discloses when users are interacting with AI systems:
● UI Indicators: All AI-generated content labeled with "AI-Generated" badge and icon
● Explainers: Interactive tooltips explain which AI capabilities are being used and why
● Feature Documentation: Comprehensive help articles describing each AI feature's capabilities and limitations
● Disclosure on Output: Every AI recommendation includes metadata showing provider, model, and confidence level
9.2 Right to Human Alternative
Users have the right to request human-prepared alternatives to any AI-generated output:
● Cost: Included free for unlimited Limited Risk features; High Risk features may incur additional consulting fees
● Timeline: Standard human review within 2-5 business days; expedited (24 hour) service available at premium rate
● Scope: Human consultants match qualifications of automated review requirements
9.3 Right to Explanation
For any AI recommendation, Syntari provides explanations of how the recommendation was generated:
● Reasoning Chain: Structured explanation of key factors and logic in the recommendation
● Evidence References: Citations to source documents and data used as basis for recommendation
● Confidence Assessment: Quantified confidence score with explanation of confidence limitations
● Alternative Scenarios: Where applicable, presentation of alternative recommendations and why they were deprioritized
9.4 Right to Contest AI-Assisted Decisions
Users can formally challenge AI-assisted decisions through the following process:
● Formal Challenge: Submit written challenge via transparency@syntari.ai within 30 days of AI output
● Independent Review: Challenge reviewed by independent human reviewer with appropriate qualifications (not original reviewer)
● Reconsideration: If challenge has merit, output reconsidered with alternative AI provider or additional human review
● Appeal: If still unsatisfied, escalate to Syntari Executive Team for final review
9.5 Right to Opt Out of AI Features
Users can opt out of specific AI features or all AI capabilities:
● Granular Opt-Out: Disable specific AI features (e.g., "Disable AI Recommendations" while keeping "AI Document Generation")
● Complete Opt-Out: Request complete disabling of all AI features and use Syntari as traditional consulting platform
● Reverse Opt-Out: Re-enable AI features at any time via account settings
● No Penalty: Opting out of AI features does not affect pricing, support quality, or service accessRegulatory Compliance
10.1 EU AI Act Compliance
Syntari is implementing EU AI Act compliance across its AI systems in advance of the August 2, 2026 deadline:
● Article 13 Transparency: This statement fulfills Article 13 transparency requirements for high-risk AI systems
● Risk Assessment: Completed Article 6 risk classification for all Syntari AI features (above, Section 2.2)
● Technical Documentation: Maintaining detailed technical documentation of all AI systems per Article 11
● Quality Management System: Implementing Article 17 quality management processes for training, testing, and validation
● Conformity Assessment: Conducting Article 19 conformity assessments; third-party audit scheduled for Q2 2026
● CE Marking Preparation: Preparing CE marking and EU Declaration of Conformity for high-risk systems
● Registries: When EU AI registries become operational, registering high-risk systems per Article 71
10.2 NIST AI Risk Management Framework Alignment
Syntari aligns with NIST AI RMF governance and practices:
● Governance: Established AI Governance Board with accountability for risk management
● MAP (Mapping): Systematically mapped AI risks, impacts, and mitigation strategies for all features
● MEASURE (Measurement): Implemented performance metrics, bias detection, and continuous monitoring
● MANAGE (Management): Deployed risk mitigation controls, incident response, and escalation procedures
● GOVERN (Governance): Annual risk review, third-party audit, public transparency reporting
10.3 ISO/IEC 42001 Roadmap
Syntari is progressing toward ISO/IEC 42001 AI Management Systems certification:
● Q1 2026: Gap assessment completed; remediation plan developed
● Q2 2026: Internal audit and pre-assessment by certification body
● Q3 2026: Full audit and certification (target)
● Ongoing: Annual surveillance audits and continuous improvement
10.4 Industry-Specific Compliance
In addition to general AI regulations, Syntari ensures compliance with financial services and insurance regulations:
● FINRA Rules: Compliance with FINRA AI Disclosure Rules for investment advisory features
● SEC Guidance: Alignment with SEC guidance on use of AI in investment management
● Insurance Regulators: Compliance with state insurance commissioner guidance on AI underwriting and claims
● Fair Lending: Compliance with fair lending laws (ECOA, FHA) where AI recommendations affect credit/insurance decisions
● Data Protection: GDPR, CCPA, and other data protection regulations as applicable to customer dataSecurity and Safety
11.1 AI-Specific Security Measures
Syntari implements specialized security controls for AI systems:
● API Security: TLS 1.3 encryption for all API calls to third-party AI providers; certificate pinning on critical endpoints
● Key Management: Hardware security modules (HSM) for API key storage; automated key rotation
● Input Validation: Strict input filtering to remove credentials, PII, and other sensitive data before transmission
● Output Scanning: Post-processing output scanning for unintended PII leakage or sensitive information
● Access Controls: Role-based access to AI feature configuration; audit logging of all configuration changes
11.2 Prompt Injection Prevention
Syntari guards against adversarial prompt injection attacks:
● Prompt Sanitization: Remove or escape special characters and delimiters that could alter model behavior
● System Prompt Isolation: Core instructions isolated from user inputs; strict separation of concerns
● Input Length Limits: Enforce maximum input lengths to prevent context overflow attacks
● Adversarial Testing: Regular red teaming to identify and patch prompt injection vulnerabilities
● User Education: Guide customers on safe AI usage and risks of injecting unvetted user content
11.3 Output Filtering and Safety Guardrails
Syntari applies guardrails to filter unsafe or invalid AI outputs:
● Content Filtering: Detect and flag outputs containing illegal content, hate speech, or extreme violence
● PII Detection: Automated scanning for presence of sensitive data (SSN, credit card, bank account numbers)
● Factuality Scoring: Heuristic scoring to detect implausible or likely-hallucinated claims
● Compliance Checks: Verify outputs against regulatory requirements before presentation (e.g., no promised returns, no guaranteed outcomes)
● Manual Override: Humans can manually flag problematic outputs for additional review
11.4 Red Teaming and Adversarial Testing
Syntari conducts regular red teaming and adversarial testing:
● Quarterly Red Team Exercises: Internal and external red teams attempt to find vulnerabilities, biases, and safety failures
● Adversarial Prompt Library: Maintained library of known adversarial prompts tested against Syntari features
● Bug Bounty Program: Rewards for external researchers identifying novel vulnerabilities or safety issues
● Incident Response: Rapid response to red team findings; public disclosure and remediation timelineReporting and Accountability
12.1 Annual AI Transparency Report
Syntari publishes an Annual AI Transparency Report (each February, beginning February 2027) detailing:
● System Inventory: Complete listing of all AI systems, models, and versions deployed
● Performance Data: Accuracy metrics, uptime statistics, and performance against targets
● Bias Audit Results: Summary of bias testing and identified limitations
● Incident Summary: Overview of AI incidents, false positives, and customer complaints
● Governance Actions: Changes to AI policies, new controls, and improvement initiatives
● Compliance Status: Progress toward regulatory compliance milestones
● Customer Feedback: Aggregated (anonymized) customer feedback on AI feature quality and fairness
12.2 AI Incident Disclosure Policy
Syntari publishes AI incident disclosures under the following framework:
● Severity Levels: Classified as Critical, High, Medium, or Low based on customer impact
● Critical/High Incidents: Public disclosure within 5 business days of detection; initial summary + 30-day post-incident review
● Medium/Low Incidents: Disclosed in monthly incident summary; root cause analysis within 60 days
● Content: Each disclosure includes timeline, impact scope, root cause, and remediation steps
● Channel: Published on Syntari Transparency Portal (transparency.syntari.ai); incident@syntari.ai email notification
12.3 Regulatory Reporting Commitments
Syntari is prepared to fulfill regulatory reporting requirements:
● EU AI Act Article 72: Will report to EU AI Office when operating high-risk systems subject to compliance obligations
● State Insurance Commissioners: Provide AI transparency reports to state regulators upon request
● SEC/FINRA: Supply AI governance documentation to regulators overseeing investment advisory activities
● Customer Notification: Notify customers of serious AI incidents or safety issues per applicable regulations
12.4 Contact Information for AI Concerns
Concern Type Primary Contact Response Time
AI Feature Questions support@syntari.ai 24 hours
AI Governance & Policy ai-governance@syntari.ai 24 hours
Data Privacy & Handling privacy@syntari.ai 48 hours
Transparency & Disclosure transparency@syntari.ai 48 hours
Security & Safety Issues security@syntari.ai 4 hours (critical only)
AI Incident Reports incident@syntari.ai 24 hours
All inquiries are logged, tracked, and reviewed by appropriate teams. Escalation available upon request.Version History and Update Schedule
Version Effective Date Key Changes Next Review
1.0 February 23, 2026 Initial publication; baseline transparency commitments February 2027 (Annual)
This AI Transparency Statement is reviewed and updated annually in February. Interim updates may be published if material changes to AI systems, governance, or regulatory requirements occur. Customers are notified of substantive updates via email and in-platform notification.
Change log available at transparency.syntari.ai/changelog for detailed amendment tracking.
This AI Transparency Statement is prepared by Syntari AI, Inc. in good faith to promote transparent and responsible AI practices. It reflects the status as of the Effective Date and is subject to change. For the most current version, visit transparency.syntari.ai.
Questions or feedback: transparency@syntari.ai
Copyright © 2026 Syntari International, Inc. All rights reserved.
