Interpretable Intelligence

Developer guide for configuring and using Interpretable Intelligence—comprehensive explanations of all optimization decisions with configurable detail levels.

Overview

Interpretable Intelligence explains every optimization decision the engine makes. Use the explanation_level parameter to control how much detail is returned, from a one-line summary up to a full audit trail suitable for compliance reporting.

What You Get

  • Natural language summaries: Human-readable explanations of optimization decisions
  • Technical decision logs: Detailed logs of all strategy selection decisions
  • Structured decision logs: Machine-readable decision records parseable by downstream systems
  • Audit trails: Complete records for regulatory compliance
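As an illustration of how a downstream system might consume one of these machine-readable records, the sketch below parses a single decision record and renders a one-line summary. The field names (strategy, score, rationale) are illustrative assumptions, not the documented schema:

```python
import json

# Hypothetical structured decision record; the real schema may differ.
record_json = '{"strategy": "adaptive_de", "score": 0.91, "rationale": "low-dimensional smooth landscape"}'

record = json.loads(record_json)
summary = f"Selected {record['strategy']} (score {record['score']}): {record['rationale']}"
print(summary)
```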

Explanation Levels

Control the detail level of explanations to balance information needs with compute costs:

Explanation Level Guide
# Explanation Levels

Level 0: No explanations (fastest)
  - No explanation data returned
  - Minimal overhead

Level 1: Basic summary
  - Simple one-line summary
  - Strategy name and basic metrics

Level 2: Detailed
  - Strategy rationale
  - Key decision points
  - Performance metrics

Level 3: Comprehensive
  - Full decision tree
  - Alternative strategies considered
  - Detailed performance analysis

Level 4: Full audit trail
  - Complete decision log
  - All alternatives with scores
  - Regulatory compliance ready

Level 5: Maximum detail
  - Every decision point logged
  - Full traceability
  - Research-grade documentation
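The level guide above can be summarized in code. The helper below is purely illustrative (not part of the Sematryx SDK); it maps common use cases to suggested explanation levels, mirroring the Best Practices later in this guide:

```python
# Illustrative helper, not part of the SDK.
EXPLANATION_LEVELS = {
    0: "No explanations (fastest)",
    1: "Basic summary",
    2: "Detailed",
    3: "Comprehensive",
    4: "Full audit trail",
    5: "Maximum detail",
}

def explanation_level_for(use_case: str) -> int:
    """Suggest an explanation level for a common use case."""
    return {
        "production": 2,
        "debugging": 3,
        "compliance": 4,
        "research": 5,
    }.get(use_case, 0)

level = explanation_level_for("debugging")  # 3
```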

Simple Configuration

Enable Interpretable Intelligence with a simple explanation level:

Enable Interpretable Intelligence
from sematryx import Sematryx

client = Sematryx(api_key="sk-your-api-key")

# A simple objective function for demonstration
# (assumes the SDK calls it with the variables by name)
def sphere(x, y):
    return x**2 + y**2

# Enable Interpretable Intelligence with explanation level
result = client.optimize(
    objective="minimize",
    variables=[{"name": "x", "bounds": (-5, 5)}, {"name": "y", "bounds": (-5, 5)}],
    objective_function=sphere,
    explanation_level=2  # 0=none, 1=basic, 2=detailed, 3=comprehensive, 4=full audit, 5=maximum detail
)

print(result.explanation)  # Human-readable explanation of the solution

Advanced Configuration

Fine-tune Interpretable Intelligence behavior with advanced options:

Advanced Interpretable Configuration
from sematryx import Sematryx

client = Sematryx(api_key="sk-your-api-key")

# A simple objective function for demonstration
# (assumes the SDK calls it with the variables by name)
def sphere(x, y):
    return x**2 + y**2

# Advanced configuration: choose your explanation detail level
result = client.optimize(
    objective="minimize",
    variables=[{"name": "x", "bounds": (-5, 5)}, {"name": "y", "bounds": (-5, 5)}],
    objective_function=sphere,
    intelligence_config={
        "use_interpretable_intelligence": True,
        "explanation_level": 3  # 0=none, 1=basic, 2=detailed, 3=comprehensive, 4=full audit, 5=maximum detail
    }
)

Configuration Options

  • explanation_level (int, 0-5, default: 0)

    Detail level for explanations. Higher levels provide more information. Pass as a top-level parameter or inside intelligence_config.

  • use_interpretable_intelligence (bool, default: false)

    Enable interpretable intelligence mode, which provides structured explanations of strategy selection and optimization decisions.

REST API Configuration

Configure Interpretable Intelligence via REST API:

REST API - Interpretable Intelligence
curl -X POST https://api.sematryx.com/v1/optimize \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "objective_function": "sphere",
    "variables": ["x", "y"],
    "bounds": [[-10, 10], [-10, 10]],
    "max_evaluations": 2000,
    "explanation_level": 3
  }'
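
For programmatic clients, the same request body can be assembled and sanity-checked locally before sending. This is a sketch in Python; the field names match the curl example above:

```python
import json

# Build the same request body as the curl example.
payload = {
    "objective_function": "sphere",
    "variables": ["x", "y"],
    "bounds": [[-10, 10], [-10, 10]],
    "max_evaluations": 2000,
    "explanation_level": 3,
}

# Client-side sanity checks before making the request.
assert len(payload["variables"]) == len(payload["bounds"])
assert 0 <= payload["explanation_level"] <= 5

body = json.dumps(payload)
```

Send body as the POST data with the same Authorization and Content-Type headers shown above.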

JavaScript Configuration

Configure Interpretable Intelligence from JavaScript by calling the REST API directly (a dedicated JavaScript SDK is coming soon):

JavaScript - Interpretable Intelligence
// Call the REST API via fetch
const response = await fetch('https://api.sematryx.com/v1/optimize', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${apiKey}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    objective_function: 'sphere',
    variables: ['x', 'y'],
    bounds: [[-5, 5], [-5, 5]],
    max_evaluations: 2000,
    explanation_level: 3
  })
})
const result = await response.json()
console.log(result.explanation)

Best Practices

  • Start with level 1-2: For most production use, level 1 (basic) or level 2 (detailed) gives useful context with little added overhead.
  • Use level 3-4 for debugging: When an optimization behaves unexpectedly, higher explanation levels expose the full decision trace and the alternatives considered.
  • Use level 4 for compliance: explanation_level=4 produces complete audit trails suitable for regulated industries.
  • Reserve level 5 for research: Level 5 logs every decision point with full traceability, at the highest compute cost.