MBE Getting Started Guide

Whether you're an API integration developer, DevOps engineer, or product manager, start here.

⏱ Estimated reading: 5-10 min | First API call: < 2 min


Table of Contents

  1. What is MBE?
  2. Quick Start (2-min Hello World)
  3. Core Concepts
  4. Integration Paths
  5. Create Your First Expert
  6. Evaluate Your Expert
  7. Safety & Compliance
  8. Next Steps
  Appendix: Glossary
  Appendix: FAQ

1. What is MBE?

MBE (Mises Behavior Engine) is an AI decision analysis system built on Austrian School praxeology — the study of purposeful human action.

Core principle: Every response should be traceable, verifiable, and trustworthy.

User Query → Intent Understanding → Expert Matching → Knowledge Retrieval → Response Generation → Self-Critique → Output
                                                                                    ↑
                                                                            15 Safety Guardrails

Key Capabilities

  Capability                 Description
  Dynamic Expert System      Upload knowledge base → auto-create domain expert → accept user queries
  Multi-layer Self-Critique  15 verification modules (factuality + safety + bias + privacy + emotional safety)
  Reliability Gate           Experts must pass zero-hallucination + correct-refusal + source-fidelity gates before going live
  Net Score Evaluation       Net Score = correct_rate − incorrect_rate; encourages "refuse when uncertain"
  Multi-channel Output       REST API / WebSocket / MCP / Smart speakers / Home Assistant

2. Quick Start (2-min Hello World)

Prerequisites

  • An MBE account (Register here)
  • API Key (generate in the Developer Console)

2.1 Using cURL

curl -X POST https://mbe.hi-maker.com/api/expert/ask \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "What are the legal consequences of breach of contract?"}'

2.2 Using Python

import requests

API_KEY = "YOUR_API_KEY"

response = requests.post(
    "https://mbe.hi-maker.com/api/expert/ask",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"query": "What are the legal consequences of breach of contract?"},
    timeout=30,  # fail fast instead of hanging on network issues
)

data = response.json()
print(f"Expert: {data['expert']}")
print(f"Answer: {data['answer']}")
print(f"Sources: {data['sources']}")

2.3 Using JavaScript/Node.js

const response = await fetch("https://mbe.hi-maker.com/api/expert/ask", {
  method: "POST",
  headers: {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ query: "What are the legal consequences of breach of contract?" }),
});

const data = await response.json();
console.log(`Expert: ${data.expert}`);
console.log(`Answer: ${data.answer}`);

Response Structure

{
  "success": true,
  "expert": "Civil Law Expert",
  "answer": "The legal consequences of breach of contract primarily include...",
  "sources": [
    {"name": "Civil Code Article 577", "confidence": 0.95}
  ],
  "confidence": 0.92
}
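
A minimal consumer of this response shape, assuming only the fields shown above (`success`, `answer`, `confidence`, `sources`). The failure payload and the 0.7 confidence cutoff are illustrative assumptions, not documented behavior:

```python
def handle_ask_response(data: dict, min_confidence: float = 0.7) -> str:
    """Extract an answer from an /api/expert/ask response, flagging weak ones."""
    if not data.get("success"):
        # Error payload shape is an assumption; this guide shows only success.
        raise RuntimeError(f"MBE request failed: {data}")
    answer = data["answer"]
    confidence = data.get("confidence", 0.0)
    sources = data.get("sources", [])
    # Treat low-confidence or source-free answers with caution downstream.
    if confidence < min_confidence or not sources:
        answer = "[low confidence] " + answer
    return answer
```

Pair this with the §2.2 request: `print(handle_ask_response(response.json()))`.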

3. Core Concepts

3.1 Expert

An MBE "Expert" is a combination of Knowledge Base + System Prompt + Behavioral Constraints.

┌─────────────────────────────────────┐
│          Dynamic Expert              │
├─────────────────────────────────────┤
│  📚 Knowledge Base — PDF/TXT/QA     │
│  🎯 System Prompt — Role & bounds   │
│  🔒 Constraints — Gate + Critique   │
│  📊 Evaluation — Score + Trust      │
└─────────────────────────────────────┘

3.2 Self-Critique

Every response passes through 15 verification modules:

  Module      Function                                                               Category
  SC-1~SC-11  Core quality checks (logic, memory, consistency, routing, knowledge)  Quality
  SC-12       Harmful content detection                                              Safety
  SC-13       Privacy leak detection (PII)                                           Safety
  SC-14       Bias detection (gender, race, age)                                     Safety
  SC-15       Emotional safety (crisis detection)                                    Safety
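
Conceptually, a multi-module critique pass composes independent checks and passes a response only if every module passes. The sketch below is illustrative only; MBE does not expose the internals of SC-1 through SC-15 in this guide, so the module interface here is an assumption:

```python
from typing import Callable

# A critique module takes the response text and returns True if it passes.
CritiqueModule = Callable[[str], bool]

def run_self_critique(response: str, modules: dict[str, CritiqueModule]) -> dict:
    """Run every module over the response; pass only if all modules pass."""
    results = {name: check(response) for name, check in modules.items()}
    return {"passed": all(results.values()), "results": results}
```

For example, with two toy stand-ins for SC-12 (harmful content) and SC-13 (PII), a response containing an email address would fail the overall critique.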

3.3 Reliability Gate

Experts must pass a two-level gate before going live:

Level 1 Gate (per-metric thresholds):
  ✅ Citation integrity — 100% of responses carry citations
  ✅ Zero hallucination — 100% hallucination-free
  ✅ Correct refusal    — ≥ 95% of out-of-scope questions refused
  ✅ Source fidelity    — ≥ 95% faithful to the knowledge base

Level 2 Gate:
  ✅ Overall capability score ≥ 85
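
The gate decision above reduces to a simple conjunction. This sketch infers the metric names and pass rule from this section alone; the real gate API may name and weigh metrics differently:

```python
# Per-metric Level 1 thresholds, as described in this section.
LEVEL1_THRESHOLDS = {
    "citation_integrity": 1.00,
    "zero_hallucination": 1.00,
    "correct_refusal": 0.95,
    "source_fidelity": 0.95,
}

def passes_gate(metrics: dict, capability_score: float) -> bool:
    """An expert goes live only if every Level 1 metric meets its threshold
    AND the Level 2 overall capability score is at least 85."""
    level1 = all(metrics.get(k, 0.0) >= t for k, t in LEVEL1_THRESHOLDS.items())
    level2 = capability_score >= 85
    return level1 and level2
```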

3.4 Net Score

Net Score = correct_rate − incorrect_rate

Three-class evaluation:
  correct    — Answer is correct
  incorrect  — Answer is wrong (including hallucinations)
  uncertain  — Actively refuses / expresses uncertainty  ← Encouraged behavior

Calibration = uncertain / (uncertain + incorrect)
  → Measures whether the expert chooses to refuse rather than fabricate
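
The two formulas above can be computed directly from a list of three-class labels:

```python
from collections import Counter

def net_score(labels: list[str]) -> float:
    """Net Score = correct_rate − incorrect_rate, over 'correct' /
    'incorrect' / 'uncertain' labels."""
    counts = Counter(labels)
    return (counts["correct"] - counts["incorrect"]) / len(labels)

def calibration(labels: list[str]) -> float:
    """Calibration = uncertain / (uncertain + incorrect): the share of
    non-correct answers where the expert refused rather than fabricated."""
    counts = Counter(labels)
    denom = counts["uncertain"] + counts["incorrect"]
    return counts["uncertain"] / denom if denom else 1.0
```

With 7 correct, 1 incorrect, and 2 uncertain answers out of 10, Net Score is 0.6 and Calibration is 2/3.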

4. Integration Paths

4a. API Integration (Recommended)

Best for: Teams that want quick integration without managing infrastructure.

Your App  →  HTTPS  →  MBE API  →  Dynamic Expert  →  Response

Steps:

  1. Register an account and get an API Key
  2. Upload knowledge base and create experts via the admin dashboard
  3. Call /api/expert/ask to ask questions
  4. (Optional) Call /api/evaluation/net-score to assess quality

Full API docs: API Reference
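
Steps 3 and 4 can be sketched in Python. The ask endpoint matches §2.2 and the net-score path follows §6; response JSON is returned raw here because the exact schemas live in the API Reference:

```python
import requests

BASE = "https://mbe.hi-maker.com"

def ask_and_score(api_key: str, expert_id: str, query: str, session=None):
    """Ask a question, then fetch the expert's Net Score evaluation."""
    session = session or requests.Session()
    headers = {"Authorization": f"Bearer {api_key}"}
    answer = session.post(f"{BASE}/api/expert/ask", headers=headers,
                          json={"query": query}, timeout=30).json()
    score = session.get(f"{BASE}/api/evaluation/net-score/{expert_id}",
                        headers=headers, timeout=30).json()
    return answer, score
```

Passing in a `session` keeps connections pooled across calls and makes the helper easy to stub in tests.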

4b. Self-hosted Deployment

Best for: Teams with strict data privacy requirements.

Prerequisites:

  Component       Minimum Version
  Docker Desktop  24.x
  Python          3.11+
  PostgreSQL      15+
  Redis           7+

Quick Start:

# 1. Clone the repository
git clone https://your-gitea-server/mises/mises-behavior-engine.git
cd mises-behavior-engine

# 2. Configure environment variables
cp .env.example .env.development.local
# Edit .env.development.local with your LLM API keys

# 3. Start development environment
docker compose -f docker-compose.dev.yml up -d

# 4. Verify
curl http://localhost:8001/api/health
# → {"status": "ok", "version": "..."}

4c. MCP Protocol Integration

Best for: AI clients supporting MCP (Cursor, Claude Desktop, etc.).

{
  "mcpServers": {
    "mbe": {
      "url": "https://mbe.hi-maker.com/mcp/sse",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}

5. Create Your First Expert

# Step 1: Create expert
curl -X POST https://mbe.hi-maker.com/api/experts \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Legal Advisor",
    "description": "AI assistant specializing in civil and commercial law",
    "industry": "legal"
  }'

# Step 2: Upload knowledge base
curl -X POST https://mbe.hi-maker.com/api/experts/{expert_id}/knowledge-base \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "file=@contract_law.pdf"

# Step 3: Run evaluation gate
curl -X POST https://mbe.hi-maker.com/api/evaluation/gate \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"expert_id": "{expert_id}"}'

6. Evaluate Your Expert

  Evaluation Type    Purpose                   API
  Gate Check         Minimum quality bar       POST /api/evaluation/gate
  Net Score          Core quality metric       GET  /api/evaluation/net-score/{id}
  6D Benchmark       Comprehensive assessment  POST /api/evaluation/benchmark
  Multi-turn Safety  Attack resistance         POST /api/evaluation/multi-turn-safety
  Behavioral Audit   Ongoing monitoring        POST /api/audit/conversation

7. Safety & Compliance

MBE implements a 4-layer security architecture:

  Layer  Mechanism           Description                                            Coverage
  L1     Input Filter        6 injection pattern types                              API Gateway
  L2     Context Isolation   REFERENCE markers separate user input from knowledge   Expert QA
  L3     Behavior Monitor    Detects role-breaking, executable content, info leaks  Expert QA
  L4     Continuous Testing  98 multi-turn safety test cases × randomized runs      Test Framework
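
An L1-style input filter can be sketched as pattern matching ahead of the expert. The six real injection pattern types are not enumerated in this guide, so the patterns below are examples only, not MBE's actual rules:

```python
import re

# Illustrative injection patterns; MBE's real L1 rules are not public here.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"reveal (the |your )?system prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches any known injection pattern."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```

A gateway would reject or quarantine flagged inputs before they reach Expert QA, where L2 and L3 provide defense in depth.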

Full details: Security Compliance Guide · System Card


8. Next Steps

  Goal                    Document
  Full API details        API Reference
  Prompt optimization     Prompt Engineering Guide
  Agent framework         Agent Framework
  Tool development        Tool Use Guide
  Evaluation methodology  Evaluation Methodology
  Pricing                 Pricing & Billing
  Hands-on examples       Cookbook

Appendix: Glossary

  Term           Definition
  Expert         Knowledge base + system prompt + Self-Critique config + HOPE preferences
  Self-Critique  15-module quality verification system
  Net Score      correct_rate − incorrect_rate (range [-1, +1])
  HOPE           Hyper-personalized Optimization for Preference-aligned Expertise
  TITANS         Memory-enhanced multi-turn conversation
  MIRAS          Multi-dimensional Intelligent Response Analysis System
  MCP            Model Context Protocol for tool integration
Full glossary: Glossary


Appendix: FAQ

Q: What LLMs does MBE support?

A: MBE supports multiple LLM providers through SmartRouter: Anthropic Claude, OpenAI GPT-4, DeepSeek, Alibaba Qwen, Zhipu GLM, and more. The router automatically selects the optimal model based on query complexity.

Q: Can I deploy MBE on-premises?

A: Yes. MBE supports Docker-based self-hosted deployment with full data isolation. Enterprise and Flagship plans include private deployment options.

Q: How does MBE handle data privacy?

A: MBE offers Zero Data Retention mode where conversation content is deleted immediately after response. All data is encrypted at rest (AES-256) and in transit (TLS 1.2+). See Security Compliance Guide.

Q: What industries are supported?

A: MBE has industry-specific configurations for Legal, Finance, Healthcare, and Education, with customizable compliance rules and safety modules for each.


See also: Getting Started Guide in Chinese (中文版入门指南)