ð€ èªåšååéŠéæç³»ç»
ç®æ : å°åšçº¿è¡šåæ°æ®èªåšå¯Œå ¥MBEïŒç±TITANSè®°å¿å¹¶é©±åšå šèªåšåæ¹è¿
å®ç°ç¶æ: â å·²å®ç° (2026-01-28)
ð å¿«éåŒå§
1. è¿è¡æ°æ®åºè¿ç§»
# PostgreSQL
psql -U postgres -d mbe -f migrations/003_add_feedback_tables.sql
2. é 眮ç¯å¢åé
# .env æä»¶æ·»å
FEEDBACK_SYNC_INTERVAL=300
FEEDBACK_AUTO_ANALYZE=true
GIT_PLATFORM=gitea
GIT_API_URL=http://localhost:3000/api/v1
GIT_REPO=zenglx01/mises-behavior-engine
GIT_TOKEN=your-git-token
3. å¯åšåæ¥æå¡
# æ¹åŒ1: ç¬ç«è¿è¡
python scripts/start_feedback_sync.py
# æ¹åŒ2: éæå°äž»åºçš (èªåšå¯åš)
# åš main.py äžå·²é
眮
4. API端ç¹
POST /api/feedback/auto/webhook/tencent- è Ÿè®¯ææ¡£WebhookGET /api/feedback/auto/submissions- æ¥è¯¢åéŠå衚POST /api/feedback/auto/sync/trigger- æåšè§Šå忥POST /api/feedback/auto/analyze/trigger- æåšè§ŠååæGET /api/feedback/auto/analyze/summary- è·ååææ±æ»
ð¯ ç³»ç»æ¶æ
ââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ
â åéŠæ¶éå± â
â ââââââââââââ ââââââââââââ ââââââââââââ â
â âè
Ÿè®¯ææ¡£è¡šâ âGoogle衚åâ âSlackæ¶æ¯ â â
â ââââââ¬ââââââ ââââââ¬ââââââ ââââââ¬ââââââ â
âââââââââŒââââââââââââââŒââââââââââââââŒââââââââââââââââââââââ
â â â
âââââââââââââââŒââââââââââââââ
â
ââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ
â æ°æ®åæ¥æå¡ïŒæ°å¢ïŒ â
â ââââââââââââââââââââââââââââââââââââââââââââââââ â
â â feedback_sync_service.py â â
â â â¢ å®æ¶æåè¡šåæ°æ® â â
â â â¢ æ°æ®æž
æŽåæ åå â â
â â ⢠å»éåå¢é忥 â â
â ââââââââââââââââââââ¬ââââââââââââââââââââââââââââ â
âââââââââââââââââââââââŒââââââââââââââââââââââââââââââââââââ
â
ââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ
â MBE åéŠæ°æ®åº â
â ââââââââââââââââââââââââââââââââââââââââââââââââ â
â â feedback_submissions (PostgreSQL) â â
â â ⢠åå§æäº€æ°æ® â â
â â ⢠æäº€æ¶éŽãçšæ·ãè¯å â â
â â ⢠é®é¢æè¿°ãäŒå
级 â â
â ââââââââââââââââââââ¬ââââââââââââââââââââââââââââ â
âââââââââââââââââââââââŒââââââââââââââââââââââââââââââââââââ
â
ââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ
â TITANS è®°å¿ç³»ç»ïŒéæïŒ â
â ââââââââââââââââââââââââââââââââââââââââââââââââ â
â â â¢ çšæ·åéŠåéåååš â â
â â ⢠é®é¢æš¡åŒè¯å« â â
â â ⢠åå²åéŠå
³è â â
â â ⢠äžå®¶æ¹è¿å»ºè®®è®°å¿ â â
â ââââââââââââââââââââ¬ââââââââââââââââââââââââââââ â
âââââââââââââââââââââââŒââââââââââââââââââââââââââââââââââââ
â
ââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ
â AI èªåšåæåŒæïŒæ°å¢ïŒ â
â ââââââââââââââââââââââââââââââââââââââââââââââââ â
â â feedback_analyzer.py â â
â â ⢠èªåšé®é¢åç±» â â
â â ⢠äŒå
级æºèœæåº â â
â â ⢠çžäŒŒé®é¢èå â â
â â â¢ æ ¹å åæ â â
â â ⢠æ¹è¿æ¹æ¡çæ â â
â ââââââââââââââââââââ¬ââââââââââââââââââââââââââââ â
âââââââââââââââââââââââŒââââââââââââââââââââââââââââââââââââ
â
ââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ
â èªåšåå·¥åç³»ç»ïŒæ°å¢ïŒ â
â ââââââââââââââââââââââââââââââââââââââââââââââââ â
â â ⢠èªåšå建GitHub Issues / Gitea Issues â â
â â ⢠èªåšåé
äŒå
çº§åæ çŸ â â
â â ⢠èªåšå
³èçžå
³ä»£ç æä»¶ â â
â â ⢠èªåšçæä¿®å€å»ºè®® â â
â ââââââââââââââââââââ¬ââââââââââââââââââââââââââââ â
âââââââââââââââââââââââŒââââââââââââââââââââââââââââââââââââ
â
ââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ
â Cursor AI èªåšä¿®å€ïŒæªæ¥ïŒ â
â ⢠AI èªåšçæä¿®å€ä»£ç â
â ⢠èªåšè¿è¡æµè¯éªè¯ â
â ⢠èªåšæäº€PR â
â â¢ äººå·¥å®¡æ žååå¹¶ â
ââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ
ð§ å®ç°æ¹æ¡
Phase 1: æ°æ®åæ¥æå¡ïŒç«å³å®ç°ïŒâ
æä»¶ç»æïŒ
src/feedback/
âââ __init__.py
âââ sync_service.py # æ°æ®åæ¥æå¡
âââ models.py # æ°æ®æš¡å
âââ analyzers.py # AIåæåŒæ
âââ integrations/
â âââ tencent_docs.py # è
Ÿè®¯ææ¡£API
â âââ google_forms.py # Google Forms API
â âââ slack_api.py # Slack API
âââ automation/
âââ issue_creator.py # èªåšå建Issue
âââ report_generator.py # èªåšçææ¥å
1. æ°æ®åæ¥æå¡
src/feedback/sync_service.py
"""
åéŠæ°æ®èªåšåæ¥æå¡
宿¶ä»åšçº¿è¡šåæåæ°æ®å¹¶å¯Œå
¥MBE
"""
import asyncio
import hashlib
from datetime import datetime, timedelta
from typing import List, Dict, Optional
from loguru import logger
from sqlalchemy import select, and_
from src.core.database import AsyncSessionLocal
from src.feedback.models import FeedbackSubmission
from src.feedback.integrations.tencent_docs import TencentDocsClient
from src.feedback.integrations.google_forms import GoogleFormsClient
from src.core.titans_conversation_memory import TITANSMemoryIntegration
class FeedbackSyncService:
"""åéŠæ°æ®åæ¥æå¡"""
def __init__(self):
self.tencent_client = TencentDocsClient()
self.google_client = GoogleFormsClient()
self.titans_memory = TITANSMemoryIntegration()
self.sync_interval = 300 # 5åéåæ¥äžæ¬¡
async def start(self):
"""å¯åšåæ¥æå¡"""
logger.info("ð åéŠåæ¥æå¡å¯åš")
while True:
try:
await self.sync_all_sources()
await asyncio.sleep(self.sync_interval)
except Exception as e:
logger.error(f"åæ¥å€±èŽ¥: {e}")
await asyncio.sleep(60) # 倱莥åçåŸ
1åééè¯
async def sync_all_sources(self):
"""åæ¥æææ°æ®æº"""
logger.info("ð¥ åŒå§åæ¥åéŠæ°æ®...")
# 忥è
Ÿè®¯ææ¡£
tencent_count = await self.sync_tencent_docs()
# 忥Google Forms
google_count = await self.sync_google_forms()
# 忥Slackæ¶æ¯
slack_count = await self.sync_slack_messages()
total = tencent_count + google_count + slack_count
logger.info(f"â
忥宿: æ°å¢ {total} æ¡åéŠ")
# è§ŠåAIåæ
if total > 0:
await self.trigger_analysis()
async def sync_tencent_docs(self) -> int:
"""忥è
Ÿè®¯ææ¡£è¡šåæ°æ®"""
try:
# è·å衚åé
眮
form_id = await self._get_config("tencent_form_id")
if not form_id:
return 0
# æåæ°æ®
responses = await self.tencent_client.get_responses(form_id)
logger.info(f"ð è
Ÿè®¯ææ¡£: è·åå° {len(responses)} æ¡ååº")
# ä¿åå°æ°æ®åº
new_count = 0
async with AsyncSessionLocal() as session:
for response in responses:
# æ£æ¥æ¯åŠå·²ååšïŒåºäºååžå»éïŒ
response_hash = self._calculate_hash(response)
exists = await session.execute(
select(FeedbackSubmission).where(
FeedbackSubmission.response_hash == response_hash
)
)
if exists.scalar_one_or_none():
continue
# å建æ°è®°åœ
feedback = FeedbackSubmission(
source="tencent_docs",
response_hash=response_hash,
submitted_at=response.get("submitted_at"),
tester_name=response.get("name"),
test_duration=response.get("duration"),
# AI对è¯è¯å
expert_matching_score=response.get("expert_matching"),
answer_quality_score=response.get("answer_quality"),
context_understanding=response.get("context_understanding"),
response_speed=response.get("response_speed"),
# ææ¡£è¯å
doc_readability_score=response.get("doc_readability"),
# é®é¢æè¿°
high_priority_issues=response.get("high_priority"),
medium_priority_issues=response.get("medium_priority"),
low_priority_issues=response.get("low_priority"),
# æ»äœè¯ä»·
overall_score=response.get("overall_score"),
recommendation=response.get("recommendation"),
summary=response.get("summary"),
# åå§æ°æ®
raw_data=response
)
session.add(feedback)
new_count += 1
# åå
¥TITANSè®°å¿
await self._store_in_titans(feedback)
await session.commit()
logger.info(f"ðŸ è
Ÿè®¯ææ¡£: æ°å¢ {new_count} æ¡åéŠ")
return new_count
except Exception as e:
logger.error(f"è
Ÿè®¯ææ¡£åæ¥å€±èŽ¥: {e}")
return 0
async def sync_google_forms(self) -> int:
"""忥Google Formsæ°æ®"""
try:
form_id = await self._get_config("google_form_id")
if not form_id:
return 0
responses = await self.google_client.get_responses(form_id)
logger.info(f"ð Google Forms: è·åå° {len(responses)} æ¡ååº")
# 类䌌çå€çé»èŸ...
return 0
except Exception as e:
logger.error(f"Google Formsåæ¥å€±èŽ¥: {e}")
return 0
async def sync_slack_messages(self) -> int:
"""忥Slacké¢éäžçåéŠæ¶æ¯"""
# å®ç°Slackæ¶æ¯åæ¥
return 0
async def _store_in_titans(self, feedback: FeedbackSubmission):
"""å°åéŠåå
¥TITANSè®°å¿ç³»ç»"""
try:
# æå»ºè®°å¿ææ¬
memory_text = f"""
æµè¯åéŠ - {feedback.submitted_at.strftime('%Y-%m-%d %H:%M')}
æµè¯äºº: {feedback.tester_name}
æ»äœè¯å: {feedback.overall_score}/10
æšè床: {feedback.recommendation}
é«äŒå
级é®é¢:
{feedback.high_priority_issues}
äžäŒå
级é®é¢:
{feedback.medium_priority_issues}
æ»ç»: {feedback.summary}
"""
# åå
¥TITANSïŒäœ¿çšäžéšç"åéŠè®°å¿"äžå®¶ïŒ
await self.titans_memory.add_conversation(
user_id="feedback_system",
expert_name="æµè¯åéŠåæäžå®¶",
user_message=memory_text,
assistant_message="已记åœå¹¶åææ€åéŠ",
metadata={
"feedback_id": feedback.id,
"priority": self._calculate_priority(feedback),
"issues_count": self._count_issues(feedback),
}
)
logger.debug(f"ðŸ TITANSè®°å¿: å·²ååšåéŠ #{feedback.id}")
except Exception as e:
logger.error(f"TITANSè®°å¿ååšå€±èŽ¥: {e}")
def _calculate_hash(self, response: Dict) -> str:
"""计ç®ååºååžïŒçšäºå»éïŒ"""
key = f"{response.get('submitted_at')}_{response.get('name')}_{response.get('summary')}"
return hashlib.md5(key.encode()).hexdigest()
def _calculate_priority(self, feedback: FeedbackSubmission) -> str:
"""计ç®åéŠäŒå
级"""
if feedback.high_priority_issues and len(feedback.high_priority_issues) > 100:
return "critical"
elif feedback.overall_score < 6:
return "high"
elif feedback.overall_score < 8:
return "medium"
else:
return "low"
def _count_issues(self, feedback: FeedbackSubmission) -> int:
"""ç»è®¡é®é¢æ°é"""
count = 0
if feedback.high_priority_issues:
count += feedback.high_priority_issues.count('\n') + 1
if feedback.medium_priority_issues:
count += feedback.medium_priority_issues.count('\n') + 1
if feedback.low_priority_issues:
count += feedback.low_priority_issues.count('\n') + 1
return count
async def trigger_analysis(self):
"""è§ŠåAIèªåšåæ"""
from src.feedback.analyzers import FeedbackAnalyzer
analyzer = FeedbackAnalyzer()
await analyzer.analyze_recent_feedback()
async def _get_config(self, key: str) -> Optional[str]:
"""è·åé
眮"""
# 仿°æ®åºæç¯å¢åé读åé
眮
import os
return os.getenv(key.upper())
2. æ°æ®æš¡å
src/feedback/models.py
"""
åéŠæ°æ®æš¡å
"""
from datetime import datetime
from sqlalchemy import Column, Integer, String, Text, Float, DateTime, JSON
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class FeedbackSubmission(Base):
"""æµè¯åéŠæäº€è®°åœ"""
__tablename__ = "feedback_submissions"
id = Column(Integer, primary_key=True)
# å
æ°æ®
source = Column(String(50)) # tencent_docs, google_forms, slack
response_hash = Column(String(64), unique=True, index=True)
submitted_at = Column(DateTime, default=datetime.utcnow)
synced_at = Column(DateTime, default=datetime.utcnow)
# æµè¯äººä¿¡æ¯
tester_name = Column(String(100))
test_duration = Column(String(50))
completed_tests = Column(JSON) # å®æçæµè¯é¡¹å衚
# AI对è¯è¯å
expert_matching_score = Column(Integer) # 1-5
answer_quality_score = Column(Integer) # 1-5
context_understanding = Column(String(50)) # èœ/äžèœ/ææ¶èœ
response_speed = Column(String(50)) # <2ç§ / 2-5ç§ / 5-10ç§ / >10ç§
test_questions_results = Column(Text)
# ææ¡£è¯å
doc_readability_score = Column(Integer) # 1-5
docs_viewed = Column(JSON)
doc_issues = Column(Text)
# UI/UXè¯å
ui_score = Column(Integer, nullable=True)
ux_issues = Column(Text, nullable=True)
# æ§èœè¯å
performance_score = Column(Integer, nullable=True)
performance_issues = Column(Text, nullable=True)
# é®é¢æ±æ»
high_priority_issues = Column(Text)
medium_priority_issues = Column(Text)
low_priority_issues = Column(Text)
# æ»äœè¯ä»·
overall_score = Column(Integer) # 1-10
recommendation = Column(String(50)) # æšè/ä¿®å€å/æäž
summary = Column(Text)
# åæç»æïŒAIçæïŒ
ai_analyzed = Column(DateTime, nullable=True)
issue_category = Column(String(100), nullable=True)
priority = Column(String(20), nullable=True) # critical, high, medium, low
root_cause = Column(Text, nullable=True)
fix_suggestion = Column(Text, nullable=True)
related_files = Column(JSON, nullable=True)
# å€çç¶æ
status = Column(String(50), default="pending") # pending, analyzing, issue_created, fixed, verified
github_issue_url = Column(String(500), nullable=True)
fixed_at = Column(DateTime, nullable=True)
# åå§æ°æ®
raw_data = Column(JSON)
def __repr__(self):
return f"<FeedbackSubmission #{self.id} from {self.tester_name} score={self.overall_score}>"
class FeedbackPattern(Base):
"""åéŠæš¡åŒè¯å«ïŒèåçžäŒŒé®é¢ïŒ"""
__tablename__ = "feedback_patterns"
id = Column(Integer, primary_key=True)
pattern_name = Column(String(200))
category = Column(String(100))
occurrence_count = Column(Integer, default=1)
first_seen = Column(DateTime, default=datetime.utcnow)
last_seen = Column(DateTime, default=datetime.utcnow)
# å
³èçåéŠIDå衚
feedback_ids = Column(JSON)
# æš¡åŒæè¿°
description = Column(Text)
root_cause = Column(Text, nullable=True)
fix_priority = Column(String(20))
# å€çç¶æ
status = Column(String(50), default="identified")
github_issue_url = Column(String(500), nullable=True)
3. è Ÿè®¯ææ¡£éæ
src/feedback/integrations/tencent_docs.py
"""
è
Ÿè®¯ææ¡£APIéæ
"""
import aiohttp
from typing import List, Dict
from loguru import logger
class TencentDocsClient:
"""è
Ÿè®¯ææ¡£å®¢æ·ç«¯"""
def __init__(self):
self.api_base = "https://docs.qq.com/api"
self.access_token = None # éèŠé
眮
async def get_responses(self, form_id: str) -> List[Dict]:
"""
è·å衚åååºæ°æ®
泚æ: è
Ÿè®¯ææ¡£éèŠæåšå¯ŒåºExcelç¶åäžäŒ
æäœ¿çšè
Ÿè®¯äºAPIïŒéèŠäŒäžè®€è¯ïŒ
è¿éæäŸäž€ç§æ¹æ¡:
æ¹æ¡1: WebhookïŒæšèïŒ- 衚åæäº€æ¶æšéå°MBE
æ¹æ¡2: å®æå¯ŒåºExcelå¹¶èªåšè§£æ
"""
# æ¹æ¡2瀺äŸ: è§£æå¯ŒåºçExcelæä»¶
return await self._parse_excel_export()
async def _parse_excel_export(self) -> List[Dict]:
"""
è§£æå¯ŒåºçExcelæä»¶
å讟æä»¶è¢«èªåšäžäŒ å° /data/feedback/latest_export.xlsx
"""
try:
import pandas as pd
file_path = "/data/feedback/latest_export.xlsx"
df = pd.read_excel(file_path)
responses = []
for _, row in df.iterrows():
response = {
"submitted_at": row.get("æäº€æ¶éŽ"),
"name": row.get("å§å"),
"duration": row.get("æµè¯æ¶é¿"),
"expert_matching": row.get("äžå®¶å¹é
åç¡®æ§"),
"answer_quality": row.get("åç莚é"),
"context_understanding": row.get("äžäžæçè§£"),
"response_speed": row.get("ååºé床"),
"doc_readability": row.get("ææ¡£å¯è¯»æ§"),
"high_priority": row.get("é«äŒå
级é®é¢"),
"medium_priority": row.get("äžäŒå
级é®é¢"),
"low_priority": row.get("äœäŒå
级é®é¢"),
"overall_score": row.get("æ»äœè¯å"),
"recommendation": row.get("æ¯åŠæšèååž"),
"summary": row.get("äžå¥è¯æ»ç»"),
}
responses.append(response)
return responses
except Exception as e:
logger.error(f"Excelè§£æå€±èŽ¥: {e}")
return []
async def setup_webhook(self, form_id: str, webhook_url: str):
"""
讟眮WebhookïŒåŠæè
Ÿè®¯ææ¡£æ¯æïŒ
åœææ°æäº€æ¶ïŒèªåšæšéå°MBEçwebhook端ç¹
"""
# å®ç°Webhooké
眮
pass
class TencentDocsWebhook:
"""å€çè
Ÿè®¯ææ¡£Webhookæšé"""
@staticmethod
async def handle_submission(data: Dict):
"""å€ç衚åæäº€æšé"""
from src.feedback.sync_service import FeedbackSyncService
service = FeedbackSyncService()
# çŽæ¥å€çåæ¡æäº€
feedback = await service._create_feedback_from_response(
source="tencent_docs_webhook",
response=data
)
logger.info(f"â
Webhook: æ¶å°æ°åéŠ #{feedback.id}")
4. AIèªåšåæåŒæ
src/feedback/analyzers.py
"""
AIåéŠèªåšåæåŒæ
"""
from datetime import datetime, timedelta
from typing import List, Dict
from loguru import logger
from sqlalchemy import select, and_
from src.core.database import AsyncSessionLocal
from src.feedback.models import FeedbackSubmission, FeedbackPattern
from src.llm.base import LLMClient
from src.core.titans_conversation_memory import TITANSMemoryIntegration
class FeedbackAnalyzer:
"""åéŠæºèœåæåš"""
def __init__(self):
self.llm = LLMClient()
self.titans = TITANSMemoryIntegration()
async def analyze_recent_feedback(self, hours: int = 24):
"""åææè¿çåéŠ"""
logger.info(f"ð åŒå§åææè¿ {hours} å°æ¶çåéŠ...")
async with AsyncSessionLocal() as session:
# è·åæªåæçåéŠ
since = datetime.utcnow() - timedelta(hours=hours)
result = await session.execute(
select(FeedbackSubmission).where(
and_(
FeedbackSubmission.submitted_at >= since,
FeedbackSubmission.ai_analyzed.is_(None)
)
)
)
feedbacks = result.scalars().all()
if not feedbacks:
logger.info("â
没æåŸ
åæçåéŠ")
return
logger.info(f"ð åŸ
åæåéŠ: {len(feedbacks)} æ¡")
# éæ¡åæ
for feedback in feedbacks:
await self._analyze_single(feedback, session)
# æš¡åŒè¯å«
await self._identify_patterns(feedbacks, session)
# çææ±æ»æ¥å
await self._generate_summary_report(feedbacks)
await session.commit()
logger.info("â
åéŠåæå®æ")
async def _analyze_single(self, feedback: FeedbackSubmission, session):
"""åæåæ¡åéŠ"""
try:
# æå»ºåææç€ºè¯
prompt = f"""
äœ æ¯äžäžªäžäžç蜯件æµè¯åéŠåæäžå®¶ã请åæä»¥äžçšæ·åéŠ:
æµè¯äºº: {feedback.tester_name}
æ»äœè¯å: {feedback.overall_score}/10
æšè床: {feedback.recommendation}
AI对è¯è¯å:
- äžå®¶å¹é
: {feedback.expert_matching_score}/5
- åç莚é: {feedback.answer_quality_score}/5
- äžäžæçè§£: {feedback.context_understanding}
- ååºé床: {feedback.response_speed}
ææ¡£è¯å:
- å¯è¯»æ§: {feedback.doc_readability_score}/5
é«äŒå
级é®é¢:
{feedback.high_priority_issues or "æ "}
äžäŒå
级é®é¢:
{feedback.medium_priority_issues or "æ "}
äœäŒå
级é®é¢:
{feedback.low_priority_issues or "æ "}
äžå¥è¯æ»ç»:
{feedback.summary}
请åæå¹¶è¿åJSONæ ŒåŒ:
{{
"issue_category": "åèœ/æ§èœ/ææ¡£/UI/å
¶ä»",
"priority": "critical/high/medium/low",
"root_cause": "æ ¹æ¬åå åæ",
"fix_suggestion": "ä¿®å€å»ºè®®",
"related_files": ["å¯èœçžå
³çæä»¶è·¯åŸ"],
"estimated_effort": "é¢è®¡å·¥äœéïŒå°æ¶ïŒ"
}}
"""
# è°çšLLMåæ
response = await self.llm.chat(prompt)
# è§£æç»æ
import json
analysis = json.loads(response)
# æŽæ°åéŠè®°åœ
feedback.ai_analyzed = datetime.utcnow()
feedback.issue_category = analysis.get("issue_category")
feedback.priority = analysis.get("priority")
feedback.root_cause = analysis.get("root_cause")
feedback.fix_suggestion = analysis.get("fix_suggestion")
feedback.related_files = analysis.get("related_files", [])
logger.info(f"â
åæå®æ: #{feedback.id} - {analysis['priority']} - {analysis['issue_category']}")
# åŠææ¯é«äŒå
级ïŒèªåšå建Issue
if analysis["priority"] in ["critical", "high"]:
await self._auto_create_issue(feedback, analysis)
except Exception as e:
logger.error(f"åæå€±èŽ¥: {e}")
async def _identify_patterns(self, feedbacks: List[FeedbackSubmission], session):
"""è¯å«åéŠæš¡åŒïŒçžäŒŒé®é¢èåïŒ"""
logger.info("ð è¯å«åéŠæš¡åŒ...")
# 䜿çšTITANSçè¯ä¹æçŽ¢èœåæŸçžäŒŒé®é¢
# å®ç°èç±»ç®æ³
# ...
pass
async def _auto_create_issue(self, feedback: FeedbackSubmission, analysis: Dict):
"""èªåšå建GitHub/Gitea Issue"""
from src.feedback.automation.issue_creator import IssueCreator
creator = IssueCreator()
issue_url = await creator.create_issue(
title=f"[æµè¯åéŠ] {feedback.summary[:50]}",
body=self._format_issue_body(feedback, analysis),
labels=[analysis["priority"], analysis["issue_category"]],
assignees=[] # 坿 ¹æ®categoryèªåšåé
)
feedback.github_issue_url = issue_url
feedback.status = "issue_created"
logger.info(f"ð å·²å建Issue: {issue_url}")
def _format_issue_body(self, feedback: FeedbackSubmission, analysis: Dict) -> str:
"""æ ŒåŒåIssueæ£æ"""
return f"""
## æµè¯åéŠè¯Šæ
**åéŠæ¥æº**: {feedback.source}
**æäº€æ¶éŽ**: {feedback.submitted_at.strftime('%Y-%m-%d %H:%M')}
**æµè¯äºº**: {feedback.tester_name}
**æ»äœè¯å**: {feedback.overall_score}/10
**æšè床**: {feedback.recommendation}
---
## AIåæ
**é®é¢åç±»**: {analysis['issue_category']}
**äŒå
级**: {analysis['priority']}
**é¢è®¡å·¥äœé**: {analysis.get('estimated_effort', 'åŸ
è¯äŒ°')}
**æ ¹æ¬åå **:
{analysis['root_cause']}
**ä¿®å€å»ºè®®**:
{analysis['fix_suggestion']}
**å¯èœçžå
³æä»¶**:
{chr(10).join(f'- `{f}`' for f in analysis.get('related_files', []))}
---
## 诊ç»é®é¢
### é«äŒå
级é®é¢
{feedback.high_priority_issues or 'æ '}
### äžäŒå
级é®é¢
{feedback.medium_priority_issues or 'æ '}
### äœäŒå
级é®é¢
{feedback.low_priority_issues or 'æ '}
---
## æµè¯æ°æ®
**AI对è¯è¯å**:
- äžå®¶å¹é
: {feedback.expert_matching_score}/5
- åç莚é: {feedback.answer_quality_score}/5
- äžäžæçè§£: {feedback.context_understanding}
- ååºé床: {feedback.response_speed}
**ææ¡£è¯å**:
- å¯è¯»æ§: {feedback.doc_readability_score}/5
---
**åå§åéŠID**: #{feedback.id}
"""
async def _generate_summary_report(self, feedbacks: List[FeedbackSubmission]):
"""çææ±æ»æ¥å"""
# ç»è®¡æ°æ®
# çæè¶å¿åæ
# åéç»å¢é
pass
5. èªåšIssueå建
src/feedback/automation/issue_creator.py
"""
èªåšå建GitHub/Gitea Issue
"""
import aiohttp
from typing import List
from loguru import logger
class IssueCreator:
"""Issueèªåšå建åš"""
def __init__(self):
# ä»ç¯å¢åé读åé
眮
import os
self.git_platform = os.getenv("GIT_PLATFORM", "gitea") # gitea or github
self.api_url = os.getenv("GIT_API_URL", "http://localhost:3000/api/v1")
self.repo = os.getenv("GIT_REPO", "zenglx01/mises-behavior-engine")
self.token = os.getenv("GIT_TOKEN")
async def create_issue(
self,
title: str,
body: str,
labels: List[str] = None,
assignees: List[str] = None
) -> str:
"""å建Issueå¹¶è¿åURL"""
try:
url = f"{self.api_url}/repos/{self.repo}/issues"
headers = {
"Authorization": f"token {self.token}",
"Content-Type": "application/json"
}
data = {
"title": title,
"body": body,
"labels": labels or [],
"assignees": assignees or []
}
async with aiohttp.ClientSession() as session:
async with session.post(url, headers=headers, json=data) as response:
if response.status == 201:
result = await response.json()
issue_url = result.get("html_url")
logger.info(f"â
Issueå建æå: {issue_url}")
return issue_url
else:
logger.error(f"Issueå建倱莥: {response.status}")
return None
except Exception as e:
logger.error(f"Issueå建倱莥: {e}")
return None
ð é 眮æä»¶
config/feedback_sync.yaml
# åéŠåæ¥é
眮
sync:
enabled: true
interval: 300 # ç§
sources:
tencent_docs:
enabled: true
form_id: "${TENCENT_FORM_ID}"
method: "excel_export" # excel_export æ webhook
export_path: "/data/feedback/latest_export.xlsx"
google_forms:
enabled: false
form_id: "${GOOGLE_FORM_ID}"
credentials_file: "/secrets/google-credentials.json"
slack:
enabled: true
channel: "#dev-testing"
bot_token: "${SLACK_BOT_TOKEN}"
analysis:
enabled: true
auto_analyze: true
llm_model: "deepseek-chat"
automation:
auto_create_issue: true
issue_priority_threshold: "high" # critical, high, medium, low
git_platform: "gitea" # gitea æ github
git_api_url: "http://localhost:3000/api/v1"
git_repo: "zenglx01/mises-behavior-engine"
git_token: "${GIT_TOKEN}"
titans:
store_in_memory: true
memory_expert: "æµè¯åéŠåæäžå®¶"
ð éšçœ²å䜿çš
1. æ°æ®åºè¿ç§»
# å建åéŠè¡š
alembic revision --autogenerate -m "add feedback tables"
alembic upgrade head
2. é 眮ç¯å¢åé
# .env æä»¶
TENCENT_FORM_ID=your_form_id
GOOGLE_FORM_ID=your_form_id
SLACK_BOT_TOKEN=your_slack_token
GIT_PLATFORM=gitea
GIT_API_URL=http://localhost:3000/api/v1
GIT_REPO=zenglx01/mises-behavior-engine
GIT_TOKEN=your_git_token
3. å¯åšåæ¥æå¡
æ¹æ³A: äœäžºç¬ç«æå¡
# scripts/start_feedback_sync.py
import asyncio
from src.feedback.sync_service import FeedbackSyncService
async def main():
service = FeedbackSyncService()
await service.start()
if __name__ == "__main__":
asyncio.run(main())
è¿è¡:
python scripts/start_feedback_sync.py
æ¹æ³B: éæå°äž»åºçš
# src/main.py
from src.feedback.sync_service import FeedbackSyncService
@asynccontextmanager
async def lifespan(app: FastAPI):
# å¯åšæ¶
logger.info("ð å¯åšåéŠåæ¥æå¡...")
sync_service = FeedbackSyncService()
sync_task = asyncio.create_task(sync_service.start())
yield
# å
³éæ¶
sync_task.cancel()
logger.info("â
åéŠåæ¥æå¡å·²åæ¢")
app = FastAPI(lifespan=lifespan)
4. æ·»å API端ç¹ïŒWebhookæ¥æ¶ïŒ
# src/api/feedback.py
from fastapi import APIRouter, Request
from src.feedback.integrations.tencent_docs import TencentDocsWebhook
router = APIRouter(prefix="/api/feedback", tags=["feedback"])
@router.post("/webhook/tencent")
async def tencent_webhook(request: Request):
"""æ¥æ¶è
Ÿè®¯ææ¡£Webhookæšé"""
data = await request.json()
await TencentDocsWebhook.handle_submission(data)
return {"status": "ok"}
@router.get("/submissions")
async def get_submissions(limit: int = 50):
"""è·ååéŠå衚"""
async with AsyncSessionLocal() as session:
result = await session.execute(
select(FeedbackSubmission)
.order_by(FeedbackSubmission.submitted_at.desc())
.limit(limit)
)
submissions = result.scalars().all()
return submissions
@router.get("/analysis/summary")
async def get_analysis_summary():
"""è·ååææ±æ»"""
# è¿åç»è®¡æ°æ®ãè¶å¿åŸç
pass
ð äœ¿çšæµçš
宿Žèªåšåæµçš
1. çšæ·å¡«ååšçº¿è¡šå
â
2. æ°æ®åæ¥æå¡ïŒæ¯5åéïŒ
- ä»è
Ÿè®¯ææ¡£/Google Formsæåæ°æ®
- æéè¿Webhook宿¶æ¥æ¶
â
3. åå
¥PostgreSQLæ°æ®åº
- å»éæ£æ¥
- ååšåå§æ°æ®
â
4. TITANSè®°å¿ç³»ç»
- åéåååš
- 建ç«è¯ä¹çŽ¢åŒ
â
5. AIèªåšåæ
- é®é¢åç±»
- äŒå
级è¯äŒ°
- æ ¹å åæ
- ä¿®å€å»ºè®®
â
6. èªåšå建IssueïŒé«äŒå
级ïŒ
- çæIssueæ é¢åæ£æ
- èªåšææ çŸ
- å
³èä»£ç æä»¶
â
7. åŒå人åïŒæAIïŒä¿®å€
- æ¥çIssue
- ä¿®å€ä»£ç
- æäº€PR
â
8. èªåšéªè¯
- è¿è¡æµè¯
- æŽæ°Issueç¶æ
â
9. åéŠç»æµè¯äººå
- åééç¥
- 请æ±å€æµ
ð¯ Phase 2: é«çº§åèœïŒæªæ¥ïŒ
1. AIèªåšä¿®å€
# src/feedback/automation/auto_fixer.py
class AutoFixer:
"""AIèªåšä¿®å€åŒæïŒäžCursoréæïŒ"""
async def attempt_fix(self, issue: FeedbackSubmission):
"""å°è¯èªåšä¿®å€é®é¢"""
# 1. åæé®é¢
analysis = await self._analyze_issue(issue)
# 2. å®äœä»£ç
files = await self._locate_code(analysis)
# 3. çæä¿®å€ä»£ç
fix_code = await self._generate_fix(files, analysis)
# 4. å建PR忝
branch = await self._create_branch(f"auto-fix-{issue.id}")
# 5. åºçšä¿®å€
await self._apply_fix(branch, fix_code)
# 6. è¿è¡æµè¯
test_result = await self._run_tests()
# 7. å建PRïŒåŸ
äººå·¥å®¡æ žïŒ
if test_result.passed:
pr_url = await self._create_pr(
title=f"[Auto-Fix] {issue.summary}",
description=self._format_pr_description(issue, analysis),
labels=["auto-generated", "needs-review"]
)
return pr_url
else:
return None
2. åéŠè¶å¿åæ
class TrendAnalyzer:
"""åéŠè¶å¿åæ"""
async def generate_trend_report(self):
"""çæè¶å¿æ¥å"""
# åæææ :
# - å¹³åè¯ååå
# - é®é¢ç±»åååž
# - ååºæ¶éŽè¶å¿
# - çšæ·æ»¡æåºŠ
# - ä¿®å€æç
pass
3. æºèœæµè¯çšäŸçæ
class TestCaseGenerator:
"""åºäºåéŠèªåšçææµè¯çšäŸ"""
async def generate_from_feedback(self, issue: FeedbackSubmission):
"""æ ¹æ®çšæ·åéŠçææµè¯çšäŸ"""
# åæçšæ·æè¿°çé®é¢
# çæéç°æ¥éª€
# å建èªåšåæµè¯èæ¬
# æ·»å å°æµè¯å¥ä»¶
pass
ð æ»ç»
ç«å³å¯å®ç°ïŒPhase 1ïŒ
â æ°æ®åæ¥æå¡
- 宿¶ä»åšçº¿è¡šåæåæ°æ®
- åå ¥PostgreSQLæ°æ®åº
- å»éåå¢é忥
â TITANSè®°å¿éæ
- ææåéŠåéåååš
- è¯ä¹æçŽ¢çžäŒŒé®é¢
- å岿š¡åŒè¯å«
â AIèªåšåæ
- é®é¢èªåšåç±»
- äŒå 级æºèœè¯äŒ°
- æ ¹å åæ
- ä¿®å€å»ºè®®çæ
â èªåšIssueå建
- é«äŒå 级é®é¢èªåšå»ºIssue
- èªåšææ çŸååé
- å ³èä»£ç æä»¶
é¢è®¡å·¥äœé
Phase 1 å®ç°:
- æ°æ®åæ¥æå¡: 4å°æ¶
- æ°æ®æš¡å讟计: 2å°æ¶
- AIåæåŒæ: 6å°æ¶
- Issueèªåšå建: 2å°æ¶
- æµè¯åè°è¯: 4å°æ¶
æ»è®¡: 纊18å°æ¶ïŒ2-3倩ïŒ
ä»·åŒ
ð¯ å®å šèªåšåçåéŠå€çåŸªç¯ ð é¶äººå·¥å¹²é¢çé®é¢è¿œèžª ð¡ AI驱åšçæç»æ¹è¿ ð 宿¶åéŠæ°æ®åæ ð® 颿µæ§é®é¢è¯å«
è¿å°±æ¯AI驱åšåŒåçæªæ¥ïŒ ð
ææ¡£çæ¬: v1.0
å建æ¶éŽ: 2026-01-28
é¢è®¡å®ç°æ¶éŽ: 2-3倩