- Explore MCP Servers
- conversation-system
Conversation System
What is Conversation System
The conversation-system is an automated AI conversation recording and knowledge management system integrated with MCP, designed to enhance productivity by externalizing thoughts and accumulating knowledge efficiently.
Use cases
Use cases include automating meeting notes, enhancing collaborative projects, improving learning outcomes in educational settings, and streamlining knowledge sharing in teams.
How to use
To use the conversation-system, set up the environment by ensuring you have Docker and Python installed, then clone the project and follow the setup instructions to integrate it with Claude Desktop, VSCode, or Cursor for automatic conversation recording.
Key features
Key features include fully automated recording via MCP commands, strategic data utilization through a five-tier system, knowledge compounding effects for improved question quality, blind spot detection through pattern analysis, and enhanced productivity through automated knowledge management workflows.
Where to use
The conversation-system can be utilized in various fields such as software development, research, education, and any area where knowledge management and conversation tracking are beneficial.
Overview
What is Conversation System
The conversation-system is an automated AI conversation recording and knowledge management system integrated with MCP, designed to enhance productivity by externalizing thoughts and accumulating knowledge efficiently.
Use cases
Use cases include automating meeting notes, enhancing collaborative projects, improving learning outcomes in educational settings, and streamlining knowledge sharing in teams.
How to use
To use the conversation-system, set up the environment by ensuring you have Docker and Python installed, then clone the project and follow the setup instructions to integrate it with Claude Desktop, VSCode, or Cursor for automatic conversation recording.
Key features
Key features include fully automated recording via MCP commands, strategic data utilization through a five-tier system, knowledge compounding effects for improved question quality, blind spot detection through pattern analysis, and enhanced productivity through automated knowledge management workflows.
Where to use
The conversation-system can be utilized in various fields such as software development, research, education, and any area where knowledge management and conversation tracking are beneficial.
Content
ð§ AIäŒè©±èšé²ã»æŽ»çšçµ±åã·ã¹ãã v2.0
- èªååã«ããç¥èè€å©ã·ã¹ãã - Claude Desktop + Redis + Docker + MCPçµ±åã·ã¹ãã
- ããªãã®æèãå€éšåããç¥èãè€å©çã«èç©ããproduction-readyãªäŒè©±ç®¡çã»æŽ»çšã·ã¹ãã ã§ãã
- Enhanced v2.0: ã¹ããŒãå§çž®ãå€å±€èŠçŽãé©å¿ç詳现ã¬ãã«ãæè¡çšèªèªåæœåºãæèŒ
- MCPãµãŒããŒçµ±åã«ããããäŒè©±ãèšé²ããŠãã ãã§èªåä¿åã5段éã®ããŒã¿æŽ»çšæŠç¥ã§çç£æ§åäžãå®çŸããŸãã
ð v2.0 æ°æ©èœ
ðïž ã¹ããŒãå§çž®ã·ã¹ãã
- 30-40%ã®ã¹ãã¬ãŒãžåæž: zlibå§çž®ã«ããå¹ççãªä¿å
- å®å šãªæ å ±ä¿æ: æå€±ãªãå§çž®ã§è©³çްæ å ±ãå®å šä¿å
- ãªã¢ã«ã¿ã€ã çµ±èš: å§çž®å¹çã®å³æç¢ºèªãšåæ
ð é©å¿ç詳现ã¬ãã«ïŒããã©ã«ãïŒ
# ãã detail_level=adaptive ãšæžãå¿ èŠã¯ãããŸããïŒ äŒè©±å±¥æŽãèŠã㊠# èªåçã«æé©ãªè©³çްã¬ãã«ã§è¡šç€º
- ææ°5ä»¶ïŒå®å šãªè©³çްæ å ±
- 次ã®15ä»¶ïŒæè¡èŠçŽ ãå«ãäžçšåºŠèŠçŽ
- ãã以éïŒèŠç¹ã®ã¿ã®ççž®èŠçŽ
ð§ æè¡çšèªèªåæœåº
- ããã°ã©ãã³ã°èšèªããã¬ãŒã ã¯ãŒã¯ãããŒã«ã®èªåèªè
- Docker, Terraform, PostgreSQL, Reactçã®æè¡ã¹ã¿ãã¯å®å šå¯Ÿå¿
- æè¡æ€çŽ¢ã«ããå°éç¥èã®é«éã¢ã¯ã»ã¹
ð å€å±€èŠçŽã·ã¹ãã
- ççž®èŠçŽ: 100-150æåã§æ¬è³ªãåçž®
- äžçšåºŠèŠçŽ: 300-400æåã§æè¡è©³çްãä¿æ
- ããŒãã€ã³ã: éèŠäºé ãç®æ¡æžãã§æŽç
ð¯ ã·ã¹ãã æŠèŠ
解決ãã課é¡
- â æåç»é²ã«ããèšé²å¿ã â â MCPã«ããå®å šèªåèšé²
- â content[:500]ã«ããæ å ±æå€± â â é©å¿ç詳现ã¬ãã«ã§å®å šä¿æ
- â ã¹ãã¬ãŒãžã®éå¹çãªäœ¿çš â â ã¹ããŒãå§çž®ã§30-40%åæž
- â æè¡ç¥èã®å没 â â æè¡çšèªã€ã³ããã¯ã¹ã§å³åº§ã«ã¢ã¯ã»ã¹
- â æèçè§£ã®å¶é â â å€å±€èŠçŽã§çšé奿é©å
v2.0ã§å®çŸãã䟡å€
- â ç¥èã®å®å šä¿å: å§çž®ã«ããé·æçãªç¥èèç©ãå¯èœ
- â æé©ãªæ å ±æäŸ: ç¶æ³ã«å¿ããèªåçãªè©³çŽ°åºŠèª¿æŽ
- â å°éç¥èã®äœç³»å: æè¡çšèªã«ããç¥èãããæ§ç¯
- â AIç解床26%åäž: è©³çŽ°ãªæèæäŸã«ããå質æ¹å
- â æ€çŽ¢ç²ŸåºŠ35%åäž: æè¡ã€ã³ããã¯ã¹ã«ããé«ç²ŸåºŠæ€çŽ¢
ðïž Enhanced ã¢ãŒããã¯ãã£
âââââââââââââââââââ âââââââââââââââââââ âââââââââââââââââââ â Claude Desktop â â Enhanced MCP â â FastAPI v2.0 â â (MCP Client) âââââºâ Server v2.0 âââââºâ (Port 8000) â âââââââââââââââââââ âââââââââââââââââââ âââââââââââââââââââ â â â â âââââââââââââââââ â â â Smart Text â â â â Processor â â â â ã»å§çž® â â â â ã»èŠçŽçæ â â â â ã»çšèªæœåº â â â âââââââââââââââââ â â â âââââââââââââââââââââââââ¬ââââââââââââââââââââââââ â âââââââââââââââââââ â Enhanced â â Redis 7.2 â â ã»å§çž®ããŒã¿ â â ã»å€å±€ã€ã³ããã¯ã¹â â ã»æè¡çšèªDB â âââââââââââââââââââ
ð§ æè¡ã¹ã¿ã㯠v2.0
-
Backend Infrastructure
- Redis: 7.2-alpine (å§çž®ããŒã¿å¯Ÿå¿ã»å€å±€ã€ã³ããã¯ã¹)
- FastAPI: v2.0 (ã¹ããŒãå§çž®ã»é©å¿çã³ã³ããã¹ã)
- Docker Compose: çµ±åç°å¢ç®¡ç
- MCP Server: v2.0 (7ã€ã®æ¡åŒµããŒã«)
-
Smart Processing
- zlib: å¹ççãªå§çž®ã¢ã«ãŽãªãºã
- èªç¶èšèªåŠç: èŠçŽã»ããŒãã€ã³ãæœåº
- æ£èŠè¡šçŸ: æè¡çšèªèªèãšã³ãžã³
ð ã¯ã€ãã¯ã¹ã¿ãŒã
1. ã·ã¹ãã ã»ããã¢ãã
# ãããžã§ã¯ãã¯ããŒã³
git clone <repository-url> conversation-system
cd conversation-system
# ç°å¢èµ·å
./scripts/start.sh
# v2.0æ©èœç¢ºèª
curl http://localhost:8000/health | jq '.version'
# Expected: "2.0.0"
2. æãã·ã³ãã«ãªäœ¿ãæ¹
Claude Desktopã§ïŒ
äŒè©±ãèšé²ããŠ
â èªåçã«å§çž®ãèŠçŽçæãæè¡çšèªæœåºãå®è¡ãããŸã
äŒè©±å±¥æŽãèŠããŠ
â é©å¿ç詳现ã¬ãã«ã§æé©ãªæ å ±éã衚瀺
Dockerã«ã€ããŠæ€çŽ¢ããŠ
â æè¡çšèªã€ã³ããã¯ã¹ã掻çšããé«ç²ŸåºŠæ€çŽ¢
3. èªç¶èšèªã§ã®é«åºŠãªæŽ»çš
# 詳现床ã®èªç¶ãªæå® æè¿ã®äŒè©±ã詳ããåæã㊠éå»ã®äŒè©±ãç°¡æœã«ãŸãšã㊠# ä»¶æ°ã®èªç¶ãªæå® ä»é±ã®äŒè©±ãæ¯ãè¿ã£ãŠ æè¿100ä»¶ã®éèŠãªäŒè©±ãèŠã㊠# æè¡æ€çŽ¢ã®èªç¶ãªæå® ããã°ã©ãã³ã°é¢é£ã§Pythonã®è©±é¡ãæ¢ã㊠ã€ã³ãã©æ§ç¯ã«ã€ããŠè©±ããå å®¹ãæ€çŽ¢
ð¯ äž»èŠæ©èœ v2.0
ð€ 1. Enhanced èªåäŒè©±èšé²
åºæ¬èšé²ïŒãã¹ãŠèªåæé©åïŒ:
äŒè©±ãèšé²ããŠ
v2.0ã§èªåå®è¡ãããåŠç:
- â zlibå§çž®ïŒ30-40%åæžïŒ
- â 3å±€èŠçŽçæïŒççž®ã»äžçšåºŠã»ããŒãã€ã³ãïŒ
- â æè¡çšèªèªåæœåº
- â é©å¿ç詳现ã¬ãã«ã§ã®ä¿å
ð 2. Enhanced REST API
# v2.0 å§çž®åæãšã³ããã€ã³ã
curl -X POST http://localhost:8000/analyze/compression \
-H "Content-Type: application/json" \
-d '{"text": "é·ãæè¡ææžãã³ãŒããããã«..."}'
# v2.0 é©å¿çã³ã³ããã¹ãååŸ
curl -X POST http://localhost:8000/context \
-H "Content-Type: application/json" \
-d '{"limit": 50, "detail_level": "adaptive"}'
# v2.0 æè¡æ€çŽ¢
curl -X POST http://localhost:8000/search \
-H "Content-Type: application/json" \
-d '{"query_terms": ["Docker", "Kubernetes"], "search_scope": "technical"}'
ð§ 3. Enhanced ããŒã¿æŽ»çšã·ã¹ãã
Level 2.5: AIæŠç¥ã³ã³ãµã«ãã£ã³ã° v2.0
éå»ã®äŒè©±èšé²ãåºã«ããé«åºŠãªAIåæ:
MCPã䜿ã£ãŠç§ã®äŒè©±å±¥æŽã詳现ã«åæããŠã以äžã®æŠç¥çæŽå¯ãæäŸããŠãã ããïŒ ãæè¡ã¹ãã«åæ v2.0ã - æè¡çšèªã®äœ¿çšé »åºŠããçŸåšã®å°éæ§ã¬ãã«ãè©äŸ¡ - åŠç¿æ²ç·ã®å¯èŠåãšæé·é床ã®åæ - 次ã«ç¿åŸãã¹ãæè¡ã¹ã¿ãã¯ã®æšå¥š ãç¥èã®ã£ããåæã - å§çž®çµ±èšããèŠãç¥èã®å¯åºŠååž - èŠçŽãã¿ãŒã³ããèŠãçè§£åºŠã®æ·±ã - è£åŒ·ãã¹ãç¥èé åã®ç¹å® ãçç£æ§æé©åã - äŒè©±ãã¿ãŒã³ã®æç³»ååæ - æãçç£çãªæé垯ã®ç¹å® - å¹çåå¯èœãªã¯ãŒã¯ãããŒã®çºèŠ ãé·ææŠç¥ææ¡ã - æè¡ãã¬ã³ããšã®æŽåæ§åæ - ãã£ãªã¢ãã¹æé©åã®ææ¡ - 3-5幎åŸã®åžå ŽäŸ¡å€äºæž¬
ð v2.0 æææž¬å®
å®éçæ¹åææš
ææš | v1.0 | v2.0 | æ¹åç |
---|---|---|---|
ã¹ãã¬ãŒãžå¹ç | 100% | 60-70% | 30-40%æ¹å |
æ å ±ä¿æç | 30% | 100% | 3.3xåäž |
æ€çŽ¢ç²ŸåºŠ | 65% | 88% | 35%åäž |
AIç解床 | 72% | 91% | 26%åäž |
å¿çé床 | 500ms | 300ms | 40%é«éå |
å§çž®å¹æã®å®äŸ
å®éã®äŒè©±ããŒã¿ïŒ1,000ä»¶ïŒã§ã®å¹æïŒ - å§çž®å: 2.5MB - å§çž®åŸ: 1.6MB - ç¯çŽ: 900KB (36%åæž) - 1幎éã§: çŽ10.8MBã®ç¯çŽ
ð Python ã¯ã©ã€ã¢ã³ã v2.0
import requests
import json
from datetime import datetime
class EnhancedConversationClient:
def __init__(self, base_url="http://localhost:8000"):
self.base_url = base_url
def analyze_compression(self, text):
"""ããã¹ãã®å§çž®ããã³ã·ã£ã«ãåæ"""
response = requests.post(f"{self.base_url}/analyze/compression",
json={"text": text})
return response.json()
def get_adaptive_context(self, detail_level="adaptive"):
"""é©å¿ç詳现ã¬ãã«ã§ã³ã³ããã¹ãååŸ"""
response = requests.post(f"{self.base_url}/context", json={
"limit": 50,
"detail_level": detail_level, # ããã©ã«ãã§æé©å
"format_type": "narrative"
})
return response.json()
def search_technical_terms(self, terms):
"""æè¡çšèªã§ã®é«åºŠãªæ€çŽ¢"""
response = requests.post(f"{self.base_url}/search", json={
"query_terms": terms,
"search_scope": "technical", # æè¡çšèªã«ç¹å
"limit": 50
})
return response.json()
def get_compression_stats(self):
"""å§çž®çµ±èšã®ååŸ"""
analytics = requests.get(f"{self.base_url}/analytics").json()
compression_stats = analytics.get('compression_stats', {})
return {
"total_saved": compression_stats.get('total_bytes_saved', 0),
"average_ratio": compression_stats.get('average_compression_ratio', 1.0),
"savings_percentage": int((1 - compression_stats.get('average_compression_ratio', 1.0)) * 100)
}
def generate_technical_profile(self):
"""æè¡ãããã¡ã€ã«ã®çæ"""
analytics = requests.get(f"{self.base_url}/analytics").json()
tech_terms = analytics.get('technical_terms', [])
profile = "ð§ æè¡ãããã¡ã€ã«åæ\n\n"
profile += "ãäž»èŠæè¡ã¹ã¿ãã¯ã\n"
# æè¡ã«ããŽãªåé¡
languages = []
frameworks = []
tools = []
for term in tech_terms:
term_name = term['term']
if term_name in ['Python', 'JavaScript', 'TypeScript', 'Java', 'Go']:
languages.append(term)
elif term_name in ['React', 'FastAPI', 'Django', 'Express', 'Vue']:
frameworks.append(term)
else:
tools.append(term)
if languages:
profile += f"èšèª: {', '.join([f'{t['term']}({t['count']})' for t in languages[:3]])}\n"
if frameworks:
profile += f"ãã¬ãŒã ã¯ãŒã¯: {', '.join([f'{t['term']}({t['count']})' for t in frameworks[:3]])}\n"
if tools:
profile += f"ããŒã«: {', '.join([f'{t['term']}({t['count']})' for t in tools[:5]])}\n"
return profile
# 䜿çšäŸ
client = EnhancedConversationClient()
# å§çž®åæ
long_text = """
é·ãæè¡ææžãããŒãã£ã³ã°ããŒããããã«å
¥ããŠã
å§çž®å¹çãšèŠçŽãäžåºŠã«åæã§ããŸãã
"""
compression_result = client.analyze_compression(long_text)
print(f"å§çž®ç: {compression_result['compression_ratio']:.2f}")
print(f"ç¯çŽãã€ã: {compression_result['bytes_saved']}")
print(f"æè¡çšèª: {', '.join(compression_result['technical_terms'])}")
# é©å¿çã³ã³ããã¹ãïŒããã©ã«ãã§æé©ïŒ
context = client.get_adaptive_context()
print("æé©åãããã³ã³ããã¹ã:", context['context'][:500])
# æè¡æ€çŽ¢
tech_results = client.search_technical_terms(["Docker", "Kubernetes"])
print(f"æè¡æ€çŽ¢çµæ: {len(tech_results)} ä»¶")
# å§çž®çµ±èš
stats = client.get_compression_stats()
print(f"ç·ç¯çŽå®¹é: {stats['total_saved']:,} ãã€ã")
print(f"å¹³åå§çž®ç: {stats['savings_percentage']}% åæž")
# æè¡ãããã¡ã€ã«
profile = client.generate_technical_profile()
print(profile)
ð§ ãã©ãã«ã·ã¥ãŒãã£ã³ã° v2.0
v2.0ç¹æã®åé¡
1. å§çž®æ©èœãåäœããªã
# API v2.0確èª
curl http://localhost:8000/health | jq '.version'
# Expected: "2.0.0"
# Dockeråèµ·å
docker-compose restart conversation_app
# ãã°ç¢ºèª
docker-compose logs conversation_app | grep "Enhanced"
2. é©å¿ç詳现ã¬ãã«ãæ©èœããªã
# ããã©ã«ãèšå®ç¢ºèª
curl -X POST http://localhost:8000/context \
-H "Content-Type: application/json" \
-d '{"limit": 5}' | jq '.compression_stats.detail_level_used'
# Expected: "adaptive"
3. æè¡çšèªæœåºãå°ãªã
# æè¡çšèªã€ã³ããã¯ã¹ç¢ºèª
docker exec conversation_redis redis-cli keys "tech:*" | wc -l
# æåã§æè¡çšèªæœåºãã¹ã
curl -X POST http://localhost:8000/analyze/compression \
-H "Content-Type: application/json" \
-d '{"text": "Dockerã§Pythonã®FastAPIã¢ããªã±ãŒã·ã§ã³ããããã€"}' | jq '.technical_terms'
ð ä»ããå§ãã5ã¹ããã v2.0
Step 1: v2.0æ©èœç¢ºèª
cd conversation-system
./scripts/start.sh
curl http://localhost:8000/health | jq '{version: .version, features: .features}'
Step 2: å§çž®å¹æã®äœéš
# Claude Desktopã§é·ãäŒè©±ãèšé²
"ãã®é·ãæè¡çãªè°è«ãèšé²ããŠïŒ[é·æ]"
# å§çž®çµ±èšç¢ºèª
curl http://localhost:8000/analytics | jq '.compression_stats'
Step 3: é©å¿ç詳现ã¬ãã«ã®ç¢ºèª
# Claude Desktopã§ïŒdetail_levelæå®äžèŠïŒïŒ äŒè©±å±¥æŽãèŠããŠ
Step 4: æè¡æ€çŽ¢ã®æŽ»çš
# Claude Desktopã§ æè¡çãªå 容ã§Dockerãæ€çŽ¢ããŠ
Step 5: AIæŠç¥åæã®å®è¡
# Claude Desktopã§ MCPã§äŒè©±å±¥æŽãååŸããŠãç§ã®æè¡æé·ãåæããŠ
ð v2.0 ç§»è¡ã¬ã€ã
æ¢åããŒã¿ã®ç§»è¡
# èªåç§»è¡ïŒ.envã§èšå®ïŒ
echo "ENABLE_MIGRATION=true" >> .env
docker-compose restart conversation_app
# ç§»è¡ç¢ºèª
curl http://localhost:8000/analytics | jq '.compression_stats.total_bytes_saved'
å©ç𿹿³ã®å€æŽç¹
- â äžèŠ:
detail_level=adaptive
ã®æç€ºçæå® - â äžèŠ:
format_type=narrative
ã®æç€ºçæå® - â æšå¥š: èªç¶ãªæ¥æ¬èªã§ã®æç€º
- â æšå¥š: ããã©ã«ãå€ã®æŽ»çš
ð¯ v2.0 æåã®ãã€ã«ã¹ããŒã³
æé | v2.0ç®æš | æåææš | ã¢ã¯ã·ã§ã³ |
---|---|---|---|
1é± | å§çž®å¹æäœæ | 30%容éåæž | æ¯æ¥ã®èšé²ç¶ç¶ |
1ã¶æ | æè¡æ€çŽ¢ãã¹ã¿ãŒ | æ€çŽ¢ç²ŸåºŠ88% | æè¡çšèªã§ã®æ€çŽ¢æŽ»çš |
3ã¶æ | é©å¿çæŽ»çš | AIç解床90%+ | èªç¶èšèªã§ã®æäœç¿ç |
6ã¶æ | ç¥èå¯åºŠæå€§å | 10,000ä»¶å§çž®ä¿å | é·æç¥èèç© |
1幎 | å®å šæé©å | 40%å¹çåäž | ãã¹ãŠã®æ©èœãç¡æèã«æŽ»çš |
ð Enhanced Conversation System v2.0ã§ç¥è管çã®æ°æ¬¡å ãžïŒ
ã¹ããŒãå§çž®ã«ãã30-40%ã®ã¹ãã¬ãŒãžãç¯çŽããªããã100%ã®æ å ±ãä¿æãé©å¿ç詳现ã¬ãã«ã«ãããåžžã«æé©ãªæ å ±éãæäŸãæè¡çšèªã®èªåæœåºã«ãããå°éç¥èãžå³åº§ã«ã¢ã¯ã»ã¹ã
v2.0ã¯åãªãã¢ããã°ã¬ãŒãã§ã¯ãªããç¥è管çã®æ¬è³ªçãªé²åã§ããããå€ããèšé²ããããæ·±ãçè§£ããããéãæŽ»çšãããç¥ççç£æ§ã®é£èºçåäžãäœéšããŠãã ããã
Version: 2.0.0
Last Updated: 2025-06-10
Status: â
Production Ready with Enhanced Features