Building an AI-Powered Recycling Agent with Amadeus Blockchain
Table of Contents
- Introduction
- Overall Architecture
- Building an MCP Agent
- Amadeus Blockchain Integration
- On-chain Training Data Storage
- Connecting the Agent to the Frontend
- Self-learning and Continuous Improvement
- Practical Implementation
- Security and Best Practices
- References
Introduction
The Recycle Guru project requires an AI agent capable of automatically evaluating the value of electronic devices, continuously improving through collected data, and interacting with the Amadeus blockchain for reward management and traceability. This guide explains how to build this agent using the Model Context Protocol (MCP) and the Amadeus blockchain infrastructure.
The Model Context Protocol is an open standard that allows AI agents to connect to external data sources and tools in a standardized way. The Amadeus blockchain provides a dedicated MCP server that enables agents to interact directly with the blockchain to create transactions, query smart contracts, and manage tokens.
Overall Architecture
The Recycle Guru architecture with MCP integration consists of four main layers that work together to create an intelligent and decentralized system.
Architecture Diagram
┌────────────────────────────────────────────────────────────┐
│                  Frontend (React + tRPC)                   │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐  │
│  │ Device Form  │    │  Dashboard   │    │  Wallet UI   │  │
│  └──────────────┘    └──────────────┘    └──────────────┘  │
└────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌────────────────────────────────────────────────────────────┐
│                 Backend API (Node.js/tRPC)                 │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐  │
│  │  Device API  │    │ Rewards API  │    │   User API   │  │
│  └──────────────┘    └──────────────┘    └──────────────┘  │
└────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌────────────────────────────────────────────────────────────┐
│                 MCP Host (AI Application)                  │
│ ┌──────────────┐     ┌──────────────┐     ┌──────────────┐ │
│ │ MCP Client 1 │     │ MCP Client 2 │     │ MCP Client 3 │ │
│ │  (Amadeus)   │     │ (Valuation)  │     │  (zkVerify)  │ │
│ └──────┬───────┘     └──────┬───────┘     └──────┬───────┘ │
└────────┼────────────────────┼────────────────────┼─────────┘
         │                    │                    │
         ▼                    ▼                    ▼
┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐
│   Amadeus MCP   │  │  Valuation MCP  │  │  zkVerify MCP   │
│     Server      │  │     Server      │  │     Server      │
│                 │  │                 │  │                 │
│ • Transactions  │  │ • AI Model      │  │ • Identity      │
│ • Smart         │  │ • Training Data │  │ • Verification  │
│   Contracts     │  │ • Predictions   │  │ • Proofs        │
│ • Token Mgmt    │  │                 │  │                 │
└────────┬────────┘  └────────┬────────┘  └────────┬────────┘
         │                    │                    │
         ▼                    ▼                    ▼
┌────────────────────────────────────────────────────────────┐
│                  Amadeus Blockchain Layer                  │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐  │
│  │  ECO Token   │    │ Training Data│    │   Identity   │  │
│  │   Contract   │    │   Storage    │    │   Registry   │  │
│  └──────────────┘    └──────────────┘    └──────────────┘  │
└────────────────────────────────────────────────────────────┘
Main Components
| Component | Role | Technology |
|---|---|---|
| Frontend | User interface for submitting devices and viewing rewards | React, tRPC, Amadeus Wallet SDK |
| Backend API | Request orchestration, business logic, user management | Node.js, Express, tRPC, PostgreSQL |
| MCP Host | AI application that coordinates MCP clients | Claude Desktop, Google AI Studio, or custom |
| Amadeus MCP Server | MCP server for blockchain interactions | Rust, Cloudflare Workers |
| Valuation MCP Server | MCP server for AI device valuation | Python, FastAPI, TensorFlow/PyTorch |
| zkVerify MCP Server | MCP server for identity verification | Rust, zkSNARK libraries |
| Amadeus Blockchain | Decentralized storage of transactions, tokens and data | Amadeus Protocol |
Building an MCP Agent
The Model Context Protocol (MCP) follows a client-server architecture where an AI host application establishes connections with one or more MCP servers. Each MCP server provides specific tools that the agent can use to accomplish its tasks.
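Under the hood, each tool invocation is carried as a JSON-RPC 2.0 message. As an illustration, a tools/call request invoking the evaluate_device tool defined later in this guide would look roughly like this (the reference value is a made-up example):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "evaluate_device",
    "arguments": {
      "reference": "SM-G991B",
      "type": "smartphone",
      "condition": "good"
    }
  }
}
```

The server replies with a result whose content array carries the tool output; the SDKs used below hide this framing, so you normally never construct these messages by hand.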
Step 1: Choose the MCP Host
For Recycle Guru, you have several options for hosting your MCP agent:
Option A: Claude Desktop / Claude Code
Claude Desktop is the simplest option to get started: it provides an intuitive interface and supports MCP natively. Note that ~/.claude.json is the configuration file read by Claude Code; Claude Desktop reads claude_desktop_config.json instead, with the same mcpServers format.
Configuration (~/.claude.json):
{
  "mcpServers": {
    "amadeus": {
      "type": "http",
      "url": "https://mcp.ama.one"
    },
    "recycle-valuation": {
      "command": "python",
      "args": ["/path/to/valuation-server/main.py"]
    },
    "zkverify": {
      "type": "http",
      "url": "https://zkverify-mcp.example.com"
    }
  }
}
Option B: Gemini CLI
The Gemini CLI offers a powerful alternative with multimodal support (images, video), backed by the same Gemini models available in Google AI Studio.
Configuration (~/.gemini/settings.json):
{
  "mcpServers": {
    "amadeus": {
      "httpUrl": "https://mcp.ama.one"
    },
    "recycle-valuation": {
      "command": "python",
      "args": ["/path/to/valuation-server/main.py"]
    }
  }
}
Option C: Custom MCP Host (Recommended for production)
For full integration into your backend, create a custom MCP Host that integrates directly into your Node.js API.
Installation:
npm install @modelcontextprotocol/sdk
Implementation (server/mcp-host.ts):
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

export class RecycleGuruMCPHost {
  private amadeusClient!: Client;
  private valuationClient!: Client;

  async initialize() {
    // Connect to the remote Amadeus MCP server over Streamable HTTP
    // (a stdio transport wrapping curl cannot hold an MCP session open)
    this.amadeusClient = new Client({
      name: "recycle-guru-amadeus",
      version: "1.0.0"
    }, {
      capabilities: {}
    });
    const amadeusTransport = new StreamableHTTPClientTransport(
      new URL("https://mcp.ama.one")
    );
    await this.amadeusClient.connect(amadeusTransport);

    // Connect to the local valuation server over stdio
    this.valuationClient = new Client({
      name: "recycle-guru-valuation",
      version: "1.0.0"
    }, {
      capabilities: {}
    });
    const valuationTransport = new StdioClientTransport({
      command: "python",
      args: ["./valuation-server/main.py"]
    });
    await this.valuationClient.connect(valuationTransport);
  }

  async evaluateDevice(deviceData: {
    reference: string;
    type: string;
    manufacturer: string;
    model: string;
    condition: string;
  }) {
    // Call valuation server via MCP
    const result = await this.valuationClient.callTool({
      name: "evaluate_device",
      arguments: deviceData
    });
    return result;
  }

  async claimReward(userAddress: string, amount: number) {
    // Create an unsigned reward transfer via Amadeus MCP
    const transferResult = await this.amadeusClient.callTool({
      name: "create_transfer",
      arguments: {
        to: userAddress,
        amount: amount,
        token: "ECO"
      }
    });
    const unsignedBlob = JSON.parse(transferResult.content[0].text).blob;
    // Sign with the system key before submitting: the blockchain will
    // reject an unsigned blob (see the reward distribution example below)
    const signedBlob = await signTransaction(unsignedBlob, SYSTEM_PRIVATE_KEY);
    const txHash = await this.amadeusClient.callTool({
      name: "submit_transaction",
      arguments: {
        signedBlob: signedBlob
      }
    });
    return txHash;
  }
}
Step 2: Create the Valuation MCP Server
The valuation MCP server is responsible for AI evaluation of electronic devices. It exposes tools that the agent can call to get value estimates.
Project structure:
valuation-server/
├── main.py              # MCP entry point
├── model.py             # AI valuation model
├── training.py          # Training script
├── requirements.txt     # Python dependencies
└── data/
    ├── training_data.json
    └── model_weights.h5
Implementation (valuation-server/main.py):
#!/usr/bin/env python3
import asyncio
import json

import mcp.types as types
from mcp.server import Server
from mcp.server.stdio import stdio_server

from model import DeviceValuationModel

# Initialize model
valuation_model = DeviceValuationModel()
valuation_model.load_weights("data/model_weights.h5")

# Create MCP server
app = Server("recycle-guru-valuation")


@app.list_tools()
async def list_tools() -> list[types.Tool]:
    """List available tools"""
    return [
        types.Tool(
            name="evaluate_device",
            description="Evaluate the value of an electronic device",
            inputSchema={
                "type": "object",
                "properties": {
                    "reference": {"type": "string"},
                    "type": {"type": "string"},
                    "manufacturer": {"type": "string"},
                    "model": {"type": "string"},
                    "condition": {"type": "string"}
                },
                "required": ["reference", "type", "condition"]
            }
        ),
        types.Tool(
            name="record_evaluation",
            description="Record an evaluation for learning",
            inputSchema={
                "type": "object",
                "properties": {
                    "device_data": {"type": "object"},
                    "estimated_value": {"type": "number"},
                    "actual_value": {"type": "number"},
                    "feedback": {"type": "string"}
                },
                "required": ["device_data", "estimated_value"]
            }
        )
    ]


@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    """Execute a tool"""
    if name == "evaluate_device":
        # Prepare features for the model
        features = valuation_model.prepare_features(arguments)
        # Predict value
        prediction = valuation_model.predict(features)
        return [types.TextContent(
            type="text",
            text=json.dumps({
                "estimated_value_usd": float(prediction["usd"]),
                "estimated_value_eco": float(prediction["eco_tokens"]),
                "confidence": float(prediction["confidence"]),
                "factors": prediction["factors"]
            })
        )]
    elif name == "record_evaluation":
        # Record data for future retraining
        await valuation_model.record_training_data(arguments)
        return [types.TextContent(
            type="text",
            text="Evaluation recorded for future training"
        )]
    raise ValueError(f"Unknown tool: {name}")


async def main():
    async with stdio_server() as (read_stream, write_stream):
        await app.run(read_stream, write_stream, app.create_initialization_options())


if __name__ == "__main__":
    asyncio.run(main())
AI Model (valuation-server/model.py):
import json
from typing import Any, Dict

import numpy as np
import tensorflow as tf


class DeviceValuationModel:
    def __init__(self):
        self.model = self._build_model()
        self.device_database = self._load_device_database()

    def _build_model(self):
        """Build neural network model"""
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation='relu', input_shape=(20,)),
            tf.keras.layers.Dropout(0.2),
            tf.keras.layers.Dense(64, activation='relu'),
            tf.keras.layers.Dropout(0.2),
            tf.keras.layers.Dense(32, activation='relu'),
            tf.keras.layers.Dense(1, activation='linear')  # Value in USD
        ])
        model.compile(
            optimizer='adam',
            loss='mse',
            metrics=['mae']
        )
        return model

    def _load_device_database(self) -> Dict[str, Dict[str, Any]]:
        """Load known device specs (stub: replace with your real spec source)"""
        try:
            with open("data/device_database.json") as f:
                return json.load(f)
        except FileNotFoundError:
            return {}

    def prepare_features(self, device_data: Dict[str, Any]) -> np.ndarray:
        """Prepare features for the model"""
        # Encode device type
        device_type_encoding = {
            "smartphone": 0,
            "laptop": 1,
            "tablet": 2,
            "desktop": 3
        }
        # Encode condition
        condition_encoding = {
            "excellent": 1.0,
            "good": 0.75,
            "fair": 0.5,
            "poor": 0.25
        }
        # Look up specs in database
        specs = self.device_database.get(device_data.get("reference", ""), {})
        features = np.array([
            device_type_encoding.get(device_data["type"], 0),
            condition_encoding.get(device_data["condition"], 0.5),
            specs.get("release_year", 2020) - 2020,    # Relative age
            specs.get("original_price", 500) / 1000,   # Normalized price
            specs.get("ram_gb", 4) / 32,               # Normalized RAM
            specs.get("storage_gb", 64) / 1024,        # Normalized storage
            specs.get("screen_size", 6) / 15,          # Normalized screen size
            specs.get("battery_mah", 3000) / 5000,     # Normalized battery
            # ... other features (total 20)
        ], dtype=np.float32)
        # Zero-pad up to the 20 inputs the network expects
        features = np.pad(features, (0, 20 - features.size))
        return features.reshape(1, -1)

    def predict(self, features: np.ndarray) -> Dict[str, Any]:
        """Predict device value"""
        value_usd = float(self.model.predict(features, verbose=0)[0][0])
        # Convert to ECO tokens (1 USD = 10 ECO)
        value_eco = value_usd * 10
        # Calculate confidence based on available data
        confidence = 0.85  # Simplified
        return {
            "usd": value_usd,
            "eco_tokens": value_eco,
            "confidence": confidence,
            "factors": {
                "condition_impact": 0.3,
                "age_impact": 0.25,
                "specs_impact": 0.45
            }
        }

    def load_weights(self, path: str):
        """Load model weights"""
        self.model.load_weights(path)

    async def record_training_data(self, data: Dict[str, Any]):
        """Record data for retraining"""
        # Append one JSON object per line (JSON Lines) for easy streaming reads
        with open("data/training_data.json", "a") as f:
            json.dump(data, f)
            f.write("\n")
Amadeus Blockchain Integration
The Amadeus blockchain provides an official MCP server at https://mcp.ama.one that allows agents to interact directly with the blockchain without needing to implement low-level protocols.
Available Tools in Amadeus MCP
The Amadeus MCP server exposes the following tools that your agent can use:
| Tool | Description | Parameters |
|---|---|---|
| create_transfer | Creates an unsigned transfer transaction | to (address), amount (number), token (string) |
| submit_transaction | Submits a signed transaction to the blockchain | signedBlob (string) |
| get_account_balance | Retrieves all token balances of an account | address (string) |
| get_chain_stats | Gets blockchain statistics | None |
| get_block_by_height | Retrieves blockchain entries at a specific height | height (number) |
| get_transaction | Gets transaction details by hash | hash (string) |
| get_transaction_history | Retrieves transaction history of an account | address (string), page (number), limit (number) |
| get_validators | Lists current validator nodes | None |
| get_contract_state | Queries smart contract storage | address (string), key (string) |
| claim_testnet_ama | Claims testnet AMA tokens | address (string) |
Using Amadeus MCP Tools
Here's how to use these tools in your agent to manage ECO rewards:
Example: Distribute ECO Rewards
async function distributeEcoReward(
mcpClient: Client,
userAddress: string,
deviceValue: number
) {
// 1. Calculate reward amount (10% of value in ECO tokens)
const rewardAmount = deviceValue * 0.1;
// 2. Create transfer transaction
const transferResult = await mcpClient.callTool({
name: "create_transfer",
arguments: {
to: userAddress,
amount: rewardAmount,
token: "ECO"
}
});
const unsignedBlob = JSON.parse(transferResult.content[0].text).blob;
// 3. Sign transaction (use system private key)
const signedBlob = await signTransaction(unsignedBlob, SYSTEM_PRIVATE_KEY);
// 4. Submit transaction
const submitResult = await mcpClient.callTool({
name: "submit_transaction",
arguments: {
signedBlob: signedBlob
}
});
const txHash = JSON.parse(submitResult.content[0].text).hash;
return {
success: true,
transactionHash: txHash,
amount: rewardAmount
};
}
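The example above relies on a signTransaction helper that is deliberately not an MCP tool: signing must stay on your side so the private key never leaves your infrastructure. Amadeus's actual key and blob formats are not documented here, so the following is only a sketch that assumes Ed25519 keys handled with Node's built-in crypto module (verifySignedBlob is an illustrative companion helper, not part of any SDK):

```typescript
import { KeyObject, sign, verify } from "node:crypto";

// Hypothetical signer: assumes Ed25519 keys and a JSON envelope.
// Adapt the payload encoding to Amadeus's real transaction format.
export function signTransaction(unsignedBlob: string, privateKey: KeyObject): string {
  const payload = Buffer.from(unsignedBlob, "utf8");
  // For Ed25519 the algorithm argument must be null in Node's crypto API
  const signature = sign(null, payload, privateKey);
  return JSON.stringify({
    blob: unsignedBlob,
    signature: signature.toString("base64"),
  });
}

// Counterpart check, useful in tests before submitting on-chain
export function verifySignedBlob(signedBlob: string, publicKey: KeyObject): boolean {
  const { blob, signature } = JSON.parse(signedBlob);
  return verify(null, Buffer.from(blob, "utf8"), publicKey, Buffer.from(signature, "base64"));
}
```

Keeping signing local to the backend is the design point: the remote MCP server only ever sees the already-signed blob.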
Example: Check User's ECO Balance
async function getUserEcoBalance(
mcpClient: Client,
userAddress: string
): Promise<number> {
const balanceResult = await mcpClient.callTool({
name: "get_account_balance",
arguments: {
address: userAddress
}
});
const balances = JSON.parse(balanceResult.content[0].text);
const ecoBalance = balances.find((b: any) => b.token === "ECO");
return ecoBalance ? ecoBalance.amount : 0;
}
On-chain Training Data Storage
To enable the agent to continuously improve, it's crucial to store training data securely and traceably. The Amadeus blockchain offers an ideal solution for this, but a hybrid approach should be adopted to optimize costs and performance.
Hybrid Strategy: On-chain vs Off-chain
AI models often rely on millions of data points, and storing all of that directly on a blockchain is neither practical nor efficient. The best approach is to use a hybrid system where metadata and cryptographic fingerprints are stored on-chain, while large data is stored off-chain.
| Data Type | Storage | Reason |
|---|---|---|
| Dataset hashes | On-chain | Integrity verification, proof of existence |
| Evaluation metadata | On-chain | Traceability, audit, transparency |
| Training results | On-chain | Model performance, versioning |
| Complete datasets | Off-chain (IPFS/S3) | Large volume, cost |
| Model weights | Off-chain (IPFS/S3) | Large files (several MB/GB) |
| Device images | Off-chain (IPFS/S3) | Media files |
Storage Architecture
┌──────────────────────────────────────────────────┐
│                 Valuation Agent                  │
└────────────────────────┬─────────────────────────┘
                         ▼
        ┌───────────────────────────────┐
        │        New Evaluation         │
        │  • Device data                │
        │  • Estimated value            │
        │  • Actual value (feedback)    │
        └───────────────┬───────────────┘
          ┌─────────────┴──────────────┐
          ▼                            ▼
┌────────────────────┐      ┌────────────────────┐
│  Off-chain Storage │      │  On-chain Storage  │
│    (IPFS / S3)     │      │     (Amadeus)      │
│                    │      │                    │
│  • Full dataset    │◄─────│  • Dataset hash    │
│  • Model weights   │ Link │  • Metadata        │
│  • Images          │      │  • Metrics         │
│                    │      │  • Timestamp       │
└────────────────────┘      └────────────────────┘
Smart Contract Implementation
Create a smart contract on Amadeus to store learning metadata:
Contract (contracts/TrainingDataRegistry.sol):
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
contract TrainingDataRegistry {
struct TrainingRecord {
bytes32 datasetHash; // SHA-256 hash of dataset
string ipfsUrl; // IPFS URL of complete dataset
uint256 timestamp; // Record timestamp
address submitter; // Submitter address
uint256 recordCount; // Number of records
uint256 modelVersion; // Model version
string metrics; // Metrics JSON (MAE, RMSE, etc.)
}
struct ModelVersion {
bytes32 weightsHash; // Hash of model weights
string ipfsUrl; // IPFS URL of weights
uint256 timestamp;
uint256 trainingRecords; // Total training records used
string performance; // Performance metrics JSON
}
mapping(uint256 => TrainingRecord) public trainingRecords;
mapping(uint256 => ModelVersion) public modelVersions;
uint256 public recordCount;
uint256 public currentModelVersion;
address public owner;
event TrainingDataRecorded(
uint256 indexed recordId,
bytes32 datasetHash,
string ipfsUrl,
uint256 recordCount
);
event ModelVersionUpdated(
uint256 indexed version,
bytes32 weightsHash,
string ipfsUrl,
string performance
);
constructor() {
owner = msg.sender;
currentModelVersion = 1;
}
modifier onlyOwner() {
require(msg.sender == owner, "Only owner can call this");
_;
}
function recordTrainingData(
bytes32 _datasetHash,
string memory _ipfsUrl,
uint256 _recordCount,
string memory _metrics
) external onlyOwner returns (uint256) {
recordCount++;
trainingRecords[recordCount] = TrainingRecord({
datasetHash: _datasetHash,
ipfsUrl: _ipfsUrl,
timestamp: block.timestamp,
submitter: msg.sender,
recordCount: _recordCount,
modelVersion: currentModelVersion,
metrics: _metrics
});
emit TrainingDataRecorded(
recordCount,
_datasetHash,
_ipfsUrl,
_recordCount
);
return recordCount;
}
function updateModelVersion(
bytes32 _weightsHash,
string memory _ipfsUrl,
uint256 _trainingRecords,
string memory _performance
) external onlyOwner {
currentModelVersion++;
modelVersions[currentModelVersion] = ModelVersion({
weightsHash: _weightsHash,
ipfsUrl: _ipfsUrl,
timestamp: block.timestamp,
trainingRecords: _trainingRecords,
performance: _performance
});
emit ModelVersionUpdated(
currentModelVersion,
_weightsHash,
_ipfsUrl,
_performance
);
}
function getTrainingRecord(uint256 _recordId)
external
view
returns (TrainingRecord memory)
{
return trainingRecords[_recordId];
}
function getModelVersion(uint256 _version)
external
view
returns (ModelVersion memory)
{
return modelVersions[_version];
}
function getCurrentModelVersion()
external
view
returns (ModelVersion memory)
{
return modelVersions[currentModelVersion];
}
}
Data Storage Workflow
Here's the complete process for recording a new evaluation and using it for learning:
1. Collect Evaluation Data
// After each device evaluation
async function recordEvaluation(
deviceData: DeviceData,
estimatedValue: number,
actualValue?: number,
userFeedback?: string
) {
const evaluationRecord = {
device: deviceData,
estimated: estimatedValue,
actual: actualValue,
feedback: userFeedback,
timestamp: Date.now()
};
// Save temporarily in database
await db.evaluations.insert(evaluationRecord);
}
2. Aggregate and Upload to IPFS
// Script run periodically (e.g., daily)
async function aggregateAndUploadTrainingData() {
// Retrieve all new evaluations
const newEvaluations = await db.evaluations.findUnprocessed();
// Create JSON dataset
const dataset = {
version: "1.0",
records: newEvaluations,
count: newEvaluations.length,
created_at: new Date().toISOString()
};
// Calculate SHA-256 hash
const datasetJson = JSON.stringify(dataset);
const datasetHash = crypto
.createHash('sha256')
.update(datasetJson)
.digest('hex');
// Upload to IPFS
const ipfsUrl = await uploadToIPFS(datasetJson);
// Record on-chain via smart contract
await recordOnChain(datasetHash, ipfsUrl, newEvaluations.length);
// Mark evaluations as processed
await db.evaluations.markAsProcessed(newEvaluations.map(e => e.id));
return {
datasetHash,
ipfsUrl,
recordCount: newEvaluations.length
};
}
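One subtlety in the hashing step above: JSON.stringify is sensitive to key order, so two logically identical datasets can produce different hashes depending on how the objects were built. A canonicalizing variant avoids this; sortKeys and hashDataset below are illustrative names, not part of any library:

```typescript
import { createHash } from "node:crypto";

// Recursively sort object keys so equivalent datasets serialize identically
function sortKeys(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(sortKeys);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.keys(value as Record<string, unknown>)
        .sort()
        .map((k) => [k, sortKeys((value as Record<string, unknown>)[k])])
    );
  }
  return value;
}

// SHA-256 over the canonical serialization: stable across key insertion order
export function hashDataset(dataset: object): string {
  const canonicalJson = JSON.stringify(sortKeys(dataset));
  return createHash("sha256").update(canonicalJson).digest("hex");
}
```

Since the on-chain hash is what later proves dataset integrity, a reproducible serialization matters: anyone re-deriving the hash from the IPFS copy must get the same bytes.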
3. On-chain Recording
async function recordOnChain(
datasetHash: string,
ipfsUrl: string,
recordCount: number
) {
// Use Amadeus MCP to interact with contract
const contractAddress = "0x..."; // TrainingDataRegistry address
// Encode function call (encodeContractCall is an assumed ABI-encoding helper)
const callData = encodeContractCall(
"recordTrainingData",
[datasetHash, ipfsUrl, recordCount, "{}"]
);
// Create transaction. Note: create_contract_call is assumed here; it is not
// part of the documented Amadeus MCP tool list above, so confirm the actual
// contract-call tool name before relying on it.
const txBlob = await amadeusClient.callTool({
name: "create_contract_call",
arguments: {
contract: contractAddress,
data: callData,
value: 0
}
});
// Sign and submit
const signedBlob = await signTransaction(txBlob, SYSTEM_PRIVATE_KEY);
const txHash = await amadeusClient.callTool({
name: "submit_transaction",
arguments: { signedBlob }
});
return txHash;
}
4. Model Retraining
// Script run periodically (e.g., weekly)
async function retrainModel() {
// Retrieve all datasets from contract
const recordCount = await contract.recordCount();
const datasets = [];
for (let i = 1; i <= recordCount; i++) {
const record = await contract.getTrainingRecord(i);
// Download dataset from IPFS
const data = await fetchFromIPFS(record.ipfsUrl);
datasets.push(data);
}
// Combine all datasets
const allRecords = datasets.flatMap(d => d.records);
// Retrain model
const newModel = await trainModel(allRecords);
// Evaluate performance
const metrics = await evaluateModel(newModel, testSet);
// Upload new weights to IPFS
const weightsBuffer = await newModel.save();
const weightsHash = crypto
.createHash('sha256')
.update(weightsBuffer)
.digest('hex');
const weightsUrl = await uploadToIPFS(weightsBuffer);
// Record new version on-chain
await contract.updateModelVersion(
weightsHash,
weightsUrl,
allRecords.length,
JSON.stringify(metrics)
);
// Deploy new model
await deployModel(newModel);
return {
version: await contract.currentModelVersion(),
metrics
};
}
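The evaluateModel call above is left abstract. The MAE and RMSE figures recorded on-chain can be computed directly from predicted valuations and ground-truth sale prices; the helper names below are illustrative, not from any library:

```typescript
// Mean absolute error: average magnitude of valuation errors, in USD
export function meanAbsoluteError(predicted: number[], actual: number[]): number {
  if (predicted.length !== actual.length || predicted.length === 0) {
    throw new Error("predicted and actual must be equal-length, non-empty arrays");
  }
  const total = predicted.reduce((sum, p, i) => sum + Math.abs(p - actual[i]), 0);
  return total / predicted.length;
}

// Root mean squared error: penalizes large valuation misses more heavily
export function rootMeanSquaredError(predicted: number[], actual: number[]): number {
  if (predicted.length !== actual.length || predicted.length === 0) {
    throw new Error("predicted and actual must be equal-length, non-empty arrays");
  }
  const total = predicted.reduce((sum, p, i) => sum + (p - actual[i]) ** 2, 0);
  return Math.sqrt(total / predicted.length);
}
```

Tracking both is useful here: MAE reads directly as "average dollars off", while a rising RMSE with flat MAE signals occasional large mispricings worth investigating before promoting a new model version.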
Benefits of This Approach
This hybrid strategy offers several key advantages for Recycle Guru:
Complete Traceability: Every device evaluation can be traced back to its source, allowing model decisions to be audited and potential biases identified.
Data Integrity: Cryptographic hashes ensure that datasets haven't been modified after recording, ensuring learning reliability.
Model Versioning: Each model version is linked to the training data used, allowing results to be reproduced and performance evolution understood.
Transparency: Users can verify that their contributions (evaluation feedback) are actually used to improve the system.
Decentralization: Data is stored on IPFS, a distributed file system, avoiding dependence on a centralized provider.
Connecting the Agent to the Frontend
For the MCP agent to interact with the Recycle Guru frontend, you need to create an API layer that exposes the agent's functionality via tRPC endpoints.
Connection Architecture
Frontend (React)
       │
       │  tRPC calls
       ▼
Backend API (Node.js)
       │
       │  MCP protocol
       ▼
MCP Host (Custom)
       │
       ├──► Amadeus MCP Server (blockchain)
       ├──► Valuation MCP Server (AI)
       └──► zkVerify MCP Server (identity)
tRPC Procedures Implementation
Add the following procedures in server/routers.ts:
import { z } from "zod";
import { publicProcedure, protectedProcedure, router } from "./_core/trpc";
import { RecycleGuruMCPHost } from "./mcp-host";
// Initialize MCP Host
const mcpHost = new RecycleGuruMCPHost();
await mcpHost.initialize();
export const appRouter = router({
// ... other existing routers
ai: router({
// Evaluate device with AI
evaluateDevice: publicProcedure
.input(z.object({
reference: z.string(),
type: z.enum(["smartphone", "laptop", "tablet", "desktop"]),
manufacturer: z.string().optional(),
model: z.string().optional(),
condition: z.enum(["excellent", "good", "fair", "poor"]),
images: z.array(z.string()).optional() // Image URLs
}))
.mutation(async ({ input }) => {
// Call MCP valuation server
const evaluation = await mcpHost.evaluateDevice(input);
// Record evaluation in database
const evaluationRecord = await db.createEvaluation({
deviceReference: input.reference,
deviceType: input.type,
condition: input.condition,
estimatedValueUsd: evaluation.estimated_value_usd,
estimatedValueEco: evaluation.estimated_value_eco,
confidence: evaluation.confidence,
factors: evaluation.factors,
timestamp: new Date()
});
return {
id: evaluationRecord.id,
...evaluation
};
}),
// Submit feedback on evaluation
submitFeedback: protectedProcedure
.input(z.object({
evaluationId: z.number(),
actualValue: z.number().optional(),
feedback: z.string().optional(),
sold: z.boolean()
}))
.mutation(async ({ input, ctx }) => {
// Update evaluation with feedback
await db.updateEvaluation(input.evaluationId, {
actualValue: input.actualValue,
userFeedback: input.feedback,
sold: input.sold,
userId: ctx.user.id
});
// Record for future retraining
await mcpHost.recordFeedback({
evaluationId: input.evaluationId,
actualValue: input.actualValue,
feedback: input.feedback
});
return { success: true };
}),
// Get model statistics
getModelStats: publicProcedure
.query(async () => {
// Query TrainingDataRegistry contract
const currentVersion = await mcpHost.getCurrentModelVersion();
return {
version: currentVersion.version,
trainingRecords: currentVersion.trainingRecords,
performance: JSON.parse(currentVersion.performance),
lastUpdated: new Date(currentVersion.timestamp * 1000)
};
})
}),
rewards: router({
// Claim ECO rewards
claimReward: protectedProcedure
.input(z.object({
evaluationId: z.number(),
walletAddress: z.string()
}))
.mutation(async ({ input, ctx }) => {
// Check if user hasn't already claimed
const evaluation = await db.getEvaluation(input.evaluationId);
if (evaluation.rewardClaimed) {
throw new Error("Reward already claimed");
}
// Check zkVerify identity
const isVerified = await mcpHost.checkZkVerifyStatus(ctx.user.id);
if (!isVerified) {
throw new Error("Identity verification required");
}
// Check reward limits
const userRewards = await db.getUserTotalRewards(ctx.user.id);
const MAX_REWARDS = 1000; // ECO tokens
if (userRewards >= MAX_REWARDS) {
throw new Error("Reward limit reached");
}
// Distribute rewards via Amadeus blockchain
const rewardAmount = evaluation.estimatedValueEco * 0.1; // 10%
const txHash = await mcpHost.claimReward(
input.walletAddress,
rewardAmount
);
// Update database
await db.updateEvaluation(input.evaluationId, {
rewardClaimed: true,
rewardAmount: rewardAmount,
rewardTxHash: txHash
});
await db.createReward({
userId: ctx.user.id,
evaluationId: input.evaluationId,
amount: rewardAmount,
transactionHash: txHash,
timestamp: new Date()
});
return {
success: true,
amount: rewardAmount,
transactionHash: txHash
};
}),
// Get reward history
getRewardHistory: protectedProcedure
.query(async ({ ctx }) => {
const rewards = await db.getUserRewards(ctx.user.id);
// Enrich with blockchain data
const enrichedRewards = await Promise.all(
rewards.map(async (reward) => {
const txDetails = await mcpHost.getTransaction(
reward.transactionHash
);
return {
...reward,
status: txDetails.status,
blockHeight: txDetails.blockHeight,
confirmations: txDetails.confirmations
};
})
);
return enrichedRewards;
})
})
});
Frontend Components
Create React components to interact with these procedures:
Evaluation Component (client/src/components/DeviceEvaluator.tsx):
import { useState } from "react";
import { trpc } from "@/lib/trpc";
import { Button } from "@/components/ui/button";
import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card";
import { Loader2, Sparkles } from "lucide-react";
export function DeviceEvaluator({ deviceData }: { deviceData: DeviceData }) {
const [evaluation, setEvaluation] = useState<Evaluation | null>(null);
const evaluateMutation = trpc.ai.evaluateDevice.useMutation({
onSuccess: (data) => {
setEvaluation(data);
}
});
const handleEvaluate = () => {
evaluateMutation.mutate(deviceData);
};
return (
<Card>
<CardHeader>
<CardTitle className="flex items-center gap-2">
<Sparkles className="h-5 w-5 text-primary" />
AI Valuation
</CardTitle>
</CardHeader>
<CardContent>
{!evaluation ? (
<Button
onClick={handleEvaluate}
disabled={evaluateMutation.isLoading}
>
{evaluateMutation.isLoading && (
<Loader2 className="mr-2 h-4 w-4 animate-spin" />
)}
Get AI Valuation
</Button>
) : (
<div className="space-y-4">
<div className="grid grid-cols-2 gap-4">
<div>
<p className="text-sm text-muted-foreground">USD Value</p>
<p className="text-2xl font-bold">
${evaluation.estimated_value_usd.toFixed(2)}
</p>
</div>
<div>
<p className="text-sm text-muted-foreground">ECO Tokens</p>
<p className="text-2xl font-bold text-primary">
{evaluation.estimated_value_eco.toFixed(0)} ECO
</p>
</div>
</div>
<div>
<p className="text-sm text-muted-foreground">Confidence</p>
<div className="flex items-center gap-2">
<div className="flex-1 bg-muted rounded-full h-2">
<div
className="bg-primary h-2 rounded-full"
style={{ width: `${evaluation.confidence * 100}%` }}
/>
</div>
<span className="text-sm font-medium">
{(evaluation.confidence * 100).toFixed(0)}%
</span>
</div>
</div>
<div>
<p className="text-sm font-medium mb-2">Value Factors</p>
<div className="space-y-1 text-sm">
<div className="flex justify-between">
<span>Condition Impact</span>
<span>{(evaluation.factors.condition_impact * 100).toFixed(0)}%</span>
</div>
<div className="flex justify-between">
<span>Age Impact</span>
<span>{(evaluation.factors.age_impact * 100).toFixed(0)}%</span>
</div>
<div className="flex justify-between">
<span>Specs Impact</span>
<span>{(evaluation.factors.specs_impact * 100).toFixed(0)}%</span>
</div>
</div>
</div>
</div>
)}
</CardContent>
</Card>
);
}
Reward Claim Component (client/src/components/RewardClaimer.tsx):
import { trpc } from "@/lib/trpc";
import { Button } from "@/components/ui/button";
import { toast } from "sonner";
import { Coins, ExternalLink } from "lucide-react";
export function RewardClaimer({
evaluationId,
walletAddress
}: {
evaluationId: number;
walletAddress: string;
}) {
const claimMutation = trpc.rewards.claimReward.useMutation({
onSuccess: (data) => {
toast.success(`Successfully claimed ${data.amount} ECO tokens!`, {
description: "Check your wallet for the tokens",
action: {
label: "View Transaction",
onClick: () => window.open(
`https://explorer.amadeus.bot/tx/${data.transactionHash}`,
"_blank"
)
}
});
},
onError: (error) => {
toast.error("Failed to claim reward", {
description: error.message
});
}
});
const handleClaim = () => {
claimMutation.mutate({
evaluationId,
walletAddress
});
};
return (
<Button
onClick={handleClaim}
disabled={claimMutation.isLoading}
className="gap-2"
>
<Coins className="h-4 w-4" />
{claimMutation.isLoading ? "Claiming..." : "Claim ECO Rewards"}
</Button>
);
}
Self-learning and Continuous Improvement
One of the key objectives of Recycle Guru is to create an agent that continuously improves through collected data. Here's how to implement this self-learning system.
Continuous Learning Cycle
┌──────────────────────────────────────────────────┐
│ 1. Data Collection                               │
│    • Device evaluations                          │
│    • User feedback                               │
│    • Actual sale prices (Storex.io)              │
└────────────────────────┬─────────────────────────┘
                         ▼
┌──────────────────────────────────────────────────┐
│ 2. Aggregation and Validation                    │
│    • Data cleaning                               │
│    • Anomaly detection                           │
│    • Hash calculation and IPFS upload            │
└────────────────────────┬─────────────────────────┘
                         ▼
┌──────────────────────────────────────────────────┐
│ 3. On-chain Recording                            │
│    • Store hash on Amadeus                       │
│    • Link to IPFS                                │
│    • Metadata (count, timestamp)                 │
└────────────────────────┬─────────────────────────┘
                         ▼
┌──────────────────────────────────────────────────┐
│ 4. Periodic Retraining                           │
│    • Retrieve all datasets                       │
│    • Train new model                             │
│    • Evaluate performance                        │
└────────────────────────┬─────────────────────────┘
                         ▼
┌──────────────────────────────────────────────────┐
│ 5. New Model Deployment                          │
│    • Upload weights to IPFS                      │
│    • Record version on-chain                     │
│    • Update MCP server                           │
└────────────────────────┬─────────────────────────┘
                         ▼
                (Back to step 1)
Data to Store On-chain
To enable self-learning, here are the key data points to record on the Amadeus blockchain:
| Data | Description | Frequency | Utility |
|---|---|---|---|
| Dataset hash | Cryptographic fingerprint of training dataset | Daily | Integrity verification |
| Evaluation count | Counter of evaluations in dataset | Daily | Growth metrics |
| Model metrics | MAE, RMSE, R² of current model | Weekly | Performance tracking |
| Model version | Version number and weights hash | Weekly | Versioning and rollback |
| User feedback | Aggregated satisfaction scores | Daily | Perceived quality |
| Actual sale prices | Final selling prices recorded on Storex.io | Real-time | Ground truth for validation |
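The daily on-chain entries in the table above can be captured as a small record type. A sketch of building one, where the field names are illustrative and not the actual `TrainingDataRegistry` schema:

```typescript
import { createHash } from "node:crypto";

// Illustrative shape of a daily on-chain record (field names are
// assumptions, not the real contract schema).
interface DatasetRecord {
  datasetHash: string;    // SHA-256 fingerprint of the serialized dataset
  ipfsUrl: string;        // pointer to the full dataset on IPFS
  evaluationCount: number;
  createdAt: string;      // ISO timestamp
}

// Hash the serialized dataset so the same bytes always yield the same
// fingerprint, which is what makes on-chain integrity checks possible.
function buildDatasetRecord(
  datasetJson: string,
  ipfsUrl: string,
  evaluationCount: number
): DatasetRecord {
  const datasetHash = createHash("sha256").update(datasetJson).digest("hex");
  return { datasetHash, ipfsUrl, evaluationCount, createdAt: new Date().toISOString() };
}
```

Anyone holding the IPFS copy can recompute the hash and compare it with the on-chain value to detect tampering.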
Self-learning Script
Create a script that runs periodically to retrain the model:
Script (scripts/auto-learning.ts):
import * as crypto from "node:crypto";
import * as tf from "@tensorflow/tfjs-node";
import { RecycleGuruMCPHost } from "../server/mcp-host";
import { TrainingDataRegistry } from "../contracts/TrainingDataRegistry";
// Helpers such as db, uploadToIPFS, recordOnChain, fetchAllDatasetsFromChain,
// contract, prepareTrainingData, loadCurrentModel, and deployModelToMCPServer
// are project utilities defined elsewhere in the codebase.
async function autoLearningCycle() {
console.log("Starting auto-learning cycle...");
// 1. Retrieve new data from database
const newEvaluations = await db.evaluations.findUnprocessed();
console.log(`Found ${newEvaluations.length} new evaluations`);
if (newEvaluations.length < 100) {
console.log("Not enough data for retraining, skipping...");
return;
}
// 2. Aggregate and upload to IPFS
const dataset = {
version: "1.0",
records: newEvaluations.map(e => ({
device: {
reference: e.deviceReference,
type: e.deviceType,
condition: e.condition,
manufacturer: e.manufacturer,
model: e.model
},
estimated_value: e.estimatedValueUsd,
actual_value: e.actualValue,
feedback: e.userFeedback,
timestamp: e.timestamp
})),
count: newEvaluations.length,
created_at: new Date().toISOString()
};
const datasetJson = JSON.stringify(dataset);
const datasetHash = crypto
.createHash('sha256')
.update(datasetJson)
.digest('hex');
const ipfsUrl = await uploadToIPFS(datasetJson);
console.log(`Dataset uploaded to IPFS: ${ipfsUrl}`);
// 3. Record on-chain
await recordOnChain(datasetHash, ipfsUrl, newEvaluations.length);
console.log(`Dataset recorded on-chain with hash: ${datasetHash}`);
// 4. Retrieve all historical datasets
const allDatasets = await fetchAllDatasetsFromChain();
const allRecords = allDatasets.flatMap(d => d.records);
console.log(`Total training records: ${allRecords.length}`);
// 5. Prepare training data
const { X_train, y_train, X_test, y_test } = prepareTrainingData(allRecords);
// 6. Load current model
const currentModel = await loadCurrentModel();
// 7. Retrain model
console.log("Retraining model...");
await currentModel.fit(X_train, y_train, {
epochs: 50,
batchSize: 32,
validationData: [X_test, y_test],
callbacks: {
onEpochEnd: (epoch, logs) => {
console.log(`Epoch ${epoch + 1}: loss = ${logs.loss.toFixed(4)}, val_loss = ${logs.val_loss.toFixed(4)}`);
}
}
});
// 8. Evaluate performance (evaluate() returns [loss, metric] tensors
// when the model was compiled with a "mae" metric)
const evaluation = currentModel.evaluate(X_test, y_test) as tf.Scalar[];
const [loss, mae] = await Promise.all([
evaluation[0].data(),
evaluation[1].data()
]);
const metrics = {
loss: loss[0],
mae: mae[0],
test_samples: X_test.shape[0],
training_samples: X_train.shape[0]
};
console.log("Model performance:", metrics);
// 9. Save and upload new weights
// Note: model.save() returns save metadata, not the weights themselves,
// so read the serialized weights file back from disk before hashing.
await currentModel.save("file://./temp-model");
const { readFile } = await import("node:fs/promises");
const weightsBuffer = await readFile("./temp-model/weights.bin");
const weightsHash = crypto
.createHash('sha256')
.update(weightsBuffer)
.digest('hex');
const weightsUrl = await uploadToIPFS(weightsBuffer);
console.log(`Model weights uploaded to IPFS: ${weightsUrl}`);
// 10. Record new version on-chain
await contract.updateModelVersion(
weightsHash,
weightsUrl,
allRecords.length,
JSON.stringify(metrics)
);
console.log("New model version registered on-chain");
// 11. Deploy new model to MCP server
await deployModelToMCPServer(currentModel);
console.log("Model deployed to MCP server");
// 12. Mark evaluations as processed
await db.evaluations.markAsProcessed(newEvaluations.map(e => e.id));
console.log("Auto-learning cycle completed successfully!");
}
// Run the cycle every week. Note: a bare setInterval is lost whenever
// the process restarts, so a real deployment should use a persistent scheduler.
setInterval(autoLearningCycle, 7 * 24 * 60 * 60 * 1000);
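A `setInterval` also drifts relative to wall-clock time. One dependency-free alternative is to compute the delay to a fixed weekly slot and chain `setTimeout` calls. A sketch, where the Sunday-02:00 slot is an arbitrary choice:

```typescript
// Milliseconds from `from` until the next occurrence of `dayOfWeek`
// (0 = Sunday) at `hour`:00 local time.
function msUntilNextWeeklyRun(from: Date, dayOfWeek: number, hour: number): number {
  const next = new Date(from);
  next.setHours(hour, 0, 0, 0);
  let daysAhead = (dayOfWeek - from.getDay() + 7) % 7;
  // If the slot today has already passed, wait a full week.
  if (daysAhead === 0 && next <= from) daysAhead = 7;
  next.setDate(next.getDate() + daysAhead);
  return next.getTime() - from.getTime();
}

// Chain timeouts instead of using a fixed interval, so the slot stays
// anchored to wall-clock time even if a cycle runs long.
function scheduleWeekly(task: () => Promise<void>, dayOfWeek = 0, hour = 2) {
  const delay = msUntilNextWeeklyRun(new Date(), dayOfWeek, hour);
  setTimeout(async () => {
    await task().catch(console.error);
    scheduleWeekly(task, dayOfWeek, hour);
  }, delay);
}
```

For production, an external scheduler (cron, a systemd timer, or a managed job queue) survives process restarts and is usually the better choice.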
Performance Metrics to Track
To evaluate continuous model improvement, track these key metrics:
Technical Metrics:
- MAE (Mean Absolute Error): Average absolute error between predictions and actual values
- RMSE (Root Mean Square Error): square root of the mean squared error; penalizes large errors more heavily than MAE
- R² Score: Coefficient of determination (fit quality)
- Confidence calibration: Correlation between predicted confidence and actual accuracy
Business Metrics:
- Acceptance rate: Percentage of users who accept the evaluation
- Average deviation: Average difference between estimated value and actual sale price
- User satisfaction: Average user feedback score
- Conversion rate: Percentage of evaluations leading to actual recycling
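The technical metrics above are straightforward to compute from paired predictions and ground-truth sale prices. A minimal sketch:

```typescript
// Compute MAE, RMSE, and R² for paired predicted/actual values.
function regressionMetrics(predicted: number[], actual: number[]) {
  const n = predicted.length;
  if (n === 0 || n !== actual.length) throw new Error("length mismatch");
  const mean = actual.reduce((s, y) => s + y, 0) / n;
  let absErr = 0, sqErr = 0, ssTot = 0;
  for (let i = 0; i < n; i++) {
    const err = predicted[i] - actual[i];
    absErr += Math.abs(err);           // accumulates |error| for MAE
    sqErr += err * err;                // accumulates squared error for RMSE and R²
    ssTot += (actual[i] - mean) ** 2;  // total variance of the ground truth
  }
  return {
    mae: absErr / n,
    rmse: Math.sqrt(sqErr / n),
    r2: ssTot === 0 ? 0 : 1 - sqErr / ssTot,
  };
}
```

Running this over each new model version's test set produces exactly the values the on-chain `metrics` field and the dashboard below expect.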
Monitoring Dashboard
Create a dashboard to visualize model evolution:
// client/src/pages/ModelDashboard.tsx
import { trpc } from "@/lib/trpc";
import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card";
import { LineChart, Line, XAxis, YAxis, CartesianGrid, Tooltip, Legend } from "recharts";
export function ModelDashboard() {
const { data: stats } = trpc.ai.getModelStats.useQuery();
const { data: history } = trpc.ai.getModelHistory.useQuery();
return (
<div className="space-y-6">
<div className="grid grid-cols-1 md:grid-cols-3 gap-4">
<Card>
<CardHeader>
<CardTitle>Current Version</CardTitle>
</CardHeader>
<CardContent>
<p className="text-3xl font-bold">{stats?.version}</p>
<p className="text-sm text-muted-foreground">
Updated {stats?.lastUpdated.toLocaleDateString()}
</p>
</CardContent>
</Card>
<Card>
<CardHeader>
<CardTitle>Training Records</CardTitle>
</CardHeader>
<CardContent>
<p className="text-3xl font-bold">
{stats?.trainingRecords.toLocaleString()}
</p>
<p className="text-sm text-muted-foreground">
Total evaluations used
</p>
</CardContent>
</Card>
<Card>
<CardHeader>
<CardTitle>MAE</CardTitle>
</CardHeader>
<CardContent>
<p className="text-3xl font-bold">
${stats?.performance.mae.toFixed(2)}
</p>
<p className="text-sm text-muted-foreground">
Mean Absolute Error
</p>
</CardContent>
</Card>
</div>
<Card>
<CardHeader>
<CardTitle>Model Performance Over Time</CardTitle>
</CardHeader>
<CardContent>
<LineChart width={800} height={400} data={history}>
<CartesianGrid strokeDasharray="3 3" />
<XAxis dataKey="version" />
<YAxis />
<Tooltip />
<Legend />
<Line
type="monotone"
dataKey="mae"
stroke="#10b981"
name="MAE (USD)"
/>
<Line
type="monotone"
dataKey="rmse"
stroke="#3b82f6"
name="RMSE (USD)"
/>
</LineChart>
</CardContent>
</Card>
</div>
);
}
Practical Implementation
Here's a step-by-step guide to implementing the MCP agent in Recycle Guru.
Step 1: Install Dependencies
cd /home/ubuntu/recycle_guru
# Backend dependencies
pnpm add @modelcontextprotocol/sdk @tensorflow/tfjs-node ipfs-http-client
# Python dependencies for valuation MCP server
cd valuation-server
pip install mcp tensorflow pandas numpy scikit-learn
Step 2: Deploy TrainingDataRegistry Contract
# Compile contract
npx hardhat compile
# Deploy on Amadeus testnet
npx hardhat run scripts/deploy-training-registry.ts --network amadeus-testnet
Step 3: Configure Valuation MCP Server
cd valuation-server
python main.py # Test in stdio mode
# Verify server responds
echo '{"jsonrpc":"2.0","method":"tools/list","id":1}' | python main.py
Step 4: Integrate MCP Host in Backend
Add MCP Host code in server/mcp-host.ts and initialize it on server startup:
// server/_core/index.ts
import { RecycleGuruMCPHost } from "../mcp-host";
// Initialize MCP Host on startup
const mcpHost = new RecycleGuruMCPHost();
await mcpHost.initialize();
// Make globally available
global.mcpHost = mcpHost;
Step 5: Test Integration
Create a test to verify everything works:
// server/mcp-integration.test.ts
import { describe, it, expect } from "vitest";
import { RecycleGuruMCPHost } from "./mcp-host";
describe("MCP Integration", () => {
it("should evaluate a device", async () => {
const mcpHost = new RecycleGuruMCPHost();
await mcpHost.initialize();
const result = await mcpHost.evaluateDevice({
reference: "IPHONE-13-PRO-128GB",
type: "smartphone",
manufacturer: "Apple",
model: "iPhone 13 Pro",
condition: "good"
});
expect(result.estimated_value_usd).toBeGreaterThan(0);
expect(result.estimated_value_eco).toBeGreaterThan(0);
expect(result.confidence).toBeGreaterThan(0);
expect(result.confidence).toBeLessThanOrEqual(1);
});
it("should claim rewards on Amadeus blockchain", async () => {
const mcpHost = new RecycleGuruMCPHost();
await mcpHost.initialize();
const result = await mcpHost.claimReward(
"0x1234567890abcdef1234567890abcdef12345678",
100 // 100 ECO tokens
);
expect(result.success).toBe(true);
expect(result.transactionHash).toBeDefined();
});
});
Step 6: Deploy to Production
Once tests pass, deploy the application:
# Build frontend and backend
pnpm build
# Start server
pnpm start
# Start valuation MCP server in background
cd valuation-server
nohup python main.py &
Security and Best Practices
Private Key Security
The private key used to sign reward transactions must be stored securely:
// Use environment variables
const SYSTEM_PRIVATE_KEY = process.env.AMADEUS_SYSTEM_SK;
// Or use a secrets management service (AWS Secrets Manager, HashiCorp Vault)
import { SecretsManager } from "@aws-sdk/client-secrets-manager";
async function getPrivateKey() {
const client = new SecretsManager({ region: "us-east-1" });
const response = await client.getSecretValue({
SecretId: "recycle-guru/amadeus-private-key"
});
return response.SecretString;
}
Input Validation
Always validate user data before passing it to the AI model:
import { z } from "zod";
const DeviceDataSchema = z.object({
reference: z.string().min(1).max(100),
type: z.enum(["smartphone", "laptop", "tablet", "desktop"]),
manufacturer: z.string().max(50).optional(),
model: z.string().max(100).optional(),
condition: z.enum(["excellent", "good", "fair", "poor"])
});
// Validate before calling model
const validatedData = DeviceDataSchema.parse(userInput);
Rate Limiting
Implement rate limiting to prevent abuse:
import rateLimit from "express-rate-limit";
const evaluationLimiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 10, // 10 evaluations max per IP
message: "Too many evaluation requests, please try again later"
});
app.use("/api/trpc/ai.evaluateDevice", evaluationLimiter);
Monitoring and Alerts
Configure alerts to detect anomalies:
// Detect suspicious evaluations
async function detectAnomalies(evaluation: Evaluation) {
// Value too high
if (evaluation.estimated_value_usd > 5000) {
await notifyOwner({
title: "Suspicious Evaluation Detected",
content: `High value evaluation: $${evaluation.estimated_value_usd} for ${evaluation.deviceReference}`
});
}
// Confidence too low
if (evaluation.confidence < 0.3) {
console.warn("Low confidence evaluation:", evaluation);
}
}
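Fixed thresholds like the $5,000 cutoff above miss device categories with very different price ranges. A rolling statistical check can flag values far from recent history instead; a sketch using a simple z-score, where the 3σ cutoff and the 10-sample minimum are arbitrary choices:

```typescript
// Flag a new estimate as anomalous if it lies more than `zLimit`
// standard deviations from the mean of recent estimates for the
// same device category.
function isAnomalous(recent: number[], value: number, zLimit = 3): boolean {
  if (recent.length < 10) return false; // too little history to judge
  const n = recent.length;
  const mean = recent.reduce((s, v) => s + v, 0) / n;
  const variance = recent.reduce((s, v) => s + (v - mean) ** 2, 0) / n;
  const std = Math.sqrt(variance);
  if (std === 0) return value !== mean; // constant history: any deviation is suspect
  return Math.abs(value - mean) / std > zLimit;
}
```

A positive result would feed the same `notifyOwner` path as the fixed-threshold check, but adapts automatically as legitimate prices drift.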
Backup and Recovery
Regularly backup critical data:
# Daily backup script
#!/bin/bash
DATE=$(date +%Y-%m-%d)
pg_dump $DATABASE_URL > backups/db-$DATE.sql
aws s3 cp backups/db-$DATE.sql s3://recycle-guru-backups/
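A dump is only useful if it can be restored; at a minimum, checksum each dump before upload so its integrity can be verified later. A sketch that could slot into the backup script above, before the `aws s3 cp` step:

```shell
#!/bin/bash
# checksum_and_verify DUMPFILE: write a .sha256 file next to the dump
# and immediately verify it; returns non-zero if verification fails,
# which should abort the upload.
checksum_and_verify() {
  local dump="$1"
  sha256sum "$dump" > "$dump.sha256"
  sha256sum -c --quiet "$dump.sha256"
}
```

In the script above this would be invoked as `checksum_and_verify "backups/db-$DATE.sql"`, with the `.sha256` file uploaded alongside the dump.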
References
[1] Model Context Protocol - Architecture Overview
[2] Amadeus AIChain GitHub Repository
[3] Amadeus MCP Server Documentation
[4] How Blockchain Secures AI Training Data
[5] Decentralized AI: Training Models on Blockchain
[7] TensorFlow.js Node Documentation
[8] IPFS HTTP Client Documentation
[9] Recycle Guru: An Autonomous Agent for a Circular Economy
Author: Manus AI
Date: December 22, 2025
Version: 1.0