# Model Registry

The ModelRegistry is the central contract for managing AI models and user inference requests with automatic task creation. It handles model registration, pricing, user requests, and result management in the Hyra Network.
## Overview

The ModelRegistry contract serves as the entry point for the entire Hyra AI Network, managing:

- AI Model Registration: Register and manage AI models with pricing
- User Inference Requests: Handle user requests for AI inference
- Automatic Task Creation: Create AI tasks automatically when users request inference
- Result Management: Store and manage AI inference results
- Payment Processing: Handle HYRA token payments for inference
## Key Features

### Model Management

- Owner-only Registration: Only the contract owner can register new models
- Model Pricing: Support for both fixed and flexible pricing models
- Model Activation: Enable or disable models as needed
- Model Metadata: Store model names, descriptions, and pricing information
### Pricing Models

#### Fixed Pricing (Type 0)

- Token Count: Always 1
- Total Cost: `1 × token_price`
- Suitable for: Image classification, object detection, simple tasks

#### Flexible Pricing (Type 1)

- Token Count: Calculated from prompt bytes (4 bytes = 1 token)
- Total Cost: `token_count × token_price`
- Suitable for: LLM, text generation, complex tasks
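The two pricing rules can be mirrored off-chain, for example to show users a price preview. A minimal sketch — the ceiling rounding follows the worked example further down this page, and the contract's `calculateInferenceCost` remains the source of truth:

```typescript
// Off-chain mirror of the pricing rules (sketch, not the contract code).
// Assumption: byte counts round up to the next whole token.
const BYTES_PER_TOKEN = 4;

function estimateCost(
  pricingType: 0 | 1,    // 0 = FIXED, 1 = FLEXIBLE
  tokenPriceWei: bigint, // price per token in HYRA wei
  prompt: string
): { tokenCount: bigint; totalCost: bigint } {
  const promptBytes = new TextEncoder().encode(prompt).length;
  const tokenCount =
    pricingType === 0 ? 1n : BigInt(Math.ceil(promptBytes / BYTES_PER_TOKEN));
  return { tokenCount, totalCost: tokenCount * tokenPriceWei };
}
```

Note that prompt length is measured in UTF-8 bytes, not characters, so multi-byte characters count more than once.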
### Automatic Task Creation

- Seamless Integration: Automatically creates AI tasks when users request inference
- Task Distribution: Integrates with TaskRouter and TaskPool for task distribution
- Result Updates: Automatically updates results when tasks are completed
## Contract Structure

### State Variables

```solidity
// Model management
mapping(string => Model) public models;
string[] public modelIds;

// Inference requests
mapping(uint256 => InferenceRequest) public inferenceRequests;
uint256 public requestCounter;

// System integration
address public taskRouter;
mapping(address => bool) public verifiers;

// Constants
uint256 public constant BYTES_PER_TOKEN = 4;
```

### Data Structures

#### Model Struct

```solidity
struct Model {
    string name;                        // Model name
    string description;                 // Model description
    ModelPricingType modelPricingType;  // 0 = FIXED, 1 = FLEXIBLE
    bool isActive;                      // Model status
    uint256 createdAt;                  // Creation timestamp
    uint256 tokenPrice;                 // Price per token in HYRA (wei)
}
```

#### InferenceRequest Struct

```solidity
struct InferenceRequest {
    address user;        // User who requested inference
    string modelId;      // Model identifier
    string prompt;       // User input prompt
    uint256 tokenCount;  // Calculated token count
    uint256 totalCost;   // Total cost in HYRA
    uint256 timestamp;   // Request timestamp
    string resultData;   // AI result data
    bool hasResult;      // Result availability
}
```

## Core Functions
### Model Registration

#### Register Model

```solidity
function registerModel(
    string memory modelId,
    string memory name,
    string memory description,
    ModelPricingType modelPricingType,  // 0 = FIXED, 1 = FLEXIBLE
    uint256 tokenPrice
) external onlyOwner
```

Parameters:

- `modelId`: Unique identifier for the model
- `name`: Human-readable model name
- `description`: Model description
- `modelPricingType`: Pricing type (FIXED or FLEXIBLE)
- `tokenPrice`: Price per token in HYRA (wei)
Example:

```typescript
// Register a fixed-price model
await modelRegistry.registerModel(
  "image-classifier-v1",
  "Image Classification Model",
  "Classifies images into categories",
  0,                        // FIXED pricing
  ethers.parseEther("0.1")  // 0.1 HYRA per inference
);

// Register a flexible-pricing model
await modelRegistry.registerModel(
  "llm-chat-v1",
  "LLM Chat Model",
  "Large language model for chat",
  1,                          // FLEXIBLE pricing
  ethers.parseEther("0.001")  // 0.001 HYRA per token
);
```

#### Update Model

```solidity
function updateModel(
    string memory modelId,
    bool isActive
) external onlyOwner
```
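For example, a model can be taken offline for maintenance and re-enabled later. A sketch — the `RegistryLike` type is illustrative only, standing in for an ethers.js Contract connected to the owner's signer:

```typescript
// `RegistryLike` is illustrative only; in practice this would be an
// ethers.js Contract connected to the owner's signer.
type RegistryLike = {
  updateModel(modelId: string, isActive: boolean): Promise<void>;
};

// Disable or re-enable a model (owner-only on-chain).
async function setModelActive(
  registry: RegistryLike,
  modelId: string,
  isActive: boolean
): Promise<void> {
  await registry.updateModel(modelId, isActive);
}
```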
### User Inference Requests

#### Request Inference

```solidity
function requestInference(
    string memory modelId,
    string memory prompt
) external payable returns (uint256 requestId)
```

Parameters:

- `modelId`: Model to use for inference
- `prompt`: User input prompt

Returns:

- `requestId`: Unique identifier for the request
Example:

```typescript
// Calculate cost before requesting
const prompt = "What is the capital of France?";
const [tokenCount, totalCost] = await modelRegistry.calculateInferenceCost(
  "llm-chat-v1",
  prompt
);

console.log(`Token count: ${tokenCount}`);
console.log(`Total cost: ${ethers.formatEther(totalCost)} HYRA`);

// Request inference (automatically creates an AI task)
const tx = await modelRegistry.requestInference("llm-chat-v1", prompt, {
  value: totalCost,
});
const receipt = await tx.wait();

// Get the request ID from the InferenceRequested event
const parsed = (receipt?.logs ?? [])
  .map((log) => {
    try { return modelRegistry.interface.parseLog(log); } catch { return null; }
  })
  .find((event) => event?.name === "InferenceRequested");
const requestId = parsed?.args.requestId;
console.log(`Request ID: ${requestId}`);
```

#### Calculate Inference Cost
```solidity
function calculateInferenceCost(
    string memory modelId,
    string memory prompt
) external view returns (uint256 tokenCount, uint256 totalCost)
```

Returns:

- `tokenCount`: Number of tokens in the prompt
- `totalCost`: Total cost in HYRA tokens
### Result Management

#### Update Inference Result

```solidity
function updateInferenceResult(
    uint256 requestId,
    string calldata resultData
) external onlyVerifier
```

Parameters:

- `requestId`: Request identifier
- `resultData`: AI inference result

Access Control: Only verifiers can update results
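The result lifecycle — exactly one result per request, with duplicates rejected — can be illustrated with an in-memory stand-in whose names mirror the Solidity API and the custom errors listed later on this page. This is an illustration, not the real contract code:

```typescript
// In-memory stand-in for the contract's result bookkeeping (illustration only).
type StoredRequest = { resultData: string; hasResult: boolean };

class FakeModelRegistry {
  private requests = new Map<number, StoredRequest>([
    [1, { resultData: "", hasResult: false }], // one pre-seeded request
  ]);

  // Mirrors updateInferenceResult(requestId, resultData)
  updateInferenceResult(requestId: number, resultData: string): void {
    const req = this.requests.get(requestId);
    if (!req) throw new Error("RequestNotFound");
    if (req.hasResult) throw new Error("ResultAlreadySubmitted");
    req.resultData = resultData;
    req.hasResult = true;
  }

  // Mirrors getInferenceRequest(requestId)
  getInferenceRequest(requestId: number): StoredRequest {
    const req = this.requests.get(requestId);
    if (!req) throw new Error("RequestNotFound");
    return req;
  }
}
```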
#### Get Inference Request

```solidity
function getInferenceRequest(
    uint256 requestId
) external view returns (InferenceRequest memory)
```

## Token Calculation
### Fixed Pricing Models

- Token Count: Always 1
- Calculation: `totalCost = 1 × tokenPrice`
### Flexible Pricing Models

- Token Count: `prompt.length / BYTES_PER_TOKEN`
- Calculation: `totalCost = tokenCount × tokenPrice`
- Bytes per Token: 4 bytes = 1 token
Example:

```typescript
// Fixed pricing model
const fixedModel = await modelRegistry.models("image-classifier-v1");
// tokenCount = 1
// totalCost = 1 × 0.1 HYRA = 0.1 HYRA

// Flexible pricing model
const prompt = "Hello, how are you?"; // 19 characters = 19 bytes
const tokenCount = Math.ceil(19 / 4); // 5 tokens
const totalCost = tokenCount * 0.001; // 0.005 HYRA
```

## Events
### Model Events

```solidity
event ModelRegistered(
    string indexed modelId,
    ModelPricingType modelPricingType,
    uint256 tokenPrice
);

event ModelUpdated(string indexed modelId, bool isActive);
```

### Inference Events

```solidity
event InferenceRequested(
    uint256 indexed requestId,
    address indexed user,
    string indexed modelId,
    uint256 tokenCount,
    uint256 totalCost
);

event InferenceCompleted(uint256 indexed requestId, string resultData);
```

### System Events

```solidity
event VerifierAdded(address indexed verifier);
event VerifierRemoved(address indexed verifier);
event TaskRouterUpdated(address indexed oldRouter, address indexed newRouter);
```

## Access Control
### Owner Functions

- `registerModel()`: Register new AI models
- `updateModel()`: Update model status
- `addVerifier()`: Add verifiers
- `removeVerifier()`: Remove verifiers
- `setTaskRouter()`: Set TaskRouter address
- `withdrawTokens()`: Withdraw collected HYRA tokens

### Verifier Functions

- `updateInferenceResult()`: Update inference results
- `updateInferenceResultBatch()`: Update multiple results

### Public Functions

- `requestInference()`: Request AI inference
- `calculateInferenceCost()`: Calculate inference cost
- `getInferenceRequest()`: Get request details
## Integration with Other Contracts

### TaskRouter Integration

```typescript
// Set the TaskRouter address
await modelRegistry.setTaskRouter(taskRouterAddress);

// TaskRouter can then create tasks automatically on inference requests
await modelRegistry.requestInference(modelId, prompt);
```

### TaskPool Integration

```typescript
// ModelRegistry creates tasks in TaskPools;
// TaskPools update results in ModelRegistry
await modelRegistry.updateInferenceResult(requestId, resultData);
```

## Error Handling
### Custom Errors

```solidity
error ModelNotFound();           // Model doesn't exist
error ModelInactive();           // Model is not active
error InsufficientPayment();     // Payment too low
error InvalidInput();            // Invalid parameters
error Unauthorized();            // Access denied
error RequestNotFound();         // Request doesn't exist
error ResultAlreadySubmitted();  // Result already exists
```

### Input Validation
Section titled “Input Validation”- Model existence checks
- Payment amount validation
- Parameter validation
- Access control checks
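The same checks can be mirrored client-side so that most failures surface before any gas is spent. A sketch — the contract remains the source of truth and reverts with the custom errors above:

```typescript
// Client-side pre-flight mirror of the contract's validation (sketch).
interface ModelView {
  exists: boolean;
  isActive: boolean;
}

// Returns the name of the custom error the contract would likely revert
// with, or null if the request looks valid.
function preflight(
  model: ModelView,
  paymentWei: bigint,
  requiredWei: bigint
): string | null {
  if (!model.exists) return "ModelNotFound";
  if (!model.isActive) return "ModelInactive";
  if (paymentWei < requiredWei) return "InsufficientPayment";
  return null;
}
```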
## Gas Optimization

### Efficient Storage

- Packed structs for gas efficiency
- Minimal storage operations
- Optimized data types

### Batch Operations

```solidity
function updateInferenceResultBatch(
    uint256[] calldata requestIds,
    string[] calldata resultDataArray
) external onlyVerifier
```
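Since the two calldata arrays must stay index-aligned, an off-chain helper can build them together. A sketch — `prepareBatch` is a hypothetical helper, not part of the contract:

```typescript
// Build the parallel arrays expected by updateInferenceResultBatch
// from a requestId → result map (hypothetical off-chain helper).
function prepareBatch(results: Map<number, string>): {
  requestIds: number[];
  resultDataArray: string[];
} {
  const requestIds = [...results.keys()];
  const resultDataArray = requestIds.map((id) => results.get(id)!);
  return { requestIds, resultDataArray };
}
```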
### Custom Errors

- Use custom errors instead of string messages
- Reduced gas costs for error handling
## Usage Examples

### Complete Workflow

```typescript
// 1. Register a model
await modelRegistry.registerModel(
  "llm-chat-v1",
  "LLM Chat Model",
  "Large language model for chat",
  1, // FLEXIBLE pricing
  ethers.parseEther("0.001")
);

// 2. Calculate cost
const prompt = "What is the capital of France?";
const [tokenCount, totalCost] = await modelRegistry.calculateInferenceCost(
  "llm-chat-v1",
  prompt
);

// 3. Request inference
const tx = await modelRegistry.requestInference("llm-chat-v1", prompt, {
  value: totalCost,
});

// 4. Get the request ID from the InferenceRequested event
const receipt = await tx.wait();
const parsed = (receipt?.logs ?? [])
  .map((log) => {
    try { return modelRegistry.interface.parseLog(log); } catch { return null; }
  })
  .find((event) => event?.name === "InferenceRequested");
const requestId = parsed?.args.requestId;

// 5. Check the result (after an AI worker processes the task)
const request = await modelRegistry.getInferenceRequest(requestId);
console.log(`Result: ${request.resultData}`);
console.log(`Has result: ${request.hasResult}`);
```

## Security Considerations
### Access Control
Section titled “Access Control”- Owner-only model registration
- Verifier-only result updates
- Proper authorization checks
### Reentrancy Protection

- All external functions use ReentrancyGuard
- Safe external calls
### Input Validation

- Comprehensive parameter validation
- Model existence checks
- Payment validation
## Next Steps

Ready to learn about other smart contracts? Check out:
- Task Pool - Learn about task management
- Task Router - Explore task routing
- Task Pool Factory - Learn about pool creation