
Model Registry

The ModelRegistry is the central contract for managing AI models and user inference requests, with AI tasks created automatically for each request. It handles model registration, pricing, user requests, and result management in the Hyra Network.

The ModelRegistry contract serves as the entry point for the entire Hyra AI Network, managing:

  • AI Model Registration: Register and manage AI models with pricing
  • User Inference Requests: Handle user requests for AI inference
  • Automatic Task Creation: Create AI tasks automatically when users request inference
  • Result Management: Store and manage AI inference results
  • Payment Processing: Handle HYRA token payments for inference

Model Management

  • Owner-only Registration: Only the contract owner can register new models
  • Model Pricing: Support for both fixed and flexible pricing models
  • Model Activation: Enable or disable models as needed
  • Model Metadata: Store model names, descriptions, and pricing information

Fixed Pricing (FIXED)

  • Token Count: Always 1
  • Total Cost: 1 × token_price
  • Suitable for: Image classification, object detection, simple tasks

Flexible Pricing (FLEXIBLE)

  • Token Count: Calculated from the prompt's byte length (4 bytes = 1 token)
  • Total Cost: token_count × token_price
  • Suitable for: LLMs, text generation, complex tasks

Task Integration

  • Seamless Integration: Automatically creates AI tasks when users request inference
  • Task Distribution: Integrates with TaskRouter and TaskPool for task distribution
  • Result Updates: Automatically updates results when tasks are completed

Contract State

// Model management
mapping(string => Model) public models;
string[] public modelIds;

// Inference requests
mapping(uint256 => InferenceRequest) public inferenceRequests;
uint256 public requestCounter;

// System integration
address public taskRouter;
mapping(address => bool) public verifiers;

// Constants
uint256 public constant BYTES_PER_TOKEN = 4;
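
Because these variables are public, Solidity auto-generates a getter for each of them. A minimal sketch of inspecting registry state from a client, assuming an ethers v6 contract instance already connected to the deployed ModelRegistry (someAddress is a hypothetical account):

// Read public state through the auto-generated getters (sketch)
const totalRequests = await modelRegistry.requestCounter();
const routerAddress = await modelRegistry.taskRouter();
const isVerifier = await modelRegistry.verifiers(someAddress); // hypothetical address
const firstModelId = await modelRegistry.modelIds(0); // array getter takes an index
const model = await modelRegistry.models(firstModelId); // struct fields by name
console.log(model.name, model.isActive, ethers.formatEther(model.tokenPrice));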

Data Structures

struct Model {
    string name;                       // Model name
    string description;                // Model description
    ModelPricingType modelPricingType; // 0 = FIXED, 1 = FLEXIBLE
    bool isActive;                     // Model status
    uint256 createdAt;                 // Creation timestamp
    uint256 tokenPrice;                // Price per token in HYRA (wei)
}

struct InferenceRequest {
    address user;       // User who requested inference
    string modelId;     // Model identifier
    string prompt;      // User input prompt
    uint256 tokenCount; // Calculated token count
    uint256 totalCost;  // Total cost in HYRA
    uint256 timestamp;  // Request timestamp
    string resultData;  // AI result data
    bool hasResult;     // Result availability
}

registerModel

function registerModel(
    string memory modelId,
    string memory name,
    string memory description,
    ModelPricingType modelPricingType, // 0 = FIXED, 1 = FLEXIBLE
    uint256 tokenPrice
) external onlyOwner

Parameters:

  • modelId: Unique identifier for the model
  • name: Human-readable model name
  • description: Model description
  • modelPricingType: Pricing type (FIXED or FLEXIBLE)
  • tokenPrice: Price per token in HYRA (wei)

Example:

// Register a fixed-price model
await modelRegistry.registerModel(
    "image-classifier-v1",
    "Image Classification Model",
    "Classifies images into categories",
    0, // FIXED pricing
    ethers.parseEther("0.1") // 0.1 HYRA per inference
);

// Register a flexible-pricing model
await modelRegistry.registerModel(
    "llm-chat-v1",
    "LLM Chat Model",
    "Large language model for chat",
    1, // FLEXIBLE pricing
    ethers.parseEther("0.001") // 0.001 HYRA per token
);

updateModel

function updateModel(
    string memory modelId,
    bool isActive
) external onlyOwner
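
For example, the owner can take a model offline for maintenance and re-enable it later; a minimal sketch, assuming the connected signer is the contract owner:

// Deactivate a model (owner only); inference requests for it should
// then revert with ModelInactive
await modelRegistry.updateModel("image-classifier-v1", false);

// Re-enable the model later
await modelRegistry.updateModel("image-classifier-v1", true);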

requestInference

function requestInference(
    string memory modelId,
    string memory prompt
) external payable returns (uint256 requestId)

Parameters:

  • modelId: Model to use for inference
  • prompt: User input prompt

Returns:

  • requestId: Unique identifier for the request

Example:

// Calculate cost before requesting
const prompt = "What is the capital of France?";
const [tokenCount, totalCost] = await modelRegistry.calculateInferenceCost(
    "llm-chat-v1",
    prompt
);
console.log(`Token count: ${tokenCount}`);
console.log(`Total cost: ${ethers.formatEther(totalCost)} HYRA`);

// Request inference (automatically creates an AI task)
const tx = await modelRegistry.requestInference(
    "llm-chat-v1",
    prompt,
    { value: totalCost }
);
const receipt = await tx.wait();

// Get the request ID from the InferenceRequested event; don't assume it
// is the first log entry, decode the logs and pick the right event
const parsedEvent = receipt!.logs
    .map((log) => modelRegistry.interface.parseLog(log))
    .find((event) => event?.name === "InferenceRequested");
const requestId = parsedEvent?.args.requestId;
console.log(`Request ID: ${requestId}`);

calculateInferenceCost

function calculateInferenceCost(
    string memory modelId,
    string memory prompt
) external view returns (uint256 tokenCount, uint256 totalCost)

Returns:

  • tokenCount: Number of tokens in the prompt
  • totalCost: Total cost in HYRA tokens
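
A short sketch comparing the two pricing types, using the models registered earlier (the image URL prompt is hypothetical):

// Fixed-price model: token count is always 1, whatever the prompt
const [fixedTokens, fixedCost] = await modelRegistry.calculateInferenceCost(
    "image-classifier-v1",
    "https://example.com/cat.jpg" // hypothetical prompt payload
);
// fixedTokens = 1, fixedCost = tokenPrice

// Flexible model: token count scales with the prompt's byte length
const [flexTokens, flexCost] = await modelRegistry.calculateInferenceCost(
    "llm-chat-v1",
    "Summarize the plot of Hamlet in two sentences."
);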

updateInferenceResult

function updateInferenceResult(
    uint256 requestId,
    string calldata resultData
) external onlyVerifier

Parameters:

  • requestId: Request identifier
  • resultData: AI inference result

Access Control: Only verifiers can update results
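
A minimal sketch of a verifier submitting a result (verifierSigner is assumed to be an account previously authorized via addVerifier, and requestId comes from the earlier example):

// Submit a result as an authorized verifier (sketch)
await modelRegistry
    .connect(verifierSigner)
    .updateInferenceResult(requestId, JSON.stringify({ answer: "Paris" }));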

getInferenceRequest

function getInferenceRequest(
    uint256 requestId
) external view returns (InferenceRequest memory)
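
For example, to inspect a stored request and check whether its result has arrived (using the requestId obtained earlier):

const request = await modelRegistry.getInferenceRequest(requestId);
console.log(`User: ${request.user}`);
console.log(`Prompt: ${request.prompt}`);
console.log(`Cost: ${ethers.formatEther(request.totalCost)} HYRA`);
console.log(`Has result: ${request.hasResult}`);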

Token Calculation

Fixed pricing:

  • Token Count: Always 1
  • Calculation: totalCost = 1 × tokenPrice

Flexible pricing:

  • Token Count: prompt.length / BYTES_PER_TOKEN
  • Calculation: totalCost = tokenCount × tokenPrice
  • Bytes per Token: 4 bytes = 1 token

Example:

// Fixed-pricing model
const fixedModel = await modelRegistry.models("image-classifier-v1");
// tokenCount = 1
// totalCost = 1 × 0.1 HYRA = 0.1 HYRA

// Flexible-pricing model
const prompt = "Hello, how are you?"; // 19 characters = 19 bytes
const tokenCount = Math.ceil(19 / 4); // 5 tokens
const totalCost = tokenCount * 0.001; // 0.005 HYRA

Events

event ModelRegistered(
    string indexed modelId,
    ModelPricingType modelPricingType,
    uint256 tokenPrice
);
event ModelUpdated(string indexed modelId, bool isActive);
event InferenceRequested(
    uint256 indexed requestId,
    address indexed user,
    string indexed modelId,
    uint256 tokenCount,
    uint256 totalCost
);
event InferenceCompleted(uint256 indexed requestId, string resultData);
event VerifierAdded(address indexed verifier);
event VerifierRemoved(address indexed verifier);
event TaskRouterUpdated(address indexed oldRouter, address indexed newRouter);
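
Clients can subscribe to these events to track request lifecycles. A sketch with ethers v6 (note that indexed string parameters such as modelId appear in logs as their keccak256 hash, not the plain string):

// React to completed inferences as they happen (sketch)
modelRegistry.on("InferenceCompleted", (requestId, resultData) => {
    console.log(`Request ${requestId} completed: ${resultData}`);
});

// Or query past completions for a known request ID
const past = await modelRegistry.queryFilter(
    modelRegistry.filters.InferenceCompleted(requestId)
);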

Access Control

Owner functions:

  • registerModel(): Register new AI models
  • updateModel(): Update model status
  • addVerifier(): Add verifiers (see the sketch after this list)
  • removeVerifier(): Remove verifiers
  • setTaskRouter(): Set the TaskRouter address
  • withdrawTokens(): Withdraw collected HYRA tokens

Verifier functions:

  • updateInferenceResult(): Update inference results
  • updateInferenceResultBatch(): Update multiple results

Public functions:

  • requestInference(): Request AI inference
  • calculateInferenceCost(): Calculate inference cost
  • getInferenceRequest(): Get request details
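
A minimal sketch of the owner managing the verifier set (owner signer assumed connected; the addresses are hypothetical):

// Authorize a new verifier and revoke an old one (owner only, sketch)
await modelRegistry.addVerifier(newVerifierAddress);
await modelRegistry.removeVerifier(oldVerifierAddress);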

System Integration

// Owner wires the registry to the TaskRouter
await modelRegistry.setTaskRouter(taskRouterAddress);

// With the router set, requestInference() creates tasks automatically:
// the ModelRegistry creates tasks in TaskPools via the TaskRouter
await modelRegistry.requestInference(modelId, prompt);

// TaskPools update results back in the ModelRegistry
await modelRegistry.updateInferenceResult(requestId, resultData);

Custom Errors

error ModelNotFound();          // Model doesn't exist
error ModelInactive();          // Model is not active
error InsufficientPayment();    // Payment too low
error InvalidInput();           // Invalid parameters
error Unauthorized();           // Access denied
error RequestNotFound();        // Request doesn't exist
error ResultAlreadySubmitted(); // Result already exists

Validation

  • Model existence checks
  • Payment amount validation
  • Parameter validation
  • Access control checks

Gas Optimization

  • Packed structs for gas efficiency
  • Minimal storage operations
  • Optimized data types

Batch Operations

function updateInferenceResultBatch(
    uint256[] calldata requestIds,
    string[] calldata resultDataArray
) external onlyVerifier
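
A verifier can settle several requests in one transaction; a sketch, assuming the request IDs exist and the two arrays have matching lengths:

// Batch-submit results for several requests in one transaction (sketch)
await modelRegistry
    .connect(verifierSigner)
    .updateInferenceResultBatch(
        [1n, 2n, 3n],                        // request IDs (assumed to exist)
        ["result-1", "result-2", "result-3"] // one result per request ID
    );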

Error Handling

  • Custom errors instead of string messages
  • Reduced gas costs for error handling
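
Custom errors reach clients as ABI-encoded revert data. A sketch of decoding them with ethers v6 (where the revert data sits on the caught error varies by provider):

// Decode a custom error from revert data (sketch)
try {
    await modelRegistry.requestInference("unknown-model", "hi", { value: 0 });
} catch (err: any) {
    const data = err?.data ?? err?.info?.error?.data; // provider-dependent location
    if (data) {
        const parsed = modelRegistry.interface.parseError(data);
        console.log(`Reverted with ${parsed?.name}`); // e.g. ModelNotFound
    }
}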

Complete Usage Example

// 1. Register a model
await modelRegistry.registerModel(
    "llm-chat-v1",
    "LLM Chat Model",
    "Large language model for chat",
    1, // FLEXIBLE pricing
    ethers.parseEther("0.001")
);

// 2. Calculate cost
const prompt = "What is the capital of France?";
const [tokenCount, totalCost] = await modelRegistry.calculateInferenceCost(
    "llm-chat-v1",
    prompt
);

// 3. Request inference
const tx = await modelRegistry.requestInference(
    "llm-chat-v1",
    prompt,
    { value: totalCost }
);

// 4. Get the request ID from the InferenceRequested event
const receipt = await tx.wait();
const parsedEvent = receipt!.logs
    .map((log) => modelRegistry.interface.parseLog(log))
    .find((event) => event?.name === "InferenceRequested");
const requestId = parsedEvent?.args.requestId;

// 5. Check the result (after an AI worker processes the task)
const request = await modelRegistry.getInferenceRequest(requestId);
console.log(`Result: ${request.resultData}`);
console.log(`Has result: ${request.hasResult}`);
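
Results arrive asynchronously once a worker completes the task, so a client typically waits for hasResult to flip. A simple polling sketch (the interval is arbitrary):

// Poll until the result lands (sketch)
async function waitForResult(requestId: bigint): Promise<string> {
    for (;;) {
        const req = await modelRegistry.getInferenceRequest(requestId);
        if (req.hasResult) return req.resultData;
        await new Promise((resolve) => setTimeout(resolve, 5_000)); // 5s between polls
    }
}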

Security Considerations

Access control:

  • Owner-only model registration
  • Verifier-only result updates
  • Proper authorization checks

Reentrancy protection:

  • All external functions use ReentrancyGuard
  • Safe external calls

Input validation:

  • Comprehensive parameter validation
  • Model existence checks
  • Payment validation

Ready to learn about other smart contracts? Check out the TaskRouter and TaskPool documentation.