Process Tasks

This guide explains how to process AI inference tasks you’ve claimed in Hyra Network, including running AI models and generating Zero-Knowledge Proofs (ZKP) for verification.

AI task processing involves:

  1. Running AI Models: Execute the specified AI model with input data
  2. Generating Results: Produce AI inference results
  3. Creating ZKP Proofs: Generate cryptographic proof of computation
  4. Preparing Submission: Package results and proofs for blockchain submission
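The four steps above can be sketched end-to-end as a single pipeline. This is an illustrative sketch only: `run_model`, `generate_proof`, and `package_submission` are hypothetical stand-ins for the real steps detailed in the rest of this guide (the stand-in "proof" here is just a hash, not a real ZKP).

```python
import hashlib

# Hypothetical stand-ins for the steps described below
def run_model(model_name, input_data):
    return f"echo:{input_data}"  # placeholder for actual AI inference

def generate_proof(input_data, result):
    # Placeholder: a hash commitment, not a real zero-knowledge proof
    return hashlib.sha256((input_data + result).encode()).hexdigest()

def package_submission(task_id, result, proof):
    # Bundle result and proof for blockchain submission
    return {"task_id": task_id, "result": result, "proof": proof}

def process_pipeline(task):
    # 1-2. Run the AI model and produce the inference result
    result = run_model(task["model_name"], task["input_data"])
    # 3. Generate a proof attesting to the computation
    proof = generate_proof(task["input_data"], result)
    # 4. Package everything for submission
    return package_submission(task["task_id"], result, proof)
```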
First, retrieve the task you have already claimed:

```python
from hyra_sdk import HyraClient

# Initialize client
client = HyraClient()

# Get your active task
status = client.get_task_status()
if status['has_active_task']:
    task_id = status['active_task_id']
    # Task data is already available from claiming
```

```javascript
const { HyraClient } = require("@hyra-network/sdk");

// Initialize client
const client = new HyraClient();

// Get your active task
const status = await client.getTaskStatus();
if (status.hasActiveTask) {
  const taskId = status.activeTaskId;
  // Task data is already available from claiming
}
```

Process the task with the specified AI model:

```python
# Example: Text generation with LLM
def process_llm_task(input_data, model_name):
    # Route the input to the appropriate AI model
    if model_name == "gpt-3.5-turbo":
        result = run_gpt_model(input_data)
    elif model_name == "claude-3":
        result = run_claude_model(input_data)
    else:
        result = run_default_model(input_data)
    return result

# Process the task
result = process_llm_task(task['input_data'], task['model_name'])
```

```javascript
// Example: Text generation with LLM
async function processLLMTask(inputData, modelName) {
  // Route the input to the appropriate AI model
  let result;
  if (modelName === "gpt-3.5-turbo") {
    result = await runGPTModel(inputData);
  } else if (modelName === "claude-3") {
    result = await runClaudeModel(inputData);
  } else {
    result = await runDefaultModel(inputData);
  }
  return result;
}

// Process the task
const result = await processLLMTask(task.inputData, task.modelName);
```

The SDK handles ZKP generation automatically, but it helps to understand what happens under the hood:

```python
# ZKP generation is handled automatically by the SDK.
# The SDK will:
# 1. Create a cryptographic proof of your computation
# 2. Prove you actually ran the AI model
# 3. Hide your private computation process
# 4. Allow verifiers to validate without re-running
# Your result is automatically prepared with a ZKP proof.
```

```javascript
// ZKP generation is handled automatically by the SDK.
// The SDK will:
// 1. Create a cryptographic proof of your computation
// 2. Prove you actually ran the AI model
// 3. Hide your private computation process
// 4. Allow verifiers to validate without re-running
// Your result is automatically prepared with a ZKP proof.
```
You can integrate many categories of AI models:

  • Large Language Models (LLMs): GPT, Claude, LLaMA, etc.
  • Image Models: DALL-E, Stable Diffusion, CLIP, etc.
  • Audio Models: Whisper, MusicLM, etc.
  • Custom Models: Your own trained models
```python
# Example: GPT-3.5 Integration (OpenAI Python SDK v1+)
from openai import OpenAI

openai_client = OpenAI()

def run_gpt_model(prompt):
    response = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1000,
    )
    return response.choices[0].message.content

# Example: Custom Model Integration
def run_custom_model(input_data):
    # Load your custom model
    model = load_your_model()
    result = model.predict(input_data)
    return result
```

```javascript
// Example: GPT-3.5 Integration (openai npm package v4+)
const OpenAI = require("openai");
const openaiClient = new OpenAI();

async function runGPTModel(prompt) {
  const response = await openaiClient.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: prompt }],
    max_tokens: 1000,
  });
  return response.choices[0].message.content;
}

// Example: Custom Model Integration
async function runCustomModel(inputData) {
  // Load your custom model
  const model = await loadYourModel();
  const result = await model.predict(inputData);
  return result;
}
```

The Zero-Knowledge Proof system ensures you cannot cheat by:

  1. Proving Computation: You must generate cryptographic proof you actually ran the AI model
  2. Hiding Private Data: Your computation process remains private
  3. Verifying Correctness: Verifiers can confirm you did the work without re-running it
  4. Preventing Lazy Workers: You cannot submit random results without doing the work
```python
# The SDK automatically handles ZKP generation.
# Here's what happens behind the scenes:
def generate_zkp_proof(input_data, result, model_name):
    # 1. Create circuit for your computation
    circuit = create_computation_circuit(input_data, result, model_name)
    # 2. Generate witness (private computation details)
    witness = create_witness(input_data, result, model_name)
    # 3. Generate proof
    proof = circuit.prove(witness)
    # 4. Return proof and public inputs
    return {
        'proof': proof,
        'public_input': hash(input_data + result),
        'public_output': hash(result),
    }
```

```javascript
// The SDK automatically handles ZKP generation.
// Here's what happens behind the scenes:
function generateZKPProof(inputData, result, modelName) {
  // 1. Create circuit for your computation
  const circuit = createComputationCircuit(inputData, result, modelName);
  // 2. Generate witness (private computation details)
  const witness = createWitness(inputData, result, modelName);
  // 3. Generate proof
  const proof = circuit.prove(witness);
  // 4. Return proof and public inputs
  return {
    proof: proof,
    publicInput: hash(inputData + result),
    publicOutput: hash(result),
  };
}
```
Privacy is protected at several layers:

  • Input Encryption: Task inputs are encrypted before submission to smart contracts
  • Output Encryption: Your results are encrypted during transmission
  • ZKP Privacy: Zero-knowledge proofs verify computation without revealing your process
  • No Data Leakage: Your AI model and computation process remain private
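As a rough intuition for how a verifier can check work without seeing your data, consider a plain hash commitment. This is a deliberate simplification, not the network's actual ZKP scheme: a real zero-knowledge proof additionally proves the result was produced by the specified model, without revealing the plaintexts at all.

```python
import hashlib

def commit(input_data: str, result: str) -> str:
    # Publish only the hash; the input and result themselves stay private
    return hashlib.sha256((input_data + result).encode()).hexdigest()

def verify_commitment(commitment: str, input_data: str, result: str) -> bool:
    # A party holding the plaintexts can check them against the commitment
    return commit(input_data, result) == commitment
```

Any tampering with the result after the commitment is published makes verification fail.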
```python
# Always validate and sanitize inputs
def process_task_safely(task):
    # Validate input data
    if not task['input_data']:
        raise ValueError("Invalid input data")
    # Sanitize input to prevent injection attacks
    sanitized_input = sanitize_input(task['input_data'])
    # Process with your AI model
    result = your_ai_model.process(sanitized_input)
    # Validate result
    if not result:
        raise ValueError("Invalid result")
    return result
```

```javascript
// Always validate and sanitize inputs
async function processTaskSafely(task) {
  // Validate input data
  if (!task.inputData) {
    throw new Error("Invalid input data");
  }
  // Sanitize input to prevent injection attacks
  const sanitizedInput = sanitizeInput(task.inputData);
  // Process with your AI model
  const result = await yourAIModel.process(sanitizedInput);
  // Validate result
  if (!result) {
    throw new Error("Invalid result");
  }
  return result;
}
```
```python
# Monitor your system resources
import psutil

def monitor_resources():
    cpu_percent = psutil.cpu_percent()
    memory_percent = psutil.virtual_memory().percent
    if cpu_percent > 90:
        print("Warning: High CPU usage")
    if memory_percent > 90:
        print("Warning: High memory usage")
    return cpu_percent, memory_percent

# Use during processing
cpu, memory = monitor_resources()
```

```javascript
// Monitor your system resources
function monitorResources() {
  const cpuUsage = process.cpuUsage(); // cumulative CPU time in microseconds
  const memoryUsage = process.memoryUsage();
  if (memoryUsage.heapUsed > 1_000_000_000) {
    // 1 GB
    console.log("Warning: High memory usage");
  }
  return { cpuUsage, memoryUsage };
}

// Use during processing
const resources = monitorResources();
```
```python
# Process multiple tasks efficiently
def process_batch_tasks(tasks):
    results = []
    for task in tasks:
        try:
            result = process_single_task(task)
            results.append(result)
        except Exception as e:
            print(f"Error processing task {task['task_id']}: {e}")
            continue
    return results
```

```javascript
// Process multiple tasks efficiently
async function processBatchTasks(tasks) {
  const results = [];
  for (const task of tasks) {
    try {
      const result = await processSingleTask(task);
      results.push(result);
    } catch (error) {
      console.log(`Error processing task ${task.taskId}: ${error.message}`);
      continue;
    }
  }
  return results;
}
```
```python
try:
    result = process_task(task)
except Exception as e:
    if "ModelNotFound" in str(e):
        print("AI model not available")
    elif "InsufficientResources" in str(e):
        print("Not enough computational resources")
    elif "InvalidInput" in str(e):
        print("Invalid task input data")
    else:
        print(f"Processing error: {e}")
```

```javascript
try {
  const result = await processTask(task);
} catch (error) {
  if (error.message.includes("ModelNotFound")) {
    console.log("AI model not available");
  } else if (error.message.includes("InsufficientResources")) {
    console.log("Not enough computational resources");
  } else if (error.message.includes("InvalidInput")) {
    console.log("Invalid task input data");
  } else {
    console.log(`Processing error: ${error.message}`);
  }
}
```
```python
def validate_result(result, task):
    # Check result format
    if not isinstance(result, str):
        return False
    # Check result length
    if len(result) < 10:
        return False
    # Check for common errors
    error_indicators = ["error", "failed", "invalid"]
    if any(indicator in result.lower() for indicator in error_indicators):
        return False
    return True
```

```javascript
function validateResult(result, task) {
  // Check result format
  if (typeof result !== "string") {
    return false;
  }
  // Check result length
  if (result.length < 10) {
    return false;
  }
  // Check for common errors
  const errorIndicators = ["error", "failed", "invalid"];
  if (
    errorIndicators.some((indicator) =>
      result.toLowerCase().includes(indicator)
    )
  ) {
    return false;
  }
  return true;
}
```
  • Choose Appropriate Tasks: Select tasks you can complete within the deadline
  • Maintain Quality: Ensure your AI results are accurate and relevant
  • Monitor Performance: Track your success rate and earnings
  • Stay Updated: Keep your AI models and SDKs updated
  • Secure Setup: Protect your private keys and computational resources
  • Validate Inputs: Always check task input data before processing
  • Use Reliable Models: Ensure your AI models are working correctly
  • Test Your Setup: Verify your ZKP generation works before claiming tasks
  • Monitor Resources: Keep track of CPU, memory, and network usage
  • Handle Errors: Implement proper error handling and recovery
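For the error-handling point above, one common recovery pattern is retrying transient failures with exponential backoff. This is a generic sketch, not part of the Hyra SDK; `process_task` is a stand-in for your own processing function.

```python
import time

def process_with_retry(process_task, task, max_retries=3, base_delay=1.0):
    """Retry transient failures with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(max_retries):
        try:
            return process_task(task)
        except Exception as e:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            delay = base_delay * (2 ** attempt)
            print(f"Attempt {attempt + 1} failed ({e}); retrying in {delay}s")
            time.sleep(delay)
```

Only retry errors that are plausibly transient (network hiccups, temporary resource pressure); permanent failures such as invalid input should fail fast instead.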

Once processing is complete, learn how to submit your results with ZKP proofs to earn HYRA tokens.