# Process Tasks

This guide explains how to process AI inference tasks you have claimed on the Hyra Network, including running AI models and generating Zero-Knowledge Proofs (ZKPs) for verification.
## Task Processing Overview

AI task processing involves:
- Running AI Models: Execute the specified AI model with input data
- Generating Results: Produce AI inference results
- Creating ZKP Proofs: Generate cryptographic proof of computation
- Preparing Submission: Package results and proofs for blockchain submission
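The four stages above can be sketched as a single pipeline. Everything in this sketch is illustrative: `run_model`, `generate_proof`, and the dictionary shapes are placeholders, not the SDK's actual API (in practice the SDK performs the proof and packaging steps for you).

```python
import hashlib

# Placeholder stages -- the real SDK performs these internally.
def run_model(input_data: str) -> str:
    """Stand-in for actual AI inference."""
    return f"echo: {input_data}"

def generate_proof(input_data: str, result: str) -> dict:
    """Stand-in for ZKP generation: commit to the input and output."""
    return {
        "public_input": hashlib.sha256((input_data + result).encode()).hexdigest(),
        "public_output": hashlib.sha256(result.encode()).hexdigest(),
    }

def process_task(task: dict) -> dict:
    # 1-2. Run the AI model and produce the inference result
    result = run_model(task["input_data"])
    # 3. Create the ZKP proof of the computation
    proof = generate_proof(task["input_data"], result)
    # 4. Package the result and proof for submission
    return {"task_id": task["task_id"], "result": result, "proof": proof}

submission = process_task({"task_id": 1, "input_data": "hello"})
print(submission["result"])  # echo: hello
```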
## Processing Steps

### 1. Get Task Data

```python
from hyra_sdk import HyraClient

# Initialize client
client = HyraClient()

# Get your active task
status = client.get_task_status()
if status['has_active_task']:
    task_id = status['active_task_id']
    # Task data is already available from claiming
```

```javascript
const { HyraClient } = require("@hyra-network/sdk");

// Initialize client
const client = new HyraClient();

// Get your active task
const status = await client.getTaskStatus();
if (status.hasActiveTask) {
  const taskId = status.activeTaskId;
  // Task data is already available from claiming
}
```

### 2. Run AI Model

Process the task with the specified AI model:
```python
# Example: Text generation with an LLM
def process_llm_task(input_data, model_name):
    # Load your AI model
    if model_name == "gpt-3.5-turbo":
        result = run_gpt_model(input_data)
    elif model_name == "claude-3":
        result = run_claude_model(input_data)
    else:
        result = run_default_model(input_data)

    return result

# Process the task
result = process_llm_task(task['input_data'], task['model_name'])
```

```javascript
// Example: Text generation with an LLM
async function processLLMTask(inputData, modelName) {
  // Load your AI model
  let result;
  if (modelName === "gpt-3.5-turbo") {
    result = await runGPTModel(inputData);
  } else if (modelName === "claude-3") {
    result = await runClaudeModel(inputData);
  } else {
    result = await runDefaultModel(inputData);
  }

  return result;
}

// Process the task
const result = await processLLMTask(task.inputData, task.modelName);
```

### 3. Generate ZKP Proof

The SDK handles ZKP generation automatically, but it helps to understand the process:
```python
# ZKP generation is handled automatically by the SDK.
# The SDK will:
# 1. Create a cryptographic proof of your computation
# 2. Prove you actually ran the AI model
# 3. Keep your private computation process hidden
# 4. Allow verifiers to validate without re-running

# Your result is automatically prepared with a ZKP proof
```

```javascript
// ZKP generation is handled automatically by the SDK.
// The SDK will:
// 1. Create a cryptographic proof of your computation
// 2. Prove you actually ran the AI model
// 3. Keep your private computation process hidden
// 4. Allow verifiers to validate without re-running

// Your result is automatically prepared with a ZKP proof
```

## AI Model Integration

### Supported Model Types
Section titled “Supported Model Types”- Large Language Models (LLMs): GPT, Claude, LLaMA, etc.
- Image Models: DALL-E, Stable Diffusion, CLIP, etc.
- Audio Models: Whisper, MusicLM, etc.
- Custom Models: Your own trained models
Model Setup Examples
Section titled “Model Setup Examples”# Example: GPT-3.5 Integrationimport openai
def run_gpt_model(prompt): response = openai.ChatCompletion.create( model="gpt-3.5-turbo", messages=[{"role": "user", "content": prompt}], max_tokens=1000 ) return response.choices[0].message.content
# Example: Custom Model Integrationdef run_custom_model(input_data): # Load your custom model model = load_your_model() result = model.predict(input_data) return result// Example: GPT-3.5 Integrationconst openai = require("openai");
async function runGPTModel(prompt) { const response = await openai.chat.completions.create({ model: "gpt-3.5-turbo", messages: [{ role: "user", content: prompt }], max_tokens: 1000, }); return response.choices[0].message.content;}
// Example: Custom Model Integrationasync function runCustomModel(inputData) { // Load your custom model const model = await loadYourModel(); const result = await model.predict(inputData); return result;}ZKP Anti-Cheating System
Section titled “ZKP Anti-Cheating System”How ZKP Prevents Cheating
Section titled “How ZKP Prevents Cheating”The Zero-Knowledge Proof system ensures you cannot cheat by:
- Proving Computation: You must generate a cryptographic proof that you actually ran the AI model
- Hiding Private Data: Your computation process remains private
- Verifying Correctness: Verifiers can confirm you did the work without re-running it
- Preventing Lazy Workers: You cannot submit random results without doing the work
### ZKP Generation Process

```python
# The SDK handles ZKP generation automatically.
# Here's what happens behind the scenes:

def generate_zkp_proof(input_data, result, model_name):
    # 1. Create a circuit for your computation
    circuit = create_computation_circuit(input_data, result, model_name)

    # 2. Generate a witness (private computation details)
    witness = create_witness(input_data, result, model_name)

    # 3. Generate the proof
    proof = circuit.prove(witness)

    # 4. Return the proof and public inputs
    return {
        'proof': proof,
        'public_input': hash(input_data + result),
        'public_output': hash(result),
    }
```

```javascript
// The SDK handles ZKP generation automatically.
// Here's what happens behind the scenes:

function generateZKPProof(inputData, result, modelName) {
  // 1. Create a circuit for your computation
  const circuit = createComputationCircuit(inputData, result, modelName);

  // 2. Generate a witness (private computation details)
  const witness = createWitness(inputData, result, modelName);

  // 3. Generate the proof
  const proof = circuit.prove(witness);

  // 4. Return the proof and public inputs
  return {
    proof: proof,
    publicInput: hash(inputData + result),
    publicOutput: hash(result),
  };
}
```

## Privacy and Security

### Encrypted Data Handling
Section titled “Encrypted Data Handling”- Input Encryption: Task inputs are encrypted before submission to smart contracts
- Output Encryption: Your results are encrypted during transmission
- ZKP Privacy: Zero-knowledge proofs verify computation without revealing your process
- No Data Leakage: Your AI model and computation process remain private
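One way to see the "verify without revealing" idea: the proof's public values commit to the data via hashes, so a verifier can check that a submitted result matches a commitment without learning anything else about your process. This is a toy sketch of the commitment part only (hashes alone are not a ZKP, and this is not the SDK's actual scheme):

```python
import hashlib

def commit(data: str) -> str:
    # A hash commitment: binds the prover to `data` without revealing it
    return hashlib.sha256(data.encode()).hexdigest()

# Worker side: commit to the result before submitting
result = "AI inference output"
public_output = commit(result)

# Verifier side: check a revealed result against the commitment
assert commit("AI inference output") == public_output
assert commit("tampered output") != public_output
```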
### Security Best Practices

```python
# Always validate and sanitize inputs
def process_task_safely(task):
    # Validate input data
    if not task['input_data']:
        raise ValueError("Invalid input data")

    # Sanitize input to prevent injection attacks
    sanitized_input = sanitize_input(task['input_data'])

    # Process with your AI model
    result = your_ai_model.process(sanitized_input)

    # Validate the result
    if not result:
        raise ValueError("Invalid result")

    return result
```

```javascript
// Always validate and sanitize inputs
async function processTaskSafely(task) {
  // Validate input data
  if (!task.inputData) {
    throw new Error("Invalid input data");
  }

  // Sanitize input to prevent injection attacks
  const sanitizedInput = sanitizeInput(task.inputData);

  // Process with your AI model
  const result = await yourAIModel.process(sanitizedInput);

  // Validate the result
  if (!result) {
    throw new Error("Invalid result");
  }

  return result;
}
```

## Performance Optimization

### Resource Management
Section titled “Resource Management”# Monitor your system resourcesimport psutil
def monitor_resources(): cpu_percent = psutil.cpu_percent() memory_percent = psutil.virtual_memory().percent
if cpu_percent > 90: print("Warning: High CPU usage") if memory_percent > 90: print("Warning: High memory usage")
return cpu_percent, memory_percent
# Use during processingcpu, memory = monitor_resources()// Monitor your system resourcesconst os = require("os");
function monitorResources() { const cpuUsage = process.cpuUsage(); const memoryUsage = process.memoryUsage();
if (memoryUsage.heapUsed > 1000000000) { // 1GB console.log("Warning: High memory usage"); }
return { cpuUsage, memoryUsage };}
// Use during processingconst resources = monitorResources();Batch Processing
```python
# Process multiple tasks efficiently
def process_batch_tasks(tasks):
    results = []
    for task in tasks:
        try:
            result = process_single_task(task)
            results.append(result)
        except Exception as e:
            print(f"Error processing task {task['task_id']}: {e}")
            continue

    return results
```

```javascript
// Process multiple tasks efficiently
async function processBatchTasks(tasks) {
  const results = [];
  for (const task of tasks) {
    try {
      const result = await processSingleTask(task);
      results.push(result);
    } catch (error) {
      console.log(`Error processing task ${task.taskId}: ${error.message}`);
      continue;
    }
  }

  return results;
}
```

## Error Handling

### Common Issues

```python
try:
    result = process_task(task)
except Exception as e:
    if "ModelNotFound" in str(e):
        print("AI model not available")
    elif "InsufficientResources" in str(e):
        print("Not enough computational resources")
    elif "InvalidInput" in str(e):
        print("Invalid task input data")
    else:
        print(f"Processing error: {e}")
```

```javascript
try {
  const result = await processTask(task);
} catch (error) {
  if (error.message.includes("ModelNotFound")) {
    console.log("AI model not available");
  } else if (error.message.includes("InsufficientResources")) {
    console.log("Not enough computational resources");
  } else if (error.message.includes("InvalidInput")) {
    console.log("Invalid task input data");
  } else {
    console.log(`Processing error: ${error.message}`);
  }
}
```

## Quality Assurance
### Result Validation

```python
def validate_result(result, task):
    # Check the result format
    if not isinstance(result, str):
        return False

    # Check the result length
    if len(result) < 10:
        return False

    # Check for common error indicators
    error_indicators = ["error", "failed", "invalid"]
    if any(indicator in result.lower() for indicator in error_indicators):
        return False

    return True
```

```javascript
function validateResult(result, task) {
  // Check the result format
  if (typeof result !== "string") {
    return false;
  }

  // Check the result length
  if (result.length < 10) {
    return false;
  }

  // Check for common error indicators
  const errorIndicators = ["error", "failed", "invalid"];
  if (
    errorIndicators.some((indicator) =>
      result.toLowerCase().includes(indicator)
    )
  ) {
    return false;
  }

  return true;
}
```

## Best Practices

### For AI Workers

- Choose Appropriate Tasks: Select tasks you can complete within the deadline
- Maintain Quality: Ensure your AI results are accurate and relevant
- Monitor Performance: Track your success rate and earnings
- Stay Updated: Keep your AI models and SDKs updated
- Secure Setup: Protect your private keys and computational resources
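For the "Monitor Performance" tip, a rolling success-rate and earnings tracker is easy to keep locally. This is an illustrative helper, not part of the SDK; the reward values and window size are placeholders.

```python
from collections import deque

class PerformanceTracker:
    """Illustrative local tracker for success rate and earnings."""

    def __init__(self, window: int = 100):
        self.outcomes = deque(maxlen=window)  # recent task outcomes (True/False)
        self.earned = 0.0                     # total reward earned

    def record(self, success: bool, reward: float = 0.0):
        self.outcomes.append(success)
        if success:
            self.earned += reward

    @property
    def success_rate(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

tracker = PerformanceTracker()
tracker.record(True, reward=1.5)
tracker.record(False)
print(tracker.success_rate)  # 0.5
```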
### For Task Processing

- Validate Inputs: Always check task input data before processing
- Use Reliable Models: Ensure your AI models are working correctly
- Test Your Setup: Verify your ZKP generation works before claiming tasks
- Monitor Resources: Keep track of CPU, memory, and network usage
- Handle Errors: Implement proper error handling and recovery
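The processing tips above can be combined into a pre-claim checklist. This is a sketch under assumptions: the field names (`input_data`, `model_name`, `deadline`) and the `SUPPORTED_MODELS` set are hypothetical, not actual SDK fields.

```python
import time

SUPPORTED_MODELS = {"gpt-3.5-turbo", "claude-3"}  # assumed local configuration

def preflight_check(task: dict, deadline_margin_s: float = 60.0) -> list:
    """Return a list of problems; an empty list means the task looks safe to process."""
    problems = []
    # Validate inputs before processing
    if not task.get("input_data"):
        problems.append("missing input data")
    # Use only models you know are working
    if task.get("model_name") not in SUPPORTED_MODELS:
        problems.append(f"unsupported model: {task.get('model_name')}")
    # Leave a safety margin before the deadline
    if task.get("deadline", 0) - time.time() < deadline_margin_s:
        problems.append("deadline too close to finish safely")
    return problems

issues = preflight_check({
    "input_data": "Translate to French: hello",
    "model_name": "claude-3",
    "deadline": time.time() + 600,
})
print(issues)  # []
```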
## Next Steps

Once processing is complete, learn how to submit your results with ZKP proofs to earn HYRA tokens.