PrismIntelligence unifies four intelligence backends behind a single actor-based client. You can train a Core ML model from any Codable struct, run text classification or tabular regression on-device, generate text via Apple Intelligence using the FoundationModels framework, or call a remote LLM over the network—all with the same API surface.
PrismIntelligenceClient is the primary entry point. Create it with one of four factory methods, then call the appropriate prediction or generation method.
```swift
let client = try await PrismIntelligenceClient.local(modelID: "sentiment")
let label = try await client.classify(text: "I love this product!")
// e.g. "positive"
```
```swift
let client = PrismIntelligenceClient.apple()
let summary = try await client.generate("Summarize quantum computing in one sentence.")

// With a system prompt
let answer = try await client.generate(
    "What's the capital of France?",
    systemPrompt: "You are a concise geography assistant."
)
```
PrismCodableTrainingData converts any array of Codable structs into training data using Mirror reflection. You point it at target and feature key paths—no CSV export required.
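As a minimal sketch of what this looks like in practice: the record type is hypothetical, and the `records:`/`target:`/`features:` parameter labels are assumptions about the initializer, not confirmed API.

```swift
// Hypothetical record type for illustration.
struct Listing: Codable {
    let neighborhood: String
    let bedrooms: Int
    let price: Double
}

let listings: [Listing] = [
    Listing(neighborhood: "downtown", bedrooms: 2, price: 450_000),
    Listing(neighborhood: "suburbs", bedrooms: 3, price: 380_000),
]

// Assumed initializer shape: point at the target and feature key paths.
let trainingData = try PrismCodableTrainingData(
    records: listings,
    target: \Listing.price,
    features: [\Listing.neighborhood, \Listing.bedrooms]
)
```

Because the conversion uses Mirror reflection, the structs need no extra conformances beyond Codable.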
PrismIntelligenceFeatureValue is the typed wrapper for individual feature inputs:
```swift
PrismIntelligenceFeatureValue.string("downtown")
PrismIntelligenceFeatureValue.int(3)
PrismIntelligenceFeatureValue.double(120.5)
PrismIntelligenceFeatureValue.bool(true)

// Infer from Any at runtime
let value = PrismIntelligenceFeatureValue(someValue)
```
You can also pass [String: Any] directly to classify(features:) and regress(features:)—the client converts supported types automatically.
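For example, a dictionary-based regression call might look like the following sketch; the feature names and the `Double` return type are assumptions for illustration.

```swift
// Supported value types (String, Int, Double, Bool) are converted automatically.
let features: [String: Any] = [
    "neighborhood": "downtown",
    "bedrooms": 3,
    "squareMeters": 120.5,
    "hasParking": true,
]
let predictedRent = try await client.regress(features: features)
```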
Calling a method the backend does not support—for example, classify(text:) on a remote client—throws PrismIntelligenceError.unsupportedOperation. Check status().capabilities before dispatching requests if the backend type is not known at compile time.
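A capability check before dispatch might be sketched as follows; the `.textClassification` capability case and the shape of the `status()` result are assumptions, since the source only states that `status().capabilities` exists.

```swift
// Guard against unsupported operations when the backend is chosen at runtime.
let status = try await client.status()
if status.capabilities.contains(.textClassification) {
    let label = try await client.classify(text: message)
    print(label)
} else {
    // Fall back to another backend or surface the limitation to the caller,
    // rather than catching PrismIntelligenceError.unsupportedOperation after the fact.
}
```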