
PrismIntelligence gives you a single, unified API — PrismIntelligenceClient — to run predictions no matter where the model lives: a locally trained Core ML artifact, Apple Intelligence on the device, or a remote language model endpoint. You choose the backend with a factory method; the rest of your code stays the same.

Training a model from Codable data

PrismCodableTrainingData bridges the gap between your Swift data model and CreateML. Pass any array of Codable structs and it extracts feature rows automatically via Mirror, so you never have to write CSV export code. The house price example from the README illustrates a full regressor training pipeline:
```swift
struct HouseData: Codable {
    var rooms: Int
    var area: Double
    var neighborhood: String
    var price: Double
}

let houses: [HouseData] = [
    HouseData(rooms: 3, area: 120, neighborhood: "Centro",   price: 450_000),
    HouseData(rooms: 2, area: 80,  neighborhood: "Zona Sul", price: 320_000),
    HouseData(rooms: 4, area: 200, neighborhood: "Centro",   price: 780_000),
    // … more data …
]

let training = PrismCodableTrainingData(data: houses)
let result = await training.trainRegressor(
    id: "house_price",
    name: "House Price Predictor",
    target: \.price
)

switch result {
case .success:
    print("Model trained successfully")
case .failure(let error):
    print("Training failed: \(error)")
}
```
PrismCodableTrainingData accepts three optional parameters in its initializer:
| Parameter | Default | Description |
| --- | --- | --- |
| testRatio | 0.2 | Fraction of data held out for testing. |
| seed | 42 | Random seed for reproducible train/test splits. |
| trainer | PrismIntelligenceLocalTrainer() | The underlying CreateML trainer. |
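Putting the defaults together, a fully specified initializer call might look like this (a sketch assuming the argument labels match the parameter names above):

```swift
let training = PrismCodableTrainingData(
    data: houses,
    testRatio: 0.3,   // hold out 30% of rows for evaluation
    seed: 7,          // fixed seed for a reproducible split
    trainer: PrismIntelligenceLocalTrainer()
)
```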
To train a text classifier instead, use trainTextClassifier(id:name:text:label:):
```swift
struct ReviewData: Codable {
    var text: String
    var sentiment: String   // "positive" or "negative"
}

let reviews = PrismCodableTrainingData(data: reviewDataset)
let result = await reviews.trainTextClassifier(
    id: "sentiment",
    name: "Sentiment Classifier",
    text: \.text,
    label: \.sentiment
)
```
Training requires CreateML and TabularData, which are only available on macOS. On iOS and other platforms, trainRegressor and trainTextClassifier fail with PrismIntelligenceError.unsupportedPlatform.
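If you share training code across targets, you can also gate it at compile time rather than handling the runtime failure; a sketch using the regressor call from above:

```swift
#if os(macOS)
let result = await training.trainRegressor(
    id: "house_price",
    name: "House Price Predictor",
    target: \.price
)
#else
// Training is unavailable on this platform; ship a pre-trained model instead.
#endif
```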

Loading a trained model

After training, the model is persisted to the catalog automatically. Load it by passing its identifier to PrismIntelligenceClient.local(modelID:):
```swift
let client = try await PrismIntelligenceClient.local(modelID: "house_price")
```
If you already have a PrismIntelligenceModel descriptor, pass it directly:
```swift
let client = await PrismIntelligenceClient.local(model: myModel)
```
local(modelID:) throws PrismIntelligenceError.modelNotFound(_:) if the identifier is not in the catalog.
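Since local(modelID:) throws, a do/catch lets you recover when the model has not been trained yet; the modelNotFound case carries the missing identifier:

```swift
do {
    let client = try await PrismIntelligenceClient.local(modelID: "house_price")
    // run predictions with client…
} catch PrismIntelligenceError.modelNotFound(let id) {
    print("No model '\(id)' in the catalog; train it first.")
}
```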

Checking backend status

Call client.status() before running predictions to confirm the backend is ready:
```swift
let status = await client.status()
if status.isAvailable {
    print("Backend ready, capabilities: \(status.capabilities)")
} else {
    print("Unavailable: \(status.reason ?? "unknown reason")")
}
```
PrismIntelligenceStatus includes:
| Property | Description |
| --- | --- |
| backend | The PrismIntelligenceBackendKind (.local, .apple, .remote). |
| isAvailable | Whether the backend can accept requests. |
| reason | Human-readable explanation when isAvailable is false. |
| capabilities | Array of PrismIntelligenceCapability values the backend supports. |
| modelID / modelName | Identifier and display name, for local backends. |
| supportsStreaming | Whether the backend supports streaming responses. |

Running text classification

Use classify(text:) on a local text-classifier model:
```swift
let sentimentClient = try await PrismIntelligenceClient.local(modelID: "sentiment")
let label = try await sentimentClient.classify(text: "I love this product!")
print(label)   // "positive"
```
classify(text:) throws PrismIntelligenceError.unsupportedOperation when the backend is not a local text classifier.

Running tabular regression

Use regress(features:) with a typed PrismIntelligenceFeatureRow dictionary, or the untyped [String: Any] overload for ergonomic call sites:
```swift
let client = try await PrismIntelligenceClient.local(modelID: "house_price")
let price = try await client.regress(
    features: [
        "rooms":        .int(3),
        "area":         .double(120),
        "neighborhood": .string("Centro"),
    ]
)
print("Predicted price: \(price)")
```
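The untyped overload mentioned above takes plain literals instead of PrismIntelligenceFeatureRow values; a sketch, assuming the [String: Any] values are bridged to the matching feature types internally:

```swift
let price = try await client.regress(features: [
    "rooms": 3,                // bridged to an integer feature
    "area": 120.0,             // bridged to a double feature
    "neighborhood": "Centro",  // bridged to a string feature
] as [String: Any])
```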
For tabular classification (predicting a label from features), use classify(features:), which returns a [String: Double] probability map:
```swift
let probabilities = try await client.classify(
    features: ["temperature": .double(38.5), "duration_days": .int(3)]
)
let topLabel = probabilities.max(by: { $0.value < $1.value })?.key
```

Using Apple Intelligence

PrismIntelligenceClient.apple() routes requests through the FoundationModels framework on supported devices. Pass an optional configuration to customise the model and system instructions:
```swift
let apple = PrismIntelligenceClient.apple()

let summary = try await apple.generate(
    "Summarize the following article in two sentences.",
    systemPrompt: "You are a concise technical writer."
)
print(summary)
```
Apple Intelligence requires an Apple Silicon device running iOS 18.1+ or macOS 15.1+. Check status().isAvailable before calling generate so you can degrade gracefully on unsupported hardware.
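On devices without Apple Intelligence, generate will fail, so checking status first keeps the call path safe; a sketch:

```swift
let apple = PrismIntelligenceClient.apple()
let status = await apple.status()

if status.isAvailable {
    let summary = try await apple.generate("Summarize today's release notes in one sentence.")
    print(summary)
} else {
    print("Apple Intelligence unavailable: \(status.reason ?? "unsupported device")")
}
```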

Connecting to a remote LLM

Use PrismIntelligenceClient.remote(endpoint:token:model:) to connect to any OpenAI-compatible API endpoint:
```swift
let remote = PrismIntelligenceClient.remote(
    endpoint: URL(string: "https://api.openai.com/v1/chat/completions")!,
    token: "sk-…",
    model: "gpt-4o"
)

let answer = try await remote.generate(
    "Explain Swift concurrency in plain English.",
    systemPrompt: "Keep answers under 150 words.",
    options: PrismLanguageGenerationOptions()
)
```
If you need custom headers instead of Bearer-token auth, use the headers overload:
```swift
let remote = PrismIntelligenceClient.remote(
    endpoint: url,
    model: "claude-3-5-sonnet",
    headers: [
        "x-api-key": apiKey,
        "anthropic-version": "2023-06-01",
    ]
)
```
The full generate signature supports optional systemPrompt, context (additional context strings), options, and metadata:
```swift
public func generate(
    _ prompt: String,
    systemPrompt: String? = nil,
    context: [String] = [],
    options: PrismLanguageGenerationOptions = .init(),
    metadata: [String: String] = [:]
) async throws -> String
```
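For instance, the context and metadata parameters can carry retrieval snippets and request tags; a sketch against the signature above, reusing the remote client from the earlier example:

```swift
let answer = try await remote.generate(
    "What changed in the billing module?",
    systemPrompt: "Answer from the provided context only.",
    context: [
        "Commit 4f2a: reworked invoice rounding.",
        "Commit 9c1d: added VAT support for EU customers.",
    ],
    metadata: ["requestID": "debug-123"]   // hypothetical tag for request tracing
)
```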

Using the unified execute API

PrismIntelligenceRequest and execute(_:) provide a single dispatch point when you need to work with multiple request types polymorphically:
```swift
let request = PrismIntelligenceRequest.classifyText("Great product!")
let response = try await client.execute(request)

switch response {
case .textClassification(let label):
    print("Label: \(label)")
case .tabularRegression(let value):
    print("Value: \(value)")
case .language(let r):
    print("Generated: \(r.content)")
default:
    break
}
```

Capability matrix

| Factory | .textClassification | .tabularClassification | .tabularRegression | .languageGeneration |
| --- | --- | --- | --- | --- |
| local(modelID:) — text classifier | ✓ | | | |
| local(modelID:) — tabular classifier | | ✓ | | |
| local(modelID:) — tabular regressor | | | ✓ | |
| apple() | | | | ✓ |
| remote(endpoint:token:model:) | | | | ✓ |