Documentation Index
Fetch the complete documentation index at: https://docs.prism.byescaleira.com/llms.txt
Use this file to discover all available pages before exploring further.
PrismIntelligence gives you a single, unified API — PrismIntelligenceClient — to run predictions no matter where the model lives: a locally trained Core ML artifact, Apple Intelligence on the device, or a remote language model endpoint. You choose the backend with a factory method; the rest of your code stays the same.
Training a model from Codable data
PrismCodableTrainingData bridges the gap between your Swift data model and CreateML. Pass any array of Codable structs and it extracts feature rows automatically via Mirror, so you never have to write CSV export code.
The house price example from the README illustrates a full regressor training pipeline:
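A minimal sketch of that pipeline, assuming a hypothetical HouseSale record and an assumed loadHouseSales() helper; the exact parameter labels of trainRegressor are not shown on this page and are assumptions here:

```swift
// Hypothetical record type; any array of Codable structs works, because
// PrismCodableTrainingData extracts feature rows via Mirror.
struct HouseSale: Codable {
    let squareFootage: Double
    let bedrooms: Double
    let distanceToCity: Double
    let price: Double          // target column
}

let sales: [HouseSale] = loadHouseSales()   // assumed helper supplying your data

// testRatio, seed and trainer are optional (defaults: 0.2, 42,
// PrismIntelligenceLocalTrainer()); the unlabelled first argument is assumed.
let trainingData = PrismCodableTrainingData(sales, testRatio: 0.2, seed: 42)

// trainRegressor's parameter labels are assumed here; training persists the
// resulting model to the catalog and is available on macOS only.
let model = try await trainingData.trainRegressor(
    id: "house-price",
    name: "House Price Regressor",
    target: "price"
)
```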
PrismCodableTrainingData accepts three optional parameters in its initializer:
| Parameter | Default | Description |
|---|---|---|
| testRatio | 0.2 | Fraction of data held out for testing. |
| seed | 42 | Random seed for reproducible train/test splits. |
| trainer | PrismIntelligenceLocalTrainer() | The underlying CreateML trainer. |
Text classifiers follow the same pattern with trainTextClassifier(id:name:text:label:):
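A sketch in the same shape, assuming the text and label arguments name the properties that hold the input text and its category, with a hypothetical Review type and an assumed loadReviews() helper:

```swift
// Hypothetical review type; the string column names passed to text: and
// label: are an assumption in this sketch.
struct Review: Codable {
    let body: String
    let sentiment: String      // e.g. "positive" / "negative"
}

let reviews: [Review] = loadReviews()                 // assumed helper
let trainingData = PrismCodableTrainingData(reviews)  // defaults: testRatio 0.2, seed 42

let model = try await trainingData.trainTextClassifier(
    id: "review-sentiment",
    name: "Review Sentiment",
    text: "body",
    label: "sentiment"
)
```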
Training requires CreateML and TabularData, which are available on macOS. On iOS and other platforms, trainRegressor and trainTextClassifier throw PrismIntelligenceError.unsupportedPlatform.
Loading a trained model
After training, the model is persisted to the catalog automatically. Load it by passing its identifier to PrismIntelligenceClient.local(modelID:):
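For example, reusing the identifier from the training sketch above:

```swift
// Throws PrismIntelligenceError.modelNotFound(_:) when the identifier
// is not in the catalog (see below).
let client = try PrismIntelligenceClient.local(modelID: "house-price")
```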
If you already have a PrismIntelligenceModel descriptor, pass it directly:
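A sketch of that variant; both the catalog lookup used to obtain the descriptor and the local(model:) label for the descriptor-based overload are assumptions, since this page only states that a descriptor can be passed directly:

```swift
// Hypothetical catalog lookup returning a PrismIntelligenceModel descriptor.
let descriptor: PrismIntelligenceModel = try modelCatalog.model(withID: "house-price")

// The factory label for the descriptor-based overload is assumed here.
let client = try PrismIntelligenceClient.local(model: descriptor)
```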
local(modelID:) throws PrismIntelligenceError.modelNotFound(_:) if the identifier is not in the catalog.
Checking backend status
Call client.status() before running predictions to confirm the backend is ready:
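A minimal check, assuming status() is async throwing and that reason is optional:

```swift
let status = try await client.status()

if status.isAvailable {
    print("Ready. Capabilities:", status.capabilities)
} else {
    // reason explains why the backend cannot accept requests right now.
    print("\(status.backend) unavailable: \(status.reason ?? "no reason given")")
}
```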
PrismIntelligenceStatus includes:
| Property | Description |
|---|---|
| backend | The PrismIntelligenceBackendKind (.local, .apple, .remote). |
| isAvailable | Whether the backend can accept requests. |
| reason | Human-readable explanation when isAvailable is false. |
| capabilities | Array of PrismIntelligenceCapability values the backend supports. |
| modelID / modelName | Identifier and display name, for local backends. |
| supportsStreaming | Whether the backend supports streaming responses. |
Running text classification
Use classify(text:) on a local text-classifier model:
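For example, against the review-sentiment classifier assumed above; the result is shown as a plain label, which is an assumption:

```swift
let client = try PrismIntelligenceClient.local(modelID: "review-sentiment")

// classify(text:) is assumed async throwing in this sketch.
let label = try await client.classify(text: "Arrived quickly and works perfectly.")
print("Predicted sentiment:", label)
```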
classify(text:) throws PrismIntelligenceError.unsupportedOperation when the backend is not a local text classifier.
Running tabular regression
Use regress(features:) with a typed PrismIntelligenceFeatureRow dictionary, or the untyped [String: Any] overload for ergonomic call sites (both shown in the sketch after this list):
- Typed features
- Untyped features
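A sketch of both call styles against the house-price regressor assumed above. The untyped overload follows the [String: Any] shape stated here; the exact shape of PrismIntelligenceFeatureRow and its value cases are assumptions:

```swift
let client = try PrismIntelligenceClient.local(modelID: "house-price")

// Untyped features: plain [String: Any] keyed by feature name.
let estimate = try await client.regress(features: [
    "squareFootage": 1_450.0,
    "bedrooms": 3.0,
    "distanceToCity": 12.5
] as [String: Any])

// Typed features: PrismIntelligenceFeatureRow is treated here as a dictionary
// of typed feature values with a .double case; its real shape is assumed.
let row: PrismIntelligenceFeatureRow = [
    "squareFootage": .double(1_450),
    "bedrooms": .double(3),
    "distanceToCity": .double(12.5)
]
let typedEstimate = try await client.regress(features: row)

print(estimate, typedEstimate)
```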
For tabular classification, use classify(features:), which returns a [String: Double] probability map:
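A sketch with a hypothetical churn classifier; the model identifier and feature names are placeholders:

```swift
let churnClient = try PrismIntelligenceClient.local(modelID: "churn-classifier")

// Returns one probability per class label, e.g. ["churn": 0.71, "stay": 0.29].
let probabilities = try await churnClient.classify(features: [
    "tenureMonths": 8.0,
    "monthlySpend": 42.5
] as [String: Any])

if let best = probabilities.max(by: { $0.value < $1.value }) {
    print("Most likely: \(best.key) (\(best.value))")
}
```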
Using Apple Intelligence
PrismIntelligenceClient.apple() routes requests through the FoundationModels framework on supported devices. Pass an optional configuration to customise the model and system instructions:
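A sketch under the assumption that the configuration exposes system instructions and that generation goes through the same generate call described below; the configuration type and its labels are not spelled out on this page:

```swift
// Configuration labels are assumed; this page only says apple() accepts an
// optional configuration covering the model and system instructions.
let client = try PrismIntelligenceClient.apple(
    configuration: .init(
        instructions: "You are a concise assistant for a home-listing app."
    )
)

let summary = try await client.generate(
    prompt: "Summarise this listing in one sentence: 3-bed cottage, sea view, new roof needed."
)
```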
Connecting to a remote LLM
Use PrismIntelligenceClient.remote(endpoint:token:model:) to connect to any OpenAI-compatible API endpoint:
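For example; the endpoint URL, token source and model name are placeholders, and whether endpoint takes a URL or a String is an assumption:

```swift
import Foundation

let client = try PrismIntelligenceClient.remote(
    endpoint: URL(string: "https://api.openai.com/v1")!,
    token: ProcessInfo.processInfo.environment["LLM_API_KEY"] ?? "",
    model: "gpt-4o-mini"
)

let reply = try await client.generate(prompt: "Suggest three headline ideas for a house-price app.")
print(reply)
```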
The full generate signature supports optional systemPrompt, context (additional context strings), options, and metadata parameters:
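A sketch exercising those parameters; the prompt label, the options type and its fields, and the metadata value types are assumptions here:

```swift
import Foundation

let reply = try await client.generate(
    prompt: "Write a 30-word description of a three-bedroom cottage.",
    systemPrompt: "You are a real-estate copywriter.",
    context: ["Location: coastal village", "Recently renovated kitchen"],
    options: .init(temperature: 0.7, maxTokens: 120),   // field names assumed
    metadata: ["requestID": UUID().uuidString]
)
print(reply)
```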
Using the unified execute API
PrismIntelligenceRequest and execute(_:) provide a single dispatch point when you need to work with multiple request types polymorphically:
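A sketch of polymorphic dispatch, reusing clients built in the earlier examples; the case names on PrismIntelligenceRequest are assumptions, and each request must go to a backend that advertises the matching capability (see the matrix below):

```swift
// textClient, priceClient and remoteClient are the clients created above.
let work: [(PrismIntelligenceClient, PrismIntelligenceRequest)] = [
    (textClient,   .classifyText("Loved it, would buy again.")),
    (priceClient,  .regress(features: ["squareFootage": 1_450.0, "bedrooms": 3.0])),
    (remoteClient, .generate(prompt: "One-line summary of today's sales."))
]

for (client, request) in work {
    let response = try await client.execute(request)
    print(response)
}
```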
Capability matrix
| Factory | .textClassification | .tabularClassification | .tabularRegression | .languageGeneration |
|---|---|---|---|---|
| local(modelID:) — text classifier | ✓ | | | |
| local(modelID:) — tabular classifier | | ✓ | | |
| local(modelID:) — tabular regressor | | | ✓ | |
| apple() | | | | ✓ |
| remote(endpoint:token:model:) | | | | ✓ |