Documentation Index

Fetch the complete documentation index at: https://docs.prism.byescaleira.com/llms.txt

Use this file to discover all available pages before exploring further.

PrismIntelligence

PrismIntelligence provides a single API surface for running machine learning and AI workloads across three backends — all without third-party dependencies.

Local CoreML

Train and run classification/regression models on-device using CreateML. Zero network required.

Apple Intelligence

Access Apple’s FoundationModels framework for language generation, content tagging, and model adapters.

Remote LLM

Connect to remote language models via a provider protocol. Supports streaming and custom instructions.

Backend Kinds

Every operation routes through a PrismIntelligenceBackendKind:
Backend Selection
public enum PrismIntelligenceBackendKind: String, Codable, Sendable, CaseIterable {
    case local   // On-device CoreML / CreateML
    case apple   // Apple Intelligence (FoundationModels)
    case remote  // Remote LLM over the network
}
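Because the enum is backed by String raw values and conforms to CaseIterable, backend kinds can be enumerated and round-tripped through logs or configuration. A minimal sketch, reproducing the declaration above so the snippet stands alone:

```swift
// Reproduced from the declaration above so this snippet is self-contained.
public enum PrismIntelligenceBackendKind: String, Codable, Sendable, CaseIterable {
    case local   // On-device CoreML / CreateML
    case apple   // Apple Intelligence (FoundationModels)
    case remote  // Remote LLM over the network
}

// CaseIterable lets callers enumerate every backend kind,
// and the String raw values survive serialization.
for kind in PrismIntelligenceBackendKind.allCases {
    print(kind.rawValue)  // "local", "apple", "remote"
}

let restored = PrismIntelligenceBackendKind(rawValue: "apple")  // .apple
```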

Capabilities

Each backend declares which capabilities it supports:
Capabilities
public enum PrismIntelligenceCapability: String, Codable, Sendable, CaseIterable {
    case textClassification       // Classify text into labels
    case tabularClassification    // Classify rows of tabular data
    case tabularRegression        // Predict numeric values from tabular data
    case languageGeneration       // Generate natural-language text
}
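A caller might gate an operation on a backend's advertised capability set. The sketch below is illustrative: the `advertised` set and the `canRun` helper are assumptions, and only the enum comes from the declaration above.

```swift
// Reproduced from the declaration above so this snippet is self-contained.
public enum PrismIntelligenceCapability: String, Codable, Sendable, CaseIterable {
    case textClassification       // Classify text into labels
    case tabularClassification    // Classify rows of tabular data
    case tabularRegression        // Predict numeric values from tabular data
    case languageGeneration       // Generate natural-language text
}

// A hypothetical capability set, as a local backend might advertise it.
let advertised: Set<PrismIntelligenceCapability> = [
    .textClassification, .tabularClassification, .tabularRegression
]

// Illustrative helper: refuse work the backend cannot perform.
func canRun(_ capability: PrismIntelligenceCapability) -> Bool {
    advertised.contains(capability)
}

print(canRun(.tabularRegression))   // true
print(canRun(.languageGeneration))  // false
```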

Checking Backend Status

Use PrismIntelligenceStatus to query what’s available at runtime:
Status Check
let status = PrismIntelligenceStatus(
    backend: .apple,
    isAvailable: true,
    capabilities: [.languageGeneration],
    modelName: "Apple Language Model",
    supportsStreaming: true,
    supportsCustomInstructions: true,
    supportsModelAdapters: true
)

if status.isAvailable {
    print("Backend ready: \(status.capabilities)")
}
PrismIntelligenceStatus includes:
Property                    Type                             Description
backend                     PrismIntelligenceBackendKind     Which backend this status describes
isAvailable                 Bool                             Whether the backend can accept requests
reason                      String?                          Explanation when unavailable
capabilities                [PrismIntelligenceCapability]    Supported operations
modelID                     String?                          Model identifier
modelName                   String?                          Human-readable model name
supportsStreaming           Bool                             Whether streaming responses work
supportsCustomInstructions  Bool                             Whether system instructions are supported
supportsModelAdapters       Bool                             Whether model adapters can be loaded
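When isAvailable is false, the optional reason carries the explanation. The struct below is a hypothetical stand-in inferred from the property table, not the library's actual declaration; it only shows the fallback pattern:

```swift
// Hypothetical stand-in inferred from the property table above;
// the library's actual PrismIntelligenceStatus declaration may differ.
struct StatusSketch {
    let isAvailable: Bool
    let reason: String?
    let modelName: String?
}

let status = StatusSketch(
    isAvailable: false,
    reason: "Apple Intelligence is not enabled on this device",
    modelName: nil
)

if status.isAvailable {
    print("Ready: \(status.modelName ?? "unknown model")")
} else {
    // reason is optional, so supply a default when the backend gives none.
    print("Unavailable: \(status.reason ?? "no reason provided")")
}
```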

Architecture

┌──────────────────────────────────────────────┐
│           PrismIntelligenceClient            │
├──────────┬──────────────────┬────────────────┤
│  Local   │      Apple       │     Remote     │
│ CoreML / │ FoundationModels │    LLM API     │
│ CreateML │                  │                │
└──────────┴──────────────────┴────────────────┘
The client selects the appropriate backend based on the requested capability and configuration. Local models handle classification and regression. Apple Intelligence handles language generation on supported devices. Remote providers handle everything else.
PrismIntelligence requires no third-party ML libraries. Local training uses CreateML, NLP uses the NaturalLanguage framework, and Apple Intelligence uses FoundationModels — all Apple-native.
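The routing described above can be sketched as a single switch. This is an illustrative reconstruction, not the client's actual selection logic; the two enums are shortened stand-ins for the PrismIntelligence types declared earlier so the sketch compiles on its own:

```swift
// Shortened stand-ins for the PrismIntelligence enums declared earlier.
enum BackendKind: String { case local, apple, remote }
enum Capability: String {
    case textClassification, tabularClassification, tabularRegression, languageGeneration
}

// Illustrative routing: classification and regression stay local; language
// generation prefers Apple Intelligence and falls back to a remote provider.
func selectBackend(for capability: Capability, appleAvailable: Bool) -> BackendKind {
    switch capability {
    case .textClassification, .tabularClassification, .tabularRegression:
        return .local
    case .languageGeneration:
        return appleAvailable ? .apple : .remote
    }
}

print(selectBackend(for: .tabularRegression, appleAvailable: true))     // local
print(selectBackend(for: .languageGeneration, appleAvailable: false))   // remote
```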

Next Steps

On-Device Training

Train text and tabular classifiers directly on the device using CreateML.

Apple Intelligence

Generate text with Apple’s FoundationModels framework and model adapters.

NLP & RAG

Sentiment analysis, entity extraction, embeddings, and retrieval-augmented generation.