
You are viewing iOS SDK documentation for v0.1.0. This version is outdated. See the latest version for current features including the model downloader and updated APIs.

iOS Quick Start Guide

Prerequisites

You should already have:

  • An iOS project created in Xcode (an empty project from the new-project wizard is fine). The Leap iOS SDK is Swift-first and requires iOS 15.0+ or macOS 12.0+.
  • A working iOS device or simulator. For real-time performance, a physical device is recommended.
  • Xcode 15.0+ with Swift 5.9+
// In your project's deployment target settings
iOS Deployment Target: 15.0
macOS Deployment Target: 12.0

While the SDK works on iOS Simulator, performance may be significantly slower than on physical devices. A physical iPhone or iPad is recommended for optimal inference speed.

Import the LeapSDK

Swift Package Manager

Add LeapSDK to your project in Xcode:

  1. Open your project in Xcode
  2. Go to File → Add Package Dependencies
  3. Enter the repository URL: https://github.com/Liquid4All/leap-ios.git
  4. Select the latest version and add to your target
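
If you prefer to declare the dependency in a Package.swift manifest (for example, in a Swift package that wraps your app code), a minimal sketch follows. The product name "LeapSDK" and the package identity "leap-ios" are assumptions inferred from the repository URL above.

// swift-tools-version:5.9
// Sketch of a Package.swift pulling in the SDK via Swift Package Manager.
// Product name "LeapSDK" and package identity "leap-ios" are assumed here.
import PackageDescription

let package = Package(
    name: "YourApp",
    platforms: [.iOS(.v15), .macOS(.v12)],
    dependencies: [
        .package(url: "https://github.com/Liquid4All/leap-ios.git", from: "0.1.0")
    ],
    targets: [
        .target(
            name: "YourApp",
            dependencies: [.product(name: "LeapSDK", package: "leap-ios")]
        )
    ]
)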

CocoaPods

Add LeapSDK to your Podfile:

pod 'Leap-SDK', '~> 0.1.0'

Then run:

pod install

Manual Installation

Alternatively, you can download the pre-built XCFramework:

  1. Download the latest LeapSDK.xcframework.zip from GitHub Releases 
  2. Unzip and drag LeapSDK.xcframework into your Xcode project
  3. Ensure “Embed & Sign” is selected in the frameworks settings

Download Model Bundles

Browse the Leap Model Library to find and download a model bundle that matches your needs.

For iOS development, you can include the model bundle directly in your app bundle:

  1. Drag the downloaded .bundle file into your Xcode project
  2. Ensure “Add to target” is checked for your app target
  3. The model will be accessible via Bundle.main.url(forResource:withExtension:)

Load Model in Code

Import LeapSDK and load a model bundle using the Leap.load function. This function is async and should be called from a Task or async context:

import Combine // for ObservableObject and @Published
import LeapSDK

class ChatViewModel: ObservableObject {
    @Published var isModelLoading = true
    private var modelRunner: ModelRunner?

    func setupModel() async {
        do {
            guard let modelURL = Bundle.main.url(
                forResource: "qwen3-0_6b",
                withExtension: "bundle"
            ) else {
                print("Could not find model bundle")
                return
            }
            modelRunner = try await Leap.load(url: modelURL)
            isModelLoading = false
        } catch {
            print("Failed to load model: \(error)")
        }
    }
}
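
In SwiftUI, a natural async context for this call is the .task modifier. The ChatScreen view below is a minimal, illustrative sketch of wiring in the view model; the view name and placeholder text stand in for your real chat UI.

import SwiftUI

// Illustrative SwiftUI wiring for ChatViewModel; view and label names are placeholders.
struct ChatScreen: View {
    @StateObject private var viewModel = ChatViewModel()

    var body: some View {
        Group {
            if viewModel.isModelLoading {
                ProgressView("Loading model...")
            } else {
                Text("Model ready") // Replace with your chat UI
            }
        }
        .task {
            await viewModel.setupModel()
        }
    }
}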

Generate Content with the Model

Create a conversation from the model runner and use it to generate streaming responses. LeapSDK supports both free-form text generation and structured output using constrained generation.

import LeapSDK

@MainActor
func sendMessage(_ input: String) async {
    guard let modelRunner = modelRunner else { return }

    // Create a conversation with system prompt
    let conversation = modelRunner.createConversation(
        systemPrompt: "You are a helpful assistant."
    )

    // Create a user message
    let userMessage = ChatMessage(role: .user, content: [.text(input)])

    // Generate streaming response
    for await response in conversation.generateResponse(message: userMessage) {
        switch response {
        case .chunk(let text):
            print("Received text chunk: \(text)")
            // Update your UI with the text chunk
        case .reasoningChunk(let text):
            print("Received reasoning chunk: \(text)")
            // Handle reasoning content if needed
        case .complete(let fullText, let info):
            print("Generation complete!")
            print("Full text: \(fullText)")
            print("Finish reason: \(info.finishReason)")
        }
    }
}

Structured Output with Constrained Generation

LeapSDK supports generating structured JSON output using Swift macros. This ensures the AI model produces responses that conform to your predefined Swift types.

Basic Structured Output

First, import the constrained generation package and define your structure:

import LeapSDK
import LeapSDKConstrainedGeneration

@Generatable("A joke with metadata")
struct Joke: Codable {
    @Guide("The joke text")
    let text: String

    @Guide("The category of humor (pun, dad-joke, programming, etc.)")
    let category: String

    @Guide("Humor rating from 1-10")
    let rating: Int

    @Guide("Whether the joke is suitable for children")
    let kidFriendly: Bool
}

Using Constrained Generation

@MainActor
func generateStructuredJoke() async {
    guard let modelRunner = modelRunner else { return }

    let conversation = modelRunner.createConversation(
        systemPrompt: "You are a comedian. Respond with valid JSON only."
    )

    // Configure generation options for structured output
    var options = GenerationOptions()
    options.temperature = 0.7

    do {
        // Set the response format to your custom type
        try options.setResponseFormat(type: Joke.self)

        let message = ChatMessage(
            role: .user,
            content: [.text("Create a programming joke in JSON format")]
        )

        // Generate structured response
        for await response in conversation.generateResponse(
            message: message,
            generationOptions: options
        ) {
            switch response {
            case .chunk(let token):
                print(token, terminator: "")
            case .complete(let fullText, let info):
                // Parse the structured JSON response
                do {
                    let jokeData = fullText.data(using: .utf8)!
                    let joke = try JSONDecoder().decode(Joke.self, from: jokeData)
                    print("Generated joke:")
                    print("Text: \(joke.text)")
                    print("Category: \(joke.category)")
                    print("Rating: \(joke.rating)/10")
                    print("Kid-friendly: \(joke.kidFriendly)")
                } catch {
                    print("Failed to parse structured response: \(error)")
                }
            case .reasoningChunk(_):
                break
            }
        }
    } catch {
        print("Failed to set response format: \(error)")
    }
}

See the Constrained Generation guide for comprehensive documentation on defining complex types and advanced usage patterns.

Complete Example

Here’s a complete SwiftUI example integrating the LeapSDK:

import SwiftUI
import LeapSDK

@main
struct LeapChatApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

struct ContentView: View {
    @StateObject private var chatStore = ChatStore()

    var body: some View {
        VStack {
            if chatStore.isModelLoading {
                ProgressView("Loading model...")
            } else {
                // Your chat UI here
                ChatView()
                    .environmentObject(chatStore)
            }
        }
        .task {
            await chatStore.setupModel()
        }
    }
}

class ChatStore: ObservableObject {
    @Published var isModelLoading = true
    @Published var messages: [String] = []
    private var modelRunner: ModelRunner?
    private var conversation: Conversation?

    @MainActor
    func setupModel() async {
        do {
            guard let modelURL = Bundle.main.url(
                forResource: "qwen3-0_6b",
                withExtension: "bundle"
            ) else {
                print("Could not find model bundle")
                return
            }
            modelRunner = try await Leap.load(url: modelURL)
            conversation = modelRunner!.createConversation(
                systemPrompt: "You are a helpful assistant."
            )
            isModelLoading = false
        } catch {
            print("Error loading model: \(error)")
        }
    }

    @MainActor
    func sendMessage(_ input: String) async {
        guard let conversation = conversation else { return }

        let userMessage = ChatMessage(role: .user, content: [.text(input)])
        messages.append("User: \(input)")

        var assistantResponse = ""
        for await response in conversation.generateResponse(message: userMessage) {
            switch response {
            case .chunk(let text):
                assistantResponse += text
            case .reasoningChunk(_):
                break // Handle reasoning if needed
            case .complete(let fullText, let info):
                assistantResponse = fullText.isEmpty ? assistantResponse : fullText
                messages.append("Assistant: \(assistantResponse)")
                print("Generation completed with reason: \(info.finishReason)")
            }
        }
    }
}
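
The ContentView above references a ChatView that this guide does not define. Below is a minimal, hedged sketch of such a view; it relies only on the messages array and sendMessage(_:) from ChatStore, and the layout and input handling are purely illustrative.

import SwiftUI

// Minimal, illustrative ChatView for the example above; not part of the SDK.
struct ChatView: View {
    @EnvironmentObject var chatStore: ChatStore
    @State private var input = ""

    var body: some View {
        VStack {
            // Render the plain-string transcript kept by ChatStore
            List(chatStore.messages, id: \.self) { message in
                Text(message)
            }
            HStack {
                TextField("Type a message...", text: $input)
                    .textFieldStyle(.roundedBorder)
                Button("Send") {
                    let text = input
                    input = ""
                    Task { await chatStore.sendMessage(text) }
                }
                .disabled(input.isEmpty)
            }
            .padding()
        }
    }
}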

Key API Concepts

  • Leap.load(url:): Loads a model bundle and returns a ModelRunner
  • ModelRunner.createConversation(systemPrompt:): Creates a new conversation with optional system prompt
  • Conversation: Manages message state and generates responses
  • ChatMessage: Represents messages with role (.user, .assistant, .system) and content
  • MessageResponse: Streaming response types (.chunk, .reasoningChunk, .complete)
  • GenerationOptions: Configure generation parameters like temperature, topP, and structured output (a sketch follows this list)
  • @Generatable and @Guide: Swift macros for defining structured output types
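
As a sketch of how these pieces fit together for plain text generation, the snippet below tunes GenerationOptions and passes it to generateResponse, mirroring the constrained-generation example above. The temperature and topP values are arbitrary, and the exact topP property name is assumed from the list above.

import LeapSDK

// Hedged sketch: free-form generation with tuned sampling parameters.
@MainActor
func sendCreativeMessage(_ input: String, conversation: Conversation) async {
    var options = GenerationOptions()
    options.temperature = 0.9 // higher values produce more varied output
    options.topP = 0.95       // nucleus sampling cutoff (property name assumed)

    let message = ChatMessage(role: .user, content: [.text(input)])

    for await response in conversation.generateResponse(
        message: message,
        generationOptions: options
    ) {
        switch response {
        case .chunk(let text):
            print(text, terminator: "")
        case .reasoningChunk(_):
            break
        case .complete(_, let info):
            print("\nFinished: \(info.finishReason)")
        }
    }
}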

Performance Tips

  • Load models once and reuse the ModelRunner instance (see the sketch after these tips)
  • Use physical devices for better inference performance
  • Consider showing loading indicators as model loading can take several seconds
  • Handle errors gracefully as model loading may fail on low-memory devices
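
A hedged sketch of the first tip (load once, reuse the ModelRunner): a small actor that caches the runner after the first load. ModelProvider is not an SDK type; it is an illustration of the pattern, and the error handling is deliberately minimal.

import LeapSDK

// Illustrative load-once cache; "ModelProvider" is not part of LeapSDK.
actor ModelProvider {
    static let shared = ModelProvider()

    enum ModelProviderError: Error { case bundleNotFound }

    private var modelRunner: ModelRunner?

    func runner() async throws -> ModelRunner {
        // Reuse the already-loaded runner if available
        if let modelRunner { return modelRunner }

        guard let url = Bundle.main.url(
            forResource: "qwen3-0_6b",
            withExtension: "bundle"
        ) else {
            throw ModelProviderError.bundleNotFound
        }

        // Leap.load is the expensive step; run it once and cache the result.
        let runner = try await Leap.load(url: url)
        modelRunner = runner
        return runner
    }
}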

Examples

See LeapSDK Examples for complete example applications demonstrating:

  • Basic chat interface with SwiftUI
  • Streaming response handling
  • Error handling and model loading states
  • Message history management