Scope SDK Quickstart: Fetch and Render LLM Prompts in Minutes

This guide walks you through fetching and rendering your first prompt with the Scope SDK.

1. Set Environment Variables

The SDK reads five environment variables: three for authentication and two for the API endpoints:

export SCOPE_ORG_ID="your-org-id"
export SCOPE_API_KEY="your-api-key"
export SCOPE_API_SECRET="your-api-secret"
export SCOPE_API_URL="https://api.scope.example.com"
export SCOPE_AUTH_API_URL="https://auth.scope.example.com"
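Before moving on, it can help to confirm all five variables are actually set. The following check is an illustrative sketch, not part of the SDK; the variable names come from the list above, while the helper itself is hypothetical:

```python
# Sketch: verify the Scope environment variables from this guide are set
# before creating a client. missing_scope_vars() is an illustrative helper,
# not an SDK function.
import os

REQUIRED_VARS = [
    "SCOPE_ORG_ID",
    "SCOPE_API_KEY",
    "SCOPE_API_SECRET",
    "SCOPE_API_URL",
    "SCOPE_AUTH_API_URL",
]

def missing_scope_vars() -> list[str]:
    """Return the names of required variables not set in the environment."""
    return [name for name in REQUIRED_VARS if not os.environ.get(name)]

if missing := missing_scope_vars():
    print(f"Missing environment variables: {', '.join(missing)}")
```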

2. Create Credentials and a Client

from scope_client import ScopeClient, ApiKeyCredentials

# Load credentials from environment variables
credentials = ApiKeyCredentials.from_env()

# Create a client
client = ScopeClient(credentials=credentials)

3. Fetch a Prompt Version

By default, get_prompt_version returns the production version of a prompt:

version = client.get_prompt_version("greeting")

print(version.content) # "Hello, {{name}}! Welcome to {{app}}."
print(version.variables) # ["name", "app"]
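The variables list mirrors the distinct {{variable}} placeholders in the template content. As a rough sketch of how such a list could be derived locally (this is illustrative only, not how the SDK computes it):

```python
# Sketch: collect every distinct {{name}} placeholder in order of first
# appearance. Illustrative only; extract_variables is not an SDK function.
import re

def extract_variables(content: str) -> list[str]:
    seen: list[str] = []
    for name in re.findall(r"\{\{\s*(\w+)\s*\}\}", content):
        if name not in seen:
            seen.append(name)
    return seen

print(extract_variables("Hello, {{name}}! Welcome to {{app}}."))
# ['name', 'app']
```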

4. Render the Prompt

Substitute {{variable}} placeholders with real values:

rendered = version.render(name="Alice", app="Scope")
print(rendered) # "Hello, Alice! Welcome to Scope."
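Conceptually, rendering is plain placeholder substitution. A minimal sketch of the idea, assuming simple {{name}}-style placeholders (this is not the SDK's implementation, just the behavior it exposes):

```python
# Sketch of the substitution render() performs: replace each {{variable}}
# placeholder with the matching keyword argument, failing loudly if a value
# is missing. render_template is an illustrative stand-in, not an SDK function.
import re

def render_template(content: str, **values: str) -> str:
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in values:
            raise KeyError(f"missing value for placeholder {name}")
        return str(values[name])
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, content)

print(render_template("Hello, {{name}}! Welcome to {{app}}.", name="Alice", app="Scope"))
# "Hello, Alice! Welcome to Scope."
```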

5. Use with an LLM

Pass the rendered prompt to any LLM provider:

import anthropic

# Fetch and render prompt from Scope
version = client.get_prompt_version("summarize")
model = version.get_metadata("model") # e.g. "claude-sonnet-4-5-20250514"
max_tokens = version.get_metadata("max_tokens") # e.g. 1024
rendered = version.render(document=my_document)

# Send to LLM
llm = anthropic.Anthropic()
response = llm.messages.create(
    model=model,
    max_tokens=max_tokens,
    messages=[{"role": "user", "content": rendered}],
)
print(response.content[0].text)
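Keeping the request assembly in one place makes it easy to swap providers. A small sketch of that idea, where build_request is a hypothetical helper (not part of the Scope SDK) that turns the fetched metadata and rendered content into keyword arguments for the call above:

```python
# Sketch: assemble provider request kwargs from a prompt version's metadata
# and rendered content. build_request is a hypothetical helper for
# illustration, not an SDK or Anthropic API function.
def build_request(model: str, max_tokens: int, rendered: str) -> dict:
    """Return keyword arguments for a messages-style chat completion call."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": rendered}],
    }

request = build_request("claude-sonnet-4-5-20250514", 1024, "Summarize this document: ...")
```

With this helper, the call becomes `llm.messages.create(**request)`, and the prompt metadata stored in Scope fully determines the request.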

Next Steps
