Getting Started

Welcome! Connecting your application to InferenceLake is a simple two-step process.

Step 1 - Setting API keys and base URL

Python (Vercel AI SDK support coming soon):
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.inferencelake.com/openai/",  # update the base URL
    api_key="YOUR_OPENAI_KEY" + "YOUR_INFERENCELAKE_KEY",  # update the API key
)
```
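Rather than hard-coding secrets, you can assemble the concatenated key from environment variables. A minimal sketch, assuming the variable names `OPENAI_API_KEY` and `INFERENCELAKE_API_KEY` (these names are illustrative, not part of InferenceLake's documented setup):

```python
import os

# The environment variable names below are assumptions for this sketch,
# not documented InferenceLake configuration. Set them in your shell or
# deployment environment; the placeholder values here stand in for real keys.
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_KEY"
os.environ["INFERENCELAKE_API_KEY"] = "YOUR_INFERENCELAKE_KEY"

def combined_api_key() -> str:
    """Concatenate the OpenAI key and the InferenceLake key, as in Step 1."""
    return os.environ["OPENAI_API_KEY"] + os.environ["INFERENCELAKE_API_KEY"]

print(combined_api_key())  # YOUR_OPENAI_KEYYOUR_INFERENCELAKE_KEY
```

The resulting string is what you pass as `api_key` when constructing the client.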

Step 2 - Updating model calls with a model subtype

Python:
```python
response = client.chat.completions.create(
    model="gpt-4/customer-support",  # add the model subtype
    messages=[{"role": "user", "content": "Hello!"}],
)
```
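Because the subtype is appended to the model name with a slash, a small helper keeps the composition in one place. A sketch following the `model/subtype` convention shown above (the helper name is illustrative, not an InferenceLake API):

```python
def with_subtype(model: str, subtype: str) -> str:
    """Compose an InferenceLake model identifier from a base model name
    and a subtype, following the 'model/subtype' convention in Step 2."""
    return f"{model}/{subtype}"

print(with_subtype("gpt-4", "customer-support"))  # gpt-4/customer-support
```

You would then pass the result as the `model` argument to `client.chat.completions.create`.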

And that's it! Your model calls are now routed through the InferenceLake network; each call is logged to your InferenceLake database and appears on your dashboard.