Getting Started
InferenceLake is a data platform for collecting LLM inference logs and building specialized models. Get started by installing the SDK and setting up your first project.
Installation
Install via npm:
npm install inferencelake
Or initialize a new project:
npx inferencelake init
Quick Start
Route your existing OpenAI calls through InferenceLake by swapping the model name for an InferenceLake-proxied model (the base model followed by a project name):
// Before
model: "gpt-4"
// After
model: "gpt-4/customer_support"
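In a full request, the model string is the only field that changes. A minimal sketch in plain JavaScript, assuming the proxied-model string follows the `<base_model>/<project>` pattern shown above (the `proxiedModel` helper and the `customer_support` project name are illustrative, not part of the SDK):

```javascript
// Hypothetical helper illustrating the "<base_model>/<project>" naming scheme.
function proxiedModel(baseModel, project) {
  return `${baseModel}/${project}`;
}

// The rest of the request body is unchanged from a standard OpenAI-style call.
const request = {
  model: proxiedModel("gpt-4", "customer_support"),
  messages: [{ role: "user", content: "Where is my order?" }],
};

console.log(request.model); // prints "gpt-4/customer_support"
```

Everything else about the call stays the same, so switching a project on or off is a one-line change.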
Configuration
Set your API key:
export INFERENCELAKE_API_KEY=your_key_here
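Once exported, the key is visible to any Node process you start from that shell. A generic fail-fast check at startup (this is a common Node pattern, not an InferenceLake-specific API; the placeholder assignment below only exists to make the example self-contained):

```javascript
// Fail fast so a missing key surfaces immediately rather than mid-request.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required environment variable: ${name}`);
  return value;
}

// Placeholder value set here purely for demonstration; in practice the
// variable comes from your shell (see the export command above).
process.env.INFERENCELAKE_API_KEY = "your_key_here";
console.log(requireEnv("INFERENCELAKE_API_KEY")); // prints "your_key_here"
```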
Visit your dashboard to view analytics and manage your models.