Log requests to your DB, optimize, and run experiments. Just two lines of code to get started.
Warehouse every OpenAI and Anthropic request to your PostgreSQL database. Use logs to analyze, evaluate, and generate datasets.
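For example, the two-line setup is just a base-URL swap on the official SDK. The gateway URL and header name below are assumptions for illustration, not confirmed API details — check the Velvet docs for the exact values:

```python
from openai import OpenAI

# Point the OpenAI SDK at the Velvet proxy so every request is warehoused
# in your PostgreSQL database. The gateway URL and "velvet-auth" header
# name are assumptions here, not confirmed Velvet API details.
client = OpenAI(
    base_url="https://gateway.usevelvet.com/v1",             # assumed proxy endpoint
    default_headers={"velvet-auth": "your-velvet-api-key"},  # assumed auth header
)

# Requests flow through unchanged; the proxy logs each one as it passes.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
```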
We store a customizable JSON object so you can granularly monitor usage, calculate cost, run evaluations, and fine-tune models.
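As a sketch of how that customization might look from the client side — the metadata header name here is hypothetical, not a confirmed part of Velvet's interface:

```python
import json
from openai import OpenAI

client = OpenAI(base_url="https://gateway.usevelvet.com/v1",  # assumed, as above
                default_headers={"velvet-auth": "your-velvet-api-key"})

# Hypothetical: attach your own fields to the logged JSON object by tagging
# the request. The "velvet-metadata" header name is an assumption.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize this support ticket."}],
    extra_headers={"velvet-metadata": json.dumps(
        {"user_id": "u_123", "feature": "ticket-summary"})},
)
```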
Enable caching to reduce costs and latency. Get full transparency into OpenAI's Batch and Files APIs using our built-in proxy support.
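A minimal sketch of opting a request into the cache — the header name and value are assumptions, so treat this as illustrative rather than Velvet's documented interface:

```python
from openai import OpenAI

client = OpenAI(base_url="https://gateway.usevelvet.com/v1",  # assumed, as above
                default_headers={"velvet-auth": "your-velvet-api-key"})

# Hypothetical cache toggle: identical requests are served from the proxy's
# cache instead of hitting the provider. Header name/value are assumptions.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is your refund policy?"}],
    extra_headers={"velvet-cache": "true"},
)
```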
"We experiment with LLM models, settings, and optimizations. Velvet made it easy to implement logging and caching. And we're storing training sets to eventually fine-tune our own models.
"Velvet gives us a source of truth for what's happening between the Revo copilot, and the LLMs it orchestrates. We have the data we need to run evaluations, calculate costs, and quickly resolve issues."
"Our engineers use Velvet daily. It monitors AI features in production, even opaque APIs like batch. The caching feature reduces costs significantly. And, we use the logs to observe, test, and fine-tune."
Log every request to your database. Secure and compliant.
Store data as JSON to gain deep insights into usage, costs, and more.
Understand API usage to optimize AI features and resolve problems.
Reduce costs and latency with our smart caching system.
Run experiments on test datasets to optimize outputs at scale.
Export datasets for fine-tuning models and other batch workflows.
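Because every request lands in your own PostgreSQL tables as JSON, exporting a fine-tuning dataset can be a single query. The table name and JSON paths below are assumptions about the logged schema, not Velvet's documented layout:

```python
import json
import psycopg

# Illustrative export of prompt/completion pairs into fine-tuning JSONL.
# "llm_logs", "request", and "response" are assumed names — adjust to
# match how your logs are actually warehoused.
QUERY = """
SELECT request->'messages'                           AS messages,
       response->'choices'->0->'message'->>'content' AS completion
FROM llm_logs
WHERE request->>'model' = 'gpt-4o-mini'
  AND created_at > now() - interval '30 days';
"""

with psycopg.connect("postgresql://localhost/mydb") as conn, \
        open("train.jsonl", "w") as out:
    for messages, completion in conn.execute(QUERY):
        # psycopg 3 decodes jsonb columns to Python objects automatically.
        record = {"messages": messages
                  + [{"role": "assistant", "content": completion}]}
        out.write(json.dumps(record) + "\n")
```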
An AI-powered B2B search engine logged 1,500 requests per second.
Use Velvet to identify and export a fine-tuning dataset.
Serve cached results in milliseconds instead of wasting calls on identical requests.