Log requests, add caching, and run experiments. Just two lines of code to get started.
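The "two lines" idea is that, versus a direct OpenAI call, only the endpoint URL and one extra header change. A hypothetical sketch of that shape (the gateway URL, header name, and metadata field below are placeholders, not Velvet's documented API):

```python
import json

# Placeholder proxy endpoint -- a real gateway would stand in for api.openai.com.
GATEWAY_URL = "https://gateway.example.com/v1/chat/completions"

def proxied_request(messages, api_key, velvet_key, metadata=None):
    """Build an OpenAI-style request routed through a logging proxy.

    The only differences from a direct call: the URL and one header.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "velvet-api-key": velvet_key,  # placeholder header name
        "Content-Type": "application/json",
    }
    body = {"model": "gpt-4o-mini", "messages": messages}
    if metadata is not None:
        body["metadata"] = metadata  # custom fields stored alongside the log
    return GATEWAY_URL, headers, json.dumps(body)

url, headers, body = proxied_request(
    [{"role": "user", "content": "Hello"}],
    api_key="OPENAI_API_KEY",
    velvet_key="VELVET_API_KEY",
    metadata={"user_id": "u_123", "feature": "chat"},
)
```

Every request built this way carries its custom metadata with it, so the warehouse row is queryable by user, feature, or any field you attach.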
Warehouse every OpenAI and Anthropic request to your PostgreSQL database. Use logs to analyze, evaluate, and generate datasets.
We store a customizable JSON object so you can granularly monitor usage, calculate cost, run evaluations, and fine-tune models.
Enable caching to reduce costs and latency. Get full transparency into OpenAI's Batch and Files APIs using our built-in proxy support.
"We experiment with LLM models, settings, and optimizations. Velvet made it easy to implement logging and caching. And we're storing training sets to eventually fine-tune our own models.
"Velvet gives us a source of truth for what's happening between the Revo copilot, and the LLMs it orchestrates. We have the data we need to run evaluations, calculate costs, and quickly resolve issues."
"Our engineers use Velvet daily. It monitors AI features in production, even opaque APIs like batch. The caching feature reduces costs significantly. And, we use the logs to observe, test, and fine-tune."
Log every request to your database. Include custom metadata.
We store data as a JSON object so you can granularly query logs.
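Because each request is stored as a JSON object, standard SQL JSON operators can slice logs by model, token usage, or custom metadata. A self-contained sketch, using SQLite's `json_extract` in place of PostgreSQL's `->`/`->>` operators and an assumed `logs(payload)` shape:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (id INTEGER PRIMARY KEY, payload TEXT)")

# Two example log rows in the assumed shape: model, usage, and custom metadata.
rows = [
    {"model": "gpt-4o", "usage": {"total_tokens": 1200}, "metadata": {"user": "alice"}},
    {"model": "gpt-4o-mini", "usage": {"total_tokens": 300}, "metadata": {"user": "bob"}},
]
conn.executemany(
    "INSERT INTO logs (payload) VALUES (?)",
    [(json.dumps(r),) for r in rows],
)

# Sum token usage per model -- the kind of granular query JSON logs enable.
result = conn.execute(
    """SELECT json_extract(payload, '$.model') AS model,
              SUM(json_extract(payload, '$.usage.total_tokens')) AS tokens
       FROM logs
       GROUP BY model
       ORDER BY tokens DESC"""
).fetchall()
```

In PostgreSQL the same query would read `payload->>'model'` and `(payload->'usage'->>'total_tokens')::int` against a JSONB column.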
Return identical results in milliseconds. Reduce costs and latency.
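Conceptually, a response cache like this keys on a hash of the request body: the first call pays for the LLM, and every byte-identical request after it is served from the cache. A minimal local sketch of the idea (not Velvet's implementation; `fake_llm` is a stand-in for the real API call):

```python
import hashlib
import json

_cache = {}

def cached_completion(body, call_llm):
    """Serve repeated, identical request bodies from a local cache."""
    key = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_llm(body)  # only the first identical request hits the LLM
    return _cache[key]

calls = []
def fake_llm(body):
    """Stand-in for an expensive API call; records how often it runs."""
    calls.append(body)
    return {"echo": body["messages"]}

req = {"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "hi"}]}
first = cached_completion(req, fake_llm)
second = cached_completion(req, fake_llm)  # served from cache, no second call
```

Hashing the full body means any change to the model, messages, or settings is a cache miss, so you never get a stale answer for a different prompt.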
Unlock extra data so you can query each file inside the batch.
Use datasets for model evaluations, prompt testing, and fine-tuning.
Use an editor that automates SQL on top of large datasets.
AI-powered B2B search engine logged 1,500 requests per second.
Use Velvet to identify and export a fine-tuning dataset.
Return results in milliseconds and don't waste calls on identical requests.