Track Vertex AI & Databricks model-serving costs and token usage in GenAI Intelligence
You can now track detailed costs and token usage for Google Vertex AI and Databricks GenAI workloads in GenAI Intelligence. GenAI Intelligence gives you a single, comprehensive view of AI costs and token usage across your stack, alongside other supported providers such as Amazon Bedrock, OpenAI, and Anthropic Claude.
For Databricks, coverage includes:
- Databricks-hosted foundation models
- Custom models on Mosaic AI Model Serving
- Model serving–related network egress
- SQL AI Functions usage
Additionally, cost and token data for these AI providers also populates GenAI system labels. These system labels standardize your GenAI data into consistent categories (for example, Model, Usage Type, and Media Format) across all supported GenAI providers, making it easier to drill down into your GenAI costs and usage across providers without working through service-specific SKUs and resources.
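As a rough illustration of why standardized labels help, the sketch below groups a handful of hypothetical cost records by a shared Usage Type category, regardless of provider. The record fields, label values, and dollar amounts are made up for illustration and do not reflect DoiT's actual export or API format.

```python
# Illustrative sketch only: these records and field names are hypothetical,
# showing how shared GenAI system label categories (e.g. Model, Usage Type)
# let you aggregate spend across providers without provider-specific SKUs.
from collections import defaultdict

cost_records = [
    {"provider": "Vertex AI",  "Model": "gemini-1.5-pro",  "Usage Type": "input tokens",  "cost_usd": 12.40},
    {"provider": "Databricks", "Model": "dbrx-instruct",   "Usage Type": "output tokens", "cost_usd": 8.75},
    {"provider": "Bedrock",    "Model": "claude-3-sonnet", "Usage Type": "input tokens",  "cost_usd": 20.10},
]

# Group spend by the shared "Usage Type" label across all providers.
spend_by_usage_type = defaultdict(float)
for record in cost_records:
    spend_by_usage_type[record["Usage Type"]] += record["cost_usd"]

for usage_type, total in spend_by_usage_type.items():
    print(f"{usage_type}: ${total:.2f}")
```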
To get started, explore GenAI Intelligence and use GenAI system labels to report on and allocate your GenAI spend.
Note: To view Databricks GenAI workloads in GenAI Intelligence, first connect your Databricks account to DoiT Cloud Intelligence.