DoiT Cloud Intelligence™

Powered by Technology, Perfected by People.

yesterday

Added support for Google Cloud's new Committed Use Discount (CUD) data model

Google Cloud is updating its spend-based Committed Use Discount (CUD) program to a new data model that replaces credit-based offsets with direct discounted costs. 

In response, we have updated DoiT Cloud Analytics to fully support this new schema, ensuring your reporting remains accurate once Google Cloud automatically applies this change on existing spend-based CUDs beginning January 21, 2026.

This update simplifies billing by utilizing "Consumption Models" to represent discounted rates rather than applying separate credits to list prices. While your total costs will not increase, the structure of your billing data, and how CUD-related savings are displayed, will change.

What’s changing:

  • CUD savings display: To align with Google’s new methodology, we are modifying how savings are displayed in Cloud Analytics Reports. Reports will shift from showing savings as an in-line calculation to a distinct savings presentation, reflecting the difference between on-demand prices and your final discounted cost.
  • Automatic SKU Conversion: Google Cloud will automatically convert your current CUD SKUs to the new ones. Consequently, any unique processing or presentation of your current SKUs within DoiT will be seamlessly transitioned to the new SKUs.
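To see the structural difference with illustrative numbers (a hypothetical 25% spend-based CUD on $100 of on-demand usage; these are not actual Google Cloud rates or SKUs):

```python
# Illustrative numbers only; not actual Google Cloud rates or SKUs.
list_price = 100.0   # on-demand (list) cost of the usage
cud_discount = 0.25  # hypothetical spend-based CUD discount

# Old model: usage billed at list price, offset by a separate credit line.
old_cost_line = list_price
old_credit_line = -list_price * cud_discount
old_net = old_cost_line + old_credit_line

# New model: usage billed directly at the discounted rate; savings are
# reported as the gap between the on-demand price and the final cost.
new_cost_line = list_price * (1 - cud_discount)
new_savings = list_price - new_cost_line

# Net cost is identical ($75.00 either way); only the billing structure
# and the way savings surface in reports change.
```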

While this format is optional until Google applies it universally to all customers on January 21, 2026, you can opt into the new CUD format early via the Google Cloud console and start exploring your savings in DoiT Cloud Intelligence.

To learn more about the new CUD data model, submit a support ticket or reach out to your DoiT account team.

Matan Bordo
Announcement
yesterday

Introducing the DoiT Cloud Intelligence™ CLI

You use DoiT Cloud Intelligence™ to understand cloud spend and usage. The next step is making that insight repeatable and actionable in the systems where work actually happens: scripts, pipelines, and runbooks.

That's why we are introducing the DoiT Cloud Intelligence™ CLI, a straightforward way to interact with DoiT from your terminal and automate the work that keeps teams aligned and accountable.

With the DoiT Cloud Intelligence CLI, you get:

  • Shorter time from question to answer: Standardize recurring analyses and data pulls so stakeholders get consistent, up-to-date cloud cost reports without manual exports.
  • Reliable governance: Apply budgets, annotations, and allocation logic consistently across environments so ownership does not depend on one person remembering “the right way.”

For example, you can use Annotations directly from the CLI to capture the "why" behind the charts (deployments, migrations, incident response, policy changes), so anyone looking at a report later sees context immediately.

Practical examples:

  • Create an annotation automatically when a production deployment completes, so cost deltas are clearly tied to releases.
  • Add annotations when an incident is declared or resolved to connect reliability events to cost changes.
  • Record business events (campaign launches, new customer onboardings) so spend is understood in business context, not just infrastructure terms.
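For example, a CI job could assemble and submit an annotation when a deployment completes. The sketch below only builds a payload in Python; the field names are illustrative, not the CLI's actual annotation schema (consult the CLI docs for the real commands):

```python
from datetime import datetime, timezone

def deploy_annotation(service: str, version: str, note: str = "") -> dict:
    """Build an annotation payload tying a cost delta to a release.

    Field names are illustrative; consult the DoiT CLI documentation
    for the actual annotation schema and command syntax.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "title": f"Deployed {service} {version}",
        "description": note or f"Production release of {service} {version}",
        "labels": {"type": "deployment", "service": service},
    }

payload = deploy_annotation("checkout-api", "v2.14.0", "Rolled out new cache layer")
# A CI step would then submit this payload via the CLI after the deploy succeeds.
```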

Get started

If you can run a command in a terminal, you can get the value quickly. Install the CLI, authenticate, and then explore the available commands and actions. Docs and full command reference: https://help.doit.com/docs/cli

Vadim Solovey
Announcement
4 days ago

New CloudFlow Template: AWS Commitment Expiration Monitor

The AWS Commitment Expiration Monitor is now available in CloudFlow. This template continuously tracks upcoming expirations of AWS Reserved Instances and Savings Plans across EC2, RDS, Redshift, ElastiCache, OpenSearch, SageMaker, and Database services. It surfaces actionable alerts ahead of expiration to help FinOps and DevOps teams prevent coverage gaps, reduce unplanned on-demand spend, and plan renewals based on real usage signals.
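The core lookahead the monitor performs can be pictured as a window check over commitment end dates. A Python sketch with illustrative fields (the template itself sources real RI and Savings Plan data from your connected AWS accounts):

```python
from datetime import date, timedelta

def expiring_soon(commitments, today, lead_days=30):
    """Return commitments whose end date falls within the alert window.

    Each commitment is a dict with illustrative fields; the CloudFlow
    template reads real commitment data from AWS.
    """
    horizon = today + timedelta(days=lead_days)
    return [c for c in commitments if today <= c["expires"] <= horizon]

commitments = [
    {"id": "ri-ec2-1", "expires": date(2026, 7, 10)},
    {"id": "sp-compute-1", "expires": date(2026, 12, 1)},
]
alerts = expiring_soon(commitments, today=date(2026, 6, 20), lead_days=30)
# Only ri-ec2-1 falls inside the 30-day window.
```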

Start using this template by navigating to Operate -> CloudFlow -> Templates.

Vadim Solovey
Announcement · home page
a week ago

CloudFlow SQL Node: Turn Any FinOps or CloudOps Question into an Automated Control

Today, we are introducing the CloudFlow SQL node – a way to turn the data you already have into live FinOps and CloudOps automations, using the language your teams already know: SQL.

Instead of exporting data to a BI tool, building a dashboard, and hoping someone remembers to check it, you can now write a SQL query inside CloudFlow and wire the result directly into alerts, workflows, and policies across all of your clouds. The CloudFlow SQL node natively connects to our unified data lake, using the same normalized schema you already use in DoiT Cloud Intelligence™.

While we’re excited to see what you build with the SQL node in CloudFlow, we’ve created several pre-built templates to get you started, available in the CloudFlow template library (use the Source: Billing Data filter to find them quickly).

These templates require no configuration. Simply click one to open it, then hit Publish to start generating meaningful alerts and workflows. Read on for details about one example you can start implementing:

Example: Time-to-Tag Leakage Analysis (Tag Lag Drift)

Here's a common FinOps problem: even when teams consistently tag resources, tags often appear days after the resource starts incurring costs. The spend generated during that gap shows up as “untagged,” creating leakage that can’t be attributed to the correct owner or workload. Dashboards can display daily untagged spend, but they do not quantify the lag effect or identify services where the lag is worsening.

What the SQL node does:

  1. Scans billing data for each resource and computes the first timestamp where a non-system tag is present
  2. Compares that timestamp to the resource’s earliest usage cost and calculates leakage per resource, per service, per day
  3. Aggregates to daily untagged leakage, total untagged spend, and the percent of spend impacted by tagging delay
  4. Computes trend direction (improving or degrading) over a defined window
  5. Filters out services that do not support cost allocation tags to avoid noise
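In plain Python, the per-resource logic of steps 1 through 3 looks roughly like this (the actual node runs SQL against the normalized billing schema; the row shape and field names here are illustrative):

```python
from collections import defaultdict

def tag_lag_leakage(rows):
    """Spend each resource incurred before its first non-system tag appeared.

    rows: iterable of (resource_id, day, cost, has_tag), where day is a
    simple ordinal; the real query works on billing timestamps.
    """
    first_tagged = {}
    by_resource = defaultdict(list)
    for rid, day, cost, has_tag in rows:
        by_resource[rid].append((day, cost))
        if has_tag and (rid not in first_tagged or day < first_tagged[rid]):
            first_tagged[rid] = day
    leakage = {}
    for rid, entries in by_resource.items():
        # A resource that is never tagged leaks all of its spend.
        cutoff = first_tagged.get(rid, float("inf"))
        leakage[rid] = sum(cost for day, cost in entries if day < cutoff)
    return leakage

rows = [
    ("vm-1", 1, 10.0, False),  # spend starts untagged
    ("vm-1", 2, 10.0, False),
    ("vm-1", 3, 10.0, True),   # first tag appears on day 3
    ("db-9", 1, 5.0, True),    # tagged from day one, so no leakage
]
leakage = tag_lag_leakage(rows)
# vm-1 leaked 20.0 across days 1 and 2; db-9 leaked nothing.
```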

In CloudFlow, you can then:

  • Trigger an alert only when leakage crosses a threshold or when the trend degrades week over week
  • Notify owning teams of the specific services or resources driving the lag (or create a Jira task)
  • Feed the data into CloudFlow's LLM node to generate a human-readable explanation of what caused the degradation and which teams should act

This converts “some of our spend is untagged” into an operational metric with direction and ownership.

What you can build next

This example is only a starting point. With the CloudFlow SQL node, you can also design automations for:

  • Budget guardrails and burn rate monitoring at the project, account, or business unit level
  • Policy checks for label compliance tied directly to your own label taxonomy
  • Identification of noisy SKUs, high-cost regions, or underutilized data services
  • Custom CloudOps SLOs driven by spend and usage patterns, not just uptime
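For instance, the budget guardrail boils down to a projection a SQL node can compute daily. A Python sketch of the logic, with assumed field names and thresholds:

```python
def burn_rate_check(mtd_spend: float, budget: float,
                    day_of_month: int, days_in_month: int) -> dict:
    """Project month-end spend from the month-to-date burn rate and flag
    whether the projection breaches the budget."""
    daily_rate = mtd_spend / day_of_month
    projected = daily_rate * days_in_month
    return {
        "daily_rate": round(daily_rate, 2),
        "projected": round(projected, 2),
        "breach": projected > budget,
    }

# $12,000 spent in the first 10 days of a 30-day month, $30,000 budget:
status = burn_rate_check(12_000, 30_000, day_of_month=10, days_in_month=30)
# Projected month-end spend is $36,000, so the guardrail fires.
```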

If you already use DoiT Cloud Intelligence, the CloudFlow SQL node is available in your environment as a new node type inside CloudFlow. Start by taking one of your existing FinOps queries and turning it into an automated Flow that runs on a schedule, posts to Slack, opens Jira tickets, or chains into additional nodes such as LLM-based explanations.

Learn more about the SQL node in the DoiT Cloud Intelligence Help Center. We can also help you build your automation free of charge! Just ask at support.doit.com.

Vadim Solovey
Announcement · home page
a week ago

Contact Expert: workload help, one click away

We’ve added a new Contact Expert button to all Workload Intelligence dashboards in DoiT Cloud Intelligence™, so you can turn insights into action without leaving the product. You'll see Contact Expert on dashboards for AWS, Google Cloud, Azure, GenAI, Datadog, Snowflake, Databricks, and MongoDB.

What Contact Expert does

Contact Expert connects you directly with DoiT’s global team of Forward-Deployed Engineers, who specialize in the workload you are looking at. They work with these environments every day and can help you:

  • Interpret what you are seeing in the dashboard in the context of your architecture and roadmap
  • Prioritize optimizations across cost, performance, and reliability
  • Design and validate changes before they hit production
  • Turn recurring issues into automation or guardrails, not just one-off fixes

You can see the scale and depth of our work across customers at our service stats page: https://www.doit.com/stats

How it works

From any supported Workload Intelligence dashboard:

  • Click "Contact Expert" and add any extra context, goals, or constraints
  • Our team receives your request together with the relevant workload view, so you don’t have to re-explain the basics

You get a human expert who can review your environment, propose concrete next steps, and, where appropriate, help you operationalize changes using DoiT Cloud Intelligence™.

Why this matters

DoiT Cloud Intelligence™ is the only FinOps and CloudOps cloud intelligence platform that includes unlimited access to real workload experts as part of the product. There are no extra “consulting hours” to purchase. Our Forward-Deployed Engineers become an extension of your team, embedded directly into your day-to-day optimization work. With Contact Expert now available across all major cloud and data workloads, every dashboard in DoiT is not just an insight surface, but a direct path to action.

Vadim Solovey
Improvement
2 weeks ago

Monitor Azure AI costs and token usage in GenAI Intelligence

If you’re using Azure AI to build and run LLM-powered applications, you’ll now see those associated costs in GenAI Intelligence alongside any AI spend from other platforms you use.

GenAI Intelligence gives you a single, comprehensive view of AI costs and token usage across your AI stack, including other supported providers like Amazon Bedrock, OpenAI, and Anthropic Claude.

Azure AI usage will also populate GenAI labels, so you can easily build your own reports and allocations on top of this AI spend data. GenAI labels turn provider-specific data into consistent dimensions (like Model, Feature, Media Format) across your AI stack, making it much easier to break down GenAI costs and usage without digging through SKUs and services for each platform.
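Conceptually, GenAI labels act as a normalization layer over provider-specific fields. A toy Python sketch of the idea (the input field names and mappings are illustrative, not DoiT's actual label schema):

```python
def to_genai_labels(record: dict) -> dict:
    """Map a provider-specific usage record onto shared GenAI dimensions.

    Input field names are illustrative examples of what providers expose;
    the shared "Model" and "Feature" keys mirror the label dimensions
    described above.
    """
    provider = record["provider"]
    if provider == "openai":
        return {"Model": record["model"], "Feature": record.get("purpose", "chat")}
    if provider == "bedrock":
        return {"Model": record["modelId"], "Feature": record.get("feature", "chat")}
    raise ValueError(f"unsupported provider: {provider}")

a = to_genai_labels({"provider": "openai", "model": "gpt-4o"})
b = to_genai_labels({"provider": "bedrock", "modelId": "anthropic.claude-3-5-sonnet"})
# Both records now share the same "Model" dimension for reporting.
```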

To get started, explore GenAI Intelligence and use GenAI system labels to explore and allocate your GenAI spend.

GenAI Intelligence is available on all DoiT Cloud Intelligence™ tiers.


Matan Bordo
Announcement
2 weeks ago

Introducing Real-Time Cost Anomaly Detection for Google BigQuery On-Demand

We’re excited to roll out a major upgrade to how DoiT helps you stay in control of your Google Cloud spend: real-time anomaly detection for Google BigQuery on-demand. For the first time, you’ll receive alerts about unexpected BigQuery cost spikes in under an hour.

What’s new

Until now, anomaly insights for BigQuery relied on next-day billing-file ingestion. That meant if a bad query ran at 2 PM today, you wouldn’t know until tomorrow.

With our new real-time detection pipeline for BigQuery on-demand, we continuously ingest and analyze live BigQuery usage metadata, flag unusual usage patterns, and send you Slack or email alerts in less than an hour – not the next day.

Why it matters

As DoiT’s BigQuery expert Sayle Matthews will tell you, the risk of unchecked queries racking up significant costs in a short period of time is very high:

“One of the largest issues is seeing how much their bill is at any given moment and being able to alert them when a ‘runaway query’ hits. We have seen some examples where customers have single queries that cost $2,000 USD and run in less than a minute, and of course, these were run multiple times in quick succession. These mistakes lead to massive bills that aren't caught for days or weeks later.”
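The arithmetic behind a runaway query is straightforward, since on-demand BigQuery charges per byte scanned. A rough estimate, assuming the published US on-demand rate of $6.25 per TiB (verify current pricing for your region):

```python
RATE_PER_TIB_USD = 6.25  # US on-demand rate at the time of writing; check current pricing

def scan_cost(tib_scanned: float) -> float:
    """Approximate cost of a single BigQuery on-demand query."""
    return tib_scanned * RATE_PER_TIB_USD

# A single query scanning 320 TiB costs about $2,000 and can finish in
# under a minute, matching the runaway queries described above.
cost = scan_cost(320)
```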

Real-time detection for BigQuery on-demand means that you can:

  • Catch runaway queries in minutes: Prevent accidental or inefficient queries from racking up costs before anyone notices.
  • Protect against operational mistakes: Get alerted when abnormal query activity starts impacting spend.
  • Strengthen your security posture: Real-time cost changes can signal unauthorized data access or compromised systems.

What you need to do

This feature is available for all DoiT customers with a paid Enhanced, Premium, or Enterprise subscription and a connected Google Cloud account. To enable it, take the following steps:

  1. Locate the service account of interest on the Google Cloud access & features page.
  2. Select the kebab menu (⋮) next to the project connection, and then select Edit.
  3. Select the Real-time Anomalies – BigQuery checkbox to add the feature.
  4. Select Generate gcloud commands.
  5. Follow the instructions displayed in the side panel to update your custom role.
  6. Select Done to enable the feature.

Enable BQ real-time anomaly detection


Next steps

Up next, we’ll release support for BigQuery reservations, bringing the same real-time intelligence to your full BigQuery footprint.

For more information about enabling real-time anomaly detection, consult our Help documentation or raise a support ticket.

Craig Lowell
Improvement
a month ago

Snowflake Intelligence update: Key-pair authentication now required for new connections + existing connections must migrate by June 2026

For customers sending (or planning to send) Snowflake cost & usage data to DoiT Snowflake Intelligence, key-pair authentication is now supported and required for new connections, and existing password-based LEGACY_SERVICE users must migrate by June 2026.

Snowflake is deprecating password-only LEGACY_SERVICE users, with full removal planned for June 2026. In response, we’ve added support for key-pair authentication. This removes passwords from the flow and gives you stronger security and a cleaner audit trail for your Snowflake connection.

What this means:

  • Starting today, key-pair authentication is the only way you’ll be able to set up the Snowflake Intelligence integration
  • If you’ve already set up the integration with a LEGACY_SERVICE user, migrate to key-pair authentication before June 2026 to avoid interruption

To get started, view our documentation on setting up and updating your Snowflake connection.

Matan Bordo
Announcement
a month ago

Introducing Agentic AI in Insights: faster remediation, right where you work

You can now use DoiT's Agentic FinOps AI directly from any Insight detail page to get guided analysis and ready-to-use actions without leaving the view.

  • Estimate the impact of implementing the Insight, including cost savings and business or technical implications based on the context.
  • Break down the remediation tasks with a step-by-step plan tied to the affected resources.
  • Surface downstream dependencies and hidden costs, with callouts for related risks or follow-up work to expect.
  • Estimate effort to execute, with time/complexity guidance that’s persisted in the UI for later reference.
  • Generate a Terraform configuration, with a one-click copy option for the snippet to streamline applying the fix.

Why it matters:

  • Cut MTTR by getting impact, steps, and effort in seconds.
  • Reduce handoffs and tabs. Everything you need lives beside the evidence.
  • Improve consistency. AI gives repeatable plans and Terraform you can reuse.


No configuration changes are needed. To get started, select the AI bubble in the bottom right corner of any Insight detail page.

Karl Kalash
Announcement
a month ago

MongoDB Intelligence — FinOps visibility for Atlas, built into DoiT Cloud Intelligence™

MongoDB provides teams with powerful elasticity, but that flexibility often comes with fragmented visibility into what drives costs. MongoDB Intelligence brings structure to that chaos.

This new module ingests Atlas billing and usage data across organizations, projects, and SKUs, turning raw cost exports into an actionable view of your Atlas estate. You can instantly see how spend breaks down across clusters, backups, and storage, identify which projects are trending up or down, and trace deltas to specific Atlas SKUs like ATLAS_AWS_INSTANCE_* or ATLAS_BACKUP_*.
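Because Atlas SKU names encode the cost category in their prefix, a cluster-versus-backup breakdown can be derived by grouping on those prefixes. A minimal Python sketch (the concrete SKU suffixes and costs are illustrative):

```python
from collections import defaultdict

# SKU-family prefixes mentioned above; suffixes vary by instance size,
# backup type, and so on.
PREFIXES = [
    ("ATLAS_AWS_INSTANCE_", "cluster_compute"),
    ("ATLAS_BACKUP_", "backup"),
]

def categorize(sku: str) -> str:
    for prefix, category in PREFIXES:
        if sku.startswith(prefix):
            return category
    return "other"

def rollup(line_items) -> dict:
    """line_items: iterable of (sku, cost). Returns total cost per category."""
    totals = defaultdict(float)
    for sku, cost in line_items:
        totals[categorize(sku)] += cost
    return dict(totals)

spend = rollup([
    ("ATLAS_AWS_INSTANCE_M30", 420.0),  # illustrative line items
    ("ATLAS_AWS_INSTANCE_M10", 80.0),
    ("ATLAS_BACKUP_SNAPSHOT", 35.0),
])
```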

Analytics for organization-level costs, project distribution, and SKU analytics reveal patterns previously hidden in CSVs, helping FinOps and engineering teams align on the same data when evaluating scaling, tiering, or retention decisions.

On the right, you’ll find Ava, an Agentic FinOps AI assistant. Ava isn’t just a chatbot; it’s a reasoning layer that interprets the same data visible on the dashboard. Ask questions like “Which projects or SKUs changed the most this month?” or “Where am I overspending on backups?”, and Ava will analyze cost trends, isolate anomalies, and suggest next best actions.

Setting up MongoDB Intelligence takes only a few minutes — connect your Atlas organization, and DoiT Cloud Intelligence™ will automatically ingest cost and usage data. Follow the step-by-step guide in our Help Center article to enable the integration securely and start visualizing your MongoDB Atlas spend with zero manual exports.

MongoDB Intelligence extends the DoiT Cloud Intelligence™ platform’s FinOps coverage to Atlas, providing your teams with a precise, explainable, and actionable understanding of MongoDB spend. MongoDB Intelligence is available on all DoiT Cloud Intelligence™ tiers.

Vadim Solovey