DoiT Cloud Intelligence™

Powered by Technology, Perfected by People.

Announcement
today

Understand Spend Changes with Time Period Comparisons

Understanding cloud spend isn’t just about what you’re spending today; it’s about how that spend is changing over time.

Instead of running multiple reports with different time windows, you can now: 

  • Compare spend and usage across two time periods in one view
  • Identify trends and seasonality at a glance
  • Quickly measure the impact of optimizations or configuration changes
  • Spot unexpected increases earlier

With time period comparisons, you can define a primary and secondary time window in the same report. Instantly see total values, absolute change, and percent change, with visual highlights that draw attention to the largest increases and decreases. This makes it easier to understand what changed, how much it changed, and where to focus next.

To get started, open a new or existing report and add a secondary time window. 

For more details about time period comparisons, check out our Help documentation.

Josh Bonner
Announcement
today

Find What Matters Faster with Favorite Reports

As your library of saved reports grows, quickly getting to the reports you use most can become a challenge. You shouldn’t have to search or scroll every time you want to answer a familiar question or access a frequent report. Now you don’t have to. 

You can now star your favorite reports in the DoiT Console and sort your report list to keep starred reports at the top. With just a few clicks, your most important reports are always front and center, exactly where you need them. Favorites are personal to you: other users won’t see or be affected by the reports you star.

This simple but powerful enhancement makes it easier to stay focused on the insights that matter most, while reducing friction in your day-to-day workflow. 

Work smarter with:

  • Faster access to the reports you rely on most
  • Less friction when navigating large report libraries
  • A more streamlined, personalized reporting workflow
  • More time spent on insights, less time on navigation

Get started

  • To favorite a report, select the star icon next to the report name in the report list
  • To view starred reports, filter the reports list by “Favorite == yes”

For more details, consult our Help documentation or raise a support ticket.

Josh Bonner
Announcement
yesterday

Get Deeper Insight with Multiple-Metric Reports

Understanding cloud spend often means looking at more than one metric at a time, such as how cost relates to usage, list price, or savings.

You can now add multiple metrics, such as cost, usage, list price, and savings, to a single report. This makes it easier to validate data, reduce duplicate reporting, and deliver clearer insights without exporting or stitching together multiple views.

What this enables

  • Compare cost, usage, and list price in one report
  • Validate data more easily across metrics
  • Reduce the need for duplicate or parallel reports
  • Communicate insights more clearly to finance, engineering, and leadership

Getting started

  • To add multiple metrics: open a new or existing report and select “+” next to Metrics

For more details, consult our Help documentation or raise a support ticket.

Josh Bonner
Announcement
6 days ago

Real-Time Cost Anomaly Detection Now Available for Google BigQuery On-Demand & Reservations

We’re excited to roll out a major upgrade to how DoiT helps you stay in control of your Google Cloud spend: real-time anomaly detection for Google BigQuery, including both on-demand and reservation workloads. Once enabled, you’ll receive alerts about unexpected BigQuery cost spikes in under an hour.

What’s new

Until now, anomaly insights for BigQuery relied on next-day billing-file ingestion. That meant if a bad query ran at 2 PM today, you wouldn’t know until tomorrow.

With our new real-time detection pipeline for BigQuery on-demand, we continuously ingest and analyze live BigQuery usage metadata, flag unusual usage patterns, and send you Slack or email alerts in less than an hour, not the next day.

Why it matters

Real-time detection for BigQuery means that you can:

  • Catch runaway queries in minutes: Prevent accidental or inefficient queries from racking up costs before anyone notices.
  • Protect against operational mistakes: Get alerted when abnormal query activity starts impacting spend.
  • Strengthen your security posture: Real-time cost changes can signal unauthorized data access or compromised systems.

As DoiT’s BigQuery expert Sayle Matthews can attest, the risk of unchecked queries racking up significant costs in a short period of time is very high:

“One of the largest issues is seeing how much their bill is at any given moment and being able to alert them when a ‘runaway query’ hits. We have seen some examples where customers have single queries that cost $2,000 USD and run in less than a minute, and of course, these were run multiple times in quick succession. These mistakes lead to massive bills that aren't caught for days or weeks later.”

Watch this video for a more detailed breakdown of one such example of a runaway query that could have cost a customer thousands in unnecessary charges.

What you need to do

This feature is available for all DoiT customers with a paid Enhanced, Premium, or Enterprise subscription and a connected Google Cloud account. 

You can enable real-time anomaly detection for BigQuery when connecting a new project or editing an existing one. For a new project, take the following steps:

  1. Sign in to the DoiT Cloud Intelligence console, select Integrate from the top navigation bar, and then select Google Cloud.
  2. From the Connect drop-down, select Project.
  3. Select the Real-time Anomalies – BigQuery checkbox. You can expand the feature to view its required permissions.
  4. Select Generate gcloud commands.
  5. Follow the instructions displayed in the side panel to configure your service account. See Connect a Google Cloud project for more information.
  6. If the connection has been set up successfully, the status of the Real-time Anomalies – BigQuery feature will show Healthy.

To enable the feature on multiple projects, you'll need to repeat the process for each one. To add real-time anomalies to a connected project:

  1. From the Integrate tab, select Google Cloud and find the project you want to enable it on.
  2. Select the kebab menu (⋮) next to the project connection, and then select Edit.
  3. Select the Real-time Anomalies – BigQuery checkbox to add the feature.
  4. Select Generate gcloud commands.
  5. Follow the instructions displayed in the side panel to update your custom role.
  6. Select Done to enable the feature.


Next steps

For more information about real-time anomaly detection, consult our Help documentation or raise a support ticket.

Craig Lowell
Announcement
a week ago

Add Context to Your Cost or Business KPIs in a Click

Data tells you what happened, but it doesn't always tell you why. We know that when you spot an anomaly or a spike in your cloud cost, usage, or business metrics, you need to capture that context immediately, before it’s lost in the shuffle of daily operations.

Starting today, you no longer need to navigate to a separate settings menu or lose your place in your analysis to leave a note. You can now create annotations directly from your reports by simply selecting a specific data point (using the right-click menu).


Vadim Solovey
Announcement
a week ago

Organize Resources with Labels API

Manage custom metadata for your DoiT Cloud Intelligence™ resources with the new Labels API. Align governance objects, such as annotations, allocations, and reports, directly with your org structure, managed environments, cost centers, and other relevant entities.

Simple Taxonomy. Labels use single-value identifiers (like Production or Engineering) rather than complex key-value pairs. Filter and categorize resources faster with this flat structure.

Key Capabilities

  • Centralized Management: Build a standard label library for your organization.
  • Improve Governance: Label Annotations and Reports to properly classify them.
  • Automate Workflows: Apply labels to new Reports or Annotations automatically during provisioning.

Supported Resources

  • Annotations
  • Allocations
  • Reports
  • Metrics
  • Budgets
  • Alerts
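
For a sense of what this looks like programmatically, here is a minimal sketch in Python. It assumes an API key in a DOIT_API_KEY environment variable, and the endpoint paths, payload fields, and IDs are illustrative placeholders rather than the documented contract; see the API Reference below for the exact routes.

```python
# Hypothetical sketch: create a single-value label and attach it to a report.
# The endpoint paths, payload fields, and REPORT_ID below are illustrative
# assumptions; consult the API Reference for the actual Labels API contract.
import os

import requests

API_BASE = "https://api.doit.com"  # DoiT public API base URL
HEADERS = {
    "Authorization": f"Bearer {os.environ['DOIT_API_KEY']}",
    "Content-Type": "application/json",
}

# Create a flat, single-value label (e.g. "Production"); no key-value pairs.
create = requests.post(
    f"{API_BASE}/core/v1/labels",  # placeholder path
    headers=HEADERS,
    json={"name": "Production"},
    timeout=30,
)
create.raise_for_status()
label_id = create.json()["id"]  # assumed response field

# Attach the label to an existing report (REPORT_ID is a placeholder).
attach = requests.post(
    f"{API_BASE}/core/v1/labels/{label_id}/assign",  # placeholder path
    headers=HEADERS,
    json={"resourceType": "report", "resourceId": "REPORT_ID"},
    timeout=30,
)
attach.raise_for_status()
```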

API Reference

Vadim Solovey
4 weeks ago

Added support for Google Cloud's new Committed Use Discount (CUD) data model

Google Cloud is updating its spend-based Committed Use Discount (CUD) program to a new data model that replaces credit-based offsets with direct discounted costs. 

In response, we have updated DoiT Cloud Analytics to fully support this new schema, ensuring your reporting remains accurate once Google Cloud automatically applies this change to existing spend-based CUDs beginning January 21, 2026.

This update simplifies billing by utilizing "Consumption Models" to represent discounted rates rather than applying separate credits to list prices. While your total costs will not increase, the structure of your billing data, and how CUD-related savings are displayed, will change.

What’s changing:

  • CUD savings display: To align with Google’s new methodology, we are modifying how savings are displayed in Cloud Analytics Reports. Reports will shift from showing savings as an in-line calculation to a distinct savings presentation, reflecting the difference between on-demand prices and your final discounted cost (see the toy example after this list).
  • Automatic SKU Conversion: Google Cloud will automatically convert your current CUD SKUs to the new ones. Consequently, any unique processing or presentation of your current SKUs within DoiT will be seamlessly transitioned to the new SKUs.
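
To make the change concrete, here is a toy calculation with made-up numbers (a $100 on-demand charge under a 25% spend-based CUD). It is only an illustration of the two representations, not DoiT's or Google's actual billing logic:

```python
# Toy illustration with made-up numbers: how the same $100 of usage under a
# 25% spend-based CUD is represented before and after the data-model change.
list_cost = 100.00
cud_rate = 0.25

# Old model: usage is billed at list price and a separate credit line offsets it.
old_usage_cost = list_cost
old_cud_credit = -list_cost * cud_rate          # appears as a credit row
old_net_cost = old_usage_cost + old_cud_credit  # 75.00

# New model: the usage line already carries the discounted ("Consumption Model") rate.
new_usage_cost = list_cost * (1 - cud_rate)     # 75.00, no separate credit row
new_savings = list_cost - new_usage_cost        # 25.00, shown as distinct savings

assert round(old_net_cost, 2) == round(new_usage_cost, 2)  # total cost is unchanged
print(f"old: cost={old_usage_cost:.2f} credit={old_cud_credit:.2f} net={old_net_cost:.2f}")
print(f"new: cost={new_usage_cost:.2f} savings={new_savings:.2f}")
```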

While this format is optional until Google applies it universally to all customers on January 21, 2026, you can opt into the new CUD format early via the Google Cloud console and start exploring your savings in DoiT Cloud Intelligence.

To learn more about the new CUD data model, submit a support ticket or reach out to your DoiT account team.

Matan Bordo
Announcement
4 weeks ago

Introducing the DoiT Cloud Intelligence™ CLI

You use DoiT Cloud Intelligence™ to understand cloud spend and usage. The next step is making that insight repeatable and actionable in the systems where work actually happens: scripts, pipelines, and runbooks.

That's why we are introducing the DoiT Cloud Intelligence™ CLI, a straightforward way to interact with DoiT from your terminal and automate the work that keeps teams aligned and accountable.

With the DoiT Cloud Intelligence CLI, you get:

  • A shorter time from question to answer: standardize recurring analyses and data pulls so stakeholders get consistent, up-to-date cloud cost reports without manual exports.
  • More reliable governance: apply budgets, annotations, and allocation logic consistently across environments so ownership does not depend on one person remembering “the right way.”

For example, you can use Annotations directly from the CLI to capture the "why" behind the charts (deployments, migrations, incident response, policy changes), so anyone looking at a report later sees context immediately.

Practical examples:

  • Create an annotation automatically when a production deployment completes, so cost deltas are clearly tied to releases (see the sketch after this list).
  • Add annotations when an incident is declared or resolved to connect reliability events to cost changes.
  • Record business events (campaign launches, new customer onboardings) so spend is understood in business context, not just infrastructure terms.
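
As an example of the first scenario, a post-deploy step in CI could shell out to the CLI to record the release. This is a hypothetical sketch: it assumes the CLI binary is installed as `doit` and already authenticated, and the subcommand and flag names are placeholders rather than the documented syntax (check the command reference under Get started below for the real commands).

```python
# Hypothetical post-deploy hook: record a release as a DoiT annotation.
# Assumes the DoiT CLI is installed as `doit` and authenticated; the subcommand
# and flag names are placeholders, not the CLI's documented syntax.
import os
import subprocess
from datetime import datetime, timezone


def annotate_release(service: str, version: str) -> None:
    """Create an annotation so cost reports show when this release shipped."""
    timestamp = datetime.now(timezone.utc).isoformat()
    subprocess.run(
        [
            "doit", "annotations", "create",  # placeholder subcommand
            "--text", f"Deployed {service} {version} at {timestamp}",
        ],
        check=True,  # fail the pipeline step if the annotation can't be created
    )


if __name__ == "__main__":
    annotate_release(
        service=os.environ.get("SERVICE_NAME", "checkout-api"),
        version=os.environ.get("GIT_SHA", "local-test"),
    )
```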

Get started

If you can run a command in a terminal, you can get value quickly. Install the CLI, authenticate, and then explore the available commands and actions. Docs and full command reference: https://help.doit.com/docs/cli

Vadim Solovey
Announcement
a month ago

New CloudFlow Template: AWS Commitment Expiration Monitor

The AWS Commitment Expiration Monitor is now available in CloudFlow. This template continuously tracks upcoming expirations of AWS Reserved Instances and Savings Plans across EC2, RDS, Redshift, ElastiCache, OpenSearch, SageMaker, and Database services. It surfaces actionable alerts ahead of expiration to help FinOps and DevOps teams prevent coverage gaps, reduce unplanned on-demand spend, and plan renewals based on real usage signals.

Start using this template by navigating to Operate -> CloudFlow -> Templates.

Vadim Solovey
Announcement, home page
a month ago

CloudFlow SQL Node: Turn Any FinOps or CloudOps Question into an Automated Control

Today, we are introducing the CloudFlow SQL node – a way to turn the data you already have into live FinOps and CloudOps automations, using the language your teams already know: SQL.

Instead of exporting data to a BI tool, building a dashboard, and hoping someone remembers to check it, you can now write a SQL query inside CloudFlow and wire the result directly into alerts, workflows, and policies across all of your clouds. The CloudFlow SQL node natively connects to our unified data lake, using the same normalized schema you already use in DoiT Cloud Intelligence™.

While we’re excited to see what you build using the SQL node in CloudFlow, we've helped get you started with several pre-built templates available in the CloudFlow template library (use the Source: Billing Data filter to quickly find them). 

These templates require no configuration to start running. Simply click one to open it, and hit Publish to start generating meaningful alerts and workflows. Read on for more details about one example you can start implementing:

Example: Time-to-Tag Leakage Analysis (Tag Lag Drift)

Here's a common FinOps problem: even when teams consistently tag resources, tags often appear days after the resource starts incurring costs. The spend generated during that gap shows up as “untagged,” creating leakage that can’t be attributed to the correct owner or workload. Dashboards can display daily untagged spend, but they do not quantify the lag effect or identify services where the lag is worsening.

What the SQL node does (the same logic is sketched in code after this list):

  1. Scans billing data for each resource and computes the first timestamp where a non-system tag is present
  2. Compares that timestamp to the resource’s earliest usage cost and calculates leakage per resource, per service, per day
  3. Aggregates to daily untagged leakage, total untagged spend, and the percent of spend impacted by tagging delay
  4. Computes trend direction (improving or degrading) over a defined window
  5. Filters out services that do not support cost allocation tags to avoid noise
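
The real query runs inside CloudFlow against the unified billing schema, but the core logic can be sketched in Python over a hypothetical per-resource billing extract. Column names such as resource_id, service, usage_date, cost, and has_user_tag, and the taggable-services allowlist, are assumptions for illustration only:

```python
# Rough sketch of the tag-lag leakage computation on a hypothetical billing
# extract. Column names and the allowlist are assumptions; the actual SQL node
# runs this kind of logic directly against DoiT's normalized billing schema.
import pandas as pd


def tag_lag_leakage(billing: pd.DataFrame, taggable_services: set) -> pd.DataFrame:
    # 5. Ignore services that do not support cost allocation tags (noise reduction).
    billing = billing[billing["service"].isin(taggable_services)].copy()

    # 1. First date on which each resource carries a non-system tag.
    first_tagged = (
        billing[billing["has_user_tag"]]
        .groupby("resource_id")["usage_date"].min()
        .rename("first_tagged_date")
    )

    # 2. Spend incurred before that date (or with no tag ever) is tag-lag leakage.
    billing = billing.join(first_tagged, on="resource_id")
    cutoff = billing["first_tagged_date"].fillna(pd.Timestamp.max)
    billing["leakage_cost"] = billing["cost"].where(billing["usage_date"] < cutoff, 0.0)

    # 3. Aggregate to daily leakage per service and the share of spend impacted.
    daily = (
        billing.groupby(["service", "usage_date"], as_index=False)
        .agg(total_cost=("cost", "sum"), leakage_cost=("leakage_cost", "sum"))
    )
    daily["leakage_pct"] = daily["leakage_cost"] / daily["total_cost"]

    # 4. Trend per service: last 7 days of leakage vs. the prior 7 (positive = degrading).
    def recent_vs_prior(group: pd.DataFrame) -> float:
        ordered = group.sort_values("usage_date")["leakage_cost"]
        return ordered.iloc[-7:].sum() - ordered.iloc[-14:-7].sum()

    trend = daily.groupby("service").apply(recent_vs_prior).rename("leakage_trend")
    return daily.join(trend, on="service")
```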

In CloudFlow, you can then:

  • Trigger an alert only when leakage crosses a threshold or when the trend degrades week over week
  • Notify owning teams of the specific services or resources driving the lag (or create a Jira task)
  • Feed the data into CloudFlow's LLM node to generate a human-readable explanation of what caused the degradation and which teams should act

This converts “some of our spend is untagged” into an operational metric with direction and ownership.

What you can build next

This example is only a starting point. With the CloudFlow SQL node, you can also design automations for:

  • Budget guardrails and burn rate monitoring at the project, account, or business unit level
  • Policy checks for label compliance tied directly to your own label taxonomy
  • Identification of noisy SKUs, high-cost regions, or underutilized data services
  • Custom CloudOps SLOs driven by spend and usage patterns, not just uptime

If you already use DoiT Cloud Intelligence, the CloudFlow SQL node is available in your environment as a new node type inside CloudFlow. Start by taking one of your existing FinOps queries and turning it into an automated Flow that runs on a schedule, posts to Slack, opens Jira tickets, or chains into additional nodes such as LLM-based explanations.

Learn more about the SQL node in the DoiT Cloud Intelligence Help Center. We can also help you build your automation free of charge! Just ask at support.doit.com.

Vadim Solovey