DoiT Cloud Intelligence™

Powered by Technology, Perfected by People.

Announcement
6 days ago

Enhance CloudFlow Flows with AWS and GCP CLI Support

When a FinOps or CloudOps process needs to take action, the last mile often happens outside your tooling: someone opens a terminal, runs a one-off CLI command, pastes output into a ticket, and hopes the next person follows the same steps. That breaks auditability and makes repeatable governance hard, especially when you’re promoting artifacts between environments or responding to an incident under time pressure.

CloudFlow now includes a CLI node, so you can run AWS or GCP commands as a first-class step inside a flow, alongside the rest of your automation. This lets you encode terminal-only procedures as an automated, repeatable workflow with a run history, rather than relying on tribal knowledge and manual execution.

For example, a storage expansion workflow can attach an existing EBS volume to an EC2 instance using the same CLI command you already know, but executed as part of the flow. Under the hood, the CLI action maps to the same API sequence you’d expect (validation, state checks, then the attach call), which makes the operation predictable and easier to reason about when troubleshooting.

Similarly, release or data-movement workflows that rely on "S3 sync" become explicit, repeatable building blocks. In S3, a "move" is actually a copy-and-delete operation, and encoding that once in a flow reduces accidental deviations between operators and environments.
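Since a "move" is really two operations, it helps to see the semantics spelled out. The sketch below models buckets as plain Python dictionaries purely to illustrate the copy-then-delete sequence; it is an illustration of S3 move semantics, not the CloudFlow implementation or the AWS SDK.

```python
# Illustration only: S3 has no native rename, so a "move" is a copy
# followed by a delete. Buckets are modeled here as plain dicts.
def s3_move(buckets, src_bucket, src_key, dst_bucket, dst_key):
    buckets[dst_bucket][dst_key] = buckets[src_bucket][src_key]  # 1. copy
    del buckets[src_bucket][src_key]                             # 2. delete
```

Encoding this sequence once in a flow means every operator runs the same copy-then-delete pair instead of improvising it in a terminal.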

Vadim Solovey
Announcement
a week ago

Build CloudFlow Flows Without Hand-Wiring Cloud API Calls

If you’ve ever tried to automate a real FinOps workflow, you’ve felt the sprawl: thousands of AWS and GCP APIs, inconsistent parameters, and “just one more edge case” before it’s safe to run at scale. The result is usually the same story: a good idea stalls because stitching it together takes longer than the savings it’s meant to unlock.

CloudFlow’s Agent Builder is designed for that moment: when you know the outcome you want (for example, investigate a cost spike and recommend the next action), but you don’t want to build and maintain a custom integration for every system involved.

Agent Builder provides a way to create and manage flows in CloudFlow so you can move from “we should automate this” to a repeatable workflow that can be triggered and governed like any other CloudFlow automation. Instead of designing every branch of logic around individual APIs, you define the flow in natural language once.

In practice, this is meant to shift your work up a level:

  • from integrating every AWS/GCP endpoint yourself,
  • to packaging a reusable “investigate and act” capability you can drop into multiple flows (alerts, ticket enrichment, scheduled checks, remediation handoffs).

Getting Started with Agent Builder

  • Create a new flow or open an existing one, then navigate to the Agent Builder page.
  • Ask Agent Builder to create a new flow (for example: cost anomaly triage, tagging governance checks, owner resolution).
  • Run it in a controlled scope first (a single account, a single project, or a narrow set of services), then expand.
Vadim Solovey
Announcement
a week ago

Keep CloudFlows deterministic by waiting for cloud resources to be ready

In real workflows, cloud APIs often return before the resource is actually usable. For example, your flow might create an instance and immediately try to attach storage or run a configuration step, only to fail because the instance is still booting or the resource state has not converged.

AWS has native waiters for some APIs, but they cover only a small subset of services and operations. In Google Cloud, you generally don’t get an equivalent out of the box, which pushes teams toward brittle sleep steps and custom retry loops.

You can now add a “waiter” to cloud API actions so CloudFlow pauses until the resource reaches the state you need, then continues automatically. This reduces fragile retry logic and makes multi-step automations behave consistently across AWS and Google Cloud.

A waiter is tied to the specific action you’re running and waits on a defined target state before the next step runs. You enable the waiter on the action, choose what to wait for, and optionally tune the polling behavior for faster or more conservative checks. If the waiter needs extra inputs to verify readiness, you provide those parameters as part of the waiter configuration.
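Conceptually, a waiter is a poll-until-ready loop with a bounded number of attempts. The sketch below is a generic illustration of that pattern, not CloudFlow's internal implementation; the names `check_ready`, `interval_s`, and `max_attempts` are hypothetical.

```python
import time

def wait_for_state(check_ready, interval_s=5, max_attempts=40, sleep=time.sleep):
    """Poll check_ready() until it returns True or attempts run out.

    Generic illustration of waiter semantics, not CloudFlow internals.
    The sleep function is injectable so the loop can be tested quickly.
    """
    for attempt in range(max_attempts):
        if check_ready():
            return attempt          # resource reached the target state
        sleep(interval_s)           # back off before the next status check
    raise TimeoutError("resource did not reach the target state in time")
```

Tuning `interval_s` and `max_attempts` corresponds to the "faster or more conservative checks" trade-off described above.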

To get started, review the "Add a Waiter" article.

Vadim Solovey
Announcement
a week ago

Reuse shared data across CloudFlow automations with Datastore tables

Cloud automations rarely live in isolation. One flow detects an issue, another enriches context, a third notifies the right team. Without a shared source of truth, you end up duplicating logic or hardcoding mappings that drift over time.

The Datastore node provides your CloudFlow automations with a shared, managed place to store and query structured data, enabling multiple flows to reference the same tables for enrichment, routing, and state. Think of Datastore as a lightweight database you can use directly from flows.

A common pattern is a tag ownership table:

  • You maintain a table that maps tags (or tag patterns) to an owner, team, cost center, escalation channel, and metadata.
  • Any flow can query it to resolve “who owns this?” consistently.

Example: an anomaly flow spots a sudden spike on an untagged workload. It looks up the resource’s tags, queries the ownership table, and then routes the alert to the correct Slack channel. A separate remediation flow uses the same table to decide who can approve a change and where to open a ticket.

In the Datastore node, you can:

  • Get records using filters (including values from previous nodes)
  • Insert records for new rows, including batch inserts
  • Upsert records to keep a table current without duplicates by using a unique key column

Supported column types include Text, Integer, Numeric, Boolean, Date, Timestamp, and JSON.
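Upsert semantics in particular are worth pinning down: the unique key column decides whether a row is inserted or updated. The sketch below models a table as a list of dicts purely to illustrate that behavior; it is not the Datastore node's API.

```python
def upsert(table, row, key_column):
    """Insert row, or update the existing row with the same unique key.

    Illustration of upsert-by-unique-key; `table` is a list of dicts.
    """
    for existing in table:
        if existing[key_column] == row[key_column]:
            existing.update(row)   # key matched: update in place, no duplicate
            return existing
    table.append(row)              # no match: insert as a new row
    return row
```

This is why a unique key column keeps a scheduled sync from accumulating duplicate rows: re-running the same load updates matching rows instead of appending them.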

How to get started

Create a Datastore table for shared mappings (for example, tag-to-owner), then reference it from any flow via a Datastore node query. Use Upsert when you want scheduled flows to keep the table continuously in sync from an authoritative source. 

Vadim Solovey
Announcement
a week ago

Pause CloudFlow runs with a Sleep node for safer automation

Some workflows need a deliberate delay to avoid API throttling or to wait for changes to propagate. For example, if you’re looping through cloud resources or applying IAM updates, a short pause can prevent rate-limit errors and reduce noisy retries.

A common operational pattern is also dev environment scheduling: start in the morning, run all day, then shut down in the evening. Sleep lets you build this into a single flow without external schedulers.

What’s new

You can add a Sleep node to pause a CloudFlow run for a configurable duration, then automatically resume execution. This helps you:

  • Space out provider API calls to reduce throttling risk
  • Add cooldown periods before re-checking or notifying
  • Wait for eventual consistency (for example, new resources or IAM changes becoming effective)
  • Orchestrate time-based actions in one flow, like dev environment start and end-of-day shutdown

How it works

When a run reaches the Sleep node, CloudFlow marks the run as Sleeping and pauses execution until the configured time elapses. After the duration completes, CloudFlow wakes the run and continues to the next node.
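The throttling use case reduces to inserting a fixed pause between successive calls. The sketch below illustrates that pattern in plain Python, with the sleep function injectable for testing; it mirrors what a Sleep node does between flow steps, but is not the product implementation.

```python
import time

def run_spaced(tasks, interval_s, sleep=time.sleep):
    """Run callables in order, pausing between them to space out API calls."""
    results = []
    for i, task in enumerate(tasks):
        if i > 0:
            sleep(interval_s)   # cooldown between successive provider calls
        results.append(task())
    return results
```

In a flow, the same effect comes from placing a Sleep node between the loop's API actions rather than writing code.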

How to get started: https://help.doit.com/docs/operate/cloudflow/nodes/sleep

Vadim Solovey
Announcement
2 weeks ago

Keep CloudFlow maintainable by calling shared logic with Sub Flow

As CloudFlows grow, the same steps often get copied into multiple workflows. That makes fixes slower and outcomes inconsistent, because you end up updating the same logic in several places.

Use-case example: you run a daily workflow that detects cost anomalies, then triggers CloudOps remediation. Instead of duplicating “ownership resolution” and “notification formatting” steps across every anomaly flow, rightsizing flow, and incident flow, you put that logic in a single subflow and call it from each parent workflow, so updates propagate everywhere.

What’s new

  • You can now use the Sub Flow node to run one CloudFlow from inside another, so shared logic lives in a single reusable flow.
  • Parent flows can pass inputs as parameters (mapped to the sub flow’s local variables), so the same sub flow works across multiple use cases.
  • Each invocation creates a child run you can open from the run history, making troubleshooting and auditing easier.

You build a reusable flow (the sub flow) and publish it. In your main workflow, add a Sub Flow node, select the flow you want to call, and provide the parameter values for that run. When the parent reaches that node, CloudFlow runs the selected flow as a child run and returns its output back to the parent so the workflow can continue.
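In code terms, a sub flow behaves like a function call: the parent maps parameters in, the child run executes, and its output flows back to the parent. The sketch below is an analogy only; the names `resolve_owner` and `anomaly_flow` and the ownership mapping are hypothetical, not CloudFlow's execution model.

```python
# Analogy only: a sub flow behaves like a function the parent flow calls.
def resolve_owner(tag):
    """The reusable "sub flow": maps a tag to an owner (hypothetical data)."""
    owners = {"app:checkout": "team-a"}
    return owners.get(tag, "unassigned")

def anomaly_flow(tag):
    """A "parent flow": passes a parameter in, gets the child's output back."""
    owner = resolve_owner(tag)          # child run; output returns here
    return f"route alert to {owner}"
```

Because every parent calls the same function, fixing the ownership logic in one place propagates to every workflow that uses it, which is the point of the Sub Flow node.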


Getting started

  • Create the reusable flow you want to call and define the local variables it should accept.
  • In the parent flow, add a Sub Flow node and select the flow to call from the Flow drop-down.
  • Map parameters, then use the Sub Flow node output in downstream nodes.

Read more about subflows in CloudFlow Help Center.

Vadim Solovey
Announcement, Improvement
2 weeks ago

A smarter, more intuitive Metrics experience

Metrics is one of the most powerful tools in your FinOps toolkit. This latest release brings substantial usability improvements that make building, understanding, and acting on your data faster and easier than ever.

New Allocation previews give you clear visibility into the individual building blocks that make up your Metric, so you can understand exactly what's contributing to your data, not just the end result. This means no more jumping back and forth between Allocations and Metrics to piece together the full picture; everything you need is right there in context.

Metrics - Allocation preview

AI-powered allocation creation takes the manual effort out of the process. Instead of building allocations from scratch, you can now express your intent and let AI generate them for you, a meaningful step toward a more intelligent, streamlined workflow.

We've also made it easier to manage Metric variables. You can now remove variables you no longer need, and the experience for adding new ones has been streamlined so it gets out of your way.

Rounding out the update: direct links to use a metric in reports mean fewer clicks between insight and action, and improved interactions around generating previews make the whole experience feel more responsive and fluid.

Brad Rutland
Announcement
3 weeks ago

New: CloudFlow Dashboard Widget

The CloudFlow Widget is a new dashboard widget type that lets you visualize data from your CloudFlow and Datastore tables alongside your existing dashboards and reports. Build operational dashboards from the data your CloudFlow flows already produce — all without leaving the platform.

If you're not familiar with CloudFlow: it is a no-code FinOps automation platform, fully integrated with DoiT Cloud Intelligence™, that automates repetitive FinOps tasks such as acting on cost optimization opportunities and enforcing governance in real time.

What's New

You can now add one or more CloudFlow widgets to any dashboard. Each widget connects to a CloudFlow node or Datastore table and renders the data in the visualization of your choice:

  • Table — sortable, column-selectable data grid
  • Column / Stacked Column — grouped or stacked bar charts with dual-axis support
  • Line — trend lines over any dimension
  • Combo (Column + Line) — mix bar and line series on the same chart
  • Metric — a single headline number with sparkline trend and percentage change

Configuration is fully guided — select your data source, pick a display type, and the widget walks you through the rest. Widgets auto-refresh and show a "last updated" timestamp so your data is always current.

Use Case: Tagging Hygiene Dashboard

Imagine you have a CloudFlow that runs daily to analyze your cloud billing data for untagged resources. With the new widget, you can build a complete Tagging Hygiene dashboard:

  1. "Tagging Coverage Change (30d)" — a single headline number showing how many percentage points your tagging coverage shifted over the last 30 days, with a sparkline trend and an up/down indicator so you know immediately if coverage is improving.
  2. "Taggable vs. Untagged Costs" — a grouped bar chart comparing taggable and untagged costs for the current 30-day window against the previous 30 days, so you can see whether untagged spend is shrinking.
  3. "Untagged Share %" — a trend line showing the percentage of spend that remains untagged, current vs. previous period.
  4. "Time-to-Tag Leakage" — a sortable grid listing accounts with persistent tagging gaps: days with leakage, last leak date, max daily leak, and total untagged cost per service.
  5. "New SKUs w/o Tags" — another table surfacing newly discovered SKUs that have no tags, with project ID, first-seen date, SKU description, and 7-day cost.

All five widgets appear on a single dashboard, even though their underlying data comes from different flows. They auto-refresh, giving your FinOps team a self-updating command center for tagging compliance, built entirely from a CloudFlow you already have.
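The headline numbers in widgets 1–3 reduce to simple ratios over cost rows. The sketch below shows one way to compute untagged share and coverage change from sample rows; the field names and data are hypothetical, not the output schema of any DoiT flow.

```python
def untagged_share(rows):
    """Percentage of taggable spend that carries no tags.

    `rows` is a hypothetical list of {"cost": float, "tagged": bool} dicts.
    """
    total = sum(r["cost"] for r in rows)
    untagged = sum(r["cost"] for r in rows if not r["tagged"])
    return 100.0 * untagged / total if total else 0.0

def coverage_change(current_rows, previous_rows):
    """Change in tagging coverage, in percentage points, between two windows."""
    cov_now = 100.0 - untagged_share(current_rows)
    cov_prev = 100.0 - untagged_share(previous_rows)
    return cov_now - cov_prev
```

A flow step emitting rows like these is all a Metric widget needs to render the headline number and its period-over-period change.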

Getting Started

  1. Open any dashboard and click Edit → Add Widget
  2. Select CloudFlow from the widget list
  3. Choose your data source (Flow Step or Datastore Table) and a display type
  4. Save — your data appears immediately

You can add as many CloudFlow widgets as you need, each connected to a different flow, step, or table.

Vadim Solovey
Announcement
a month ago

All multicloud recommendations in one place – now including Azure Advisor

DoiT Insights now also includes Azure Advisor recommendations, alongside AWS Trusted Advisor, AWS Compute Optimizer, Google Cloud Recommender, and DoiT Cloud Intelligence™ Deep-Waste Insights recommendations, giving teams a single place to manage optimization across every cloud.

Instead of jumping between provider consoles, you can now identify, prioritize, and act on multicloud optimization opportunities from one unified experience.

With this update, Insights brings all recommendations together into a normalized, multicloud view, including:

  • Azure Advisor
  • AWS Trusted Advisor and Compute Optimizer
  • Google Cloud Recommender
  • DoiT Cloud Intelligence™ Deep-Waste Insights

All recommendations are categorized consistently across cost optimization, performance, reliability, security, and operational excellence, making it easier to compare impact and focus on what matters most.

But Insights don’t stop at recommendations. As with all of your existing Insights, Azure Advisor Insights let you take immediate steps to investigate and remediate the problem through three avenues:

  • Launch an automated flow to automatically take the recommended action
  • Create a Jira ticket to assign the task within your engineering team
  • Raise a support ticket to get help from a DoiT Forward Deployed Engineer

Enabling Azure Advisor Insights

You can enable the Azure Advisor integration in your environment from the ‘Integrate’ tab in the DoiT Cloud Intelligence console. 

To learn more about the process, see our Help documentation, or raise a support ticket to speak to a DoiT expert.



Craig Lowell
Announcement
a month ago

Build cloud cost allocations with natural language prompts

Allocations let you map cloud costs to the right teams and stakeholders, ensuring every dollar is assigned a clear owner. This transparency is the foundation for driving true cost accountability across your company. 

And now, building allocations in DoiT Cloud Intelligence is as simple as having a conversation. You can create complex allocations effortlessly using natural language prompts in your Agentic AI side panel.

For example, you can write a prompt like:

Allocate my spend using the values in the 'app' tag to define the groups. For each group, also capture resources in any Google Cloud projects or AWS accounts that contain those same application names.

Using the above prompt, we can build an allocation group consisting of nine allocations, one for each of our apps.

To get started, try your first prompt in DoiT Cloud Intelligence, or watch our 2-minute demo video of this in action.


Matan Bordo