Optimize Your Salesforce Flow Architecture for Better Performance and Scalability

Flows; Well-architected; Flow architecture; Performance; Scalability


Why improve Flow Architecture?

Salesforce flows have become the key and default declarative automation tool on the Salesforce platform. Many business-critical operations, as well as productivity-enhancing processes, are powered by flows. When flows are well-architected, they enhance productivity, ensure data integrity, and streamline operations.

However, poorly designed or outdated flows can lead to significant issues such as slow performance, frequent failures, and difficult maintenance.

Flows without fault paths are particularly vulnerable to errors, which can result in data inconsistencies and broken processes. Overly complex flows, which haven't been broken down into composable sub-flows, are difficult to change and test, hindering your agility.

By adhering to Salesforce’s Well-Architected framework, you can avoid these challenges and build flows that are efficient, resilient, and scalable.

When to review Flow Architecture?

Given the strategic importance of flows, it's essential to review their architecture regularly—at least three times a year, aligned with Salesforce’s major releases. Each release updates the API version, but flows do not automatically upgrade to the latest version. After just one year, a flow could be three versions behind, missing critical bug fixes, performance improvements, and new features.

Beyond regular reviews, you should also consider flow architecture improvements when:

  • You've never reviewed your flows at scale

  • Flows are frequently added or modified in your Org

  • You face frequent flow failures

Prerequisites

  • Salesforce Metadata Management license

  • A Salesforce Org synced into the Metadata Dictionary (can be Production or a Sandbox, provided the Sandbox was refreshed recently)

MetaField definitions

During the flow architecture review you will assess the insights provided by Elements and decide on the required actions. To record those decisions, you will need to create MetaFields for your flows.

Create the following two MetaFields with flow optimization in mind.

  1. Complexity review (picklist):

    • Values:

      • No Action Required

      • Simplify

      • Break into Subflows

      • Rebuild as Orchestrator

    • Purpose: Assess whether the flow's complexity is justified and, if not, what action needs to be taken.

  2. Flow Optimization Review (picklist):

    • Values:

      • Overlapping logic

      • Overlapping triggers

      • Needs asynchronous logic

    • Purpose: Determine if multiple flows need to be merged, consolidated, or optimized for performance by introducing asynchronous paths.

Steps to improve Salesforce Flow Architecture using Elements

Step 1: Scan the current Flow health with Analytics 360

Before you start auditing individual flows, it is a good idea to start by reviewing the aggregate health and quality posture of your flow architecture.

Open Analytics 360, then select the 'Automation health' -> 'Automation overview' dashboard. Evaluate the key areas to understand your flows' architecture and identify areas that need improvement.

Flows by type

Analytics 360 provides a breakdown of all flows in your core Org (excluding managed packages) by type. This includes classifications like 'record-triggered', 'screen flow', 'platform-event-triggered', 'auto-launched', 'schedule-triggered', 'no-trigger', 'orchestrator' and others.

This analysis shows how your org relies on different flow types and reveals patterns in automation. A high proportion of record-triggered flows, for instance, indicates heavy reliance on automation for data changes, while a low usage of orchestrations could suggest missed opportunities to simplify complex processes.

Flows by fault coverage

This is a custom metric calculated by Elements that scores each flow on the percentage of possible fault paths that have been created. If a flow doesn't have any DML or action elements that require a fault path, the score is set to 'not applicable'.

Fault paths are critical for error handling, especially in data operations or system integrations, where failures without fault paths can result in cascading errors or data corruption. Documenting every possible fault path is a pattern recommended by Salesforce's Well-Architected framework.
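If you want to sanity-check this score against the flow metadata itself, a rough approximation is possible from a retrieved flow definition. The sketch below assumes the standard Flow metadata XML, where DML and action elements (recordCreates, recordUpdates, recordDeletes, recordLookups, actionCalls) may carry a faultConnector; Elements' own metric may be calculated differently.

```python
# A minimal sketch: the share of fault-capable elements in a flow's metadata
# XML that actually define a fault connector. Element names and the namespace
# follow the standard Flow metadata format; the list of fault-capable element
# types is an assumption and may not match Elements' exact metric.
import xml.etree.ElementTree as ET

NS = {"md": "http://soap.sforce.com/2006/04/metadata"}
FAULT_CAPABLE = ["recordCreates", "recordUpdates", "recordDeletes",
                 "recordLookups", "actionCalls"]

def fault_coverage(flow_xml_path: str):
    """Return the % of fault-capable elements with a fault connector,
    or None when the flow has nothing that requires a fault path."""
    root = ET.parse(flow_xml_path).getroot()
    capable = covered = 0
    for tag in FAULT_CAPABLE:
        for element in root.findall(f"md:{tag}", NS):
            capable += 1
            if element.find("md:faultConnector", NS) is not None:
                covered += 1
    return None if capable == 0 else round(100 * covered / capable)

# Example: fault_coverage("force-app/main/default/flows/My_Flow.flow-meta.xml")
```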

Flows by complexity

Flows with high complexity scores are harder to maintain, difficult to change, and more prone to errors. Complexity can arise from flows performing multiple functions or having overly complicated decision paths. The flow complexity breakdown for your Org reveals how many flows could potentially be broken down into smaller, composable sub-flows.

Flows by API version

With every Salesforce release, a new API version is introduced, bringing new features, bug fixes, and various improvements. But Salesforce doesn't automatically update the API version of existing flows.

Automatic updates could disrupt custom logic and cause errors in existing functionality. By allowing manual upgrades, Salesforce ensures admins and developers have the time to test their custom code and flows in a sandbox before going live.

Therefore, it is your responsibility to ensure your flows run on the latest API version so that you benefit from the latest bug fixes, enhancements, and features. Over time, flows that lag behind lead to inconsistent behaviour, varied performance, and ultimately errors.

In the technical debt dashboard in Analytics 360, you can check the API version breakdown for all your flows.
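If you want to cross-check the API version breakdown outside of Analytics 360, a simple pass over locally retrieved flow metadata works too. The sketch below assumes a standard SFDX source layout and that each flow's .flow-meta.xml file contains an apiVersion element; adjust the directory for your project.

```python
# A minimal sketch: count flows per API version from locally retrieved
# metadata (e.g. after `sf project retrieve start -m Flow`). The default
# source path below is an assumption; change it to match your project.
import glob
import xml.etree.ElementTree as ET
from collections import Counter

NS = {"md": "http://soap.sforce.com/2006/04/metadata"}

def api_version_breakdown(flow_dir: str = "force-app/main/default/flows") -> Counter:
    counts = Counter()
    for path in glob.glob(f"{flow_dir}/*.flow-meta.xml"):
        root = ET.parse(path).getroot()
        counts[root.findtext("md:apiVersion", default="unknown", namespaces=NS)] += 1
    return counts

# Example output: Counter({'61.0': 14, '55.0': 7, '49.0': 3})
```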

Step 2: Create a custom view for Flows

Within the Metadata Dictionary, create a custom view of metadata for flows with the following attributes:

  • Name

  • API name

  • Sub-type (e.g., record-triggered, screen, auto-launched, orchestration etc.)

  • Complexity score level (the numeric value of the complexity)

  • Total complexity (the complexity category, e.g. 'High')

  • API version

  • Fault path coverage

  • Immediate run (defines if a flow has before-save logic)

  • Asynchronous run (defines if a flow has after-save logic)

This view provides a detailed snapshot of your flows, helping you identify which ones need immediate attention.

The custom view of your flows also allows you to spot compounded issues, such as a flow that is highly complex, has no fault path coverage, and is running on an outdated API version.

Step 3: Identify overly complex flows that can be simplified

'Complexity' is neither good nor bad; it all depends on the business and technical context. You may have a flow running a unique, business-critical price calculation that is innovative in nature and a source of competitive advantage for your business. In such a case, 'complexity' is expected.

However, Salesforce's Well-Architected framework identifies the following patterns for flows:

Flow patterns

  • Flows are organized in a hierarchical structure consisting of a main flow and supporting subflows

  • Each flow serves a single, specific purpose

  • Complex sequences of related data operations are created with Orchestrator (instead of invoking multiple subflows within a monolithic flow)

  • Subflows are used for the sections of a process that need to be reused across the business

When those design patterns are not followed, you can expect to see flows that are complex or highly complex.

Here are proposed steps to identify flows that could be simplified:

Step 3.1: Find complex flows with sub-flows that can be turned into a Flow Orchestrator

Salesforce Flow Orchestrator provides a clear, modular way to manage complex, multi-step processes by breaking them into distinct stages, making it easier to track progress and manage each step. It improves fault handling, allowing for more granular error management and retries, and supports asynchronous processing for long-running operations, reducing the risk of performance bottlenecks.

In order to identify complex, monolithic flows that could be improved by a transformation to a flow orchestrator, follow these steps:

  • Filter your custom view of metadata to only show 'no trigger' flows (these can be invoked as sub-flows)

  • Bulk-select the listed flows and open the dependency grid using the context menu. That will show your no-trigger flows and which other flows use them. Each relationship is shown as a single row.

  • Using the 'Dependent API name' column, identify a flow that shows up multiple times, meaning it is calling multiple sub-flows (a sketch for counting these programmatically follows the example below).

  • Using the 'Complexity review' MetaField you created, classify the identified flow as 'Rebuild as Orchestrator'

In the example above, we identified a flow called PB Opportunity Closed Won - Create Renewal Oppty, which is categorized as highly complex and calls multiple sub-flows in a synchronous operation. This makes it an ideal candidate for orchestration.
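If you prefer to do the counting programmatically, here is a minimal sketch over a CSV export of the dependency grid. The file name and the 'Dependent API name' column header are assumptions based on the grid described above; adjust them to match your export.

```python
# A minimal sketch: count how many sub-flow relationships each parent flow
# has in a dependency grid export, and surface flows calling several
# sub-flows as orchestration candidates. Headers are assumptions.
import csv
from collections import Counter

def orchestration_candidates(csv_path: str = "dependency_grid.csv",
                             min_subflows: int = 3) -> dict:
    with open(csv_path, newline="") as f:
        parents = Counter(row["Dependent API name"] for row in csv.DictReader(f))
    return {flow: n for flow, n in parents.items() if n >= min_subflows}

# Example output: {'PB Opportunity Closed Won - Create Renewal Oppty': 5}
```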

Step 3.2: Identify flows that are complex due to re-used, duplicated logic

Elements scores flow complexity based on the number of flow elements, assigning a numeric score to each type of element. In other words, the more blocks a flow has, and the more loops, decisions, and subflows it uses, the higher the complexity.
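To make that idea concrete, below is an illustrative element-weighted scoring sketch over a flow's metadata XML. The weights are assumptions chosen only to show the principle that loops, decisions, and subflow calls cost more than simple assignments; Elements' actual scoring formula may differ.

```python
# An illustrative sketch of element-weighted flow complexity scoring.
# Element names follow the standard Flow metadata XML; the weights are
# assumptions, not Elements' real formula.
import xml.etree.ElementTree as ET

NS = {"md": "http://soap.sforce.com/2006/04/metadata"}
WEIGHTS = {"assignments": 1, "recordLookups": 2, "recordCreates": 2,
           "recordUpdates": 2, "decisions": 3, "loops": 4, "subflows": 4}

def complexity_score(flow_xml_path: str) -> int:
    root = ET.parse(flow_xml_path).getroot()
    return sum(weight * len(root.findall(f"md:{tag}", NS))
               for tag, weight in WEIGHTS.items())
```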

Salesforce's Well-Architected framework advocates for composability in automation design. Logic repeated across different flows is considered an anti-pattern, and it also contributes to flow complexity.

Here is how you can use Elements to quickly identify flows that could be using duplicated logic:

  • Create a new custom view of metadata, this time listing:

    • Metadata type: Standard Object and Custom Object

    • Columns: Label, API name

  • Bulk-select listed objects (100 at a time) and open the dependency grid using the context menu. That will show you all automations and report types using those objects.

  • In the dependency explorer grid, find the column titled 'Dependent type' (third from the right). Set the filter to 'contains': 'flow'. That will filter the dependent metadata to only show flows.

  • Review the values in the 'Write', 'Read', and 'Relationship description' columns. They tell you whether the flow reads data from the object (e.g. record lookup), writes data to the object (e.g. create, update, or delete record), and in which elements the object is referenced. You may use that information to look for patterns (see the sketch after this list).

We recommend that you download the listed dependencies and upload the file to ChatGPT or another generative AI with the ability to interpret CSV files. Explain the columns and their meaning, and ask it to identify any potential patterns.

The CSV contains no business- or client-sensitive data, so there shouldn't be any concerns about sharing the file with an AI model.

  • For flows that have been identified as having duplicate logic, document MetaFields:

    • Complexity review: Break into Subflows

    • Flow Optimization Review: Overlapping logic
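As a first pass before (or instead of) handing the CSV to a generative AI, the sketch below shows one way to surface objects that several different flows write to, a common signal of duplicated logic worth extracting into a sub-flow. The column headers ('API name' for the object, 'Write', 'Dependent API name') are assumptions based on the grid described above and may differ in your export.

```python
# A hedged sketch: find objects written to by more than one flow in a
# dependency grid export. Column names are assumptions; rename them to
# match the headers in your CSV.
import pandas as pd

def overlapping_write_logic(csv_path: str = "object_dependencies.csv") -> pd.Series:
    df = pd.read_csv(csv_path)
    writes = df[df["Write"].notna()]
    # Count distinct flows writing to each object; more than one is a
    # candidate for duplicated logic.
    counts = (writes.groupby("API name")["Dependent API name"]
              .nunique().sort_values(ascending=False))
    return counts[counts > 1]
```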

Step 3.3: Review flow definitions

At this point, you have identified flows that can be rebuilt as flow orchestrators or broken down into sub-flows to avoid duplicated logic across multiple flows.

What remains is to audit the remaining flows with complexity classified as High or Extremely High. That requires manual inspection of the flow logic and the elements used, and identifying opportunities for simplification.

  • Sort your custom view of metadata by complexity score level 'from highest to lowest'. Your most complex flows will now appear at the top of the list.

Tip: You may want to prioritize flows with an API version in the low 50s or 40s. That is because every Salesforce release (new API version) introduces new flow features.

Flows that have not been updated since they were built years ago are not using the many newer elements that help with complex logic and batch processing. They are the most likely candidates for simplification.

  • Go through the listed flows one by one. Open each flow in Salesforce by clicking the blue cloud icon in the right panel, then open the most recent flow version to analyze its structure.

  • For flows that you have identified as having unnecessary complexity, document the Complexity review as either:

    • 'Simplify': if the logic needs to be rebuilt using modern elements, or

    • 'Break into Subflows': if the flow performs multiple business processes

Step 4: Optimize record-triggered flows per object

One of the most controversial discussions in the Salesforce ecosystem in recent years has been around one question: how many record-triggered flows should you have per object?

The answer? It depends.

The most popular answers are:

  • Have as many record-triggered flows as you need for your business requirements, but make them small and set specific entry/filter criteria on the start element

  • Have no more than three, one for each type of available trigger:

    • Before create or update

    • After create or update

    • Before delete

Ultimately, the number of record-triggered flows on your objects should be part of your business architecture strategy. But it should be an intentional and consistent design principle.

You can use Salesforce's Well-Architected framework and community blogs to help you come up with your own design principles for record-triggered flows.

Here is how you can use Elements to understand your record-triggered flow architecture and ensure it meets your design standards:

  • Create a new custom view of metadata, this time listing:

    • Metadata type: Standard Object and Custom Object

    • Columns: Label, API name

  • Bulk-select listed objects (100 at a time) and open the dependency grid using the context menu. That will show you all automations and report types using those objects.

  • In the dependency explorer grid, apply the following filters:

    • 'Dependent type' (third from the right): set the filter to 'contains': 'flow'

    • 'Trigger action' (fifth from the right): set the filter to 'is not empty'

  • You now have a list of all record-triggered flows across the selected objects. Look for:

    • Multiple flows triggered on the same object

    • Trigger Action (this will tell you what record operation triggers the flow)

    • Trigger Type (identifies the flow as either a before-save or after-save flow)

      If you find flows that trigger on the same object and in the same way, open them in Salesforce and check whether they have any specific entry conditions (a sketch for flagging such overlaps programmatically follows this list).

  • For flows that have been identified as having overlapping triggers, document MetaField:

    • Flow Optimization Review: Overlapping triggers
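As referenced in the list above, here is a minimal sketch for flagging overlapping record-triggered flows from a CSV export of the dependency grid. The column headers ('API name' for the object, 'Trigger action', 'Trigger type', 'Dependent API name') are assumptions based on the columns mentioned in this step.

```python
# A minimal sketch: group record-triggered flows by object, trigger action
# and trigger type, and report groups containing more than one flow.
# Column names are assumptions; adjust them to match your export.
import pandas as pd

def overlapping_triggers(csv_path: str = "object_dependencies.csv") -> dict:
    df = pd.read_csv(csv_path)
    triggered = df[df["Trigger action"].notna()]
    groups = triggered.groupby(["API name", "Trigger action", "Trigger type"])
    return {key: sorted(grp["Dependent API name"].unique())
            for key, grp in groups
            if grp["Dependent API name"].nunique() > 1}
```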

Step 5: Identify flows that should be using asynchronous logic

Asynchronous processes are requests that do not execute in real time but rather execute separately, later. Asynchronous operations are put in a queue and executed one at a time.

Salesforce recommends that flows involving external system callouts or long-running processes use asynchronous paths in order to avoid timeouts and transaction limits. Synchronous operations are generally recommended when the user needs to receive the outcome of the automation in real time.

Elements can help you identify candidates among your flows that could use asynchronous logic:

  • Sort your custom view of metadata by complexity score level 'from highest to lowest'. Your most complex flows will now appear at the top of the list.

  • Filter your custom view of metadata to only show flows where 'Asynchronous run' has the value 'No'.

  • Review each complex flow one by one. Look at its description and any documentation in the Elements right panel, then open the flow in Salesforce and investigate its logic. Determine whether the flow's purpose is to provide the user with an outcome in real time.

  • For flows that do not need to provide immediate results, and that are complex and have long-running processes, categorize them using the MetaField you created:

    • Flow Optimization Review: Needs asynchronous logic

Step 6: Identify suboptimal error handling in flows

Salesforce's Well-Architected framework specifies that all flows should consistently use fault paths. Screen flows are singled out as especially needing fault paths, so that users receive educational error messages that help them troubleshoot issues themselves where possible.

Elements can help you identify flows that lack fault paths by letting you act on the fault path coverage score:

  • Sort your custom view of metadata by complexity score level 'from highest to lowest'. Your most complex flows will now appear at the top of the list.

  • Filter your custom view of metadata to only show flows where:

    • 'Subtype' is screen flow

    • 'Fault coverage' is less than 90 (the score ranges from 0 to 100 and represents a percentage)

It is very likely that most, if not all, of your flows have no fault paths created. Fault paths are a relatively new feature, introduced by Salesforce only in recent years.

Because flows without fault paths simply need to be extended with the appropriate fault paths, this can be actioned by creating stories for your backlog. For more on that, continue to Step 8.

Step 7: Prioritize Flow Optimization Using a Matrix

You have reviewed and identified all the optimization actions for your flows. However, chances are that you do not have time to improve all of the flows at once. So how do you prioritize the enhancements needed to improve the scalability and performance of your flows?

Many flows will have compounded issues that make them particularly risky or inefficient, for example a flow that is highly complex, is on an outdated API version, has no fault paths, does not use asynchronous logic, and coordinates multi-step logic with many sub-flows.

Flows with high business criticality and multiple technical issues should be addressed first to maximize the impact of optimization efforts.
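One simple way to build such a matrix is to weight the number of open issues on each flow by its business criticality, as in the illustrative sketch below. The criticality ratings and issue flags are assumptions you would source from your own MetaFields and custom view export.

```python
# An illustrative prioritization sketch: priority = business criticality x
# number of technical issues found during the review. All data is assumed
# to come from your MetaFields / custom view export.
from dataclasses import dataclass, field

@dataclass
class FlowReview:
    name: str
    criticality: int                       # e.g. 1 (low) to 3 (business critical)
    issues: list = field(default_factory=list)

    @property
    def priority(self) -> int:
        return self.criticality * len(self.issues)

reviews = [
    FlowReview("Renewal Opportunity Creation", 3,
               ["highly complex", "no fault paths", "synchronous only"]),
    FlowReview("Case Survey Sender", 1, ["outdated API version"]),
]
for review in sorted(reviews, key=lambda r: r.priority, reverse=True):
    print(review.name, review.priority)
```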

Step 8: Take Action

After the review is finished, you will have a list of flows classified by Complexity review and Flow Optimization Review. You can apply filters to show all the flows that match the same action, for instance:

  • Apply a filter on 'Complexity review' set to 'Rebuild as Orchestrator' to see all the flows you have identified as needing to be migrated to Flow Orchestrator

  • Apply filters on 'Complexity review' set to 'Break into Subflows' and 'Flow Optimization Review' set to 'Overlapping logic' to see all the flows you have identified as needing to be broken into sub-flows due to re-used logic

  • Apply a filter on 'Flow Optimization Review' set to 'Overlapping triggers' to see all the record-triggered flows that need to be consolidated due to overlapping triggers

  • Apply a filter on 'Flow Optimization Review' set to 'Needs asynchronous logic' to see all the complex flows that need to be rewritten to run in asynchronous mode

  • Apply a filter on 'Complexity review' set to 'Simplify' to see all the complex flows that need to be refactored using new logic

Custom views of metadata come equipped with many single and bulk operations. You can raise user stories and document tasks to break down complex flows, remove hard-coded values, increase API version, and improve error handling through fault paths.

You can then pick up those stories from your backlog and deliver them when there is capacity.

However, chances are that many flows will face a unique combination of problems. When you are working on optimizing a single flow, it is best to address all of the issues found together.

Make sure the acceptance criteria on the stories raised against flows reflect the specific issues found, for instance:

  • Break down the flow into simpler, composable units

  • Introduce flow orchestrator to handle complex data operations.

  • Ensure all elements have fault paths to manage errors.

  • Update the flow to the latest API version (e.g., API version 61)
