
Agent Instruction Diagrams: Patterns and Anti-Pattern Checks

Agent Design; Agent Instructions; Quality Check; AI Recommendations; Agent Check

Updated over a week ago

What Are Agent Instruction Logic Checks?

Agent Instruction Logic Checks (or 'Agent Check' for short) are automated validations designed to help you ensure that the logic and structure of your AI agent instructions are correct before deployment.

These checks run in the background and use a set of rules and validations to surface common mistakes or ambiguities in your instruction diagrams—saving you time and reducing the risk of unexpected AI behaviour.

This tool is especially helpful if you're:

  • Defining new Agent Instructions for the first time.

  • Updating or migrating complex instructions.

  • Troubleshooting inconsistent AI responses.

Permissions

To use this feature:

  • You must be a Diagram Editor

  • Your user or space must have the Elements AI license enabled

  • The diagram must be in draft mode

Agent Instruction Logic Checks are only available in Agent Instruction Diagrams.

What the Logic Checks Validate

Agent Instruction Logic Checks automatically evaluate the clarity, structure, and logic of your instruction diagrams. These checks are designed to reduce ambiguity, enforce best practices, and improve AI execution reliability.

1. Uniqueness of Input Conditions

Pattern: When you instruct an AI Agent to do something under certain conditions, you have to ensure those conditions are unique within the scope of the entire instruction set. In the context of Agent Designer, that means every flowline should represent a distinct, verifiable condition.

Antipattern: Repeating semantically similar lines many times, such as "If data is missing, then…" and "If data is incomplete, then…", will confuse your AI Agent. Because the conditions are semantically the same, the AI Agent is told it may perform either of the specified instructions under the same circumstances, so it will sometimes choose the wrong one.

Example: AI has a tendency to try to complete a task at any cost. If it does not have all required inputs, it will hallucinate (i.e. make things up). When two conditions sound almost exactly the same, the AI Agent may get confused from time to time about when to perform which action.


2. Avoiding Hard-Coded References

Agent reasoning engines (like Atlas in Agentforce) autonomously determine which available actions to execute. Attempting to force the agent's behavior by referencing internal names—such as flow API names, Apex classes, or IDs—won’t work.

Pattern: Describe what you want done, not how it's technically implemented. If your AI Agent should use an action to create a case, tell it to create a case.

Antipattern: Instructing your AI Agent to invoke a specific flow, Apex class, or action by its API name or ID. Agents don't reason by metadata, they reason by meaning. If you want your AI Agent to choose the right action at the right time, you need to specify very clear, unambiguous instructions for when to perform that action, and also provide a clear name and description for each available action.
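
As an illustration, here is the same intent expressed both ways in a minimal Python sketch. The instruction strings, flow name, and action metadata fields are hypothetical examples, not a real Agentforce API:

    # Anti-pattern: the instruction leans on an internal API name, which the
    # reasoning engine does not use when selecting actions.
    hard_coded_instruction = "Invoke Create_Case_Flow_v2 to log the issue."

    # Pattern: describe the outcome, and make the action easy to match by giving
    # it a clear name and description (field names below are illustrative only).
    outcome_instruction = "Create a support case summarising the customer's issue."
    case_action = {
        "name": "Create Support Case",
        "description": "Creates a support case with a subject, description, and priority.",
    }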

3. Use The Right Tool For The Job

Remember that in Agent Instruction Diagrams we have three shapes:

  • "AI Agent" (System) for general reasoning tasks performed by or on behalf of the agent.

  • "Action" (System) for deterministic tasks like database operations, API calls, or email sends.

  • "Prompt Template" (System) for complex or abstract AI operations involving interpretation or language understanding.

Pattern: AI Agents use a combination of tools. They reason on their own, based on the provided instructions, and can invoke deterministic actions and generative/probabilistic prompt templates. Each of those tools serves a specific purpose, and they are not interchangeable. Therefore, in your Agent Instruction Diagrams you must use "AI Agent" for reasoning, "Action" for system logic, and "Prompt Template" for interpretation.

Antipattern: Assigning deterministic tasks (like sending an email or calculating discounts) to the AI Agent instead of a system Action, or requesting complex interpretation and content generation without using a Prompt Template.
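
As a rough guide, the split looks like this. The tasks below are hypothetical examples, shown as a Python sketch purely to make the shape mapping concrete:

    # Illustrative mapping of example tasks to Agent Instruction Diagram shapes.
    shape_for_task = {
        "decide whether the customer is asking about an existing order": "AI Agent",  # reasoning
        "look up the order and send a confirmation email": "Action",                  # deterministic system logic
        "summarise the customer's complaint in two sentences": "Prompt Template",     # interpretation / generation
    }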

4. Affirmative over Negative Phrasing

Pattern: Tell the Agent what to do, not what not to do. While guardrails that specify what an Agent must never do are important, keep them to a minimum. You need to guide the AI Agent towards understanding what you need done. For instance, instead of saying "Do not send email if…", rephrase it as actionable guidance, e.g. "Send email only if…".

Antipattern: A lot of instructions phrased as negative statements will ultimately confuse your AI Agent.

5. Contrasting Instructions

Pattern: Each instruction should be logically consistent. For instance: "If a customer submits a support ticket marked 'Urgent' and includes the keyword 'outage', then immediately escalate the ticket to Tier 2 support and notify the incident response team via Slack."

Antipattern: Embedding both "if yes, do this" and "if no, do that" inside the same instruction box. For instance: "If an invoice is marked as 'paid in full', then send a confirmation email to the customer, but also flag the invoice for overdue follow-up."

6. Use of Examples

Pattern: To help guide your AI Agent and to give it a semantic understanding of your business and use cases, enrich your instructions with examples. That way the AI Agent will be able to match customer interactions to expected behaviours more accurately. This is also important if your AI Agent needs to provide or capture unstructured data and then pass it on in a structured format to a deterministic action for further processing.

Antipattern: Leaving it up to the AI Agent to interpret and reason every interaction with the user or business language.
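
To illustrate the pattern: if a deterministic action downstream expects structured input, worked examples in the instructions help the AI Agent map free-text messages onto that structure. A minimal Python sketch, assuming a hypothetical 'return request' action and schema:

    from dataclasses import dataclass

    @dataclass
    class ReturnRequest:              # hypothetical schema for a deterministic action's input
        order_number: str
        reason: str                   # e.g. "damaged", "wrong size", "changed mind"
        refund_requested: bool

    # Example worth quoting in the instructions:
    # Customer says: "My order 10482 arrived broken, I'd like my money back."
    expected_payload = ReturnRequest(order_number="10482", reason="damaged", refund_requested=True)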

7. Language Consistency

Pattern: Use consistent terminology across your diagram. If your agent deals with cases, always refer to cases, not 'issues', 'customer problems', 'tickets', etc. Semantic consistency makes it easier for the AI Agent to reason properly.

Antipattern: Calling the same object a "case," "ticket," and "issue" may confuse the AI Agent. The longer the instructions and the conversation become, the more the AI Agent will struggle to maintain logical consistency.

8. Separation of Deterministic Logic

Pattern: AI Agents do not replace your flows or Apex; they use them. The large language models that underpin AI Agents are usually not great at calculations and maths (or at least, they cannot produce perfect results consistently). Any deterministic rules (if this, then that), calculations, or database operations should be handled by a deterministic Action.

Antipattern: Placing "Do not process refunds older than 30 days" inside an Agent reasoning step. The AI Agent has no inherent understanding of what 'today' is and is not reliable at counting. Logical checks like that should be performed by carefully designed Actions invoked by the AI Agent.
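
To make this concrete, here is the kind of deterministic check that belongs in an Action rather than in the Agent's reasoning step. It is a minimal Python sketch with illustrative names, not platform code:

    from datetime import date, timedelta

    REFUND_WINDOW = timedelta(days=30)

    def is_refund_eligible(purchase_date: date, today: date | None = None) -> bool:
        """Return True only if the purchase falls inside the 30-day refund window."""
        today = today or date.today()
        return today - purchase_date <= REFUND_WINDOW

The Agent's instruction then only needs to say something like "check refund eligibility using the refund eligibility action", and the date arithmetic is always handled deterministically.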

9. Simplification of Complex Steps

Pattern: Keep each instruction focused and simple.

Antipattern: "Look up customers, verify subscriptions, apply discounts, send email." Even if this ultimately invokes a single action, phrasing the instruction as a multi-step sequence may confuse the AI Agent.

10. Confidence Threshold Guidance

Pattern: This is one of the most important rules for reducing hallucinations and improving the quality of output. Define how confident the AI Agent should be, given all the instructions, knowledge, and data input, before it acts or provides a response, and build in handling steps for what the AI Agent should do, or what it should ask for, if its confidence is below that threshold. For instance: "...only proceed if at least 90% confident, otherwise ask for clarification".

Antipattern: Leaving it to the Agent to guess when it’s “sure enough.”
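
In practice, reasoning engines rarely expose a numeric confidence score, so the threshold and the fallback behaviour are written into the instructions (as in the 90% example above). The Python sketch below only illustrates the decision logic those handling steps should spell out; the names and threshold are hypothetical:

    CONFIDENCE_THRESHOLD = 0.90  # illustrative; match whatever threshold your instructions state

    def next_step(confidence: float, missing_inputs: list[str]) -> str:
        """Decide what the handling steps should tell the Agent to do."""
        if missing_inputs:
            return "ask the customer for: " + ", ".join(missing_inputs)
        if confidence < CONFIDENCE_THRESHOLD:
            return "ask a clarifying question before acting"
        return "proceed with the requested action"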

How to Start a Review of Your Instruction Diagram

Open your Agent Instruction Diagram. Make sure you are working on the draft version and have entered 'edit' mode on the diagram.

In the right panel, open the 'Insights' tab, then in the 'Logic' sub-tab click the 'Review diagram' button.

You will see a notification and a progress indicator while recommendations are generated for your diagram.

Reviewing the Recommendations

Once the review finishes, recommendations appear in a new tab on the right. Each suggestion will highlight:

  • The part of the diagram it relates to.

  • The proposed improvement.

  • An explanation of why the change is useful.

Actioning Recommendations

  • Accept all recommendations.

  • Reject all.

  • Accept or reject individually.

Changes you accept are applied instantly. If you change your mind, you can undo the change using the standard undo button in the Diagram interface.

What Happens Next?

  • Accepted suggestions improve your diagram immediately.

  • Rejected or outdated suggestions disappear from the list.

  • You can run another review later to catch any newly added or edited content.

If no suggestions are found, you’ll see a confirmation that your diagram passes the checks—great job!
