Testing and Iteration

Best practices for testing and iterating on your workflow


Testing and iteration are fundamental to creating a production-ready Workflow. This section outlines best practices for testing and iterating on your workflow.

How to Test Your Workflow

There are two ways to test your workflow: one step at a time or all steps at once.

When you test one step at a time, each step's value is updated as soon as that step runs. When you test all steps at once, step values are only updated if the entire execution succeeds; if you encounter an error, the values are not updated. As a result, the best practice is to test one step at a time until your workflow is production-ready.

Test One Step

  • Begin at the Start step: to begin testing a workflow, you must add values to the Start step

  • Click on the arrow of the first step: this will execute the first step only

  • Use the logs to understand the step's output: the logs show the data structure of the step's output, which you will need in order to reference this step in subsequent steps

  • Design the next step(s): continue designing your workflow by referencing the previous steps using the pink variable helper button

  • Continue testing each subsequent step: after designing a step, click the play button to test it. Continue this process until you have a production-ready workflow

Test your entire Workflow

There are two options for testing your workflow end-to-end: click Test Workflow from the Inputs step, or click Test Workflow in the upper right-hand corner of the Studio.

Use the Test Workflow feature when your Workflow is ready:

  • Click Test Workflow: click Test Workflow from the Inputs step or in the upper right-hand corner

  • Input your test values: add the test values you want to execute

  • Execute: click Execute to test the entire workflow

Error Handling Best Practices

1. Context Window Limit

Every LLM has a maximum context window, meaning there is a maximum number of tokens you can include in your prompt before it errors out.

1 token generally corresponds to ~4 characters and ~3/4 of one word.

  1. To handle edge cases where your prompt exceeds the token limit, you can use the following Liquid syntax to slice your inputs or step outputs into a shorter chunk: {{step_1.output | slice: 0, 10000}}, where 10000 is the number of characters you want to keep.
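
If you prefer to truncate inside a code step instead of Liquid, here is a minimal sketch assuming a Python code step; the truncate_to_token_budget helper, the max_tokens value, and the step_1_output placeholder are illustrative, not AirOps-provided:

    def truncate_to_token_budget(text, max_tokens=2500):
        # ~4 characters per token, so convert the token budget to a character budget
        max_chars = max_tokens * 4
        return text[:max_chars]

    # Example: keep roughly the first 10,000 characters of a previous step's output
    step_1_output = "..."  # placeholder for the previous step's output
    safe_input = truncate_to_token_budget(step_1_output, max_tokens=2500)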

2. Prompt Formatting Errors

Many production Workflows that use large language models depend on consistent output formatting from the model. To keep your workflow's behavior consistent, consider the following methods for handling unexpected output from the prompt:

  1. Prompting Techniques:

    • To prevent the model from introducing its answer with a phrase such as "Sure, here is a...", you can add a command in both the System and User prompts such as "Output your answer in the following format, and do not include any additional commentary:"

    • If your output is HTML or JSON, to prevent the model from starting with ```html or ```json, you can tell the model "Start with #..." or "Start with { and end with }, and do not include ```json in your response"

  2. Code Parsing:

    • If you're still struggling to get consistent output, or you want to handle edge cases where the model deviates from the expected format, you can use code to strip any text that precedes the first expected character (for example, the opening { of a JSON object), as in the sketch below.
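
For example, here is a minimal sketch of this approach in a Python code step; the extract_json helper and the sample response are illustrative, not AirOps-provided:

    import json

    def extract_json(llm_output):
        # Find the first expected character and the last closing brace,
        # ignoring any commentary or code fences around the JSON object
        start = llm_output.find("{")
        end = llm_output.rfind("}")
        if start == -1 or end == -1:
            raise ValueError("No JSON object found in model output")
        return json.loads(llm_output[start:end + 1])

    # Tolerates a response like: Sure, here is the JSON: ```json {"title": "Example"} ```
    parsed = extract_json('Sure, here is the JSON:\n```json\n{"title": "Example"}\n```')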

When creating a workflow for production, there are edge cases you should test for and handle in your workflow before running at scale. Also remember that you can always ask for Copilot's help to fix errors you run into while testing.

To ensure your prompt stays within the context window limit, you can approximate the number of tokens you will use by copying and pasting your prompt into the OpenAI token counter: https://platform.openai.com/tokenizer
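
If you would rather estimate the count programmatically (for example, inside a code step), the open-source tiktoken library can tokenize text the same way OpenAI models do; this sketch assumes Python and that tiktoken is installed in your environment:

    import tiktoken

    # cl100k_base is the tokenizer used by several recent OpenAI chat models
    encoding = tiktoken.get_encoding("cl100k_base")
    prompt = "Your full prompt text here"  # placeholder for the assembled prompt
    token_count = len(encoding.encode(prompt))
    print(f"Approximate prompt size: {token_count} tokens")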
