Components in Workflow builder

The Vince Live workflow builder lets non‑developers build and automate business processes by combining components on a canvas. 

Overview of components

 

1. Trigger

Workflows can be triggered manually, on a schedule or by external events. Manual triggers present a user interface for entering input parameters; fields can be reordered, renamed, given tool‑tips and assigned types such as date or text. Scheduled triggers run at specific intervals (minutes, hours, days, months or years) and require all necessary input values to be defined in the configuration. Event/webhook triggers start workflows automatically when changes occur in tables or when webhooks are received; they share the same no‑input limitation as scheduled triggers.

A trigger defines how a workflow starts.

  • Manual – Presents user‑interface fields for data entry before running the workflow. Fields marked From Client in the API mapping appear in the trigger. Users can drag and drop fields to change the order, rename the field IDs and add tool‑tips for guidance. The execution button label, header and description can be customised.
  • Scheduled – Runs automatically at specified times. Users configure minute, hour, day, month and year schedules and can specify specific values or intervals for each. Because no user input is possible, default values for all component inputs must be defined in the workflow configuration.
  • Event / Webhook – Triggers the workflow when an event occurs, such as a new or updated record in a Vince Live table, or when a webhook call is received. Users define conditions for the event and cannot supply manual input; necessary values must be included in the workflow configuration.
Trigger Component – Manual Setup Guide

Feature Overview

  • Manual triggers ensure all required data is collected before a workflow runs.

Key benefits:

  • Reduces execution errors with default values and tooltips
  • Customizable layout and labels improve user experience

What You’ll Need Before You Start

    1. Access to Workflow Builder
    2. At least one configured M3 API component
    3. Input fields marked with Source = Client

Step‑by‑Step: Configure Manual Trigger

  1. Enable Manual Trigger
    • Open the Workflow Builder
    • Click Create Workflow → Select Workflow Template → Trigger
    • By default, the Manual trigger option appears
  2. Add Client Input Fields
    • In the M3 API component, set Source = Client for all desired inputs (e.g. Item Number, Configuration Code)
    • These fields will appear in the Trigger → Manual screen

  3. Organize and Customize the Form

    You can rearrange fields by drag-and-drop to optimize the layout.

    Click the Edit icon next to a field to customize:

    • Label – Rename to user-friendly terms
    • Placeholder Text – Add hints inside the field
    • Tooltip – Show extra info when users hover
    • Default Value – Pre-fill commonly used entries
    • Field Type – Choose the required data type

    Mandatory fields (like Item Number, if required by your program) will show a disabled delete icon and cannot be removed. Optional fields (like Configuration Code) show an enabled delete icon and can be removed.

    Deleting a field removes it from the manual screen and the M3 API mapping. Restore by clicking Add Fields, then drag it back in.

  4. Customize UI Text and Execution Details

    To enhance clarity:

    • Change the execution button text to something more instructive (e.g., Submit Request, Run Process)
    • Update the heading and description at the top of the manual form to explain the workflow’s purpose
    This helps reduce confusion and guides users effectively.
  5. Configure Execution Context

    •  In the M3 API component:

      • Click Configure Execution and set:

        • M3 Company
        • M3 Division
  6. What to Expect After Setup

    1. All Client sourced fields appear in the Manual screen
    2. Custom labels, tooltips, and default values show up in the form
    3. Mandatory fields are fixed; optional ones can be removed or restored
    4. When users click Execute, the workflow runs using the provided inputs
    5. Use Client Preview to test and review the execution screen layout
  7. Tips & Best Practices
    1. Always test your setup using Client Preview
    2. Use short, meaningful field labels and tooltips
    3. Save after each configuration step
    4. Pre-fill known values for speed and accuracy
    5. Only make fields mandatory if absolutely required for execution
  8. Troubleshooting & FAQs

    Q: Why can’t I delete some fields?

    A: They’re marked as mandatory by the Program/Transaction and required for the workflow to run.

    Q: What happens if I delete a field?

    A: It’s removed from both the trigger screen and the M3 API. You can re-add it from Add Fields.

    Q: Can I change the heading or execution button label?

    A: Yes — customize both to match the workflow’s context and guide users better.

    Q: How can I preview what users will see?

    A: Click Client Preview to see the real execution layout as users would.

Trigger Component – Scheduler Setup Guide

This guide shows you how to automatically run workflows on a set schedule in Vince Live — just like setting an alarm for your business tasks.

What Is the Scheduler?

The Scheduler lets you run workflows automatically at specific times, without manual effort. It's ideal for routine tasks like nightly imports, weekly reports, or hourly data syncs.

When Should I Use It?

Use the scheduler when you want to:

  • Automate data imports or exports
  • Run reports at a fixed time
  • Reduce manual work and human error

Example Scenarios

  • Import new data – Every day at 2:00 AM
  • Generate weekly report – Every Monday at 9:00 AM
  • Sync with other systems – Every hour, on the hour

     

Step-by-Step: Set Up a Scheduled Workflow

Open the Workflows Module

  1. Log in to Vince Live.
  2. Click Workflows in the top menu.
  3. Click Create New Workflow.
  4. Give it a name, select the environment and fill in the other details.
  5. Under Trigger, choose Schedule.

Configure the Schedule

  1. Set the Start and End date/time.
  2. Choose Automatic Time Zone or select one manually.
  3. Choose how often it should run:
  • Yearly – Every year > Every month > On day > Time
  • Monthly – Every month > On day > Time
  • Weekly – Every week > On weekday > Time
  • Daily – Every day > Time
  • Hourly – Every hour > At minute
  • Every Minute – Every minute



Add Workflow Components

    • Add components as needed:
      • M3 API with Program & Transactions
      • Excel (map required input and output fields)
      • File Loader to fetch files automatically.
      • Email – add an Email step at the end of your workflow to receive the scheduled workflow output.
For Imports / Exports:
    • Set filters and input fields to use Constant as source
    • Go to Input Fields.
    • For each field, set Source = Constant / Excel / From API / Tag / Current Date.

      Scheduled workflows can’t take manual input; every required field must be populated from one of the sources above.

Tips:

  • Test with sample data before going live
  • Use Constant values in scheduled workflows
  • Don’t forget to click Save after each step
  • Make sure the workflow is active
  • If you change the trigger type to Manual, it won’t run automatically

 

FAQs

Q: Why isn’t my workflow running?

A: Make sure the workflow is active and all required fields are saved.

Q: Can I use client-supplied values in a scheduled run?

A: No. Scheduled workflows can’t accept user input — use Constant instead.

Q: What happens if I switch from Schedule to Manual?

A: The workflow will stop auto-running.

 

2. REST API

The REST API component is designed to connect to any endpoint and make API requests using various methods. It is highly versatile and capable of interfacing with any available service, offering extensive possibilities for integration. This document explains how to configure and use the REST API component in a workflow.

What is an API?

An API (Application Programming Interface) allows different applications to communicate with each other. It's like a messenger that takes your request, tells the target system what you want to do, and then brings the response back to you.

REST API Component

The REST API component is a tool that you can use to send requests to other systems and receive responses. You can think of it as a way to ask other applications or services for information or to perform an action.

Configuring the REST API Component

When setting up the REST API component, you will need to provide some key information. Let's break these down:

  1. Endpoint
    • The URL where you want to send your request. Think of it as the address of the place you are sending your message to.
  2. Method
    This tells the system what kind of request you are making. Common methods include:
      • GET: To retrieve data.
      • POST: To send new data.
      • PUT: To update existing data.
      • DELETE: To remove data.
  3. Headers
    Extra information you send along with your request. For example, headers can include details about the format of the data you are sending or receiving.
  4. Body
    The actual data you want to send in your request. This is typically used with methods like POST and PUT.
  5. Connection ID
    A special identifier that links to another part of our system called a "Connection". The Connection contains authentication details that tell the system how to log in and gain access to the endpoint.
Connections and Connection IDs
What is a Connection?

A Connection is like a saved set of login details that our system can use to access different services. These details can include things like usernames, passwords, or special tokens.

Using Connection IDs

When you configure the REST API component, you will specify a Connection ID. This ID refers to a Connection that you or another user has set up beforehand. Here’s how it works:

  1. Setup Connection
    • First, you will create a Connection with the necessary login details. This could be for a service that requires OAuth (a common way to log in securely) or basic authentication (simple username and password).
  2. Configure REST API Component
    • When setting up the REST API component, you will enter the Connection ID of the Connection you want to use. This tells the system to use those saved login details when making the request.
  3. Automatic Authorization
    • During the execution of the workflow, the system will automatically use the login details from the specified Connection. It will add the necessary authorization information to your request, so you don’t have to do it manually.

NOTE: It is not mandatory to set a Connection ID in REST API if there is no authentication needed to access the configured endpoint resources.

Example Configuration

Let’s walk through an example to make it clearer.

Example: GET Request to Fetch User Data
  1. Endpoint: https://api.example.com/users
    • This is the URL where we are sending the request.
  2. Method: GET
    • We want to retrieve data, so we use the GET method.
  3. Headers: Content-Type: application/json
    • We are specifying that we expect the response to be in JSON format.
  4. Body: (Empty)
    • Since we are using GET, we don’t need to send any data in the body.
  5. Connection ID: CONNECTION-12345
    • This is the ID of a Connection that has the necessary login details.
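Put together, the example corresponds to a configuration like the following sketch (the property names and casing follow the M3 sample shown further below):

{
  "connectionId": "CONNECTION-12345",
  "endpoint": "https://api.example.com/users",
  "method": "get",
  "headers": {
    "content-type": "application/json"
  }
}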
What Happens Next?
  • When the workflow runs, the system will look at the Connection ID CONNECTION-12345, find the saved login details, and use them to access the https://api.example.com/users endpoint.
  • The system will automatically add any required authorization information to the request headers.
  • The request is sent, and the system will receive and process the response.
Sample configuration for REST API to connect to M3 Service
{
  "connectionId": "CONNECTION-1c9219742f924da897687d9288f70b87",
  "endpoint": "https://infor-m3ce/DEMO/m3api-rest/v2/execute",
  "method": "post",
  "headers": {
    "content-type": "application/json"
  },
  "body": {
    "program": "MMS200MI",
    "maxReturnedRecords": 100,
    "transactions": [{
      "transaction": "LstItmWhsByWhs",
      "record": {
        "WHLO": "001"
      },
      "selectedColumns": [
        "WHLO",
        "ITNO",
        "ITDS"
      ]
    }]
  }
}

For Each

The forEach option makes it possible to run multiple calls (in sequence) against the target endpoint by specifying an array of records in the forEach property.

Add the forEach property with a JSONPath to your array:

{
  "version": 2,
  "forEach": "body.records",   <-- placed at the top for easier readability
  "endpoint": "https://postman-echo.com/post?query=&someInfo=",
  "method": "post",
  "headers": {
    "content-type": "application/json"
  },
  "bodyPath": "$"   <-- sends the current record root; it is NOT wrapped in a body property
}

There are a few things to note when using forEach. Using the example above, let’s break it down:

  • The forEach property references the input from the previous step, meaning we expect the previous step to have the property records in its body.
  • An endpoint containing a reference to body.id will look for the id property in the current record.
    • The record root is automatically wrapped in a body property, hence the body.id reference.
  • The bodyPath references the current record root, meaning it is NOT wrapped in a body property like the endpoint references.
Output of For Each

Using forEach changes the output format of the REST API component from an object to an array of objects.

For a single request, the output object looks like this:

{
  "headers": {
    /* headers returned in the response */
  },
  "body": {
    /* body returned in the response */
  }
}

When using forEach the output is an array of all response objects:

[
  {
    "headers": {
      /* headers returned in the response */
    },
    "body": {
      /* body returned in the response */
    }
  },
  {
    "headers": {
      /* headers returned in the response */
    },
    "body": {
      /* body returned in the response */
    }
  },
  ...more records...
]

This means that you have access to any failure messages, status codes and other properties relevant for further processing or retry attempts.
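If you need to act on failures, a follow-up Transform step can filter the array with JSONata. A minimal sketch, assuming each failed response body carries a hypothetical errorMessage property (your endpoint's error shape may differ):

$[body.errorMessage]

This keeps only the responses whose body contains an errorMessage, which can then be routed to an Email step or a retry branch.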

Summary

The REST API component in our workflow system allows you to connect to different endpoints and make API requests. By configuring properties like the endpoint, method, headers, and body, you can specify what kind of request you want to make. The Connection ID links to saved login details, making it easy to handle authentication automatically. This simplifies the process of making secure and authorized API requests within your workflows.

 

3. M3 API component

The M3 API component interacts with Infor M3 APIs. It allows users to search for available APIs and configure transactions for listing, adding, updating or deleting data. Multiple APIs/transactions can be combined by dragging and dropping fields between them. Execution order can be controlled by specifying rules—transactions can be set to execute only if the previous transaction succeeds or fails.

Advanced features include:

  • Order configuration workflows – start and end runs allow a one‑time initial transaction (e.g., generate a batch number) followed by head, line and address transactions.
  • Sorting and grouping – spreadsheets can be sorted or grouped so that order lines with the same warehouse or dates are processed together.

M3 API Component – User Guide

Overview

The M3 API component allows workflows to interact with Infor M3 through API transactions. It supports multiple transactions in sequence, with options for dependencies, grouping, and flexible field input sources.

Important:

  • All configuration options (dependencies, grouping, field configuration) are available only in import workflows.

1. Adding the M3 API Component
  1. In your workflow, click + and choose M3 API.
  2. Select whether this is for import or export data flow.
    • If import, you can use features like Generic Filter before M3 API.
    • If export, some features are limited.

2. Configuring Transactions

You can add one or more M3 API transactions.

  1. Select API Transaction
    • Click the transaction card and choose the desired API program and transaction (e.g., OIS017MI > UpdBasePrice).
  2. Set Input Fields
    • Each field can take values from different sources (see below).
  3.  Field Configuration
    • Click the settings icon next to a field to open the configuration overlay.

Available Sources:

  • Client – Value provided by the user at execution time (manual runs only; not available in scheduled workflows).
  • API – Takes value from a field in the output of a previous transaction.
  • Excel – Pulls value from the Excel data source in the workflow.
  • Constant – Fixed value provided during configuration (required for scheduled workflows if runtime input is not possible).
  • Current Date – Automatically uses today’s date.
  • Tag – Uses a value from a tenant-level tag configured for the executing user.

Special Notes:

  • In the first transaction of a workflow, the API source is not available.
4. Dependencies (Import Workflows Only)

Dependencies allow one transaction to run based on the success or failure of another.

Example:

  • Transaction 1: Change Base Price
  • Transaction 2: Add Base Price (runs only if Change Base Price fails)

To configure:

  1. Click the three-dot menu on the dependent transaction.
  2. Choose Set dependency.
  3. Select the triggering transaction and condition (runs successfully or fails).
5. Group By (Import Workflows Only)

Groups multiple records into one transaction call if they share the same field values.

Example:

If multiple order lines have the same warehouse and item details, grouping can create one order instead of multiple.

To configure:

  1. Open field configuration.
  2. Check Group by this field for all grouping criteria.
6. Additional Actions
  • Copy – Duplicate the current transaction with the same configuration.
  • Set Critical – Mark a transaction’s output message as the overall workflow message.
7. Start and End Runs

You can define which transaction starts the workflow and which ends it.

To configure:

  1. Click Define start-end run.
  2. Choose start and/or end transaction.
8. Execution-Time Behavior
  • If any fields are Client sourced, execution will pause for user input on the run screen.
  • If workflow is scheduled, use Constant, Current Date, Tag, or Excel instead of Client.
  • Tags are resolved at runtime using the executing user’s tag values.
9. Error Handling
  • If a transaction fails, check Logs for detailed API responses.
  • Dependencies can automatically trigger alternate transactions on failure.
  • If both primary and dependent fail, failure messages appear in transaction-level logs.
10. Best Practices
    • Use Group By to optimize performance by reducing repeated transactions.
    • Always test with a small dataset before running large imports.
    • For scheduled workflows, confirm all required inputs come from Constant, Current Date, Tag, or Excel sources, or are otherwise auto-filled.

 

4. Excel component

The Excel component provides a spreadsheet designer for mapping input and output fields and integrating data with Excel:

  • Using fields – after selecting APIs, all input/output fields are available in the right side‑panel. Fields can be dragged onto the spreadsheet designer individually or in bulk.
  • Designing and editing – fields may be rearranged by dragging between existing columns; right‑click provides options to delete cells or just the field content. The Options tab controls whether both field ID and description are displayed and allows the sheet name to be set. Users may change a field’s description to match business terminology.
  • Importing spreadsheets – an existing spreadsheet can be imported. Vince Live will automatically map fields if the Excel file contains field IDs. When data is imported back to M3, success or failure messages are returned to the spreadsheet.
  • Mapping input and output fields – first map input fields to indicate where the workflow reads data from Excel, then map output fields to define where result data is written.
  • Workflow building tip – it is recommended to export data first and design the spreadsheet manually, then import it for configuring import APIs.
Control Spreadsheet toggle

The Control Spreadsheet toggle in the Excel component ensures column mappings remain consistent. When enabled, Vince Live validates that each mapped column in the execution spreadsheet matches the workflow definition; changes such as inserting or renaming columns cause an error. The toggle is off by default and can be turned on in both existing and new workflows. The feature prevents errors when columns are added, deleted or renamed and is particularly useful for maintaining long‑term workflow integrity.

Enforce Sheet Verification toggle

For workflows with multiple worksheets, the Enforce Sheet Verification toggle ensures that the active sheet used during execution matches the sheet specified in the configuration. When enabled, executing the workflow on the wrong sheet generates a warning. The toggle is off by default for both new and existing workflows and can be enabled during configuration. This feature reduces the risk of data corruption when users switch between worksheets.

Save backup of output toggle

The Save Backup of Output toggle controls whether an Excel file is generated when executing workflows from the VXL add‑in. By default, the toggle is off and the output is only loaded into Excel via the add‑in. If enabled and the workflow is executed from VXL, a backup file appears in the Vince Live summary section. Executing the workflow directly from Vince Live generates the Excel output regardless of toggle state.

 

5. M3 Filter component

The M3 Filter component filters the output of an M3 API to reduce the number of records processed downstream. Users add the component after the API and before Excel or other steps. Filters are configured by dragging fields from the API transaction into the filter configuration table and selecting the filter type (e.g., Equal, Not Equal, Greater Than, Is Blank), the source (From client or Constant) and a logical operator when multiple conditions exist. At least one field must be configured, otherwise an error appears when saving.

During execution, if the source is From client, users must supply values on the execution screen; a checkbox allows users to ignore the filter. When the source is Constant, the configured value is applied automatically and does not appear in the execution UI. Additional features include disabling certain options for date fields and the ability to delete or modify filters.

An example use case is filtering stock zones between a lower and upper range: dragging the Stock Zone field twice, configuring the first as “Greater Than Equal” and the second as “Less Than Equal”, setting both sources to From client, and selecting the logical operator AND. During execution, users supply the range values and only records within the range are processed.

M3 Filter: Choose Exactly What Data You Want to Use

M3 Filter Component – Setup Guide

    • Feature Overview

      The M3 Filter component lets you filter the data going in or out of your M3 workflows in Vince Live.

      👉 Think of it like setting up a smart search filter in M3: You define the conditions—such as “Item Number starts with A” or “Warehouse equals 001”—and only those matching records are processed.

      You can use this feature to control:

      • What records are retrieved (in Export workflows)
      • What records are processed (in Import workflows)
    • Why This Feature Is Important

      The M3 Filter helps you focus on just the data that matters.

      • Automatically filter M3 data without manual sorting
      • Allow users to enter filter values at runtime
      • Apply fixed rules using constants
      • Reduce errors by enforcing required input
    • What You’ll Need Before You Start

      Make sure you have:

      • A Vince Live workflow (Export or Import)
      • An M3 API component added to the workflow
      • At least one M3 transaction configured inside the API
    • Step-by-Step: Set Up the M3 Filter
      • Add to Your Workflow

        1. Open your workflow in Vince Live.

        2. If using a template, the M3 Filter may already be added.

        3. If starting from scratch:

          • Click the ➕ icon to add a component.
          • Add the M3 API component (e.g., OAS017MI with List Base Price).
          • Save the M3 API.
        4. Click ➕ again and select M3 Filter.

          (This appears only after the M3 API is saved.)

      • Configure Filter Fields

        1. Open the M3 Filter and click Show Available Fields.
        2. You’ll see:
          • Output fields for Export workflows
          • Input fields for Import workflows
        3. Drag fields into the Filter Items section.

        For each filter:

        • Comparison Type – Choose how the field should be matched
        • Source – Where the filter value comes from:
          • Client – The user enters a value at runtime
          • Constant – A fixed value hidden from the user
      • Comparison Types Explained

        💡Tip: Hover over the field icon to see if it’s a string, number, or date.

        • Equal – Match an exact value (e.g., Warehouse = "001")
        • Not Equal – Exclude a specific value
        • Greater Than – Values above a number or date
        • Less Than – Values below a number or date
        • Greater Than Equal – Values equal to or above the input
        • Less Than Equal – Values equal to or below the input
        • Is Blank – Include empty fields
        • Is Not Blank – Include non-empty fields
        • Contains – Field includes part of the value
        • Does Not Contain – Exclude values containing a keyword
      • Set Logic and Defaults

        1. Combine multiple filters using AND or OR.
        2. To filter a range:
          • Drag the same field twice
          • Use "Greater Than or Equal To" and "Less Than or Equal To"
          • Combine with AND
      • UI Shortcuts

        • Use the search bar to quickly find fields
        • Click the trash icon to delete a filter
        • Hover over a field’s icon to view its data type
    • Filter Behavior at Runtime

      On the Trigger Screen (when using Client source):

      • Set a default value (optional)
      • Mark the filter as Mandatory if it must be filled

      On the Execution Screen:

      • End users enter filter values for Client fields
      • Users can tick Ignore Filter to temporarily skip it
    • Common Use Cases
      • Always export from Warehouse 001 – Use Constant: Warehouse = 001
      • Let users pick an item number – Use Client with a default
      • Filter by date range – Drag the same field twice (From & To), use AND
      • Skip filter during testing – Enable Ignore Filter on the Execution Screen
Tips and Best Practices
  • Add the M3 API before the M3 Filter—it won’t appear otherwise
  • Use Client for user-entered values
  • Use Constant to silently enforce filters
  • Combine filters with AND for tighter control
  • Use Ignore Filter for testing or temporary skips
In VXL Live (Excel Plugin)
  • Client filters = user inputs in Excel
  • Constant filters = silently applied in the background
Troubleshooting & FAQs
Q: Why isn’t the M3 Filter showing up?

A: Ensure the M3 API component is added and saved first.

Q: How can users skip a filter?

A: They can check Ignore Filter during execution.

Q: How do I enforce a filter permanently?

A: Use Constant or mark the Client field as Mandatory in the Trigger Screen.

Q: Do filters behave differently in Export vs Import?

A: Yes:

  • Export: Filters apply to output data
  • Import: Filters apply to input data

 

6. Generic Filter component

The Generic Filter is tied to Excel and decides whether each record should be processed based on a value in an Excel column. A typical use case is to execute only rows marked “OK” in a particular column. To configure the filter, select the Excel column and choose the filter type (Equal, Not Equal, Less Than, etc.). The filter value can be set to a Constant or obtained From client; in the latter case the value entered on the execution screen overrides the default. Data types (Number, String or Boolean) must be specified, and when multiple conditions are used, an AND or OR operator must be selected. The generic filter applies to the whole workflow and cannot target specific APIs.

Generic filter - Setup Guide

Overview

The Generic Filter allows you to control which records from an Excel file are processed during workflow execution, based on conditions applied to a selected column.

It is commonly used to process only rows that match specific criteria, such as “OK” in a status column.

Key Points:

  • Available only for Import Workflows.
  • Must be configured before the M3 API component.
  • Applies to the entire workflow, not to individual APIs.

 

Configuration

1. Select Excel Column

Choose the column in the Excel input file that will be evaluated by the filter.

2. Define Filter Criteria
  • Comparison Type: Equal, Not Equal, Less Than, Greater Than, Less Than or Equal, Greater Than or Equal, Contains, etc.
  • Data Type: Number, String, or Boolean.
  • Multiple Conditions: If more than one condition is added, select an AND or OR operator.

3. Set Filter Value Source

  • From Client:
    • Value is provided at runtime in the Workflow Execution screen.
    • If a value is entered during execution, it overrides the default.
    • If no default value is configured, an empty input field is shown for manual entry.
  • Constant:
    • Value is defined during configuration and cannot be changed at runtime.
    • The input field will not be shown during execution.

Special Rules

Scheduled Workflows

For scheduled workflows, the filter source must be Constant, since runtime input is not possible.

Ignore Filter Option
  • Available only for From Client source.
  • Allows workflow execution without applying the filter.
  • If the filter is not ignored but no value is entered, an error message will be shown.
Runtime Behavior
  • For From Client filters, the value entered in the execution screen will be used.
  • For Constant filters, the preconfigured value will always be used.
  • If the filter conditions are not met, the record will be excluded from processing.
Example Use Case

Goal: Process only rows with “OK” in the Status column.

  • Select Status as the column.
  • Choose Equal as the comparison type.
  • Data type: String.
  • Source: Constant with value OK.
  • The workflow will execute only those records where Status = OK.
Best Practices

    • Clearly name the Excel column headers to match filter conditions.
    • Use From Client when flexibility is needed for ad-hoc runs.
    • Use Constant for scheduled workflows to avoid missing runtime inputs.
    • When using multiple conditions, plan your AND/OR logic carefully to avoid unexpected results.

7. File Loader component

The File Loader component loads Excel files from a user’s computer or from OneDrive/SharePoint into the workflow. It supports only Excel files.

  • Manual file selection – when executing a workflow, users can choose an Excel file from their computer or from OneDrive/SharePoint by enabling a toggle. This works for any workflow containing an M3 API step and requires no extra configuration. Access to OneDrive/SharePoint files requires an administrator to grant the necessary Microsoft Entra permissions.
  • Automated folder configuration – for scheduled or unattended workflows, a File Loader step can automatically fetch the latest file from a specified SharePoint site. Beforehand, a OneDrive connection must be created. During configuration, add the File Loader step after the trigger, choose “OneDrive”, select the connection, and specify the SharePoint folder. The workflow will retrieve the most recent file from that folder for processing.

 

8. Code component

The Code component allows custom logic within workflows. It executes in a secure environment with limited access to Node.js packages and helper functions. Native console functions (such as console.log) are disabled.

Available packages include axios for HTTP requests. Helper functions provided by Vince Live simplify working with connections and context:

  • A connection helper wraps axios and automatically adds authentication headers based on a specified connection. Example usage shows retrieving a connection, creating an authorized axios instance and performing a GET request.
  • A context helper retrieves context such as environment details, connection objects, variables, tenant and user information. Example code demonstrates retrieving various contexts and returning them as part of a response.
  • A file-storage helper (concept) offers functions to save, list, update and retrieve files in the tenant’s AWS S3 storage. Examples show saving a file from data or saving from a URL with configurable content type, file key and public-access settings.

The Code component enables advanced integrations—e.g., processing data, calling external services or manipulating files—beyond what built‑in steps can do.
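To make this concrete, here is a minimal sketch of a Code step that calls the example endpoint used earlier in this document. The helper names getConnection and createAuthorizedAxios are hypothetical stand-ins for the Vince Live helpers described above, not the documented API:

// Hypothetical sketch: getConnection and createAuthorizedAxios are illustrative names.
const conn = await getConnection('CONNECTION-12345');         // look up the saved connection (assumed helper)
const http = createAuthorizedAxios(conn);                     // axios instance with auth headers pre-attached (assumed helper)
const res = await http.get('https://api.example.com/users');  // a plain axios call from here on
return { body: res.data };                                    // hand the result to the next workflow step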

 

9. Converter component

The Converter component transforms data between formats. Two predefined sub‑components exist:

  • JSON‑to‑CSV converter – triggered with a JSON payload, it converts the data to CSV. An example shows converting an array of objects with flat and nested fields into a CSV string.
  • CSV‑to‑JSON converter – the reverse; it accepts CSV input and produces JSON. The example provided shows converting CSV lines for vehicles into a JSON array.

These converters are implemented as REST APIs and can be integrated into workflows via REST API steps for format transformation.

Easily transform data between JSON, CSV, and XML — no code required.

Think of it as a universal translator for your data formats.

What it does

The Converter component helps you reformat data inside your workflow.

Whether you need to send data as CSV, parse XML from an external service, or standardize inputs in JSON — Converter handles it all.

By default, the component converts XML → JSON. You can change the source and target formats anytime.

💡 Drop it anywhere in your flow where data format needs to change — it just works.

When to use it

Use the Converter when:

  • You’re integrating with services that require specific formats
  • You want to clean and transform data for internal processing
  • You need to avoid writing code for basic format conversions
Common examples
  • Convert a webhook’s XML response into JSON for a Code step – Source: XML → Target: JSON (default)
  • Export form data to a spreadsheet – Source: JSON → Target: CSV
  • Send CSV data to an XML-based legacy system – Source: CSV → Target: XML
Before you begin

Make sure you have:

  • An upstream step that produces data (e.g., Trigger, Code, API)
  • A known source format (JSON, XML, or CSV)
  • Your desired target format
How to set it up
1. Add the Converter
  1. Click + in your workflow
  2. Select Converter from the list
  3. The default setup is XML → JSON
2. Choose your formats
  • Source: Select the format of incoming data
  • Target: Select the format you want to output

Supported format combinations:

  • XML → JSON (default)
  • JSON → CSV
  • JSON → XML
  • CSV → JSON
  • CSV → XML
  • XML → CSV
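For example, a JSON → CSV conversion behaves roughly like this (an illustrative sketch; the exact header, quoting and delimiter behaviour depends on the CSV settings described next):

Input (JSON):

[
  { "id": 1, "name": "T-Shirt" },
  { "id": 2, "name": "Jeans" }
]

Output (CSV):

id,name
1,T-Shirt
2,Jeans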
3. (Optional) Configure advanced settings

Depending on your selected formats, you may see additional options.

For CSV
  • Delimiter field (e.g., ,)
  • Delimiter wrap (e.g., ")
  • Trim values, Trim header fields, Prepend header
For XML
  • Root element (e.g., records)
  • Attribute prefix
  • Ignore namespace, Ignore attributes

💡 Hover over each setting in the UI for helpful tooltips.

4. Save and connect next steps

Click Save. The converted result is now available in prevStep.body and can be used by any step that follows — like Code blocks, API calls, or Email.

What happens next
  • The Converter runs when your workflow does
  • It uses the previous step’s output and transforms it on the fly
  • Your next step automatically receives the converted data

No manual handling. No code. Just plug and play.


💡 Quick-start shortcuts
  • Turn XML into JSON – Add Converter → Leave default settings
  • Create a downloadable CSV – Set Source: JSON → Target: CSV
  • Parse a CSV file for an API call – Set Source: CSV → Target: JSON
Tips & best practices
  • Use a Code step before the Converter if data cleanup is needed
  • Test each format pairing with sample inputs
  • Don't forget to click Save after changes
  • You can chain multiple Converters for complex scenarios (e.g., JSON → CSV → XML)
FAQs

Q: Why is XML → JSON selected by default?

A: It’s the most common case when working with external webhooks and APIs.

Q: My output is empty — what’s wrong?

A: Make sure the previous step returned data and that the Source format matches it.

Q: Can I re-convert the same data again later in the flow?

A: Absolutely. Add another Converter step where needed.

 

10. Table Component

The Table component creates custom tables within Vince Live to store output data from workflows. During workflow design, users add the Table component and configure it via the right‑side panel. The Table Settings section requires a unique table name and offers three record creation types: Upsert (insert or update), Replace (delete all existing records before inserting) and Append (insert new records; timestamps ensure uniqueness).
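To illustrate the three record creation types, suppose the table already holds a record with primary key ITEM-001 (a hypothetical key), and a workflow run produces records for ITEM-001 (changed) and ITEM-002 (new):

  • Upsert – ITEM-001 is updated in place and ITEM-002 is inserted; the table ends with 2 records.
  • Replace – all existing records are deleted first, then ITEM-001 and ITEM-002 are inserted; the table ends with 2 records.
  • Append – both incoming records are inserted as new rows alongside the old ones, with timestamps keeping them unique; the table ends with 3 records.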

In the Select Fields section, the output fields from selected transactions are displayed and can be dragged onto the table. Users can modify the data type and name of each field and must designate at least one field as the primary key. Additional columns can be added between existing ones via the ellipsis menu (feature under development) and fields can be removed when necessary. After configuration, saving the workflow creates the table; it will appear in Table Management after the workflow is executed. Unique table names and primary keys are mandatory.

Table Component – Full Setup & Management Guide

  1. Feature Overview

    The Table Component lets you create, manage, and interact with structured data tables right inside your platform.

    Think of it like a spreadsheet built into your app — searchable, sortable, and fully configurable.

    Admins can access and manage all tables by default. Regular users can only access tables if permissions are granted.

    You can also create custom tables directly within workflow creation, configure their structure, and map output data to them.

  2. Why This Feature Is Important

    Use the Table Component to:

    • Organize and view data in a structured format
    • Upload or map data via workflows or Excel files
    • Enforce data integrity with configurable rules
    • Control access with granular permissions
    • Capture real-time workflow outputs for analysis
  3. What You’ll Need Before You Start
    • Admin access to see the Tables tab by default
    • User role with custom table access for regular users
  4. Step-by-Step: Set Up the Feature
    • Access or Enable the Feature

      • Log in as an admin
      • Click Tables in the top navigation menu
      • You’ll be taken to the Table Management screen
    • Create a Table in Table Management

      1. Click Create Table
      2. Enter a Table Name (e.g., Inventory)
      3. Choose an Update Type: Upsert, Append, or Replace
      4. Click Add Columns to define your table structure:
        • Name, Data Type (text, number, etc.), Primary Key, etc.
      5. Click Save to finalize table creation

      💡 Tip: Table names must be unique.

    • Create a Table in Workflow Creation

      1. Add a Table component in your workflow
      2. Click the Table component to configure
      3. The screen shows a Custom table panel with:
        • Show available fields (reveals API transaction fields)
        • Table info button (opens table configuration popup)
      4. Click Table info:
        • Enter Table Name (required)
        • Enter Description (optional)
        • Select Update Mode: Upsert (default), Append, or Replace
        • Click Apply to save settings
      5. Click Show available fields:
        • Displays output fields from prior components (e.g., M3 API)
        • Drag fields from the Available Fields list to the Columns panel
      6. Edit each column:
        • Click the Edit icon beside a field
        • A dialog will display:
          • Internal Column Name (non-editable)
          • Description (editable label)
          • Data Type (default: Text)
          • Primary Key checkbox
        • Click Apply to confirm changes
      7. Click Save to complete the configuration
      8. Execute the workflow to store data into the custom table
        • The table will now appear in Table Management
    • Use the Actions Menu

      Click the Actions menu for each table to:

      • Column Info – View the table's structure (read-only)
      • Edit Table – Modify columns (except primary key)
      • Upload Excel – Import data from a .xlsx file
      • Show Data – View current table entries
      • Copy Table – Duplicate the table structure and data
      • Delete Content – Remove all data but keep the table
      • Delete Table – Permanently delete the table
  5. Common Use Cases

    You should use this feature when…

    • Import product data – Upload Excel with columns like Item Number, Status
    • Track workflow outputs – Use Append to log results over time
    • Sync with external systems – Use Upsert for smart updates without duplicates
  6. Tips and Best Practices
    • Enable Enforce Data Types to prevent invalid data
    • Use Advanced Configuration for computed or restricted values
    • Always test your Excel file before uploading
    • Set a Primary Key in all workflow-based tables
  7. Troubleshooting and FAQs

    Q: Why can't I see the Tables tab?

    A: You might not have the necessary permissions. Contact your admin.

    Q: Why won’t my table save?

    A: Make sure the table name is unique and all required fields are filled.

    Q: Can I change the update type later?

    A: No, the update type is fixed once the table is created.

    Q: What's the difference between Upsert, Append, and Replace?

    A:

    • Upsert: Adds new or updates changed records (no duplicates)
    • Append: Adds all records, even duplicates
    • Replace: Clears old data, inserts only new data

    Q: When does the workflow-created table appear in Table Management?

    A: Only after the workflow is saved and executed successfully.

 

Table updater

Table Updater is a workflow component in Vince Live that allows you to update existing tables with data coming from any source, such as REST API, Code, or Transform steps.

It was introduced because the original Table component was built only for M3 APIs, making it difficult to use with REST APIs or other inputs.

How it Works

  1. Add the Table Updater to a workflow.
  2. In the Table dropdown, all available tables are listed. Search and select the table you want to update.
  3. Connect the Table Updater to any component (Rest API, Code, Transform, etc.).
  4. On execution, the data from the workflow step is written to the selected table.
    • Updates or inserts are based on the table’s primary key.
    • Data types are enforced as per the table definition.
Key Benefits
  • Open to multiple sources.
  • Keeps tables updated automatically on workflow execution.
  • Schema driven – ensures correct data types and key handling.
Example
  • Workflow: Trigger → Transform → Table Updater.
  • The Table Updater is pointed to table User details.
  • On execution, transformed user data (FirstName, Email, UserID, etc.) is updated in the table.
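For instance (illustrative data), the Transform step could emit the following records; with UserID as the table’s primary key, the first record would update an existing row and the second would insert a new one:

[
  { "UserID": "U-100", "FirstName": "Anna", "Email": "anna@example.com" },
  { "UserID": "U-200", "FirstName": "Ben", "Email": "ben@example.com" }
]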

Use Table Updater whenever your data source is not strictly M3 API or when you need flexibility to integrate multiple sources.

 

11. Transform component

The Transform component manipulates JSON data using JSONata expressions. It allows users to change, filter or search through JSON received from previous steps. Because the component operates only on JSON, it is typically used after API or REST steps.

Key concepts:

  • JSON is a structured, machine‑readable format for storing and exchanging data.
  • JSONata is a powerful query and transformation language that allows searching, filtering, aggregation and restructuring of JSON data. Examples below demonstrate extracting book titles (store.book.title), filtering books by category (store.book[category="fiction"]) and calculating totals ($sum(store.book.price)). More advanced expressions can merge datasets by assigning variables and iterating through arrays.

Users provide a JSONata expression in the component configuration to transform the incoming data to the desired structure. The result is passed to subsequent workflow steps.

Transform Component – Reference Guide
1. What Is the Transform Component?

The Transform component helps you restructure, filter, or extract specific data from JSON responses produced by earlier steps in your workflow (e.g., REST API, M3 API, Code).

It uses JSONata — a powerful expression language for working with JSON — to apply custom transformations.

🎯 Think of it as a smart filter or translator that prepares your data for the next step.

2. When Should You Use It?

Use the Transform step when you need to:

  • Clean or trim large API responses
  • Extract only key fields from a JSON structure
  • Flatten deeply nested JSON
  • Reformat JSON for downstream steps (Excel, Email, M3, etc.)
3. Setup Requirements

Before using the Transform component:

  • You should view and understand the JSON structure (Execution Logs help!)
  • JSONata knowledge is helpful (basic examples provided below)
4. Step-by-Step: How to Use Transform
  • Add the Transform Step
    • Click + Add Step
    • Choose Transform from the list
  • Write a JSONata Expression
    • Paste your JSONata expression into the Transform editor.
Example 1 – Extract Items

$.body.results.records.{"Item Number": ITNO, "Status": STAT}

Example 2 – Filter Prices Over 20

$.items[price > 20]

Preview and Save

After saving, the transformed result becomes the input for all following steps.

You can preview the output in:

  • Execution Logs → Transform Output
Practical Use Cases
  • Filter valid purchase orders – API → Transform (STAT = 20) → Excel
  • Extract user emails – API (user data) → Transform → Email
  • Merge datasets (items + suppliers) – Two APIs → Transform merges by supplierId
  • Flatten complex JSON for Email – REST API → Transform → Email (HTML table)
JSONata: Learning the Basics
🧾 Sample Input

{
  "store": {
    "book": [
      { "category": "fiction", "title": "Harry Potter", "price": 29.99 },
      { "category": "science", "title": "Cosmos", "price": 19.99 }
    ],
    "bicycle": { "color": "red", "price": 99.99 }
  }
}

🎯 Common Expressions
  • Get book titles: store.book.title → ["Harry Potter", "Cosmos"]
  • Filter fiction books: store.book[category="fiction"] → the fiction book objects
  • Sum book prices: $sum(store.book.price) → 49.98
Advanced Use Case – Merging Datasets
Dataset 1 – Items

{
  "items": [
    { "id": 1, "name": "T-Shirt", "supplierId": 301, ... },
    { "id": 2, "name": "Jeans", "supplierId": 302, ... }
  ]
}

Dataset 2 – Suppliers

{
  "suppliers": [
    { "id": 301, "name": "Fashion House Ltd." },
    { "id": 302, "name": "Style World Inc." }
  ]
}

JSONata Expression

(
  $items := items;
  $suppliers := suppliers;
  $items.{
    "id": id,
    "name": name,
    "price": price,
    "supplier": ($sid := supplierId; $suppliers[id = $sid].name)
  }
)

This merges supplier names into the item dataset by matching supplierId. The $sid variable captures each item’s supplierId so it can be referenced inside the supplier filter, where field names would otherwise resolve against the supplier being tested.

Troubleshooting & Best Practices
  • Transform step output is empty – Check that the previous step output is valid JSON
  • Don’t know what path to write – Use Execution Logs → Raw Output to inspect
  • Expression works incorrectly – Test in the JSONata Playground
  • Nested data not accessible – Use path syntax like data.items[0].price
  • Want to reuse a filtered array – Use variables like ($x := ...)
Tips
  • Use .{} to customize field names
  • Use := to define reusable arrays
  • Use $ to reference the full input
  • Use log previews to iterate safely

Summary
  • Transform – component to shape JSON data in your workflow
  • JSONata – expression language for filtering, transforming and reshaping JSON
  • Use Cases – clean, extract, flatten, reformat, merge JSON
  • Best Practice – always test with actual logs; iterate using small expressions

Data Lake step component settings

There are four component settings needed to activate the Data Lake step (see also section 14, Data Lake):

  • Endpoint – the Data Lake endpoint of the M3 environment from which this workflow selects data.
  • Body – the Data Lake SQL statement.
  • connectionId – a reference to the authentication credentials needed to call the endpoint.
  • saveResultToS3 – whether or not to save the query result to an S3 “bucket” in AWS.

Detailed steps below.

  1. Body – the SQL statement to execute. This can be hard‑coded or parameterised. When using parameters, they must be defined in a preceding Transform step using JSONata expressions, and the workflow must be set to use spreadsheet data.
  2. Connection ID – refers to the connection containing authentication credentials for the Data Lake endpoint.
  3. saveResultToS3 – always set to false, because tenants do not have access to the AWS S3 storage used by the Data Lake service.

The Data Lake step runs the SQL query and returns the result to the next step, typically Excel or Transform.
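Putting the four settings together, a Data Lake step might look like this minimal sketch (the property layout is illustrative; MITMAS, MMITNO and MMITDS are example M3 item-master names, so substitute your own table and columns):

{
  "endpoint": "https://<host>/<ENV>/DATAFABRIC/compass/v2/jobs",
  "connectionId": "CONNECTION-xxxxxxxx",
  "saveResultToS3": false,
  "body": "SELECT MMITNO, MMITDS FROM MITMAS LIMIT 100"
}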

 

12. Email

The Email component in the workflow builder enables automated sending of email messages during workflow execution. It can be placed at any point in the process to deliver notifications, send reports, or share data outputs (in Excel or JSON formats) with specified recipients.

Adding the Email Component to a Workflow
  1. Open the Workflow
    Navigate to the workflow where you want to insert the email step.

  2. Insert a New Step

    • Click the ( + ) icon at the desired position in the sequence.

    • In the Add workflow step menu, select Email.

Configuration

When you add the Email component, the Email sender configuration panel appears.

1. Recipient Addresses
  • To: Enter one or more primary recipient email addresses.

  • CC / BCC: Optionally, add CC or BCC recipients by clicking Add cc addresses or Add bcc addresses.

  • Use the ( + ) icon to add multiple addresses in each field.

2. Attachment Options

You can choose to attach data from previous workflow steps:

  • None: No attachment.

  • Excel file: Sends workflow data in Excel format.

    • Source step for data: Choose whether the data comes from an API step or an Excel step in your workflow.

  • Data in JSON format: Sends workflow data in JSON format.

3. Content
  • HTML Toggle: Enable if you want the email body to support HTML formatting.

  • Subject: Enter the subject line for the email.

  • Body: Write the email’s main content.

    • If HTML is enabled, you can use HTML tags for styling and formatting.
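When HTML is enabled, a short body like this illustrative snippet works well (keep the markup simple for client compatibility, as noted under Best Practices below):

<h2>Daily import summary</h2>
<p>The workflow completed successfully. See the attached Excel file for details.</p>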

4. Save
  • Click Save to store your email step configuration.

Execution Flow

When the workflow runs:

  1. The Email step waits for preceding steps to complete.

  2. It composes the message using your configured subject, body, and attachments.

  3. The email is sent to all specified recipients.

  4. The workflow execution timeline displays:

    • Start and completion time for the email step.

    • Success confirmation or any errors encountered.

Example Use Cases
  • Automated Report Distribution: Send daily or weekly reports generated earlier in the workflow.

  • Alert Notifications: Notify stakeholders of process failures or exceptions.

  • Data Sharing: Share API or Excel output directly with relevant recipients.

Best Practices & Tips
  • Validate Addresses: Ensure all email addresses are correct to avoid delivery failures.

  • Clear Subject Lines: Make subjects descriptive so recipients immediately understand the email purpose.

  • Attachment Size: Keep attachments within the allowed size limit for your email server/provider.

  • HTML Formatting: Use HTML sparingly to ensure compatibility across different email clients.

  • Testing: Run a test execution with your own email address before sending to stakeholders.

 

13. SMS

The SMS component in the workflow builder allows you to automatically send text messages (SMS) to one or more recipients as part of an automated process. It can be placed at any step in the workflow, enabling real-time alerts, status updates, or notifications triggered by preceding steps.

Adding the SMS Component to a Workflow
  1. Open the Workflow
    Navigate to the workflow where you want to add the SMS step.

  2. Insert a New Step

    • Click the ( + ) icon at the desired location in the workflow sequence.

    • From the Add workflow step menu, select SMS.

  3. Configure Execution (Optional)

    • After placing the SMS step, click the three-dot menu on the step and select Configure execution to adjust advanced options.

Configuration

When you add the SMS component, you will be presented with the SMS Configuration panel.

1. Recipient Phone Number
  • Enter the phone number in international format (e.g., +1234567890).

  • You can add multiple recipients by clicking the ( + ) icon next to the field.

2. Message
  • Type your SMS message in the text area.

  • The message length is limited to 160 characters (standard SMS size).

  • If the message exceeds 160 characters, it may be split into multiple SMS messages depending on your provider.

3. Save
  • Click Save to confirm your configuration.

Execution Flow

When the workflow runs:

  1. Trigger

    • The SMS step will only execute after all previous steps in the workflow have successfully completed.

  2. Message Dispatch

    • The system sends the SMS message to all configured recipients.

  3. Execution Log

    • The workflow execution timeline will display the SMS step along with completion time and status (e.g., “SMS completed in 765 ms”).

    • Any errors in sending will be indicated in the log.

Example Use Cases
  • Order Status Notifications – Automatically send customers an update when their order status changes.

  • System Alerts – Notify technical teams when a system check fails.

  • Workflow Completion Updates – Inform stakeholders when a report is generated or data is processed.

Best Practices & Tips
  • Validation: Always test your workflow with a known working phone number before going live.

  • Short & Clear Messages: Keep SMS content concise and to the point to avoid splitting messages.

  • Error Handling: Consider adding conditional checks before the SMS step to avoid sending incorrect or unnecessary notifications.

  • Compliance: Ensure messages comply with local SMS sending regulations.

  • Notifications – workflows can be configured to send notifications when errors occur or when tasks complete. A “Notify on error” option can be enabled in the workflow settings to send an email to a specified address if the workflow fails. Alerts and notifications can also be configured using the Email component described earlier.

 

14. Data Lake


What Is Data Lake?

Data Lake is a secure, cloud-based repository containing a copy of your M3 data. It allows you to:

  • Query and analyze large datasets using SQL-style commands
  • Combine M3 data with external systems such as Salesforce or Shopify
  • Generate detailed reports, identify anomalies, and find missing data

Benefits for End Users

  • Extensive Data Access – Query across your M3 environment for in-depth analysis.
  • Cross-System Analysis – Integrate multiple data sources for a complete picture.
  • Familiar Querying – Use SQL-like syntax without needing deep technical skills.

Limitations to Keep in Mind

  • Read-Only – You can extract and view data, but cannot change it in Data Lake.
  • Not Real-Time – Data represents a snapshot, not live updates.

These mean Data Lake is ideal for finding issues—but not for fixing them directly.

How Vince Enhances Data Lake

Vince bridges the gap between insight and action:

  • Search & Identify: Use Data Lake queries to locate missing or inconsistent data.
  • Edit in Excel: Make updates in a familiar interface.
  • Push Changes to M3: Send updates back to M3 via Vince workflows and APIs.
  • Automate: Schedule workflows, set up alerts, and run regular data checks.

Configuring and Running Data Lake Workflows in Vince Live

Prerequisites
  • Access to Vince Live and Workflows permissions
  • Connected M3 tenant
  • VXL Excel add-in
Creating a Workflow
  1. In Vince Live, go to Workflows.
  2. Click New Workflow, add name/description.
  3. Add the VXL label if running from Excel.
  4. Add a Trigger step (Manual or Scheduled).
  5. Add a Data lake step:
    • Endpoint: https://<host>/<ENV>/DATAFABRIC/compass/v2/jobs
    • Body: Your SQL query
    • connectionId: M3 connection ID
    • saveResultToS3: false
  6. Add a Transform step to define the output.
  7. Add mandatory details and save.
Running the Workflow

Manual Run: From Workflows, click Execute, fill parameters, click Execute, then download results from Execution Logs.

Scheduled Run: Set Trigger to Schedule, define frequency/time zone, save and activate.

Excel (VXL) Run: Sign in to VXL in Excel, select the workflow, enter parameters, click Run.

Monitoring & Troubleshooting
  • View Workflow statistics for run history, success/failure counts, and logs.
  • Common fixes:
    • Verify endpoint and connection ID.
    • Ensure saveResultToS3 is false.
    • Add Transform step for Excel parameters.

 

15. Data governance: labels, backup and verification

Vince Live provides several governance features:

  • Labeling – workflows can be tagged with labels to facilitate sorting and filtering. The VXL label must be added to allow running the workflow from the Excel add‑in.
  • Backup of Excel files – enabling the “Save backup of output” toggle automatically stores a copy of every output Excel file when a workflow is executed from VXL Live, ensuring data is retained for auditing.
  • Sheet and column verification – the Control Spreadsheet and Enforce Sheet Verification toggles ensure that worksheets and column mappings remain consistent, preventing accidental execution on the wrong sheet or with mismatched column headers.
  • Groups and listing – grouping workflows and the workflow listing page allow administrators to organize, search and manage workflows, assign labels and view recent modifications. Listing cards display component icons, number of groups and labels, and allow actions such as run, edit, copy or delete; permissions determine which actions are available.
Summary

Vince Live’s workflow builder provides a modular, low‑code approach to automating data flows between Infor M3, Vince Live and external services. By combining triggers, data acquisition components (M3 API, REST API, Data Lake, File Loader), data manipulation components (Filters, Transform, Converter, Code) and delivery components (Excel, Table, Email, notifications), users can build sophisticated workflows without programming skills. Governance features such as labeling, sheet verification and backup toggles help maintain integrity and traceability of data processing.